Updated on 2025/03/21


 
HUANG Wen-Chin
 
Organization
Graduate School of Informatics, Department of Intelligent Systems, Assistant Professor
Graduate School
Graduate School of Informatics
Undergraduate School
School of Informatics Department of Computer Science
Title
Assistant Professor
Profile
Received a B.S. from National Taiwan University, Taiwan, in 2018, an M.S. from Nagoya University in 2021, and a Ph.D. from the same university in 2024. Served as a research assistant at the Institute of Information Science, Academia Sinica, Taiwan, from 2017 to 2019. Currently an assistant professor at the Graduate School of Informatics, Nagoya University. Co-organizer of the Voice Conversion Challenge 2020 and the VoiceMOS Challenge 2022. His research focuses on applications of deep learning to speech processing, centered on voice conversion and speech quality assessment. Recipient of the ISCSLP 2018 Best Student Paper Award and the APSIPA ASC 2021 Best Paper Award.

Degree 3

  1. Doctor of Informatics ( 2024.3   Nagoya University ) 

  2. Master of Informatics ( 2021.3   Nagoya University ) 

  3. Bachelor of Science ( 2018.6   National Taiwan University ) 

Research Interests 5

  1. voice conversion

  2. speech quality assessment

  3. speech processing

  4. speech synthesis

  5. speech information processing

Research Areas 2

  1. Informatics / Perceptual information processing

  2. Informatics / Perceptual information processing / Speech information processing

Research History 2

  1. Nagoya University   Graduate School of Informatics   Assistant Professor

    2024.4

  2. Google DeepMind   Student researcher

    2023.4 - 2024.3

    Country: Japan

Education 1

  1. Nagoya University   Graduate School of Informatics   Department of Intelligent Systems

    2021.4 - 2024.3


    Country: Japan

Professional Memberships 1

  1. Acoustical Society of Japan

    2024.4

Committee Memberships 5

  1. VoiceMOS Challenge   Organizing Committee Member  

    2024   

  2. VoiceMOS Challenge   Organizing Committee Member  

    2023   

  3. Singing Voice Conversion Challenge   Organizing Committee Member  

    2023   

  4. VoiceMOS Challenge   Organizing Committee Member  

    2022   

  5. Voice Conversion Challenge   Organizing Committee Member  

    2020   

 

Papers 8

  1. A review on subjective and objective evaluation of synthetic speech Open Access

    Cooper Erica, Huang Wen-Chin, Tsao Yu, Wang Hsin-Min, Toda Tomoki, Yamagishi Junichi

    Acoustical Science and Technology   Vol. 45 ( 4 ) page: 161 - 183   2024.7


    Language:English   Publishing type:Research paper (scientific journal)   Publisher:ACOUSTICAL SOCIETY OF JAPAN  

    Evaluating synthetic speech generated by machines is a complicated process, as it involves judging along multiple dimensions including naturalness, intelligibility, and whether the intended purpose is fulfilled. While subjective listening tests conducted with human participants have been the gold standard for synthetic speech evaluation, their costly process design has also motivated the development of automated objective evaluation protocols. In this review, we first provide a historical view of listening test methodologies, from early in-lab comprehension tests to recent large-scale crowdsourcing mean opinion score (MOS) tests. We then recap the development of automatic measures, ranging from signal-based metrics to model-based approaches that utilize deep neural networks or even the latest self-supervised learning techniques. We also describe the VoiceMOS Challenge series, a scientific event we founded that aims to promote the development of data-driven synthetic speech evaluation. Finally, we provide insights into unsolved issues in this field as well as future prospects. This review is expected to serve as an entry point for early academic researchers to enrich their knowledge in this field, as well as speech synthesis practitioners to catch up on the latest developments.

    DOI: 10.1250/ast.e24.12


  2. Objective assessment of synthetic speech and the VoiceMOS Challenge

    Cooper Erica, Huang Wen-Chin, Tsao Yu, Wang Hsin-Min, Toda Tomoki, Yamagishi Junichi

    THE JOURNAL OF THE ACOUSTICAL SOCIETY OF JAPAN   Vol. 80 ( 7 ) page: 381 - 392   2024.7


    Language:Japanese   Publisher:Acoustical Society of Japan  

    DOI: 10.20697/jasj.80.7_381


  3. Pretraining and Adaptation Techniques for Electrolaryngeal Speech Recognition. Open Access

    Lester Phillip Violeta, Ding Ma, Wen-Chin Huang, Tomoki Toda

    IEEE ACM Trans. Audio Speech Lang. Process.   Vol. 32   page: 2777 - 2789   2024


    Publishing type:Research paper (scientific journal)  

    DOI: 10.1109/TASLP.2024.3402557


  4. A Large-Scale Evaluation of Speech Foundation Models. Open Access

    Shu-Wen Yang, Heng-Jui Chang, Zili Huang, Andy T. Liu, Cheng-I Lai, Haibin Wu, Jiatong Shi, Xuankai Chang, Hsiang-Sheng Tsai, Wen-Chin Huang, Tzu-hsun Feng, Po-Han Chi, Yist Y. Lin, Yung-Sung Chuang, Tzu-Hsien Huang, Wei-Cheng Tseng, Kushal Lakhotia, Shang-Wen Li 0001, Abdelrahman Mohamed, Shinji Watanabe 0001, Hung-yi Lee

    IEEE ACM Trans. Audio Speech Lang. Process.   Vol. 32   page: 2884 - 2899   2024


    Publishing type:Research paper (scientific journal)  

    DOI: 10.1109/TASLP.2024.3389631


  5. Electrolaryngeal Speech Intelligibility Enhancement through Robust Linguistic Encoders. Open Access

    Lester Phillip Violeta, Wen-Chin Huang, Ding Ma, Ryuichi Yamamoto, Kazuhiro Kobayashi, Tomoki Toda

    ICASSP     page: 10961 - 10965   2024


    Publishing type:Research paper (international conference proceedings)  

    DOI: 10.1109/ICASSP48485.2024.10447197


    Other Link: https://dblp.uni-trier.de/db/conf/icassp/icassp2024.html#VioletaHMYKT24

  6. The VoiceMOS Challenge 2024: Beyond Speech Quality Prediction

    Huang W.C., Fu S.W., Cooper E., Zezario R.E., Toda T., Wang H.M., Yamagishi J., Tsao Y.

    Proceedings of 2024 IEEE Spoken Language Technology Workshop, SLT 2024     page: 803 - 810   2024


    Publisher:Proceedings of 2024 IEEE Spoken Language Technology Workshop, SLT 2024  

    We present the third edition of the VoiceMOS Challenge, a scientific initiative designed to advance research into automatic prediction of human speech ratings. There were three tracks. The first track was on predicting the quality of 'zoomed-in' high-quality samples from speech synthesis systems. The second track was to predict ratings of samples from singing voice synthesis and voice conversion with a large variety of systems, listeners, and languages. The third track was semi-supervised quality prediction for noisy, clean, and enhanced speech, where a very small amount of labeled training data was provided. Among the eight teams from both academia and industry, we found that many were able to outperform the baseline systems. Successful techniques included retrieval-based methods and the use of non-self-supervised representations like spectrograms and pitch histograms. These results showed that the challenge has advanced the field of subjective speech rating prediction.

    DOI: 10.1109/SLT61566.2024.10832295


  7. Multi-Speaker Text-to-Speech Training With Speaker Anonymized Data Open Access

    Wen-Chin Huang, Yi-Chiao Wu, Tomoki Toda

    IEEE Signal Processing Letters   Vol. 31   page: 2995 - 2999   2024


    Publishing type:Research paper (scientific journal)  

    DOI: 10.1109/LSP.2024.3482701


  8. AAS-VC: On the Generalization Ability of Automatic Alignment Search based Non-autoregressive Sequence-to-sequence Voice Conversion.

    HUANG Wen-Chin, Kazuhiro Kobayashi, Tomoki Toda

    Proceedings of the Meeting of the Acoustical Society of Japan (CD-ROM)   Vol. 2024   2024



MISC 1

  1. AAS-VC: On the Generalization Ability of Automatic Alignment Search based Non-autoregressive Sequence-to-sequence Voice Conversion.

    HUANG Wen-Chin, Kazuhiro Kobayashi, Tomoki Toda

    Proceedings of the Meeting of the Acoustical Society of Japan (CD-ROM)   Vol. 2024   2024


Presentations 2

  1. Fundamentals, Prospects and Challenges in Deep-learning based Voice Conversion Invited

    HUANG Wen-Chin

    Research Center for Information Technology Innovation (CITI), Academia Sinica  2024.8.14 


    Presentation type:Public lecture, seminar, tutorial, course, or other speech  

  2. Progress and Future Perspectives on Deep-learning based Voice Conversion Invited

    HUANG Wen-Chin

    2024.10.22 


    Language:Japanese   Presentation type:Oral presentation (invited, special)  

Research Project for Joint Research, Competitive Funding, etc. 2

  1. Audiobox Responsible Generation Grant

    2024.11

    Unrestricted Research Gift


    Authorship:Principal investigator 

  2. Google Research Grant

    2024.9

    Unrestricted gift


    Authorship:Principal investigator 

KAKENHI (Grants-in-Aid for Scientific Research) 1

  1. Augmented speech communication using multi-modal signals with real-time, low-latency voice conversion

    Grant number:21J20920  2021.4 - 2024.3

    Japan Society for the Promotion of Science  Grants-in-Aid for Scientific Research  Grant-in-Aid for JSPS Fellows

 

Teaching Experience (On-campus) 2

  1. Programming 2 Exercises

    2024

  2. Probability and Statistics Exercises

    2024