Hanyang University
College of Engineering

Cultivating Korea's Technological Outcomes

New Tech for Voice Recognition Through Mouthing in VR
Author: Hanyang University College of Engineering (help@hanyang.ac.kr)   Date: 22.04.11

High recognition accuracy through real-time facial electromyography, with easy commercialization

A team led by Professor Lim Chang-hwan of the Hanyang University Department of Biomedical Engineering has developed a new technology that accurately recognizes a user's intention from mouthing alone, without actual vocalization, while the user wears a virtual reality (VR) headset, Hanyang University announced on the 21st. The technology is expected to be of great help to users who have difficulty vocalizing clearly because of laryngectomy or vocal cord nodules.

Metaverse services using VR are drawing public attention. In particular, as cost-effectiveness has improved and the voice recognition technologies used in AI speakers have been integrated, VR headset adoption is also increasing.

However, the voice recognition technologies previously used in VR headsets can suffer from poor accuracy in the presence of ambient noise and are difficult to use in quiet environments such as libraries. In addition, people who cannot produce a loud or clear voice because of laryngectomy or vocal cord nodules find voice recognition hard to use.

To solve these problems, Professor Lim's team developed a VR headset with six bioelectrodes attached below the eyes, where the headset pad touches the skin. When the user does not speak out loud and only mimics the lip movements of speech, the headset analyzes the facial electromyogram signals measured at the electrodes in real time to determine the speaker's intention.
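To make the pipeline described above concrete, the Python sketch below shows one simple way six-channel facial EMG windows could be reduced to per-channel features before classification. Only the channel count comes from the article; the sampling rate, window length, and RMS features are assumptions for illustration, not the team's actual implementation.

import numpy as np

N_CHANNELS = 6          # number of electrodes on the headset pad (from the article)
SAMPLE_RATE = 1000      # Hz -- assumed value, not stated in the article
WINDOW_SEC = 1.0        # seconds per command window -- assumed

def rms_features(window):
    """Return one root-mean-square amplitude per fEMG channel.
    window: array of shape (n_channels, n_samples)."""
    return np.sqrt(np.mean(window ** 2, axis=1))

# Fake one-second, six-channel window standing in for real fEMG data
rng = np.random.default_rng(0)
window = rng.normal(0.0, 1.0, size=(N_CHANNELS, int(SAMPLE_RATE * WINDOW_SEC)))
print(rms_features(window))     # six feature values, one per electrode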

This technique of using facial electromyograms to infer utterance intention is called "silent speech recognition." Previous silent speech recognition research did not consider VR applications and could therefore obtain higher accuracy by adding electrodes around the lips.

In a VR headset, by contrast, the electrodes sit farther from the lips, which could lower the recognition accuracy. Professor Lim's team solved this problem with deep learning algorithms combined with a new data augmentation technique.
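The article does not describe the team's augmentation method, so the sketch below only illustrates a generic way an fEMG training set might be expanded with noise jitter and per-channel scaling; treat it as a hypothetical stand-in rather than the actual technique.

import numpy as np

def augment(window, rng, noise_std=0.05, scale_range=0.1):
    """Return a perturbed copy of a (channels, samples) fEMG window."""
    scales = 1.0 + rng.uniform(-scale_range, scale_range, size=(window.shape[0], 1))
    noise = rng.normal(0.0, noise_std, size=window.shape)
    return window * scales + noise

rng = np.random.default_rng(1)
original = rng.normal(size=(6, 1000))                      # stand-in fEMG window
augmented = [augment(original, rng) for _ in range(10)]    # ten synthetic variants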

Professor Lim's team built a system that classifies six commands frequently used in VR settings, namely "next", "before", "back", "menu", "home", and "select", and evaluated its recognition rate with 20 test subjects. The system achieved a high silent speech recognition accuracy of 92.53%.
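For reference, the snippet below shows how an overall accuracy figure over the six-command vocabulary can be computed. The command list comes from the article, but the labels and error rate are randomly generated stand-ins, not study data.

import numpy as np

COMMANDS = ["next", "before", "back", "menu", "home", "select"]

rng = np.random.default_rng(2)
true_labels = rng.integers(0, len(COMMANDS), size=120)     # e.g., 20 subjects x 6 trials
pred_labels = true_labels.copy()
flip = rng.random(size=true_labels.shape) < 0.07           # re-draw ~7% of predictions at random
pred_labels[flip] = rng.integers(0, len(COMMANDS), size=int(flip.sum()))

accuracy = np.mean(pred_labels == true_labels)
print(f"command recognition accuracy: {accuracy:.2%}")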

Professor Lim said, "We could observe the accuracy rate increasing as users became more familiar with our system," adding, "The system can be adopted easily by simply changing the VR headset pad, and fast commercialization is expected as more command words are added."

Professor Lim's research was published on the 2nd in "Virtual Reality", the most prestigious journal in the virtual reality field. The research was supported by the Samsung Science & Technology Foundation, the Korea Research Foundation's Junior Researcher Support Project, and the Institute of Information & Communications Technology Planning & Evaluation's Artificial Intelligence University Support Project.

Professor Lim Chang-hwan
Demonstration video: when a user puts on the VR headset and silently mouths command words, the classification results are displayed in real time.