Binghamton University scientists have developed a new technology that lets users control virtual reality using only mouth gestures.
The spread of affordable virtual reality head-mounted displays gives users realistic, immersive visual experiences. However, these displays occlude the upper half of the wearer's face, preventing facial-expression recognition from the full face. To overcome this limitation, the researchers created a system that reads mouth gestures in real time as a medium for interaction within virtual reality.
When users put on the head-mounted display, they are presented with a simple game. The objective is to guide an avatar through a forest and eat as many cakes as possible. Players choose a direction by rotating their head, move using mouth gestures, and eat a cake by smiling.
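The control scheme described above can be sketched as a simple mapping from detected gestures to avatar actions. This is an illustrative sketch only; the gesture names, class structure, and game logic here are assumptions for clarity, not the researchers' actual implementation.

```python
# Hypothetical mapping of recognized gestures to avatar actions.
# Gesture labels ("open", "smile") and the grid-based movement model
# are illustrative assumptions, not the system's real API.

DIRECTIONS = {"left": (-1, 0), "right": (1, 0), "up": (0, 1), "down": (0, -1)}

class Avatar:
    def __init__(self):
        self.position = (0, 0)
        self.facing = (0, 1)       # movement direction, set by head rotation
        self.cakes_eaten = 0

    def on_head_rotation(self, direction):
        """Head rotation selects the movement direction."""
        self.facing = DIRECTIONS[direction]

    def on_mouth_gesture(self, gesture, cake_here=False):
        """Mouth gestures drive movement; smiling eats a nearby cake."""
        if gesture == "open":      # e.g. an open mouth steps the avatar forward
            x, y = self.position
            dx, dy = self.facing
            self.position = (x + dx, y + dy)
        elif gesture == "smile" and cake_here:
            self.cakes_eaten += 1

avatar = Avatar()
avatar.on_head_rotation("right")
avatar.on_mouth_gesture("open")                  # step to the right
avatar.on_mouth_gesture("smile", cake_here=True) # eat a cake
print(avatar.position, avatar.cakes_eaten)
```

In a real system, the gesture labels would come from a recognition model running on a camera view of the lower face; the sketch simply shows how such labels could be routed into game actions.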
In testing, the system reliably detected users' mouth movements and achieved high recognition rates.
Binghamton University Professor of Computer Science Lijun Yin said, “We hope to make this applicable to more than one person, maybe two. Think Skype interviews and communication. Imagine if it felt like you were in the same geometric space, face to face, and the computer program can efficiently depict your facial expressions and replicate them so it looks real.”
“The virtual world isn’t only for entertainment. For instance, healthcare uses VR to help disabled patients. Medical professionals or even military personnel can go through training exercises that may not be possible to experience in real life. This technology allows the experience to be more realistic.”