‘Earable’ Computing: Earphones to Be the Next Significant Milestone

Have you heard that ‘earable’ computing is about to become the next significant milestone in wearable devices? Very soon, new hardware, software, and apps will all run on this platform.

A research group is defining a new sub-area of mobile technology called ‘earable’ computing, and the team believes that earphones are about to become the next big thing in wearable devices.

According to Romit Roy Choudhury, a professor in electrical and computer engineering (ECE), “The leap from today’s earphones to ‘earables’ would mimic the transformation that we have seen from basic phones to smartphones. Today’s smartphones are hardly a calling device anymore, much like how tomorrow’s earables will hardly be a smartphone accessory.”

The group of researchers believes tomorrow’s earphones will continuously sense human behavior, have Siri and Alexa whisper just-in-time information, run acoustic augmented reality, track user motion and health, and provide seamless security, among many other capabilities.


The research questions that underlie earable computing draw from a wide range of fields, including signal processing, sensing, communications, machine learning, and embedded systems. The SyNRG team is at the forefront of developing new algorithms while also experimenting with them on real earphone platforms with live users.


Computer science Ph.D. student Zhijian Yang and other members of the SyNRG group, including his fellow students Liz Li and Yu-Lin Wei, are leading the way. They have published a series of papers in this area, beginning with one on the topic of hollow noise cancellation that was published at ACM SIGCOMM 2018.


Recently, the group had three papers published at the 26th Annual International Conference on Mobile Computing and Networking (ACM MobiCom), each covering a different aspect of earable research: voice localization, acoustic augmented reality, and facial sensing for earphones.


Describing the acoustic augmented reality work, Zhijian says, “If you want to find a store in a mall, the earphone could estimate the relative location of the store and play a 3D voice that simply says ‘follow me’ in your ears. The sound would appear to come from the direction in which you should walk, as if it were a voice escort.”
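
To give a sense of the mechanics, the sketch below shows one common way a direction can be encoded into stereo audio: delaying and attenuating one ear relative to the other (interaural time and level differences). This is only an illustration, not the team’s system; the function name, head-radius constant, and gain model are assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
HEAD_RADIUS = 0.0875    # m, a typical human head radius (assumed)

def spatialize(mono, sample_rate, azimuth_deg):
    """Pan a mono signal so it appears to come from azimuth_deg
    (0 = straight ahead, positive = to the right) using simple
    interaural time and level differences."""
    az = np.radians(azimuth_deg)
    # Woodworth's approximation of the interaural time difference.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + np.sin(az))
    delay = int(round(abs(itd) * sample_rate))  # delay in samples
    # Crude level difference: the far ear hears a slightly quieter copy.
    near, far = 1.0, 1.0 - 0.3 * abs(np.sin(az))
    on_time = np.concatenate([mono, np.zeros(delay)])
    delayed = np.concatenate([np.zeros(delay), mono])
    if azimuth_deg >= 0:  # source on the right: left ear is late and quiet
        left, right = far * delayed, near * on_time
    else:
        left, right = near * on_time, far * delayed
    return np.stack([left, right], axis=1)  # (n + delay, 2) stereo buffer
```

Calling something like spatialize(prompt, 48000, bearing_to_store) on a recorded ‘follow me’ prompt would make it seem to come from the store’s bearing.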


The second paper, EarSense: Earphones as a Teeth Activity Sensor, looks at how earphones could sense in-mouth and facial activities such as teeth movements and taps, enabling a hands-free modality of communication with smartphones.
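
As a rough illustration of the sensing idea (not EarSense’s actual algorithm, which the paper describes), a tooth tap shows up as a short burst of energy in an in-ear microphone or vibration stream, so even a crude detector can flag it. The frame size and threshold below are assumed values.

```python
import numpy as np

def detect_taps(signal, sample_rate, frame_ms=20, threshold=6.0):
    """Return the times (in seconds) of frames whose short-time energy
    spikes well above the median -- a crude stand-in for spotting
    tooth-tap transients in an in-ear sensor stream."""
    frame = int(sample_rate * frame_ms / 1000)
    n_frames = len(signal) // frame
    energy = np.array([np.sum(signal[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n_frames)])
    baseline = np.median(energy) + 1e-12  # avoid division by zero
    tap_frames = np.where(energy > threshold * baseline)[0]
    return tap_frames * frame / sample_rate
```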


Moreover, many medical conditions manifest in teeth chatter, and the proposed technology would make it possible to identify them by wearing earphones during the day. In the future, the team plans to look into analyzing facial muscle movements and emotions with earphone sensors.


The third publication, Voice Localization Using Nearby Wall Reflection, investigates the use of algorithms to detect the direction of a sound. This means that if Mark and Janet are having a conversation, Janet’s earphones would be able to tune in to the direction Mark’s voice is coming from.
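
The paper’s key idea is to use the reflection off a nearby wall to sharpen the estimate; the classical building block underneath is two-microphone direction finding from the time difference of arrival. The sketch below shows only that basic step, using GCC-PHAT and assuming an 18 cm spacing between the earpieces; the names and parameters are illustrative, not from the paper.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def angle_of_arrival(sig_l, sig_r, sample_rate, mic_distance=0.18):
    """Estimate where a voice is coming from using the time difference
    of arrival between two earphone microphones (GCC-PHAT)."""
    n = len(sig_l) + len(sig_r)
    cross = np.fft.rfft(sig_l, n=n) * np.conj(np.fft.rfft(sig_r, n=n))
    cross /= np.abs(cross) + 1e-12            # PHAT weighting
    corr = np.fft.irfft(cross, n=n)
    max_shift = int(sample_rate * mic_distance / SPEED_OF_SOUND)
    corr = np.concatenate([corr[-max_shift:], corr[:max_shift + 1]])
    delay = (np.argmax(np.abs(corr)) - max_shift) / sample_rate
    # Far-field model: the delay maps to an angle off the broadside direction.
    sin_angle = np.clip(SPEED_OF_SOUND * delay / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_angle))
```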


Wei says, “We have been working on mobile sensing and computing for 10 years. We have a lot of experience to define this emerging landscape of earable computing.”


Haitham Hassanieh, an assistant professor in ECE, is also involved in this research. The team is funded by the NIH and NSF, as well as by companies such as Google and Nokia.
