
[Image: sensor diagram]

Most people hear through two ears. What if you could hear through hundreds?

Ryan Corey, who leads the Listening Technology Lab at DPI and UIC, has received a new grant from the DEVCOM Army Research Lab to study resource-constrained acoustic sensor networks that combine sound from dozens or hundreds of devices.

Hearing through hundreds of ears
If you’ve ever tried to follow a conversation in a crowded restaurant or make a phone call on the train at rush hour, you’ll know how challenging it is to pick out one sound from a noisy mixture. That’s also true for audio technology like hearing aids, cell phones, and security systems. Even devices with several microphones, like high-end smart speakers and videoconferencing hardware, have trouble identifying and locating sounds from far away.

To solve the problem, Corey’s team will pull together data from several devices scattered across a wide area. Because the devices are far apart and spread among the sound sources, each microphone gets a different mix of sounds. Using spatial signal processing algorithms, the researchers can combine those different noisy sound mixes to identify an individual sound as well as where it’s coming from, even in loud noise. The team has used similar methods to enhance hearing for group conversations in noise, including an experiment with the high-end ceiling microphones in DPI’s classrooms.
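The basic idea behind that kind of spatial processing can be illustrated with a delay-and-sum beamformer. The sketch below is a simplified illustration, not the Listening Technology Lab's actual algorithm; it assumes known microphone positions, a shared sample clock, and a candidate source location, none of which come from the article.

```python
# Minimal delay-and-sum beamformer: a simplified illustration only, not the
# research team's method. Assumes known microphone positions, a shared
# sample clock, and a candidate source location.
import numpy as np

SPEED_OF_SOUND = 343.0  # meters per second, roughly, at room temperature


def delay_and_sum(signals, mic_positions, source_position, sample_rate):
    """Time-align each channel to a candidate source location and average.

    signals: (num_mics, num_samples) array of synchronized recordings
    mic_positions: (num_mics, 3) microphone coordinates in meters
    source_position: (3,) candidate source coordinates in meters
    """
    signals = np.asarray(signals, dtype=float)
    mic_positions = np.asarray(mic_positions, dtype=float)
    # Propagation delay from the candidate source to each microphone, in samples.
    distances = np.linalg.norm(mic_positions - np.asarray(source_position), axis=1)
    delays = np.round(distances / SPEED_OF_SOUND * sample_rate).astype(int)
    # Advance each channel so the candidate source's contribution lines up,
    # then average: sound from that location adds coherently, noise does not.
    relative = delays - delays.min()
    aligned = np.stack([np.roll(sig, -d) for sig, d in zip(signals, relative)])
    return aligned.mean(axis=0)
```

Scanning many candidate locations and keeping the one whose beamformer output has the highest power is one simple way to both enhance a sound and estimate where it is coming from.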

Practical challenges
Corey says that in the real world, we usually can’t use high-end ceiling microphones for an acoustic sensor network. However, nowadays we’re surrounded by network-connected microphones in our cell phones, wearables, and smart-home devices.

“If we could tap into those ubiquitous devices,” Corey said, “we could have a powerful sensor network everywhere we go.”

He points out a problem, though: most of those devices use low-quality microphones, and they can’t stream audio all the time without quickly draining their batteries. They also aren’t synchronized: each one runs on its own “clock,” so the signals don’t line up perfectly.
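To make the synchronization issue concrete, here is a minimal, hypothetical sketch of estimating the time offset between two unsynchronized recordings of the same scene using cross-correlation. The article does not describe the team's actual approach, and real systems also have to track sample-rate drift over time rather than a single fixed lag.

```python
# Hypothetical sketch: estimate the fixed time offset between two devices'
# recordings of the same scene via cross-correlation. Real clock-drift
# compensation is harder; this only finds a single constant lag.
import numpy as np


def estimate_offset(reference, other):
    """Return the lag (in samples) by which `other` trails `reference`."""
    reference = np.asarray(reference, dtype=float)
    other = np.asarray(other, dtype=float)
    # Remove the means so the correlation peak reflects the shared signal.
    correlation = np.correlate(other - other.mean(),
                               reference - reference.mean(), mode="full")
    # In "full" mode, index (len(reference) - 1) corresponds to zero lag.
    return int(np.argmax(correlation)) - (len(reference) - 1)


# Tiny demonstration with a synthetic signal delayed by 25 samples.
rng = np.random.default_rng(0)
clean = rng.standard_normal(2000)
delayed = np.roll(clean, 25) + 0.1 * rng.standard_normal(2000)
print(estimate_offset(clean, delayed))  # expected output: 25
```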

In this project, Corey’s team will develop new algorithms that combine recordings from low-power, low-quality microphones that are not perfectly synchronized and might have limited network access. Ultimately, this will lead to new lightweight, low-cost, low-power, and high-performance acoustic sensing, even in crowds or places with a lot of background noise.

Improving hearing technology
This project complements Corey’s larger research program to improve assistive and augmentative hearing technology, such as hearing aids, wireless assistive listening systems, smart headsets, and mixed-reality experiences. The research team in the Listening Technology Lab uses signal processing and AI to make it easier for people with and without hearing loss to hear, communicate, and experience the world through sound.


Author: Discovery Partners Institute