USU Audiology Professor Awarded NIH Grant to Use AI to Identify Hearing Diseases

Professor Kamerer works hands-on with audiology graduate student at USU.
Aryn Kamerer, assistant professor in the Department of Speech and Hearing Sciences, has been awarded a three-year $500K Early Career Research grant from the National Institute on Deafness and Other Communication Disorders within the National Institutes of Health (NIH). The research team aims to use artificial intelligence (AI) to distinguish auditory diseases in the inner ear and brain.
“Dr. Kamerer is an incredible scientist with the ability to see novel research directions to help people with hearing problems,” said Karen Muñoz, professor and department head of Speech and Hearing Sciences at USU. “Hearing is a complex process and her work has the potential to augment tools and procedures in the future that allow hearing care professionals to provide targeted interventions and further improve outcomes.”
Kamerer describes precise hearing loss diagnoses as critical both to providing better care for people with hearing loss and to developing targeted drug therapies, which could cure certain types of hearing loss.
The problem rests with the outmoded testing methods used to diagnose hearing disorders. “The primary test that is used to give information to audiologists about a person’s hearing hasn’t changed since the 1940s,” Kamerer explained. This test requires patients to raise their hand when they hear a beeping sound, which tells audiologists whether their patients can hear quiet sounds, but very little else.
Kamerer explains that this simple test is unable to tell audiologists where in the auditory system the breakdown is occurring. It could be in various places in the inner ear, in the brain stem, or in the cortex of the brain. Despite not knowing the source of the problem, audiologists fit their patients with hearing aids to see if their hearing improves.
“Hearing aids do a great job of replacing the function of a very specific type of cell in the inner ear called hair cells,” said Kamerer. “The hearing aid works by replacing those hair cells that are damaged. The problem is that we don’t have tests to know whether just those hair cells are damaged. It is likely that people who don’t find much benefit from their hearing aids have underlying pathologies that hearing aids are not meant to treat but that we have no tests to find.”

Dr. Aryn Kamerer
The simple testing method also impacts the development of new drugs. Kamerer explains that there are many clinical trials going on right now for pharmaceuticals that could likely cure certain diseases that cause hearing loss, but with no good tests to know who has a particular pathology, researchers can’t select the best pool of participants for the clinical trials. Instead, they include anyone who has some type of hearing loss but no specific diagnosis. The trial results vary dramatically, which naturally scares off investors. As a result, most drugs don’t go through final trials and are never brought to market.
“The future of hearing loss treatment is in pharmaceuticals,” said Kamerer. “Four years ago, there were four clinical trials registered on clinicaltrials.gov. Today, there are 40 clinical trials.”
Kamerer believes there is hope. During her postdoctoral program, she served on a research team where one of her roles was manually marking features on thousands of human auditory waveforms collected through EEGs (electroencephalograms).
“Because we wanted to reduce human error,” she said about the research project, “we had another person in the lab do the same thing I was doing. Then, if there were any differences, we had a third person arbitrate. It was hours and hours of work for multiple people. There were literally thousands of data points, and I began asking myself why we haven’t developed an automated way to do this work.”
Recalling mathematical modeling components she used in a statistics class during graduate school, Kamerer recognized each peak and valley in the waveform as the common Gaussian curve. She then used the Gaussian curve as the basis for a new model that would automate the work she was doing.
“I wrote a mathematical formula that adds a bunch of Gaussian shapes together,” she explained. “When you add them together it beautifully recreates what looks like the auditory waveform. People have written models like this to look at single waves, but no one has ever modeled the entire thing by just adding the waves together.”
Kamerer continued, “My method automatically pulls out much more data than the human eye can see. It picks out where the peaks are in time, determines how tall and wide they are, measures the area, and determines when the peak starts going up. It also measures the curvature. You just hit a button, and it does it in less than a second.”
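The idea Kamerer describes can be illustrated in a few lines of code. The sketch below is a hypothetical, simplified illustration of the general technique, not her actual model: it fits a sum of Gaussian curves to a synthetic two-peak waveform, then reads off each peak’s latency, height, width, and area from the fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(t, *params):
    """Sum of Gaussians; params is a flat list of (amplitude, center, width) triples."""
    y = np.zeros_like(t)
    for i in range(0, len(params), 3):
        a, mu, sigma = params[i:i + 3]
        y += a * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2))
    return y

# Synthetic "waveform" with two overlapping peaks, standing in for an auditory trace.
t = np.linspace(0, 10, 500)
y = gaussian_sum(t, 1.0, 3.0, 0.5, 0.6, 6.0, 0.8)

# Fit the Gaussian-sum model; initial guesses would normally come from a rough peak search.
fit, _ = curve_fit(gaussian_sum, t, y, p0=[0.8, 2.5, 0.4, 0.5, 6.5, 0.6])

# Each fitted triple yields the features described above: latency, height, width, area.
for i in range(0, len(fit), 3):
    a, mu, sigma = fit[i:i + 3]
    area = a * sigma * np.sqrt(2 * np.pi)  # analytic area under one Gaussian
    print(f"peak at t={mu:.2f}, height={a:.2f}, width={sigma:.2f}, area={area:.2f}")
```

Because the fit is an explicit formula rather than a hand-placed marker, all of these quantities come out of a single button press, which is the speedup Kamerer describes.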
With this grant project, Kamerer is hopeful she has created a way to quickly analyze huge quantities of waveform data. And, by harnessing the power of AI, she is hopeful the new technology will be able to distinguish among specific pathologies.
The research team on this project comprises Kamerer as principal investigator; Kevin Moon, director of the Data Science and AI Center and associate professor in the Department of Mathematics and Statistics at USU, as co-investigator; Jarrod Mau, doctoral student in Mathematical Sciences and machine learning specialist; and Will Allen, an undergraduate research assistant who will begin his audiology doctoral degree in the fall. Additionally, Kamerer has another co-investigator, Jefferey Lichtenhan, an expert in animal physiology at the University of South Florida.
The team is using data collected from animals that already have a known pathology. The data are provided by 12 research labs from across the country that are studying various auditory pathologies. “I wanted to begin with animal data rather than human data because we know all these animals have a pathology, which allows me to test in a closed loop. These labs all do the same hand analysis, and they say that they’re seeing these waveforms that look wild in these animals, but they have no way to quantify it because all they have is an eyeballed marker,” explains Kamerer.
The research team will give AI thousands of animal waveforms in addition to Kamerer’s model and ask it to pull out all the data. “We won’t tell AI what the pathology is or if there even is a pathology. This is known as ‘unsupervised learning,’ and it’s quick,” explained Kamerer. “The more data you give AI, the better it works because it gets smarter and more accurate. We’ll give it the data and then tell it to group the animal waveforms by what looks similar. Then we can go in and say the similar thing is the pathology.”
She continued, “In this three-year study we’re training AI to find a pathology. At the end of this grant, we’ll start on human research. We’re already collecting data from human labs across the United States.”
For now, Kamerer and her team are focused on their research goals, one of which is to turn Kamerer’s model into a tool that clinicians can use to automatically analyze their data, saving them time and sparing them the difficult task of interpreting markers to identify where in the auditory system the problem lies.
The primary goal, though, is to provide precise diagnoses for researchers. “I want to make the model available for researchers doing clinical trials for pharmaceutical drugs,” said Kamerer. “If they can use it to get a better idea of what their participants’ actual problem is and whether their pharmaceutical treatment is going to target that problem, it’s going to help them find a better pool for their trials. I’m hopeful. I think within 10 to 15 years we are going to have drugs that give people their hearing back.”