Cheap and Effective Hearing Tests and Hearing Aids
Biplov Ale1,3, Dr. Ruchi Sharma (Au.D)2, Andrew Long1, and Steven Wilkinson1
1. Department of Mathematics and Statistics, Northern Kentucky University, Highland Heights, KY 41099
2. TriHealth, 625 Eden Park Drive, Cincinnati, OH 45206
3. Department of Physics, Geology, and Engineering Technology, Northern Kentucky University, Highland Heights, KY 41099
Many older adults need glasses as their vision fails, and those glasses are frequently covered by insurance (at least in the United States). Hearing may also start to fail, and hearing aids would provide a measure of improved hearing; in stark contrast, however, hearing aids are usually not covered by insurance, and are often out of financial reach for many. This problem is even worse in developing countries, where only a few can afford hearing assistance. Furthermore, the technology for hearing tests has advanced little over the fifty years that one author has had his hearing tested: one is still ushered into a sound-proof booth, where tones of various frequencies are played into one's ears. The patient signals whenever a tone is heard, and a certain amount of second-guessing occurs (sometimes based upon perceived patterns of sound input from the tester). This imparts an odd psychological match of wits between the tester and the testee, which likely has nothing to do with the patient's hearing.
We propose to develop a simple and inexpensive program that will allow one to test one’s own hearing, and then a simple hearing aid based on the profile generated by the hearing test.
- What causes hearing loss?
"Noise-induced hearing loss is caused by long-term exposure to sounds that are either too loud or last too long. This kind of noise exposure can damage the sensory hair cells in your ear that allow you to hear. Once these hair cells are damaged, they do not grow back and your ability to hear is diminished."
"Conditions that are more common in older people, such as high blood pressure or diabetes, can contribute to hearing loss. Medications that are toxic to the sensory cells in your ears (for example, some chemotherapy drugs) can also cause hearing loss."
Our objective in this project is two-fold:
- To create a simple Mathematica-driven hearing profiler (“Tester”), to identify those frequencies at which one needs correction.
- To create an inexpensive noise-cancelling device (either headphones or earbuds) to provide that correction, which is possible in one of two ways:
- Volume enhancement of frequencies not heard well by the patient
- Frequency shifting; remapping sounds to frequencies at which one still hears well
The project has two main phases: “Prototyping” and “Device Development and Miniaturizing”. For prototyping, we decided to go with the simple yet versatile approach of using an Arduino UNO board. The signal processing, hearing profiling, and frequency-shifting code will be written in Mathematica and run on a laptop, which connects to the Arduino board. The Arduino acts as the interface for components such as speakers, potentiometers, contact sensors, and microphones. The initial schematic for the prototype is illustrated below:
The next step, “Device Development and Testing”, involves initially designing the hearing aid device structure and scaling down the size of the components. It might also be necessary to convert the Mathematica code into another language more suitable for the hardware -- something public domain, such as Python or R. The code will be uploaded to the components, and the device's performance will be tested, along with any bug resolution.
We began by considering some of the currently available options for hearing testing apps and devices, to see if our objectives were already being met by existing technologies. While we found some interesting options, nothing was doing exactly what we intended to do. We might mention, in particular, a few of the better options:
- Hearing Test: Audiometry, Tone: This one essentially attempts to reproduce the results of a standard hearing test (see the figures above).
Audiograms are visual representations of a patient's state of hearing ability (we will call this a hearing "profile").
Figure: Average pure tone audiograms for hearing-impaired and normal-hearing subjects, averaged over both ears. Error bars denote standard deviation. (source)
Unfortunately, changes are usually in the wrong direction. As one can see, patient A's curves are "descending", meaning that it takes more decibels for a particular frequency to be heard. The 2021 results for patient A show severe hearing loss at the highest frequencies.
Notice that there is a profile for each ear, left and right. Sometimes hearing loss is symmetric, but it needn't be; each ear should be tested individually.
The Profile Function
For each individual there is a left-ear and a right-ear threshold profile function, $T_L(f)$ and $T_R(f)$, which the audiogram seeks to capture. Let's assume that the hearing loss is symmetric, i.e. $T_L(f) = T_R(f) = T(f)$. We argue that traditionally we have done a poor job of modeling this function $T(f)$: replacing (what we might imagine is generally) a smooth continuous function with a linear spline built on only a few data points:

$T(f) \approx \ell_i(f)$ for $f \in [f_i, f_{i+1}]$,

where by $\ell_i(f)$ we mean the linear spline (line segment) joining two adjacent data points obtained from the hearing test, e.g. $(f_i, T(f_i))$ and $(f_{i+1}, T(f_{i+1}))$.
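The linear-spline model we are criticizing can be sketched in a few lines; this is a minimal Python illustration (the project itself uses Mathematica) built on hypothetical audiogram data:

```python
# Piecewise-linear audiogram model: a sketch of the traditional approach,
# using hypothetical threshold data (frequencies in Hz, thresholds in dB HL).
def linear_spline(points):
    """Return a function interpolating (f, T(f)) pairs with line segments."""
    pts = sorted(points)
    def T(f):
        if f <= pts[0][0]:
            return pts[0][1]
        if f >= pts[-1][0]:
            return pts[-1][1]
        for (f0, t0), (f1, t1) in zip(pts, pts[1:]):
            if f0 <= f <= f1:
                return t0 + (t1 - t0) * (f - f0) / (f1 - f0)
    return T

# Hypothetical audiogram: mild loss rising sharply at high frequencies
profile = linear_spline([(250, 10), (500, 10), (1000, 15),
                         (2000, 20), (4000, 40), (8000, 60)])
print(profile(3000))  # midpoint of the 2000-4000 segment -> 30.0
```

Between the sampled frequencies, the model can only guess with a straight line -- which is precisely our objection.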
A pure tone can be represented by two parts: the frequency $f$ and the amplitude $A$:

$s(t) = A \sin(2 \pi f t)$

The loudness of the tone is a function of the amplitude, proportional to the square of the amplitude:

$L \propto A^2$
However, this is a function of time: we are familiar with the interference of waves, which results in audible beats -- we hear sounds as pulses.
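As a quick check of the amplitude-energy relationship, here is a small Python sketch (the project code itself is in Mathematica) showing that doubling a pure tone's amplitude quadruples its mean energy:

```python
import math

def pure_tone(freq, amp, sample_rate=8000, duration=0.5):
    """Samples of A*sin(2*pi*f*t): a pure tone of frequency f, amplitude A."""
    n = int(sample_rate * duration)
    return [amp * math.sin(2 * math.pi * freq * k / sample_rate)
            for k in range(n)]

def mean_energy(samples):
    """Mean squared amplitude, proportional to the tone's loudness."""
    return sum(s * s for s in samples) / len(samples)

quiet = mean_energy(pure_tone(440, 1.0))
loud  = mean_energy(pure_tone(440, 2.0))
print(loud / quiet)  # ~4: doubling the amplitude quadruples the energy
```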
- How does Mathematica turn "loudness" into decibels, especially since we're passing through speakers, which allow us to also adjust loudness?
- How do loudness levels translate into decibels?
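The second question has a standard physics answer, sketched below (whether Mathematica and the speaker chain use the same reference level internally is exactly the open question above): decibels are a logarithmic measure relative to a reference, with the factor 20 for amplitudes because power goes as amplitude squared.

```python
import math

def amplitude_ratio_to_db(a, a_ref):
    """Standard conversion: dB = 20*log10(A/A_ref).
    The factor is 20 (not 10) because energy goes as amplitude squared."""
    return 20 * math.log10(a / a_ref)

print(amplitude_ratio_to_db(2, 1))   # ~6.02 dB: doubling the amplitude
print(amplitude_ratio_to_db(10, 1))  # 20 dB
```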
Adaptively Sampling Frequencies for the Profile Function
We use adaptive techniques to explore the frequency space at which dramatic changes occur. We do this in order to identify potentially significant structural issues with one's hearing, as well as to provide the best response from the hearing aid.
We built the adaptive method off of an adaptive plotting routine: where the function is essentially flat (has little slope), we assume that there is little change in the profile; but whenever there is a dramatic drop between frequencies, the adaptive method provokes a study in the vicinity of those areas.
With our Arduino-based tester, one listens to a frequency until one can just barely hear it (the threshold); a button is pressed to indicate that the sample should be registered, and the testing continues on to the next frequency. We might retest once or twice again, to get an average -- we expect there to be some variation in one's perception of the threshold.
Then we do a second pass, with a slight variation: the pitch drops from above the reported threshold from the first pass (double what the patient reported as first audible), and then we record the volume at which the subject reports losing contact with the frequency. Then we perform whatever magic we're going to perform to obtain each threshold hearing value for each frequency.
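One plausible version of that "magic" -- purely an assumption on our part, not a settled choice -- is to average the ascending-pass reading (first barely audible) with the descending-pass reading (contact lost) at each frequency:

```python
def threshold_estimate(ascending_db, descending_db):
    """One plausible combination rule: average the level at which the tone
    first became audible (ascending pass) with the level at which it was
    lost (descending pass).  Both readings are in dB."""
    return (ascending_db + descending_db) / 2

# Hypothetical readings at a single frequency
print(threshold_estimate(42.0, 38.0))  # -> 40.0
```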
Then, if there are any intervals that cause problems, we go a little deeper. Each interval requiring refinement (in the worst case, all 6 of them) is split into two subintervals, left and right of the interval's midpoint. So if one's hearing is wildly varying, then one might have another 6 × 2 = 12 tests to conduct at other frequencies, for a total of 25. But these are people who are likely going to require hearing assistance, so we are basically doubling the cost for those requiring treatment over those who do not.
- We start our estimate of the profile function with (some of) the standard frequencies one frequently sees on audiograms (all frequencies in Hertz): 125, 250, 500, 1000, 2000, 4000, and 8000.
(See Figure Mom, an audiogram from 2022, top row.)
You'll notice from the audiogram that a second set of frequencies -- midpoints of some of the intervals determined by the first set -- appear in the second row: 750, 1500, 3000, and 6000.
- Thus we start with 7 frequencies (creating six intervals), which we test in the usual way.
- Then the adaptive scheme kicks in: on each of the six subintervals, we invoke recursive testing, the first step of which is testing of the midpoints of the intervals; hence we pick up the next 6 frequencies, for a total of 13. That may be all: if your hearing is normal, then you should be done. So everyone has to do 13 tests.
- The adaptive scheme has two stopping criteria. It is built off a measure of the goodness of fit between a straight line and the midpoint value: if the midpoint value falls right along the line connecting the two endpoints, then the routine stops, and refinement ends; if, however, the midpoint value varies too greatly (as determined by some $\epsilon$-value of our own choosing), then further refinement occurs, and the interval is split into two at the midpoint (and so on...).
Another stopping criterion is the number of subdivisions allowed -- testing may be tedious, and we don't want to overtax our patient's patience.
If someone's hearing is highly variable, then we recommend further exploration (which corresponds to a smaller $\epsilon$-value and a larger number of permitted subdivisions); or the user may request additional refinement for any reason whatsoever (perhaps to get a better understanding of their cochlea's response to various frequencies). Since we have stored the user's profile, they can load it, and the adaptive scheme can continue to subdivide intervals further (e.g. more iterations can be performed, or $\epsilon$ reduced).
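The adaptive scheme can be sketched in a few lines of Python (the project uses Mathematica); a stand-in profile function substitutes for the actual listening test, and the values of $\epsilon$ and the maximum subdivision depth are illustrative:

```python
def adaptive_sample(test_threshold, f_lo, f_hi, t_lo, t_hi,
                    eps=5.0, max_depth=3, depth=0):
    """Recursively refine [f_lo, f_hi]: test the midpoint; if its threshold
    deviates from the straight line through the endpoints by more than eps
    (in dB), split the interval and recurse.  Returns the sampled
    (frequency, threshold) pairs; test_threshold stands in for a real test."""
    f_mid = (f_lo + f_hi) / 2
    t_mid = test_threshold(f_mid)
    samples = [(f_mid, t_mid)]
    line_mid = (t_lo + t_hi) / 2  # straight-line prediction at the midpoint
    if abs(t_mid - line_mid) > eps and depth < max_depth:
        samples += adaptive_sample(test_threshold, f_lo, f_mid, t_lo, t_mid,
                                   eps, max_depth, depth + 1)
        samples += adaptive_sample(test_threshold, f_mid, f_hi, t_mid, t_hi,
                                   eps, max_depth, depth + 1)
    return samples

# Stand-in profile: flat at low frequencies, sharp loss above 4000 Hz
profile = lambda f: 10.0 if f < 4000 else 60.0
pts = adaptive_sample(profile, 2000, 8000, profile(2000), profile(8000))
print(sorted(p[0] for p in pts))
```

Note how the tested frequencies cluster around the 4000 Hz drop, exactly where the linear model fits worst.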
The recurrence relation for the maximum potential number of frequencies tested can be solved in closed form:

$N_{k+1} = 2N_k - 1$, with $N_0 = 7$: in this case, the first few iterates are 7, 13, 25, 49....

For $m$ iterations, $N_m = 6 \cdot 2^m + 1$ --
doubling the cost per iteration, basically. People with time on their hands (and we, in the method/testing phase) should do lots, to get a good picture of one's cochlea. Perhaps we will discover that audiologists should be doing a lot more testing, on more refined scales. And this could feed into better treatment: we might find very specific frequency ranges which need to be moved for a particular patient. A "cookie cutter frequency loss" is just a crude version of what we might find on a much finer scale.
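The recurrence and its closed form are easy to check numerically:

```python
# Worst-case test counts: N_{k+1} = 2*N_k - 1 with N_0 = 7,
# which solves in closed form to N_m = 6*2**m + 1.
def n_recurrence(m):
    n = 7
    for _ in range(m):
        n = 2 * n - 1
    return n

print([n_recurrence(m) for m in range(4)])  # [7, 13, 25, 49]
print([6 * 2**m + 1 for m in range(4)])     # the closed form agrees
```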
Interpolating or Estimating the Profile Function
Once the sampling has occurred, we construct the profile function using one of a variety of techniques. One strategy is to compute an interpolator, such as a spline. The simplest spline is linear, but we don't expect the ear to evidence such dramatic slope discontinuities along the frequency spectrum. We chose to use a Hermite spline, which fits both points and (numerically approximated) derivatives of a data set.
Alternatively, one could smooth the profile function.
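A self-contained cubic Hermite interpolator, with derivatives approximated by finite differences as described above, might look like this in Python (hypothetical data; the project code is in Mathematica):

```python
def hermite_spline(points):
    """Cubic Hermite interpolant through (f, T) pairs, with slopes
    estimated by central finite differences (one-sided at the ends)."""
    pts = sorted(points)
    fs = [p[0] for p in pts]
    ts = [p[1] for p in pts]
    n = len(pts)
    m = [0.0] * n  # finite-difference slope estimates
    m[0] = (ts[1] - ts[0]) / (fs[1] - fs[0])
    m[-1] = (ts[-1] - ts[-2]) / (fs[-1] - fs[-2])
    for i in range(1, n - 1):
        m[i] = (ts[i + 1] - ts[i - 1]) / (fs[i + 1] - fs[i - 1])
    def T(f):
        if f <= fs[0]:
            return ts[0]
        if f >= fs[-1]:
            return ts[-1]
        i = max(k for k in range(n - 1) if fs[k] <= f)
        h = fs[i + 1] - fs[i]
        u = (f - fs[i]) / h
        # the four Hermite basis polynomials on [0, 1]
        h00 = 2*u**3 - 3*u**2 + 1
        h10 = u**3 - 2*u**2 + u
        h01 = -2*u**3 + 3*u**2
        h11 = u**3 - u**2
        return h00*ts[i] + h10*h*m[i] + h01*ts[i+1] + h11*h*m[i+1]
    return T

smooth_profile = hermite_spline([(250, 10), (1000, 15), (4000, 40), (8000, 60)])
print(smooth_profile(1000))  # interpolates the data exactly -> 15.0
```

Unlike the linear spline, this curve has a continuous derivative, matching our expectation that the ear's response varies smoothly.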
There are several different approaches to "correcting" hearing. The traditional idea is to reinstate the sounds that a patient is missing, by augmenting volume. This is a somewhat crude approach: we have all been witnesses to the sad spectacle of some hearing-impaired person being shouted at by a friend or loved one.
Our approach is to retain a sound signal's uniqueness. In particular, we wanted to avoid aliasing -- two distinctly different sounds having the same profile to a patient. In effect, deafness is the most dramatic example of this problem: all sounds sound the same (they are all equally unheard; equally silent), and so no information can be conveyed aurally (although information may be conveyed by lip reading, digital speech translators, or other means).
We have jokingly referred to a case where a colleague's spouse can no longer hear his daughter's voice, because it's pitched in frequencies which he can't hear; so why not turn her voice down an octave or two, thus making it more accessible (even if his daughter ends up sounding like Darth Vader)?
What one hears -- or rather, what impinges upon one's ears -- is an analog (continuous) combination of frequencies, with time-varying amplitudes:

$s(t) = \sum_i A_i(t) \sin(2 \pi f_i t)$

We would replace that sound with a corrected sound:

$\hat{s}(t) = \sum_i B(f_i) A_i(t) \sin(2 \pi R(f_i) t)$

where a person's threshold profile dictates (in some fashion)

- the bump function $B(f)$, which tells us how to alter (typically augment) the volume of particular frequencies; and
- the rule function $R(f)$, which tells us how to handle each frequency (e.g. leave it alone, or shift it, or split it, add light intensity and color, etc.).
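Once the signal is digitized into (frequency, amplitude) components, the corrected signal is straightforward to compute; this Python sketch uses hypothetical component data and illustrative bump/rule choices:

```python
import math

def corrected_signal(components, bump, rule, t):
    """s_hat(t) = sum_i bump(f_i) * A_i * sin(2*pi * rule(f_i) * t):
    each component's volume is scaled by the bump function and its
    frequency remapped by the rule function."""
    return sum(bump(f) * a * math.sin(2 * math.pi * rule(f) * t)
               for f, a in components)

# Hypothetical signal: a 1000 Hz and a 6000 Hz component
components = [(1000, 1.0), (6000, 0.5)]
identity = lambda f: f                        # rule: leave frequencies alone
double_high = lambda f: 2.0 if f > 4000 else 1.0  # bump: boost above 4000 Hz
# With these choices the 6000 Hz component is played twice as loud:
print(corrected_signal(components, double_high, identity, 0.0001))
```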
As examples, consider these two scenarios:
- Volume correction alone (the typical strategy of hearing aids):
The bump function leaves frequencies below 4000 Hz unaltered, but augments sounds above 4000 Hz by a weight based on one's profile, e.g. $B(f) = 2$ for $f > 4000$. This particular (and very simple) weight function would quadruple the energy ("loudness") of the frequencies above 4000 Hz.
The rule function leaves frequencies unaltered, $R(f) = f$: we simply pump up the volume, based on the bump function.
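A minimal sketch of this volume-only scenario (the 4000 Hz cutoff and the factor of 2 are the illustrative values from above):

```python
def bump(f):
    """Sketch of the simple weight described above: leave f <= 4000 Hz
    alone, double the amplitude above 4000 Hz (quadrupling the energy)."""
    return 2.0 if f > 4000 else 1.0

print(bump(1000), bump(6000))  # 1.0 2.0
print(bump(6000) ** 2)         # energy scales as amplitude^2 -> 4.0
```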
- Frequency correction alone:
Alternatively, one might leave volumes unaltered, but shift frequencies down where they are too high for the patient to hear (e.g. if the patient can't hear high frequencies):
The bump function leaves energies intact, $B(f) = 1$; meanwhile
the rule function leaves frequencies below 4000 Hz unaltered, but pitches above 4000 Hz fall by octaves until they land in the highest octave of frequencies (between 2000 and 4000 Hz) that the patient can actually hear. This results in an aliasing of those frequencies.
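A minimal sketch of this octave-dropping rule function (again with the illustrative 4000 Hz cutoff); note that it exhibits exactly the aliasing just mentioned, since e.g. 2500 Hz and 5000 Hz map to the same pitch:

```python
def rule(f):
    """Drop pitches above 4000 Hz by octaves (repeated halving) until they
    land in the highest audible octave, (2000, 4000]; leave the rest alone."""
    while f > 4000:
        f /= 2
    return f

print(rule(1000))  # 1000: untouched
print(rule(5000))  # 2500.0: one octave down
print(rule(9000))  # 2250.0: two octaves down
```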
In fact, we frequently digitize the signal first, so that we end up with a correction that combines corrected volumes and frequencies, and which sounds like this:
One of our objectives is to maintain the "apparent energy" of the signal (that is, the energy as perceived by the patient). We all know that there are signals that drive some individuals crazy -- because they can detect them -- while others are oblivious. Dogs, for example, are frequent victims of our sounds (e.g. sirens). The important thing is that loud sounds should alert (sometimes alarm) the patient more than soft sounds -- no matter the frequencies of the actual sounds.
It would be wonderful if one's hearing aid could determine the precise location of the desired sound, and then focus on sounds emitted from that location in particular. In order to achieve this, one might monitor the eyes, and the point upon which they're focusing. This suggests that a helmet might be the better model for the future, albeit one which is as discreet as possible. While we're at it, we might envision an app for lip-reading, to help facilitate the collection of speech, which might even be fed back through Mathematica and then read aloud using more distinguishable speech (e.g. speech in frequencies easier for the listener to hear).
Everyone should have access to cheap and readily available hearing tests and hearing aids. Hearing is no less important than vision, and the loss of hearing has a profound impact on one’s ability to socialize and remain an engaged member of one’s society. This research should help to overcome that by providing a simple diagnostic audiogram, which can then be integrated with a relatively cheap, accessible, and effective hearing aid.
This work was partially funded by a 2022 Summer Greaves Research Award (Ale).
Thanks to Department of Mathematics and Statistics Chair Brooke Buckley for departmental support.
- Hochberg, Julian E. Perception. 1978. 2nd Edition. Prentice-Hall Foundations of Modern Psychology Series, Richard S. Lazarus, Editor.
- Freeman, Ira M. Sound and Ultrasonics. 1968. Illustrated by George T. Resch. Random House Science Library, New York.
- Simple wav sound files for analysis
- How to Read an Audiogram
- Audiograms are used to diagnose and monitor hearing loss.
- Audiograms are created by plotting the thresholds at which a patient can hear various frequencies.
- Details from that page, including a somewhat disparaged formula for computing hearing loss
- Presbycusis, or age-related hearing loss:
- "The low and mid frequencies of human speech carry the majority of energy of the sound wave. This includes most of the vowel information of words (figure 4). It is the high frequencies, however, that carry the consonant sounds, and therefore the majority of speech information. These consonant sounds tend to be not only high pitched, but also soft, which makes them particularly difficult for patients with presbycusis to hear. As a result of their hearing loss pattern, patients with high-frequency hearing loss will often report being able to hear when someone is speaking (from the louder, low-frequency vowels) but not being able to understand what is being said (due to the loss of consonant information)."
- According to the NIH, "Hearing aids are electronic instruments you wear in or behind your ear. They make sounds louder." (our emphasis)
- Music and Noise -- a lovely discussion, from The Physics Hypertextbook