Perceptions and Prescriptions

I’m currently 24 years old and have worn hearing aids for the majority of my life. I still wear them with pediatric DSL prescriptive targets.

(For the non-audiologists out there, the quick and dirty explanation of hearing aid prescriptions is that these prescriptions represent different mathematical formulas used to determine how much amplification a hearing device should give to the user at each frequency range based on their hearing loss. Historically these approaches have differed, but in recent years they’ve converged somewhat – though methods for children, like what I’ve worn, still give a significant boost to ensure learning brains get all the information they need. The prescriptive methods “DSLv5” and “NAL-NL2” are today’s empirical, quantifiable standards that hearing care professionals use to ensure that those with hearing impairments are able to benefit from their devices as much as possible.)
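To make "mathematical formulas that determine amplification at each frequency" concrete, here's a toy sketch in Python. It uses the classic half-gain rule – a much older and simpler prescriptive approach than DSLv5 or NAL-NL2, whose actual formulas are far more involved – and the audiogram values are made up:

```python
def half_gain_rule(audiogram):
    """Toy prescriptive formula: prescribe gain equal to half the
    hearing loss at each frequency (the classic 'half-gain rule' --
    far simpler than real DSLv5 or NAL-NL2 calculations).

    audiogram: dict mapping frequency (Hz) -> threshold (dB HL)
    returns:   dict mapping frequency (Hz) -> prescribed gain (dB)
    """
    return {freq: loss / 2.0 for freq, loss in audiogram.items()}

# A hypothetical sloping high-frequency loss:
audiogram = {250: 20, 500: 30, 1000: 40, 2000: 55, 4000: 70}
print(half_gain_rule(audiogram))
# -> {250: 10.0, 500: 15.0, 1000: 20.0, 2000: 27.5, 4000: 35.0}
```

Pediatric DSL targets, as described above, would sit well above what an adult formula prescribes for the same audiogram – and that gap between prescriptions is exactly the difference I experienced.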

Now I’ve tried to change. On the advice of one audiologist who couldn’t believe how loud my hearing aids were relative to my loss and age, I used NAL-NL2 for a few months. I got to the point where I was fully adapted to the change in sound and specifically the overall reduction of sound. I felt I was doing as well as I ever had in everyday circumstances and even in busy environments. I didn’t think I was missing any words or sounds that I would have caught before. Objectively, I was hearing as well as I was with my previous settings, and quiet environments actually felt more comfortable.

But I still switched back.

When you’ve trained your brain to hear things a certain way for 20 years, it’s hard to change. I was personally surprised at how well my sensory system could adapt to the reduced inputs of NAL-NL2, but I couldn’t get the rest of my brain to adapt the same way.

For example, imagine you’re in a busy restaurant trying to participate in an important conversation. Challenging, but not impossible. Now imagine you have to do the same with a pair of ear plugs in. This new challenge affects you before anyone says anything. Your heart rate rises, your eyes dart around looking for visual cues, and you feel on alert the whole time. It doesn’t matter what you’re actually hearing; all that matters is that you feel you’re going to have a difficult time hearing.

Communication challenges are anxiety-inducing. In fact, I’ve realized that even perceived communication challenges are anxiety-inducing. When I switched from my familiar DSLv5-child prescription to NAL-NL2, I wasn’t missing anything – but I couldn’t get over the fear that I might be missing things.

What have your experiences been as users trying other fitting methods, or as professionals attempting to transition young people to more adult targets?


What Bluetooth Really Means for Hearing Devices


Imagine you’re someone who relies on hearing devices and you find yourself in the following situation:

Your cell phone starts ringing – or rather you notice your cell phone’s ringing. You’re in a crowded shopping centre, so who knows how long it took for you to notice the ringtone. Your stomach sinks a bit as you realize how difficult it will be to hear your caller in this place. You lift the phone up and see it’s your boss. Your heartbeat speeds up and your skin goes cold with the thought of trying to hear them speak. You swipe to answer the call and you hold up your phone to your ear, but not really to your ear. Instead you have to find that perfect angle between the cell phone speaker and the hearing device microphone – the one that best transmits sound without causing too much feedback. You finally find it and you hear only the last part of their sentence. You reply with what you hope is the socially correct response. Between the background noise, feedback, and the fading of their voice as the phone drifts just millimetres away from the hearing device’s microphone, you only hear bits and pieces of what they’re saying. Your heart pounds with worry and stress that you might be missing something important.

The other scenario? Your cell phone rings, you hear it through your hearing devices. You swipe to answer and the speaker’s voice is right in your ears – perfectly clear. With one button press you can mute everything around you and just focus on the call.

As someone who’s been through both scenarios, I can absolutely tell you which one I prefer to be in.

The anxiety conveyed in the first situation may sound like pure hyperbole, but it’s not at all. For many people who rely on hearing devices, there’s not much more stressful than having to answer a phone call in a busy place while worrying that you’ll say the wrong thing or miss a critical piece of information. However, all of that stress is removed with today’s crop of Bluetooth-equipped hearing devices.

And this benefit doesn’t end with simple phone calls. Listening to music or podcasts, participating in conference call meetings, and making Skype calls are all potentially difficult situations that benefit from direct Bluetooth streaming. Even non-communicative tasks benefit; for example, I can look at my phone anytime and see exactly how much battery life my hearing devices have remaining. I can even have my devices send me a text when they get to 30%. Eliminating the surprise of a low-battery warning is a huge reduction in stress for me.

It’s common for people in the hearing health world to say these new devices act like Bluetooth headsets and wireless speakers – but that drastically undersells the impact these products have on human lives. These new devices aren’t just coming equipped with the capability to answer calls and stream music – they’re coming equipped with quality-of-life-improving, anxiety-reducing capabilities. They let users feel comfortable, relaxed, and prepared to meet their daily communication needs as they arise, free of stress and worry. Bluetooth-enabled hearing devices aren’t just a gimmick; they can have a real impact on the wellbeing of those who rely on hearing devices in their everyday lives.



Translators and Hearing Tech

There’s a good chance that you’ve already seen and been wowed by this video from Google’s latest product launch:

It looks pretty amazing, and countless articles have been written about how these new wireless headphones – the Pixel Buds – could change the world by providing real-time speech translation in the palm of everyone’s hand.

Imagine if your hearing devices could do that…

Could they? It’s not as though the translation we hear is happening in the earbuds themselves. Instead, it takes place on the phone and is relayed to the earbuds in near-real-time. So really, any Bluetooth-equipped headphones or hearing devices should be able to accomplish this in theory, right?

Of course I had to go ahead and try this myself. One unique advantage of the Pixel–Pixel Buds pairing is that the Swedish speaker in the video has the earbuds’ touch-button at her disposal. A single button push switches the audio output from the earbuds to the phone and changes the language the phone is expecting to hear. Any demonstration using other products will therefore need a couple of extra steps.

Using my Oticon Opn hearing devices, my iPhone 7, and the free Google Translate app, I made a simulation of what the Swedish woman from Google’s presentation would have to do to accomplish the same thing with these products:

If you’re having trouble following the video, imagine you’re the Swedish speaker from the Google clip, but the phone you’re holding is the one seen here. A man asks you a question and you hear the translation through your hearing devices. You then have to manually change the audio output from the hearing aids (named “Remington” on this screen) to the iPhone speakers before replying. The phone translates your response for him, and then you have to switch the audio output back to your hearing devices before he replies. This all repeats a couple of times.

I think this experiment demonstrates that using your hearing devices as real-time speech translators is absolutely doable.

Is it practical?

Not really.

But it’s so, so close – and that should be enough for all of us who work with hearing devices to be very excited.



Turn Down To What?

When you’re driving a car, you have a speedometer that tells you exactly how fast you’re going. You also have road signs that tell you how fast you should be going. You can compare the two and adjust your speed to match the safe speed indicated by the road sign.

Now imagine your speedometer is broken. How do you know how fast to go? You might try to keep up with those around you and unwittingly go too fast. Or maybe you’ll get a little lead-footed when you hit the open road. There’s no denying that fast is fun.

Loudness is fun too. But we currently drive without speedometers when it comes to enjoying sound.

You can put up as many posters as you want telling people how long they can safely listen at a certain volume, but how many people have any idea how loud the sound they’re listening to really is? At least an experienced driver has some idea what moving at 100 km/h looks like. How many people know what 80 dB SPL feels like?

When we hear “turn down the volume,” that’s about the same as only having highway signs that say “slow down.” You could be driving 140 km/h, slow down to 130 km/h, and not really be a whole lot safer. Or maybe you’re only doing 90 km/h and slow down to 80 km/h. You were safe already, but now you’re just enjoying the experience less.

What about bringing attention to the possible consequences of excessive noise? Some have tried to use images of hearing aids to discourage young people from listening to music with positive results (if your idea of positive is stigmatizing hearing aids further). What would the equivalent be in our driving metaphor? Billboards full of photos of crashes and ambulances? Maybe that’d change behaviour a bit, but it still doesn’t offer any gauge of how fast you’re currently driving.

A different approach is using devices that limit volume. Just as a car might have a governor that limits its max speed, you can also purchase headphones with a limit on their maximum volume. However, there’s nothing stopping a person from swapping out their current car, or pair of headphones, for one with a higher top end.

Some devices, such as Samsung Galaxy smartphones, offer a “soft-governor” approach: a warning appears when you turn the volume above a certain level. I think this helps, but it’s still far from perfect. I’m not sure a warning light on my dash would do much to slow me down.

There’s no replacement for a speedometer in a car, just as there’s no replacement for actually measuring the level of sounds that might be damaging. In open spaces, such as concerts and sporting events, this shouldn’t be too hard. There are a ton of sound level meter mobile apps that can do this. In theory, if you know how loud your surroundings are, and you know how well your earplugs attenuate sound, you can easily figure out how long you’ll be safe for. However, there are a couple of hiccups here. These sound level apps are often inaccurate, and readings can vary between apps and devices. As well, the earplug attenuation rating is usually a best-case scenario. Few people insert earplugs properly and consistently.
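That “easy” calculation is worth spelling out. A common rule of thumb is the NIOSH recommended exposure limit: 85 dBA is considered safe for 8 hours, and every 3 dB above that halves the safe time. Here’s a sketch, assuming the measured level and the earplugs’ real-world attenuation are both accurate – which, per the hiccups above, they often aren’t:

```python
def safe_exposure_hours(level_db, attenuation_db=0.0):
    """Safe listening time under the NIOSH recommended exposure limit:
    85 dBA is allowed for 8 hours, and every 3 dB above that halves
    the allowed time (the 3-dB exchange rate)."""
    effective = level_db - attenuation_db
    return 8.0 / (2 ** ((effective - 85.0) / 3.0))

# A 100 dB concert with no protection: only about 15 minutes.
print(round(safe_exposure_hours(100) * 60))    # minutes
# The same concert with earplugs giving a real 20 dB of attenuation:
print(round(safe_exposure_hours(100, 20), 1))  # hours
```

The math is the easy part; the hard part, as described above, is trusting the two numbers you feed into it.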

The problem gets more complex when looking at headphone use. The level registered by a sound level meter app on your phone is very different from the actual sound level when all that sound is focused in your ear canal. I’ve maxed out my headphones and held them against my phone’s mic, and the sound level app I was using told me the level was fine – but the loudness of those headphones in my ears indicated otherwise. The opposite can happen too: you might have your volume very high, but because of poor-fitting devices, some of the sound leaks out.

In a clinical or research setting, accurate real-time sound level measurements are done with sophisticated measuring equipment like probe tubes and standardized 2 cc couplers. Is there a way to bring this to everyday use? I don’t know. I have some ideas, but I’m not sure how feasible they are. Maybe the solution lies in the hands of those designing and building the next generation of hearable technology. For example, what if Apple’s next generation of AirPods featured integrated microphones that told your phone in real time how much sound your ears are being subjected to?



(I apologize if my title, a bad reference to an outdated song, offended your sensibilities. It offended mine too.)

Are We Missing The Boat When It Comes to Hearables?

Hearables are here. All of the big tech manufacturers you know – Apple, Samsung, Bose – and many start-ups you may not know yet – Bragi, Doppler, RippleBuds – are designing and selling all sorts of devices with incredible capabilities. Some of these devices let you focus on conversations in busy environments, maintain clear phone conversations, monitor heart rate and fitness, or countless other things – all at far more affordable price points than conventional hearing aids.

Thus far, the response from the hearing health industry has been… basically nonexistent? Sure, most of the big hearing aid manufacturers now offer made-for-iPhone products, as well as their own accessory devices. Oticon’s IFTTT integration is interesting as well, but so far none have introduced anything that could be described as a true hearable. Instead, they’re still limiting themselves to hearing aids that happen to have some of the same capabilities as hearables.

But that’s not really enough in my opinion. Right now the average age of hearing aid adoption is 63, and the majority of first-time hearing aid users have been aware of their hearing loss for several years beforehand. This presents a unique opportunity for both hearing aid manufacturers and audiology practices to get their brands into the consciousness of those who are ready to adopt some technology, but not quite ready for conventional hearing aids.

Let’s imagine a scenario to demonstrate what this might look like from the manufacturer standpoint. A hearing aid company designs and builds a hearable product that does similar things to other products out there. Maybe it doesn’t amplify in exactly the same way as their conventional HAs, but it addresses the needs of those with milder hearing loss by having programs for hearing in noise, adjustable directionality, media streaming at safe volume levels, and making speaking on a phone easier. The consumers embracing this product become familiar with the brand and hopefully form a strong positive association with it. They download the associated apps to their phones and tablets, and see the brand’s logo every day. Through their increased awareness of this brand, they may become aware of the manufacturer’s other offerings. In their minds, they may arrive at the internal belief of “This product meets my needs for the time being, but eventually I know I’ll need to upgrade to the hearing aid product from this manufacturer.” In this scenario, the hypothetical customer is not someone who would be purchasing hearing aids at this point in their life, so the manufacturer is not taking business away from themselves. Rather, it’s likely that if this customer didn’t purchase a product from the manufacturer, they would meet their needs by buying a product from a hearable company.

Many of these points are paralleled when examining the audiology practice’s standpoint. Typically, younger patients leave the clinic having been told “Your hearing is poorer than it was, but you’re not ready for a hearing aid yet – let’s just keep an eye on it for now,” and no commitment is made on the appointment day. If practices could provide an array of hearable options for the patient to choose from to address the needs discussed above, the client would be making a commitment to the practice to return for follow-up services, and would be far more likely to return to that practice when they eventually decide to advance to conventional hearing aids. The absolute extreme example of this scenario is a practice that doesn’t deal in hearing aids at all, just hearables. (In fact, this already exists.)

So what do you think? Are manufacturers and audiologists missing the boat? Or is jumping into the trend of hearables still too premature?

Raspberry Pi Experimenting: Part I

A Raspberry Pi is a neat little, super-affordable computer that has endless potential uses. For less than $100, and without a ton of technical know-how, people can use them to build anything from weather-monitoring stations, to internet radio streamers, to emulators that let you play your favourite old NES or SEGA games. In fact, some people have even tried building a simple hearing aid using a Raspberry Pi.


A BTE hearing aid powered by a Raspberry Pi.

That last project was what really interested me. I never hoped to produce a $100 hearing aid that could compete with anything available on the market, but I thought that this might be a cool way to learn more about how computers work, computer programming, and hearing aid technology. So about a month ago I ordered a Raspberry Pi to play around with, and hopefully eventually build at least a semi-functional hearing aid device.

What I’ve used so far:

  • A Raspberry Pi 3 B. This latest model has built-in WiFi and Bluetooth.
  • A microSD card.
  • A USB microphone.
  • A micro-USB charger, same as my cell phone.
  • An old pair of ear-buds to transmit the sound to my ears.
  • A network cable.
  • Code! I’m no programmer, so for now I’m working with someone else’s hearing aid code.

Now there are ways to set up a Raspberry Pi that are supposed to be quick and easy, but I didn’t use them. They required having a spare HDMI monitor, keyboard, and mouse lying around, which I didn’t have (and the staff at my local library were reluctant to lend me any). So instead, I did what’s called a “headless” setup. In this method, I had to manually set up the Pi’s SD card, connect the Pi to my router with a cable, and then access it from my own laptop over the network so I could see the Pi’s “desktop” as a window on my own. None of this should have been particularly difficult, but it took me a long, long time to make my way through each little step. Unfortunately, many of the resources out there assume the reader has a certain level of computer literacy – more than my own, at least. Eventually, I made everything work and had a functional Raspberry Pi.

However, the more difficult parts were yet to come. The Raspberry Pi software includes some of the programs my code required, but not all of them. So by trial and error, I slowly figured out what I needed, downloaded the incorrect versions, realized my mistakes, and finally got all of the correct software onto my Pi to make the code run. To make sure the sound system worked, I ran a quick white noise program I’d found and was feeling pretty good about everything. However, the code for the hearing aid program wasn’t 100% correct, so it took some time to figure out what needed to be modified so that it would actually run. Finally, with the deletion of one errant word in line 116 – it worked!


The Raspberry Pi running the hearing aid program.

…and by worked, I mean it… sorta made noise?

In setting everything up, I confirmed that my microphone was capable of recording high-quality sound and that my headphones were receiving sound from stored files on the Pi. However, when I put everything together with this program, I only got a harsh clicking noise that followed the relative volume of the voice entering the microphone but was basically unrecognizable as speech. After all this, I felt only a little disappointed. To be honest, I was pretty thrilled to have produced something with any semblance of functionality at all.

I’m definitely gonna keep puttering away at this project in my spare time and see what I can do. I’ll try to get in touch with the people who wrote the code I used (I suspect they may be the same crew that built the hearing aid shown above). I’m also going to study their code and try to figure out exactly what it’s doing, and whether there’s anything that can be cut out to improve the sound quality. However, I may also try to build my own code. The code I used had a very complicated compression function but only a single frequency channel. For a project in my undergrad, I experimented with building the opposite – 8 frequency bands but little compression – in Praat. Ultimately my limited knowledge and the limited functionality of Praat didn’t allow for any success with that project, but maybe it’ll be easier here.
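For readers wondering what that “compression function” does: wide dynamic range compression gives soft sounds full gain while reining in loud ones. Here’s a minimal sketch of the input–output rule – all parameter values are invented, and the code I actually ran was far more complicated:

```python
def compressed_gain(input_level_db, linear_gain_db=20.0,
                    threshold_db=50.0, ratio=2.0):
    """Wide-dynamic-range-compression sketch: below the threshold the
    aid applies its full linear gain; above it, each extra dB of input
    produces only 1/ratio dB of extra output, so gain shrinks."""
    if input_level_db <= threshold_db:
        return linear_gain_db
    excess = input_level_db - threshold_db
    return linear_gain_db - excess * (1 - 1 / ratio)

# Soft speech (40 dB in) gets the full 20 dB of gain...
print(compressed_gain(40))   # 20.0
# ...but a loud sound (80 dB in) gets much less, staying comfortable.
print(compressed_gain(80))   # 5.0
```

A multi-band version like my old Praat experiment would simply split the signal into frequency bands and apply this rule per band, with different parameters in each.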

So that’s where things stand today. Thanks for reading and I hope you’ll continue to follow along with my successes and failures in this project of mine.

Carrots for Everyone

“The carrot is a long, reddish-yellow vegetable which has several thin leaves on a long stem, and which belongs to the parsley family. Carrots are grown all over the world in gardens, and in the wild in the field.”

This sentence will forever be embedded in my brain. As a child, I would hear this sentence at countless hearing aid fittings, adjustments, and check-ups. As I grew older, and ended up working at the same audiology clinic I’d been going to for years, I continued to hear this sentence as it was played for every single fitting performed there.

This carrot sentence is probably the most common speech stimulus used for performing real-ear measurements (REMs) on hearing aids – a quick, simple process used to verify that the actual output of the hearing aids matches the output claimed by the programming software (and to correct the fitting when the software is slightly wrong, which it often is).
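In code terms, that verification step boils down to comparing the measured real-ear output against the prescriptive target at each frequency and flagging anything outside tolerance. A sketch with hypothetical numbers (real REM systems compare full curves, and acceptable tolerances vary by clinic):

```python
def verify_fit(target_db, measured_db, tolerance_db=5.0):
    """Compare measured real-ear output to prescriptive targets.
    Both arguments map frequency (Hz) -> level (dB). Returns the
    frequencies where the deviation exceeds tolerance -- i.e. where
    the fitting needs adjustment."""
    return {freq: measured_db[freq] - target_db[freq]
            for freq in target_db
            if abs(measured_db[freq] - target_db[freq]) > tolerance_db}

# Hypothetical targets vs. what the probe microphone actually measured:
target   = {500: 20, 1000: 25, 2000: 35, 4000: 40}
measured = {500: 22, 1000: 24, 2000: 27, 4000: 33}
print(verify_fit(target, measured))  # {2000: -8, 4000: -7}
```

Here the high frequencies come out under-amplified relative to target – exactly the kind of mismatch that gets missed when nobody puts a probe tube in the ear.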

It doesn’t take much time and the benefits are significant (here’s a good summary of those benefits). However, the truth is that less than half of all audiologists regularly perform REMs at fittings. Even when surveying only those who have the equipment, that number improves to just shy of 60%. While fewer hearing instrument specialists report regularly performing REMs, the difference isn’t huge. (See the link at the bottom of this post for a full report on these numbers.)

I know there are arguments to be made regarding the time- and cost-to-benefit factor here, but if you’ve worn hearing aids and tried them on the manufacturer-programmed settings, then you’ll know those numbers can be way off. That said, some manufacturers’ software is much better than it used to be. The last pair of hearing aids I was fit with only needed a couple of small tweaks to reach targets. Not every fitting will get that lucky, but even if they all did, I’d still believe that performing REMs is a critical part of amplification. Performing REMs is a simple way to clearly demonstrate how the services you provide meet the needs of the patient.

At a time when patients have more options than ever for where and how they can access hearing services and products, audiologists need to work extra hard and add additional value to their services wherever they can. If you run REMs and find that you don’t need to make even a single change, you’re still adding value to your patient’s experience, because you’ve proved that your service and product reached a real, standardized target, illustrated right on the screen of whatever device you use to perform REMs. You can’t quantify that bit of value, but it absolutely exists.

Further, I think this same concept of “adding value by proving your work” can be extended to other services. Why can’t every otoscopic examination incorporate the use of video otoscopy? Instead of telling patients their eardrum looks healthy, show them. Instead of just having patients try their new hearing aids in the real world, put them in the sound booth and redo whatever speech test you did during the initial evaluation. Demonstrate that your services have made a clear, quantifiable change to their hearing.

In today’s competitive market, audiologists need to do everything they can to demonstrate their skills and add easily apparent value to their services. Adding more carrots to everyone’s experience is one easy way to do just that.