Natural Sciences and Engineering Research Council of Canada

Past Winner
2009 E.W.R. Steacie Memorial Fellowship

Ingrid Johnsrude

Cognitive Neuroscience

Queen's University



Spouses who claim they can’t hear one another may have only a short window of opportunity to come up with a better excuse for ignoring their better halves, thanks to Ingrid Johnsrude’s research. Building on the knowledge that a familiar voice actually comes in more clearly than a stranger’s, she is hot on the trail of the strategies and cognitive mechanisms that help people understand speech and pick a voice out of a crowd.

That’s just part of the work she has done on speech perception and the complex processes that enable humans to turn sound waves into understandable language—research that the Queen’s University psychologist will continue during her 2009 NSERC E.W.R. Steacie Memorial Fellowship.

Difficulties with speech perception have traditionally been seen as either a hearing problem or a cognitive problem, with researchers restricting their focus to one or the other. Dr. Johnsrude argues that this assumption has left important gaps in our understanding. In fact, cognitive and auditory processes appear to overlap significantly.

She is one of only a few researchers who strive to bring together aspects of both traditional areas of study. “Much more attention is being paid now to the question of how the hearing and the cognitive sides of speech perception fit together,” she explains. “I would say that dividing it into the auditory side and the speech side is probably starting from a false premise.”

Part of Dr. Johnsrude’s work involves studying how people perceive speech in noisy environments, with a focus on people aged 50 and older, who are more likely to report difficulty understanding others when multiple sounds compete for attention (situations with a poor signal-to-noise ratio). Her results, however, should apply to people of any age trying to follow a conversation in places such as crowded restaurants.

Her hope is to find ways to make it easier for people to understand one another when background noise creates a lot of interference. So far, her findings indicate that people use a number of strategies to help them decipher speech.

One of the most important factors is familiarity with the other person’s voice, which means learning to understand a particular accent, tone of voice and even habitual word choices. That should make a spouse’s words among the easiest for people to pick up, even in a crowd.

While people tend to adapt fairly quickly to a new voice when it is the only one they hear, Dr. Johnsrude is still investigating how long it takes to build the ability to separate that one voice from a number of other sounds. “We don’t know whether you got it in the first day of meeting your spouse or whether it took years to develop,” she says. “If older people are getting such a dramatic benefit from hearing their spouse’s voice, this is potentially a very powerful way of mitigating against those unfavourable signal-to-noise ratios... as long as you’re listening to your spouse...”

Another valuable aid involves being able to predict words. In an experiment where participants listened to speech that was artificially degraded to simulate an unintelligible accent, they adapted far faster when they received cues about what was about to be said. “You use your guesses about what people are going to say to help you hear what they are saying more clearly,” she says. “We know a little bit about that, but we don’t really understand how it works in real time.”

In addition to advancing theoretical knowledge, these findings could help improve methods of training recipients of cochlear implants (electronic devices that replace the function of the cochlea, the part of the ear that translates sound into electrical signals).

Over the next few years, Dr. Johnsrude hopes to come closer to the answers to some important remaining questions, including the role of personal interaction and visual cues such as facial expressions.

Using imaging tools such as functional magnetic resonance imaging (fMRI), she also plans to delve more deeply into the complex brain processes that support speech perception, in an effort to understand how these processes change with age and to determine when speech becomes difficult to understand.

Dr. Johnsrude hopes to gather enough evidence soon to have an impact on other clinical and rehabilitation settings, including improving the way hearing aids are designed and prescribed. Early-stage research suggests that a person’s cognitive characteristics and abilities can predict how well that person will do with a specific type of hearing aid, raising the exciting possibility that taking these factors into account could increase the benefits hearing aids provide.