
To be a better listener at cocktail parties

Date
2023/05/18
Affiliation
Department of Communication Sciences and Disorders, The University of Iowa
Event Location
Speaker
최인용
An accurate and rapid understanding of speech in noise is essential for real-world communication. However, substantial individual variability in speech-in-noise perception persists even among young listeners with normal hearing thresholds, as well as among listeners with impaired auditory systems. Yet no diagnostic or therapeutic method has been established for listeners who struggle with speech-in-noise understanding. This lecture will introduce our efforts to investigate the brain mechanisms of speech-in-noise perception and recent advances in therapeutic methods to improve speech-in-noise abilities.
Studies of normal auditory systems have treated speech-in-noise understanding as a cocktail party problem: the problem of extracting a target sound from intermixed competing sounds. The solution to the cocktail party problem is successful auditory scene analysis. Auditory scene analysis can be understood as a chain of processes: 1) sensory encoding of acoustic dynamics, 2) grouping of acoustic features to form auditory objects, and 3) across-object competition, facilitated by selective attention, which enhances the neural representation of foreground objects and suppresses that of the background. The result of successful auditory scene analysis is target-speech unmasking, revealed as an increase in the neural form of the target-to-masker ratio. Under this auditory scene analysis framework, individual differences in speech-in-noise understanding may originate from any of these processes.

Recent studies suggest that auditory nerve degeneration can degrade these auditory scene analysis processes by deteriorating the encoding of supra-threshold acoustic dynamics and/or the early-cortical inhibition of background noise. Either deterioration may result in a diminished internal signal-to-noise ratio during the neural encoding of an auditory scene. Our key hypothesis is that redundant, compensatory central mechanisms may assist in recovering target unmasking in the degraded auditory system. Such mechanisms include auditory grouping, which recruits the temporo-parietal cortical network, and selective cortical inhibition, which recruits the executive attentional network. Importantly, these cortico-cortical auditory cognitive functions exhibit greater plasticity than subcortical processes, which makes them better targets for perceptual training on speech in noise. Thus, our research aims to develop perceptual training protocols for each of the two critical auditory cognitive functions that contribute to speech-in-noise performance: 1) auditory grouping and 2) selective attention.
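For reference, the target-to-masker ratio and the internal signal-to-noise ratio invoked above are ratio measures conventionally expressed in decibels. A minimal illustration using the standard acoustic definition (the symbols P_target and P_masker are illustrative, not notation from the lecture); the neural form discussed here can be read as an analogous quantity computed from neural response strengths rather than acoustic powers:

$$\mathrm{TMR}_{\mathrm{dB}} = 10\,\log_{10}\!\left(\frac{P_{\mathrm{target}}}{P_{\mathrm{masker}}}\right)$$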