Auditory perception and speech comprehension

Principal investigator:
Stephan Getzmann

Staff:
Alexandra Begau, Laura Klatt, Benjamin Stodt

Funds:
DFG GE 1920/4-1; DFG-SPP AUDICTIVE

Cooperation/partners: Rainer Martin (Ruhr-University Bochum), Edward J. Golob (University of Texas at San Antonio)

Preserving speech comprehension in difficult listening conditions is one of the greatest challenges of healthy aging. Deficits in speech comprehension can emerge as early as middle age and cause significant impairments in professional and private life. Our group investigates the sources of age-related difficulties in speech perception in realistic, complex experimental environments using modern neurophysiological methods. The aim is to uncover the neural basis of successful speech perception in younger and older adults and to identify factors influencing these processes. A major focus is elucidating the neurophysiological background of interindividual differences in performance when listening in complex acoustic environments. The results of our work will be used to develop and evaluate methods for improving speech comprehension in older age.

The DFG project ‘Auditory scene analysis and focusing of attention in speech perception during complex dynamic listening situations in younger and older adults’ investigates speech perception in realistic listening scenarios, focusing on the critical role of changes in the auditory scene. We are especially interested in the effects of audio-visual speech on the re-orienting and re-focusing of attention in “cocktail-party” situations.

In the completed BMBF project TRAINSTIM, we investigated possibilities for improving selective attention in speech comprehension under difficult conditions by combining brain stimulation and training in older adults. This project was conducted in cooperation with the IfADo Department of Psychology and Neuroscience and an international consortium of scientists.

As part of the DFG priority program AUDICTIVE (Auditory Cognition in Interactive Virtual Environments), we are investigating audio-visual attentional processes during interaction between humans and robots, both in the real world and in virtual environments. This project is conducted in cooperation with the Institute of Communication Acoustics (IKA) at the Ruhr University Bochum.

Publications:

  • Begau, A., Klatt, L.-I., Wascher, E., Schneider, D. & Getzmann, S. (2021). Congruent lip movements facilitate speech processing in a dynamic audiovisual multitalker scenario: An ERP study with older and younger adults. Behavioural Brain Research, 412, 113436.
  • Hanenberg, C., Schlüter, M.-C., Getzmann, S. & Lewald, J. (2021). Short-term audiovisual spatial training enhances electrophysiological correlates of auditory selective spatial attention. Frontiers in Neuroscience, 15, 645702.
  • Getzmann, S., Klatt, L.-I., Schneider, D., Begau, A. & Wascher, E. (2020). EEG correlates of lateralized shifts of attention in a dynamic multi-talker speech perception scenario. Hearing Research, 108077.
  • Klatt, L.-I., Schneider, D., Schubert, A.-L., Hanenberg, C., Lewald, J., Wascher, E. & Getzmann, S. (2020). Unraveling the relation between EEG-correlates of attentional orienting and sound localization performance: a diffusion model approach. Journal of Cognitive Neuroscience, 32, 945-962.
  • Hanenberg, C., Getzmann, S. & Lewald, J. (2019). Transcranial direct current stimulation of posterior temporal cortex modulates electrophysiological correlates of auditory selective spatial attention in posterior parietal cortex. Neuropsychologia, 131, 160-170.
  • Klatt, L.-I., Getzmann, S. & Schneider, D. (2018). The contribution of selective spatial attention to sound detection and sound localization: evidence from event-related potentials and lateralized alpha oscillations. Biological Psychology, 138, 133-145.
  • Lewald, J., Schlüter, M.-C. & Getzmann, S. (2018). Cortical processing of location changes in a “cocktail-party” situation: Spatial oddball effects on electrophysiological correlates of auditory selective attention. Hearing Research, 365, 49-61.