DFG GE 1920/4-1; DFG-SPP AUDICTIVE
Cooperation/partners: Rainer Martin (Ruhr-University Bochum), Edward J. Golob (University of Texas at San Antonio, UTSA)
Preserving speech comprehension in difficult listening conditions is one of the greatest challenges of healthy aging. Deficits in speech comprehension can already emerge in middle age and cause significant impairments in professional and private life. Our group investigates the sources of age-related difficulties in speech perception in realistic, complex experimental environments using modern neurophysiological methods. The aim is to identify the neural basis of successful speech perception in younger and older adults and to determine the factors that influence these processes. A major focus is elucidating the neurophysiological basis of interindividual differences in performance when listening in complex acoustic environments. The results of our work will be used to develop and evaluate methods for improving speech comprehension in older age.
The DFG project ‘Auditory scene analysis and focusing of attention in speech perception during complex dynamic listening situations in younger and older adults’ investigates speech perception in realistic listening scenarios, focusing on the critical role of changes in the auditory scene. We are especially interested in the effects of audio-visual speech on the re-orienting and re-focusing of attention in “cocktail-party” situations.
In the completed BMBF project TRAINSTIM, we investigated ways of improving selective attention during speech comprehension under difficult conditions by combining brain stimulation and training in older adults. This project was conducted in cooperation with the IfADo Department of Psychology and Neuroscience and an international consortium of scientists.
As part of the DFG priority program AUDICTIVE (Auditory Cognition in Interactive Virtual Environments), we are investigating audio-visual attentional processes during interaction between humans and robots both in the real world and in virtual environments. This project is conducted in cooperation with the Institute of Communication Acoustics (IKA) at the Ruhr University Bochum.
- Getzmann, S., Schneider, D. & Wascher, E. (2023). Selective spatial attention in lateralized multi-talker speech perception: EEG correlates and the role of age. Neurobiology of Aging, 126, 1-13.
- Klatt, L.-I., Begau, A., Schneider, D., Wascher, E. & Getzmann, S. (2023). Cross-modal interactions at the audiovisual cocktail-party revealed by behavior, ERPs, and neural oscillations. NeuroImage, 271, 120022.
- Begau, A., Arnau, S., Klatt, L.-I., Wascher, E. & Getzmann, S. (2022). Using visual speech at the cocktail-party: CNV evidence for early speech extraction in younger and older adults. Hearing Research, 426, 108636.
- Begau, A., Klatt, L.-I., Wascher, E., Schneider, D. & Getzmann, S. (2022). The role of informational content of visual speech in an audiovisual cocktail party: Evidence from cortical oscillations in young and old participants. European Journal of Neuroscience, 56, 5215-5234.
- Klatt, L.-I., Getzmann, S. & Schneider, D. (2022). Attentional modulations of alpha power are sensitive to the task-relevance of auditory spatial information. Cortex, 153, 1-20.
- Getzmann, S., Arnau, S., Gajewski, P.D. & Wascher, E. (2022). Effects of auditory distraction on event-related potential correlates of time perception. European Journal of Neuroscience, 55, 121-137.
- Begau, A., Klatt, L.-I., Wascher, E., Schneider, D. & Getzmann, S. (2021). Congruent lip movements facilitate speech processing in a dynamic audiovisual multitalker scenario: An ERP study with older and younger adults. Behavioural Brain Research, 412, 113436.
- Hanenberg, C., Schlüter, M.-C., Getzmann, S. & Lewald, J. (2021). Short-term audiovisual spatial training enhances electrophysiological correlates of auditory selective spatial attention. Frontiers in Neuroscience, 15, 645702.
- Getzmann, S., Klatt, L.-I., Schneider, D., Begau, A. & Wascher, E. (2020). EEG correlates of lateralized shifts of attention in a dynamic multi-talker speech perception scenario. Hearing Research, 108077.