To Be Announced
Wednesday, April 24, 2024;
11:15 a.m. – 12:15 p.m. (ET)
Zoom
Speaker: Pinlei Chen from Penn State
Hosted by: Jaclyn Stimely, juc52@psu.edu
Computational and theoretical modelling of neurostimulation: connectivity, plasticity, personalized brain simulations, and insights into principles of brain organization
Wednesday, April 24, 2024;
4:00 – 5:00 p.m.
108 Wartik Laboratory
Speaker: John Griffiths, Assistant Professor from Biomedical Engineering, University of Toronto
Hosted by: Rebecca Benson, rle4@psu.edu
Combating Climate Change: Insights from Systems Biology and Computational Materials Discovery
Thursday, April 25, 2024;
10:35 – 11:35 a.m.
CBEB 001
Speaker: Ranjan Srivastava from University of Connecticut Health Center
Hosted by: Angela Dixon, adc12@psu.edu
Speech and music in the mirror of the auditory mind
Wednesday, April 24, 2024;
3:35 – 4:25 p.m.
060 Willard Building
Speaker: Shihab Shamma from University of Maryland
Abstract: Action, Perception, and Imagination are three intertwined functions that form the pillars of cognition, enabling us to learn skilled tasks, understand the world, and develop intuition. As we listen to and play music, or when we dance and speak, we integrate articulatory, hand, and body movements to produce elaborate sensory (auditory, visual, somatosensory) signals that originate from our mind’s social and musical culture and language. Such complex behaviors are ultimately facilitated by a back-and-forth mirroring of the sensory world onto the mind. When we act, we simultaneously perceive the action’s effects, understand its consequences, and sense its pleasure or pain. These two-way sensorimotor and sensory-cognitive interactions exist within the framework of the Mirror Network, a system of encoding pathways, motor mappings, and predictive projections that are intricate and rapidly adaptive. In this talk I shall summarize some of the latest research on the neural mechanisms underlying how humans listen to and learn to play music and speak, and how they acquire and enjoy linguistic and musical cultures. I shall also briefly describe parallel animal experiments in ferrets performing an auditory-motor, theremin-like task. Implications of this work for the decoding of imagined music and speech are also discussed.
Biography: Shihab Shamma is a Professor of Electrical and Computer Engineering at the University of Maryland (College Park, USA) and a visiting Professor in the Cognitive Sciences Department at the École Normale Supérieure (Paris, France). His research focuses on the neural processing of speech and music in the auditory system, employing both animal and human behavioral and imaging experiments. The range of topics addressed is fairly large: on the sensory side, they include how we perceive sound in real noisy environments, sort its sources, and encode it rapidly and adaptively in the brain. On the cognitive side, he investigates how humans and animals marshal the cognitive functions of attention, decision making, categorization, and motor action to understand and become emotionally engaged with sounds, especially music and speech. On the engineering side, applications of this research have spanned medical prosthetics and diagnostics, audio processing, and neuromorphic robotics.
Hosted by: Bethany Illig, buh196@psu.edu