In recent years, tremendous efforts have been made to understand how emerging hearing technology affects auditory function. By understanding the effect technology has on listening and spoken language acquisition, researchers can better develop interventions to assist individuals with hearing loss who choose listening and spoken language. On Sunday, July 1, AG Bell will present its eighth Research Symposium, "From the Ear to the Brain: Advances in Understanding Auditory Function, Technology and Spoken Language Development," where researchers will present the latest findings in this field of focus.
Sponsored by the National Institutes of Health, National Institute on Deafness and Other Communication Disorders, the Symposium will provide attendees with information about advances in hearing aid and cochlear implant technology, the development of speech and language using amplification, and new research on auditory function that will help make strides in developing new interventions for hearing loss. The Symposium is structured for presenters to translate their scientific findings to a non-scientific audience. To preview the Symposium and to help better understand this complex issue, Volta Voices reached out to the presenters to learn more about their research and where it will lead in the future.
Volta Voices: What does your current research focus on?
Joseph Santos-Sacchi: My lab works on the sensory cells of the inner ear. We study the electro-mechanical activity of outer hair cells responsible for our keen sense of hearing. We are interested in the molecular workings of the motor molecule prestin, which enables outer hair cells to work as tiny amplifiers. We are also interested in synaptic transmission from hair cells to eighth nerve fibers, as the synapse at the hair cell is specialized to work continuously. Finally, we are collaborating with colleagues to understand some genetic causes of hearing loss, and to understand mechanisms that may promote regeneration of hair cells.
Jont Allen: My research group is studying speech perception in individuals with hearing loss and with typical hearing. We are trying to understand how people with typical hearing decode speech. Why do you hear "ta" when I say "ta," and how can I make you hear "da" when I say "ta" by manipulating the waveform? How can I control what you hear? What exactly is your brain decoding in the waveform? While we think we understand how this works in individuals who have typical hearing, the next step is to ask the same question about people with hearing loss. They have trouble hearing speech in noisy environments. So if we could figure out exactly what is going on with the deaf ear and provide some relief, extending the ability to communicate accurately, that would be a tremendously important contribution, and that is where I'm headed.
Michael Dorman: My research lab is comparing the benefits of a single cochlear implant, bilateral cochlear implants, a cochlear implant plus low-frequency hearing on the "other" ear, and a cochlear implant plus low-frequency hearing in both ears. We are also including one patient with two cochlear implants and low-frequency acoustic hearing in both ears. In addition, we are testing new signal processing strategies for cochlear implant companies. In the past I've worked on the problem of critical periods for the development of speech comprehension in children with cochlear implants.
Tonya Bergeson-Dana: The overall objectives of my research are to investigate the development of auditory attention to speech by infants and children with hearing loss and to assess how the characteristics of maternal speech input are affected by infants' hearing status. Caregivers typically speak and sing to their infants and children using a distinct style, commonly referred to as "motherese," babytalk or infant-directed speech. Not only do caregivers speak and sing in a special way to their infants, …