March 22, 2018 at 3:26 pm

Day and Haragopal Present at Otolaryngology Research Conference

Hariprakash Haragopal presents “A Rabbit Model of Sensorineural Hearing Loss for Sound Localization Research.”

Dr. Mitchell Day and doctoral student Hariprakash Haragopal presented research from the Auditory Neurophysiology Lab at the Mid-Winter Meeting of the Association for Research in Otolaryngology, the main international conference devoted to research on the auditory and vestibular systems, held Feb. 10-14 in San Diego.

Day is Assistant Professor of Biological Sciences at Ohio University.

“Our research is about the neural circuits in the brain that underlie our ability to localize where a sound comes from. Hariprakash’s work in particular is about how these neural circuits may be affected by hearing loss,” Day said.

Haragopal’s presentation was titled “A Rabbit Model of Sensorineural Hearing Loss for Sound Localization Research.”

Abstract: Human listeners with sensorineural hearing loss (SNHL) have impaired ability to localize sound sources. However, the neural correlates of this behavioral impairment remain unknown. Our aim is to investigate the effects of SNHL on neural coding of sound source location. Here, we create a model of SNHL in the rabbit—a species that 1) has a hearing range that overlaps with that of humans, 2) uses both interaural time and level differences to localize sound, 3) is easy to work with during awake, head-fixed neural recordings, and 4) has been used previously in sound localization research. To produce an SNHL, we presented loud, octave-band noises centered at 750 Hz to anesthetized Dutch-belted rabbits from two free-field speakers—one directed at each ear. Noise waveforms from each speaker were independent and uncorrelated with each other in order to produce the perception of a spatially diffuse source. Hearing loss was quantified as the difference in threshold levels between auditory brainstem responses (ABRs) measured prior to and 2 weeks after acoustic trauma (using clicks and tones from 0.5 to 16 kHz in octave steps). We tested exposure levels from 122 to 135 dB SPL and exposure durations from 15 to 90 min. An exposure of 133 dB SPL for 60 min produced, on average, an approximately 20-dB increase in pure-tone thresholds at all frequencies and a 15-dB increase in click thresholds. Exposure levels below 133 dB SPL failed to produce threshold shifts, while those above 133 dB SPL produced profound deafness. Threshold shifts could be progressively increased by increasing the exposure duration at 133 dB SPL.
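The exposure stimulus described in the abstract can be sketched in a few lines of code. The Python snippet below (using NumPy and SciPy, neither of which the abstract mentions) generates a pair of independent octave-band noise tokens centered at 750 Hz, one per speaker; the sample rate, filter order, and token duration are illustrative assumptions rather than details from the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100                                 # sample rate in Hz (assumed)
fc = 750.0                                 # center frequency of the octave band (Hz)
lo, hi = fc / np.sqrt(2), fc * np.sqrt(2)  # octave-band edges, roughly 530-1061 Hz
dur_s = 10.0                               # short token; actual exposures ran 15-90 min

# Fourth-order Butterworth bandpass, applied forward and backward (zero phase).
sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")

rng = np.random.default_rng(0)
n = int(fs * dur_s)

# Independent Gaussian noise for each speaker, so the two waveforms are
# uncorrelated with each other, producing a spatially diffuse percept.
left = sosfiltfilt(sos, rng.standard_normal(n))
right = sosfiltfilt(sos, rng.standard_normal(n))
```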

Dr. Mitchell Day presents “Individual Inferior Colliculus Neurons Encode Sound Location Over a Wide Range of Stimulus Frequencies.”

Day’s presentation was titled “Individual Inferior Colliculus Neurons Encode Sound Location Over a Wide Range of Stimulus Frequencies.”

Abstract: A hallmark of the central auditory system is the topographic representation of the cochlear basilar membrane in each auditory brain area. From the base of the basilar membrane to the apex, there is an orderly progression of spectral sensitivity from high to low frequencies. In auditory brain areas, the characteristic frequencies of neurons (CF: the frequency that evokes a response at the lowest level) follow a similar topographic progression. Tuning to frequency in neurons at the level of the inferior colliculus (IC) and below is relatively narrow about the CF at low sound levels. Therefore, signal processing of complex sound features—including sound source location—is often described as occurring through an array of relatively narrow frequency channels. On the other hand, at high sound levels, frequency tuning (in response to tones) can be quite wide. To determine the dependence of sound location coding on both stimulus spectrum and level, we measured azimuth tuning curves (firing rate vs. azimuth) of IC neurons in awake rabbits for each of 4 bandpass noises (2/3-oct) centered at different frequencies, and at either a moderately high (70 dB SPL) or low (35 dB SPL) sound level. We quantified the azimuthal information encoded by a neuron as the mutual information (MI) between firing rate and azimuth. We found that at the low sound level, IC neurons have a large amount of MI for stimuli near the CF, while at the high sound level, they can have a large amount of MI for stimuli up to 3 oct away from the CF. Therefore, individual IC neurons can provide azimuthal information over a wide range of stimulus frequencies at high sound levels, and over a narrower range of stimulus frequencies at low sound levels, consistent with level-dependent changes in frequency tuning width. This suggests that a large portion of the IC provides information about sound source location for moderately high-level sounds, even for sounds with narrow spectral content.
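The information measure in the abstract is a standard one: the mutual information between a neuron’s spike count and the stimulus azimuth, MI = Σ p(r, θ) log2[p(r, θ) / (p(r) p(θ))]. As a generic illustration of that calculation (not the lab’s analysis code), a plug-in estimate from a joint histogram of per-trial spike counts and azimuths might look like the Python sketch below; the function name, number of rate bins, and absence of bias correction are all assumptions.

```python
import numpy as np

def mutual_information(spike_counts, azimuths, n_bins=8):
    """Plug-in estimate, in bits, of MI between firing rate and azimuth.

    spike_counts : per-trial spike counts (1-D array)
    azimuths     : azimuth presented on each trial (1-D array)
    """
    # Discretize spike counts into n_bins rate bins.
    edges = np.histogram_bin_edges(spike_counts, bins=n_bins)
    r_idx = np.digitize(spike_counts, edges[1:-1])          # indices 0 .. n_bins-1
    az_vals, a_idx = np.unique(azimuths, return_inverse=True)

    # Joint probability table p(rate bin, azimuth).
    joint = np.zeros((n_bins, az_vals.size))
    np.add.at(joint, (r_idx, a_idx), 1.0)
    joint /= joint.sum()

    p_r = joint.sum(axis=1, keepdims=True)                  # marginal over azimuth
    p_a = joint.sum(axis=0, keepdims=True)                  # marginal over rate
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (p_r @ p_a)[nz])))

# Toy usage: 50 trials at each of 13 azimuths with a made-up tuning curve.
rng = np.random.default_rng(1)
az = np.repeat(np.linspace(-90, 90, 13), 50)
counts = rng.poisson(5 + 3 * np.cos(np.deg2rad(az)))
print(mutual_information(counts, az))
```

Note that plug-in histogram estimates of MI are biased upward when trial counts are limited; published analyses typically add a bias correction or shuffle-based subtraction, which is omitted here for brevity.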

Dr. Ryan Dorkoski, who earned a Ph.D. in Plant Biology from the College of Arts & Sciences at OHIO, was a co-author on both presentations.
