Show simple item record

dc.contributor.author: Wild, Conor
dc.contributor.other: Queen's University (Kingston, Ont.). Theses (Queen's University (Kingston, Ont.)) [en]
dc.date: 2012-08-21 11:22:59.386 [en]
dc.date: 2012-09-25 10:48:50.73 [en]
dc.date.accessioned: 2012-09-25T22:20:12Z
dc.date.available: 2012-09-25T22:20:12Z
dc.date.issued: 2012-09-25
dc.identifier.uri: http://hdl.handle.net/1974/7511
dc.description: Thesis (Ph.D, Neuroscience Studies) -- Queen's University, 2012-09-25 10:48:50.73 [en]
dc.description.abstract: The most common and natural human behaviours are often the most computationally difficult to understand. This is especially true of spoken language comprehension, considering the acoustic ambiguities inherent in a speech stream, and that these ambiguities are exacerbated by the noisy and distracting listening conditions of everyday life. Nonetheless, the human brain is capable of rapidly and reliably processing speech in these situations with deceptive ease – a feat that remains unrivaled by state-of-the-art speech recognition technologies. It has long been known that supportive context facilitates robust speech perception, but it remains unclear how the brain integrates contextual information with an acoustically degraded speech signal. The four studies in this dissertation utilize behavioural and functional magnetic resonance imaging (fMRI) methods to examine how the normally functioning human brain uses context to support the perception of degraded speech. First, I have observed that text presented simultaneously with distorted sentences results in an illusory experience of perceptually clearer speech, and that this illusion depends on the amount of distortion in the bottom-up signal, and on the relative timing between the visual and auditory stimuli. Second, fMRI data indicate that activity in the earliest region of primary auditory cortex is sensitive to the perceived clarity of speech, and that this modulation of activity likely comes from left frontal cortical regions that probably support higher-order linguistic processes. Third, conscious awareness of the visual stimulus appears to be necessary to increase the intelligibility of degraded speech, and thus attention might also be required for multisensory integration. Finally, I have demonstrated that attention greatly enhances the processing of degraded speech, and this enhancement is (again) supported by the recruitment of higher-order cortical areas.
The results of these studies provide converging evidence that the brain uses prior knowledge to actively predict the form of a degraded auditory signal, and that these predictions are projected through feedback connections from higher- to lower-order areas. These findings are consistent with a predictive coding model of perception, which provides an elegant mechanism in which accurate interpretations of the environment are constructed from ambiguous inputs in a way that is flexible and task dependent. [en_US]
dc.language: en [en]
dc.language.iso: en [en_US]
dc.relation.ispartofseries: Canadian theses [en]
dc.rights: This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner. [en]
dc.subject: Context [en_US]
dc.subject: Top-Down Influences [en_US]
dc.subject: Speech [en_US]
dc.subject: Perception [en_US]
dc.subject: Interactive Models [en_US]
dc.subject: fMRI [en_US]
dc.subject: Attention [en_US]
dc.title: Predictive Coding: How the Human Brain Uses Context to Facilitate the Perception of Degraded Speech [en_US]
dc.type: Thesis [en_US]
dc.description.degree: Ph.D [en]
dc.contributor.supervisor: Johnsrude, Ingrid S. [en]
dc.contributor.department: Neuroscience Studies [en]

