: Understanding spoken words requires listeners to integrate large amounts of linguistic information over time at multiple levels (phonetic, lexical, syntactic, etc.). There has been considerable debate about how semantic context affects word recognition: preceding semantic context is often viewed as a constraint on the hypothesis space of future words, and following semantic context as a mechanism for disambiguating previous input. In this talk, I will present recent work from my lab and others’ in which human behavior appears to resemble neither of these options; instead, converging evidence from behavioral, neural, and computational modeling work suggests that listeners _optimally_ integrate auditory and semantic-contextual knowledge across time during spoken word recognition. This holds even when these sources of information are separated by substantial delays (several words). These results have significant implications for psycholinguistic theories of spoken word recognition, which generally assume rapidly decaying representations of prior input and rarely consider information beyond the boundary of a single word. Furthermore, I will argue that thinking of language processing as a cue integration problem can connect recent findings across other domains of language understanding (e.g., sentence processing).
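
The "optimal integration" claim can be read in Bayesian terms: the probability of a candidate word given both the acoustics and the semantic context is proportional to the acoustic likelihood times the context-based prior. The sketch below is purely illustrative and is not taken from any of the studies discussed; the candidate words and all probability values are invented for the example.

```python
# Illustrative sketch of Bayesian cue integration for spoken word recognition.
# All candidate words and probabilities below are hypothetical example values.

candidates = ["beach", "peach", "preach"]

# P(acoustics | word): how well each candidate explains the noisy speech signal.
acoustic_likelihood = {"beach": 0.40, "peach": 0.45, "preach": 0.15}

# P(word | semantic context): e.g., the preceding words were about the ocean.
context_prior = {"beach": 0.70, "peach": 0.20, "preach": 0.10}

# Optimal integration: posterior proportional to likelihood x prior, then normalize.
unnormalized = {w: acoustic_likelihood[w] * context_prior[w] for w in candidates}
total = sum(unnormalized.values())
posterior = {w: p / total for w, p in unnormalized.items()}

for word, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"P({word} | acoustics, context) = {p:.2f}")
```

In this toy case, a slightly better acoustic match for "peach" is outweighed by the ocean-related context, so "beach" wins once the two cues are combined; the same logic applies whether the contextual cue arrives before or after the acoustic evidence.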