_pages/schedule.md (+10 −2)
@@ -15,11 +15,19 @@ C.Psyd meetings consist of paper presentations, project workshopping, invited sp
# Invited Speakers

**2024**
-***Venkata Govindarajan** (Ithaca College)
+***Noga Zaslavsky** (NYU)
+: _Losing bits and finding meaning: Efficient compression shapes meaning in language_
+: Our world is extremely complex, and yet we are able to exchange our thoughts and beliefs about it using a relatively small number of words. What computational principles can explain this extraordinary ability? In this talk, I argue that in order to communicate and reason about meaning, both humans and machines must efficiently compress their representations of the world. In support of this claim, I present a series of studies showing that: (1) languages evolve under pressure to efficiently compress meanings into words; (2) the same principle can help reverse-engineer the visual representations that underlie human semantic systems; (3) efficient compression may also explain how meaning is constructed in real time, as interlocutors reason pragmatically about each other; and (4) these findings offer a new framework for studying how language may emerge in artificial agents without relying on human-generated training data. This body of research suggests that efficient compression underlies meaning in language and offers a cognitively-motivated approach to emergent communication in multi-agent systems.<br><br>
+
+***Wednesday Bushong** (Wellesley College)
+: _How do listeners integrate multiple sources of information across time during language processing?_
+: Understanding spoken words requires listeners to integrate large amounts of linguistic information over time at multiple levels (phonetic, lexical, syntactic, etc.). There has been considerable debate about how semantic context affects word recognition, with preceding semantic context often viewed as a constraint on the hypothesis space of future words, and following semantic context as a mechanism for disambiguating previous input. In this talk, I will present recent work from my lab and others’ in which it appears that human behavior resembles neither of these options; instead, converging evidence from behavioral, neural, and computational modeling work suggests that listeners _optimally_ integrate auditory and semantic-contextual knowledge across time during spoken word recognition. This holds true even when such sources of information are separated by significant time delays (several words). These results have significant implications for psycholinguistic theories of spoken word recognition, which generally assume rapidly decaying representations of prior input and rarely consider information beyond the boundary of a single word. Furthermore, I will argue that thinking of language processing as a cue integration problem can connect recent findings across other domains of language understanding (e.g., sentence processing).<br><br>
+
+***Venkata Govindarajan** (Ithaca College)
: _Modeling Intergroup bias in online sports comments_
: Social bias in language is generally studied by identifying undesirable language use towards a specific demographic group, but we can enrich our understanding of communication by re-framing bias as differences in behavior situated in social relationships — specifically, the intergroup relationship between the speaker and target reference of an utterance. In this talk, I will describe my work modeling this intergroup bias as a tagging task on referential expressions in English sports comments from forums dedicated to fandom NFL teams.<br>
<br>
-We curate a unique dataset of over 6 million game-time comments from opposing perspectives, each comment grounded in a non-linguistic description of the events that precipitated these comments (live win probabilities). For large-scale analysis of intergroup language variation, we use LLMs for automated tagging, and discover that some LLMs perform best when prompted with linguistic descriptions of the win probability at the time of the comment, rather than numerical probabilities. Further, large-scale tagging of comments using LLMs uncovers linear variations in the form of referent across win probabilities that distinguish in-group and out-group utterances.
+We curate a unique dataset of over 6 million game-time comments from opposing perspectives, each comment grounded in a non-linguistic description of the events that precipitated these comments (live win probabilities). For large-scale analysis of intergroup language variation, we use LLMs for automated tagging, and discover that some LLMs perform best when prompted with linguistic descriptions of the win probability at the time of the comment, rather than numerical probabilities. Further, large-scale tagging of comments using LLMs uncovers linear variations in the form of referent across win probabilities that distinguish in-group and out-group utterances.<br><br>