Cognitive Science of Language lecture series: Dr. Marco Marelli (Jan. 25, 2021)

Who: Marco Marelli (University of Milano-Bicocca, Milano, Italy) www.marcomarelli.net

What: Compositional effects in the processing of compound words: A computational perspective grounded in linguistic and visual experience

When: Monday January 25, 2021; 2:30-4:20 pm EST

Where: Zoom

Registration: https://mcmaster.zoom.us/meeting/register/tJAlduyqrzMqG9HnqMOA1s0NZLhGj-bf2xtU 

McMaster’s Department of Linguistics and Languages invites you to the next talk in the Cognitive Science of Language lecture series. The lecture will be delivered online by Dr. Marco Marelli. Dr. Marelli is an associate professor of General Psychology at the University of Milano-Bicocca, Milano, Italy. His work focuses on the psychology of language, and in particular on the impact of semantics on word processing and the interface between language and the conceptual system. His more recent research projects combine methods from experimental psychology and computational modelling and are dedicated to compositionality (at the level of both phrases and morphologically complex words) and the interplay between linguistic, emotional and perceptual experience in conceptual processes. He is an associate editor of Behavior Research Methods and a consulting editor of Morphology. 

The talk is free, but participants must register. The registration link can be found here: https://mcmaster.zoom.us/meeting/register/tJAlduyqrzMqG9HnqMOA1s0NZLhGj-bf2xtU

Please make sure to register in advance. For logistical reasons, registrations for this event will only be reviewed until 2 pm on the event date.

Abstract:  

Since the seminal LSA proposal (Landauer & Dumais, 1997), distributional semantics has provided efficient data-driven models of the human semantic system, representing word meaning through vectors that record lexical co-occurrences in large text corpora. However, these approaches generate static descriptions of the semantic system, falling short of capturing the highly dynamic interactions that occur at the meaning level during language processing.
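
As a rough illustration of this representational assumption (not taken from the talk), the following Python sketch builds count-based co-occurrence vectors from a toy corpus; the corpus, window size, and variable names are illustrative assumptions only.

from collections import Counter, defaultdict

# Toy corpus and symmetric co-occurrence window (both assumed for illustration).
corpus = "snow falls on the mountain and snow covers the road".split()
window = 2

# Count, for every word, how often each other word appears within the window.
cooc = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            cooc[word][corpus[j]] += 1

vocab = sorted(set(corpus))
# The distributional vector for "snow" is its row of co-occurrence counts.
snow_vector = [cooc["snow"][w] for w in vocab]
print(dict(zip(vocab, snow_vector)))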

In the present work, I discuss the CAOSS model (Compounding as Abstract Operations in Semantic Space), a first step in this direction that starts from distributional semantics to capture the meaning of compound words (Marelli et al., 2017).

In CAOSS, word meanings are represented as vectors encoding lexical co-occurrences in a reference corpus (e.g., the meaning of “snow” is based on how often “snow” appears with the other words), according to the tenets of distributional semantics. A compositional procedure is induced as a weighted sum: given two constituent-word vectors u and v, their composed representation (the compound) is computed as c = M*u + H*v, where M and H are weight matrices estimated from corpus examples. The matrices are trained through least-squares regression, using the vectors of the constituents as independent words (“car” and “wash”, “rail” and “way”) as inputs and the vectors of example compounds (“carwash”, “railway”) as outputs, so that the similarity between M*u + H*v and c is maximized. In other words, the matrices are defined so as to recreate the compound examples as accurately as possible. Once the two weight matrices are estimated, they can be applied to any word pair to obtain meaning representations for untrained word combinations (e.g., “snow building”).
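
The following Python sketch illustrates this estimation procedure under simplifying assumptions: random vectors stand in for real corpus-derived embeddings, and numpy's generic least-squares solver replaces whatever training setup was used in the original work. It is not the CAOSS implementation itself.

import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 200                      # embedding dimension and number of training compounds (assumed)

U = rng.normal(size=(n, d))         # vectors of left constituents (e.g., "car", "rail")
V = rng.normal(size=(n, d))         # vectors of right constituents (e.g., "wash", "way")
C = rng.normal(size=(n, d))         # vectors of the attested compounds (e.g., "carwash", "railway")

# c = M*u + H*v is equivalent to C ≈ [U V] @ [M H]^T, so both matrices
# can be estimated in a single stacked least-squares fit.
X = np.hstack([U, V])
B, *_ = np.linalg.lstsq(X, C, rcond=None)   # B has shape (2d, d)
M, H = B[:d].T, B[d:].T                     # recover the two weight matrices

# Once trained, the operation applies to any untrained pair (e.g., "snow" + "building").
u_snow, v_building = rng.normal(size=d), rng.normal(size=d)
c_novel = M @ u_snow + H @ v_building       # composed meaning vector for the novel compound
print(c_novel.shape)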

In a series of behavioral experiments, model predictions were tested against psycholinguistic data. CAOSS is shown to mirror evidence related to the processing of novel compounds (Marelli et al., 2017; Günther & Marelli, 2020), and in particular the impact of relational information (Gagné, 2001; Gagné & Shoben, 2007) as well as the “morpheme interference effect” (Crepaldi et al., 2010). Moreover, CAOSS provides a central contribution to the understanding of semantic transparency in familiar compounds: CAOSS estimates are shown to best characterize the impact of transparency in word processing (Günther & Marelli, 2019). Finally, I discuss how CAOSS is not to be considered a “disembodied model”, since one can easily ground it in perception by feeding it images together with text data (Günther et al., 2020).

The model simulations indicate that compositionality-related phenomena are reflected in language statistics. Human speakers are able to learn these aspects from language experience and automatically apply them to the processing of any word combination. The present model is flexible enough to emulate this procedure, predicting sensible relational similarities for novel compounds and correctly capturing the contribution to semantic transparency provided by compositional operations. The model is also shown to generalize to other kinds of data, being able to capture the contribution of perceptual experience in the internal dynamics of compound-word processing. Such evidence directly links linguistic composition to conceptual combination, speaking for the possible role of general-level learning procedures at the foundations of both phenomena.
