Marta Kutas (UCSD, USA)
Gerry Altmann (University of Connecticut, USA)
Jonathan Whitlock (NTNU, Norway)
Matt Crocker (University of Saarland, Germany)
Kenny Coventry (University of East Anglia, UK)
Inge-Marie Eigsti (University of Connecticut, USA)
Larissa Samuelson (University of East Anglia, UK)
We have a limited number of slots for oral presentations and posters. Please submit your abstracts by March 31st, 2016, indicating whether you prefer an oral presentation or a poster. Authors of accepted abstracts will receive notification by May 1st. Please follow the Conference website for updates and registration (https://www.ntnu.edu/lanpercept).
The Conference is organised with support from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 316748 and reflects research conducted under the project LanPercept (https://www.ntnu.edu/lanpercept).
Conference Organizing Committee (Language Acquisition and Language Processing Lab, NTNU)
Mila Vulchanova (Conference Chair)
Camilla Hellum Foyn
Address for correspondence and abstract submission:
Language and perception are two central cognitive systems. Until relatively recently, however, the interaction between them was examined only occasionally (e.g. Miller & Johnson-Laird, 1976). Yet it has become clear that language-perception interactions are essential to understanding both typical and atypical human behaviour. Recent work in ‘embodied cognition’ and ‘cognitive linguistics’ has shown that language processing involves the construction of situation models and the early activation of perceptual representations (see Barsalou, 2008 for a review). Beyond these empirical demonstrations, though, there is a notable absence of an explanatory framework in which language-perception interactions can be understood (see, for example, Chatterjee, 2010).
There is a rich bi-directional interface between language and perception. Visual perceptual experience informs language and the conceptual system, and can shape language processing. Visual information has been shown to activate (prime) language-related information early in development (Mani & Plunkett, 2010), but it is also the case that atypically developing children have difficulty matching object images to their corresponding linguistic labels (von Koss-Torkildsen et al., 2007). The mechanism underlying this interaction (and its failure in some populations) has not been identified. Further open questions concern the extent to which visual perception contributes to word meaning in the long term, and whether the comprehension of certain categories of words depends on the visual system. Here the evidence is mixed and existing accounts conflict (Bedny et al., 2008; Bedny & Caramazza, 2011; Glenberg & Gallese, 2012; Pulvermüller, 2012).
Language also influences perception at several levels. Language mediates eye movements to images immediately present in the visual context (cf. the visual-world paradigm: Cooper, 1974; Tanenhaus et al., 1995; Spivey et al., 2002; Allopenna et al., 1998; Altmann & Kamide, 1999), and it also mediates the motion processing of visual stimuli (Coventry et al., 2013). However, while existing studies document the effect of linguistic context, and provide evidence that listeners rely heavily on linguistic cues in deciding what to anticipate as the speech signal unfolds, they fail to specify the exact nature of the prediction, or the level at which linguistic and visual information integrate.
Recent empirical developments with both typical and atypical populations mean it is now timely to examine synergies across these approaches to understanding language-perception interactions. This meeting brings together, for the first time, leading researchers working on language-perception interactions in typical and atypical populations, providing a unique opportunity to advance theory and application substantially.