Getting the point: What can caregivers' and children's multimodal language reveal about word learning?


In face-to-face communication, information is conveyed not only by speech, but also by prosody, speakers' eye gaze, and gesture. The aim of this project is to characterise how caregivers and children use these multimodal cues in their interactions, and to establish the role of these cues in word learning. Specifically, we ask: (1) how the different multimodal cues from caregivers are orchestrated along with the children's responses; (2) whether and how these cues are associated with word learning; and (3) to which cue combinations children show greater sensitivity, as indexed by their electrophysiological (EEG) responses. These questions will be investigated through corpus-based and experimental studies. The project goes beyond the traditional way in which language acquisition has been studied so far: first, researchers have mainly examined the separate contributions of speech, gesture, prosody, or eye gaze to vocabulary growth, but not how these cues are orchestrated in real-world face-to-face communication; second, researchers have primarily focused on either the caregiver's input to the child or the child's own speech and actions, rather than considering the joint contribution of child–caregiver multimodal communication; finally, we know very little about children's sensitivity to these cues.





Dr. Y. Gu

Affiliated with

University College London, Division of Psychology and Language Sciences


01/06/2019 to 31/05/2021