Verbal and bodily communication

Objectives

The main goal of the project is to develop a formal theory of the interaction between communication modalities, i.e. speech utterances on the one hand, and non-verbal behaviours such as hand gestures, facial expressions and gaze on the other. The theory must be able to account for how non-verbal behaviours are integrated with the linguistic sign at different levels. It must be formal in the sense that it can be applied to naturally occurring data in a consistent way, so that hypotheses about the function of multimodal expressions can be empirically verified.

The empirical material will consist of video clips in which people communicate in different situations and in different languages. Both speech utterances and non-verbal expressions will be annotated and analysed. Most of the annotation will be carried out semi-manually, but research on using machine learning to automate the process will also be conducted. A sketch of the kind of time-aligned data involved is given below.
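Purely as an illustration, the following minimal Python sketch shows how a speech segment and a co-occurring gesture annotation might be represented and linked by temporal overlap. All class names, attribute names and labels (SpeechSegment, GestureAnnotation, "feedback-give") are hypothetical; they do not reproduce the MUMIN categories or the format of any particular annotation tool.

from dataclasses import dataclass
from typing import List

# Hypothetical, simplified record types for time-aligned multimodal annotation.

@dataclass
class SpeechSegment:
    start: float          # seconds from the start of the video clip
    end: float
    transcription: str

@dataclass
class GestureAnnotation:
    start: float
    end: float
    modality: str         # e.g. "head", "hand", "gaze", "face"
    shape: str            # description of the form of the behaviour
    function: str         # communicative function label, e.g. "feedback-give"

def overlapping(speech: SpeechSegment,
                gestures: List[GestureAnnotation],
                min_overlap: float = 0.0) -> List[GestureAnnotation]:
    """Return the gestures whose time span overlaps the speech segment."""
    return [g for g in gestures
            if min(speech.end, g.end) - max(speech.start, g.start) > min_overlap]

# Toy usage: one utterance with a co-occurring head nod.
utterance = SpeechSegment(1.20, 2.05, "yes, exactly")
gestures = [GestureAnnotation(1.30, 1.80, "head", "nod", "feedback-give")]
print(overlapping(utterance, gestures))

Temporal overlap is only one possible way of anchoring non-verbal behaviours to the speech signal; records of this kind could also serve as training material for machine-learned classification of gesture functions.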

Theoretical basis

The theoretical starting point will be the MUMIN model, which was designed to study multimodal communication, especially feedback, turn management and discourse structure.

Examples of multimodal annotation of video material carried out with the MUMIN coding scheme can be found on the MUMIN resource page.

Project participants

CST contacts

Patrizia Paggio (paggio@hum.ku.dk)

Costanza Navarretta (costanza@hum.ku.dk)

