
MMLI - Multimodal Multiperson Corpus of Laughter in Interaction

The aim of the Multimodal and Multiperson Corpus of Laughter in Interaction (MMLI) is to collect multimodal data of hilarious laughter, with a focus on full-body movements. It contains induced and interactive laughs from human triads. It is composed of full-body motion capture data of subjects who took part in several social activities, such as playing the "barbichette" game or Pictionary. Six sessions (4 triads and 2 dyads) were recorded with 16 participants (3 female). In total we collected nearly 500 laughter episodes; the total duration of the extracted episodes is more than 70 minutes. This corpus has been used for machine-learning-based laughter detection from full-body movement (for details see this paper).
The data consists of 3D body position information and video channels.
The motion data can be freely used for research purposes.

The corpus is part of the EU FET Project ILHAIRE (no. 270780), dedicated to laughter analysis and synthesis.


Why a new laughter corpus?

Existing laughter corpora:
  • consist of audio and, at most, facial cues of laughter
  • contain only posed or induced laughter
  • do not contain high-quality data of full-body movements


Hardware setup:

  • 3 inertial mocap systems: 2 Xsens, 1 Animazoo
  • 5 webcams and 2 60-fps cameras

Available segments

Full-body joint positions of 10 subjects from 316 Laughter Body Movement (LBM) and 485 Other Body Movement (OBM) segments are now available for download:


  • T1 - watching funny videos together - all the participants, as well as the technical staff, watch a 9-minute video

  • T2 - watching funny videos separately - one participant is separated from everyone else by a curtain, which completely obscures her view of the other participants while still allowing her to hear them

  • T3 - yes/no game - one participant must respond quickly to questions from the other participants without saying "yes" or "no"

  • T4 - barbichette game - two participants face each other, make eye contact, hold each other's chin, and try to avoid laughing

  • T5 - Pictionary game - one participant has to convey a secret word to the other participants by drawing on a large board

  • T6 - tongue twisters - each participant in turn has to pronounce tongue twisters in four different languages
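As a rough illustration of how the downloadable joint-position segments might be processed, the sketch below computes a simple whole-body motion-energy cue from a segment. The array layout, frame rate, and feature are assumptions for illustration only; they are not the file format of the corpus or the feature set of the cited paper.

```python
import numpy as np

# Hypothetical layout: a segment is a (frames, joints, 3) array of 3D joint
# positions in metres. The 120 Hz frame rate is an assumption (typical for
# inertial mocap suits), not a documented property of the corpus.
FPS = 120

def motion_energy(segment):
    """Mean joint speed (m/s) over a segment -- a crude whole-body
    activity cue, illustrative only."""
    disp = np.diff(segment, axis=0)              # per-frame displacement, (frames-1, joints, 3)
    speeds = np.linalg.norm(disp, axis=2) * FPS  # joint speeds in m/s
    return speeds.mean()

# Tiny synthetic "segment": 2 frames, 2 joints, each joint moving 0.01 m
# along x between frames.
seg = np.zeros((2, 2, 3))
seg[1, :, 0] = 0.01
print(motion_energy(seg))  # ~1.2 m/s (0.01 m per frame at 120 fps)
```

One would expect laughter (LBM) segments to show bursty, higher-energy readings on such a cue than many other-movement (OBM) segments, which is the intuition behind detecting laughter from body movement alone.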

If you have used our corpus in your research, please cite our work:

Niewiadomski, R., Mancini, M., Varni, G., Volpe, G., Camurri, A., "Automated Laughter Detection from Full-Body Movements", IEEE Transactions on Human-Machine Systems, vol. 46, no. 1, pp. 113-123, 2016. doi: 10.1109/THMS.2015.2480843. [bibtex]

The full paper can be found here.

If you are interested in using MMLI, please write to Radoslaw Niewiadomski or Maurizio Mancini.

Thank you!