MMLI - Multimodal Multiperson Corpus of Laughter in Interaction

The aim of the Multimodal and Multiperson Corpus of Laughter in Interaction (MMLI) is to collect multimodal data of laughter, with a focus on full-body movements and different laughter types. It contains induced and interactive laughs from human triads. In total we collected 500 laugh episodes from 16 participants. The data consists of 3D body position information, facial tracking, multiple audio and video channels, as well as physiological data. The data (video, audio, mocap) can be freely used for research purposes.

The corpus is part of the EU FET project ILHAIRE (no. 270780), dedicated to laughter analysis and synthesis.

(24-11-2014) NEW: data corpus of laughter and non-laughter segments available

Full-body joint positions of 10 subjects, from 316 Laughter Body Movement (LBM) and 485 Other Body Movement (OBM) segments, are now available for free download.

These segments have been used for machine-learning-based laughter detection from full-body movement.
The experiment description and results have been submitted to IEEE Transactions on Human-Machine Systems.
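
A minimal sketch of how such segments could be loaded and summarized is given below. It assumes a hypothetical file layout (one row per frame, x/y/z columns per joint) and hypothetical file names; the actual format of the distributed txt files may differ.

```python
# Illustrative sketch only: the column layout (frames x joints*3) and the
# file names are assumptions, not the documented MMLI format.
import numpy as np

FPS = 120  # mocap frame rate reported for the corpus

def load_segment(path, n_joints=22):
    """Load one segment as an (n_frames, n_joints, 3) array of joint positions."""
    data = np.loadtxt(path)                      # assumed: one row per frame
    return data[:, :n_joints * 3].reshape(-1, n_joints, 3)

def movement_energy(positions):
    """Mean per-frame joint speed: a crude whole-body activation cue."""
    velocity = np.diff(positions, axis=0) * FPS  # positions per second
    speed = np.linalg.norm(velocity, axis=2)     # (n_frames - 1, n_joints)
    return speed.mean()

# Hypothetical usage: compare a laughter (LBM) and a non-laughter (OBM) segment.
# lbm = movement_energy(load_segment("LBM/seg_001.txt"))
# obm = movement_energy(load_segment("OBM/seg_001.txt"))
```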


(14-11-2013) Why a new laughter corpus?

Existing laughter corpora:
  • consist of audio and, at most, facial cues of laughter
  • contain only posed or induced laughter
  • do not contain high-quality data of full-body movements


Hardware Setup

  • 3 inertial mocap systems: 2 Xsens, 1 Animazoo
  • 2 Kinects (K1, K2)
  • 5 webcams (W1-5) and 2 60-fps cameras (C1-2)
  • 2 wireless personal microphones
  • 1 respiration sensor
  • 3 additional body markers (green polystyrene balls) per participant



Tasks

  • T1 - watching funny videos together - all the participants, as well as the technical staff, watch a 9-minute video
  • T2 - watching funny videos separately - one participant is separated from the others by a curtain, which completely obscures her view of the other participants while still allowing her to hear them
  • T3 - yes/no game - one participant must respond quickly to questions from the other participants without saying "yes" or "no"
  • T4 - Barbichette game - two participants face each other, make eye contact, hold each other's chin and try to avoid laughing
  • T5 - Pictionary game - one participant has to convey a secret word to the other participants by drawing on a large board
  • T6 - tongue twisters - each participant in turn has to pronounce tongue twisters in four different languages


Participants

  • 6 sessions with 16 participants: 4 triads and 2 dyads (groups G3 and G5), aged 20-35
  • 3 females; nationalities: 8 French, 2 Polish, 2 Vietnamese, 1 German, 1 Austrian, 1 Chinese and 1 Tunisian
  • 4 hours and 16 minutes of data; 439 laughter events totaling 31 minutes (about 12% of the recordings)

Segmentation

The data was recorded and synchronized with the Social Signal Interpretation (SSI) framework.

The EyesWeb XMI platform was used to display and segment the data.
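
The sketch below is not the SSI or EyesWeb pipeline; it only illustrates, with hypothetical NumPy code, the underlying idea of bringing streams sampled at different rates (30 fps video, 120 fps mocap, 16 kHz audio) onto a common timeline so they can be displayed and segmented together.

```python
# Illustrative only: plain NumPy, not the SSI/EyesWeb tooling.
# Sampling rates are taken from the corpus description; the signals are fake.
import numpy as np

def resample(timestamps, values, target_times):
    """Linearly interpolate a 1-D signal onto a shared time axis (seconds)."""
    return np.interp(target_times, timestamps, values)

duration = 10.0                                    # seconds, example value
t_common = np.arange(0.0, duration, 1.0 / 30.0)    # 30 fps reference (video rate)

t_mocap = np.arange(0.0, duration, 1.0 / 120.0)    # mocap at 120 fps
t_audio = np.arange(0.0, duration, 1.0 / 16000.0)  # audio at 16 kHz
mocap_feature = np.random.rand(t_mocap.size)       # e.g. a movement descriptor
audio_envelope = np.random.rand(t_audio.size)      # e.g. a loudness envelope

mocap_on_common = resample(t_mocap, mocap_feature, t_common)
audio_on_common = resample(t_audio, audio_envelope, t_common)
# Both streams now share the 30 fps timeline used for display and segmentation.
```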



Available segments

The data can be freely used for research purposes.
If you are interested in using MMLI, please write to Radoslaw Niewiadomski or Maurizio Mancini.

The following data is available:

1. Full-body joint positions of 10 subjects from 233 Laughter Body Movement (LBM) and 250 Other Body Movement (OBM) segments are available for free download.

2. The full recordings, summarized in the table below:

Type  | Quantity                | Format                              | Resolution
Video | 6 webcams               | MPEG4                               | 640x480; 30 fps
Video | 3 mocap visualizations  | MPEG4                               | 640x480; 30 fps
Audio | 3 microphones           | PCM                                 | mono, 16 kHz
Mocap | 2 Xsens                 | joint positions and rotations (txt) | 22 joints, 120 fps
Mocap | 1 Animazoo              | joint positions and rotations (txt) | 25 joints, 120 fps
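
As a small illustration of working with the audio channels listed above, the sketch below reads one microphone track, assuming the mono 16 kHz PCM streams are stored as standard 16-bit WAV files (the container and the file name are assumptions).

```python
# Illustrative sketch: assumes 16-bit mono PCM stored as a WAV file; the
# file name is hypothetical.
import wave
import numpy as np

def load_wav(path):
    """Return (samples as float32 in [-1, 1], sample rate in Hz)."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        raw = wf.readframes(wf.getnframes())
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0
    return samples, rate

# Hypothetical usage: short-time energy at the 30 fps video rate, convenient
# for lining the audio up with the video and mocap streams.
# samples, rate = load_wav("G4S2_mic1.wav")
# hop = rate // 30
# energy = np.array([np.mean(samples[i:i + hop] ** 2)
#                    for i in range(0, len(samples) - hop, hop)])
```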


Example 1 - Seg3-G4S4
Example 2 - Seg17-G4S2

If you have used our corpus in your research, please cite our work:
Niewiadomski, R., Mancini, M., Baur, T., Varni, G., Griffin, H., Aung, M.S.H., MMLI: Multimodal Multiperson Corpus of Laughter in Interaction, Fourth Int. Workshop on Human Behavior Understanding, in conjunction with ACM Multimedia 2013, Barcelona, Spain, Lecture Notes in Computer Science, vol. 8212, pages 184-195, 2013. [bibtex]

The full paper can be found here.

Thank you!