A multi-modal dance corpus for research into real-time interaction between humans in online virtual environments
Gowing, Marc and Kelly, Philip and O'Connor, Noel E. (2011) A multi-modal dance corpus for research into real-time interaction between humans in online virtual environments. In: ICMI Workshop on Multimodal Corpora for Machine Learning: Taking Stock and Road mapping the Future, 18 Nov 2011, Alicante, Spain.
We present a new, freely available, multimodal corpus for research into, amongst other areas, real-time realistic interaction between humans in online virtual environments. The corpus focuses on an online dance class application scenario in which students, with avatars driven by whatever 3D capture technology is locally available to them, can learn choreographies under teacher guidance in an online virtual ballet studio. Accordingly, the corpus consists of student/teacher dance choreographies captured concurrently at two different sites using a variety of media modalities, including synchronised audio rigs, multiple cameras, wearable inertial measurement devices and depth sensors. Each of the several dancers in the corpus performs a number of fixed choreographies, each of which is graded according to specific evaluation criteria. In addition, ground-truth dance choreography annotations are provided, and, for unsynchronised sensor modalities, the corpus includes distinctive events to support data stream synchronisation. Although the corpus is tailored specifically to the online dance class scenario, the data is free to download and use for any research and development purpose.