Automated annotation of multimedia audio data with affective labels for information management
Chan, Ching Hau and Jones, Gareth J.F. (2005) Automated annotation of multimedia audio data with affective labels for information management. In: The Fifth International Workshop on Pattern Recognition in Information Systems (PRIS 2005), May 2005, Miami, USA.
The emergence of digital multimedia systems is creating many new opportunities for rapid access to huge content archives. In order to fully exploit these information sources, the content must be annotated with significant features. An important aspect of human interpretation of multimedia data, which is often overlooked, is the affective dimension. Such information is a potentially useful component for content-based classification and retrieval. Much of the affective information of multimedia content is contained within the audio data stream. Emotional features can be defined in terms of arousal and valence levels. In this study, low-level audio features are extracted to calculate arousal and valence levels of multimedia audio streams. These are then mapped onto a set of keywords with predetermined emotional interpretations. Experimental results illustrate the use of this system to assign affective annotation to multimedia data.
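The pipeline described in the abstract could be sketched as follows. This is a minimal illustration, not the authors' method: the specific low-level features (RMS energy as an arousal proxy, zero-crossing rate as a valence proxy), the thresholds, and the four keywords are all assumptions chosen for demonstration; the paper's actual feature set and keyword vocabulary may differ.

```python
import numpy as np


def arousal_valence(signal, frame_len=400):
    """Compute crude arousal/valence proxies from a mono audio signal.

    Arousal proxy: mean short-term RMS energy (louder -> more aroused).
    Valence proxy: mean zero-crossing rate (illustrative stand-in only).
    Both choices are hypothetical, not those used in the paper.
    """
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)

    # Short-term RMS energy per frame, averaged over the stream
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    arousal = float(np.mean(rms))

    # Fraction of samples where the waveform changes sign
    zcr = float(np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0))
    valence = zcr
    return arousal, valence


def affect_keyword(arousal, valence, a_thresh=0.1, v_thresh=0.1):
    """Map an (arousal, valence) pair onto an emotion keyword.

    The quadrant-to-keyword assignment and thresholds are illustrative
    placeholders for the paper's predetermined emotional interpretations.
    """
    if arousal >= a_thresh:
        return "excited" if valence >= v_thresh else "angry"
    return "content" if valence >= v_thresh else "sad"
```

For example, a quiet 100 Hz tone yields a lower arousal score than the same tone at high amplitude, so the two signals can land in different keyword quadrants even with identical spectral content.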