Overview of VideoCLEF 2008: Automatic generation of topic-based feeds for dual language audio-visual content

Larson, Martha and Newman, Eamonn and Jones, Gareth J.F. (2009) Overview of VideoCLEF 2008: Automatic generation of topic-based feeds for dual language audio-visual content. In: CLEF 2008: Workshop on Cross-Language Information Retrieval and Evaluation, 17-19 Sept. 2008, Aarhus, Denmark.

Full text available as: PDF (150Kb)

Abstract

The VideoCLEF track, introduced in 2008, aims to develop and evaluate tasks related to analysis of and access to multilingual multimedia content. In its first year, VideoCLEF piloted the Vid2RSS task, whose main subtask was the classification of dual language video (Dutch-language television content featuring English-speaking experts and studio guests). The task offered two additional discretionary subtasks: feed translation and automatic keyframe extraction. Task participants were supplied with Dutch archival metadata, Dutch speech transcripts, English speech transcripts and 10 thematic category labels, which they were required to assign to the test set videos. The videos were grouped by class label into topic-based RSS feeds, displaying title, description and keyframe for each video. Five groups participated in the 2008 VideoCLEF track. Participants were required to collect their own training data; both Wikipedia and general web content were used. Groups deployed various classifiers (SVM, Naive Bayes and k-NN) or treated the problem as an information retrieval task. Both the Dutch speech transcripts and the archival metadata performed well as sources of indexing features, but no group succeeded in exploiting combinations of feature sources to significantly enhance performance. A small-scale fluency/adequacy evaluation of the translation task output revealed the translation to be of sufficient quality to make it valuable to a non-Dutch-speaking English speaker. For keyframe extraction, the strategy chosen was to select the keyframe from the shot with the most representative speech transcript content. The automatically selected shots were shown, in a small user study, to be competitive with manually selected shots. Future years of VideoCLEF will aim to expand the corpus and the class label list, as well as to extend the track to additional tasks.
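The classification subtask described above — assigning thematic category labels to videos based on their speech transcripts, with k-NN among the classifiers participants deployed — can be illustrated with a minimal sketch. This is not any participant's actual system; the training examples, label names and bag-of-words representation are illustrative assumptions standing in for the Wikipedia-derived training data mentioned in the abstract.

```python
import math
from collections import Counter

def vectorize(text):
    # Bag-of-words term-frequency vector from a transcript.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(transcript, training, k=1):
    # k-NN: assign the majority label among the k most similar
    # training transcripts.
    q = vectorize(transcript)
    scored = sorted(training,
                    key=lambda ex: cosine(q, vectorize(ex[0])),
                    reverse=True)
    labels = [label for _, label in scored[:k]]
    return Counter(labels).most_common(1)[0][0]

# Hypothetical training examples (real systems used Wikipedia and
# general web content as training data).
training = [
    ("parliament minister election government policy", "Politics"),
    ("orchestra symphony composer concert violin", "Music"),
    ("museum painting exhibition sculpture artist", "Visual Arts"),
]

print(classify("the minister announced a new government policy", training))
```

Videos classified this way would then be grouped by label into topic-based RSS feeds, one feed per category.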

Item Type: Conference or Workshop Item (Paper)
Event Type: Workshop
Refereed: Yes
Uncontrolled Keywords: Classification; Translation; Keyframe Extraction; Speech Recognition; Evaluation; Benchmark; Video
Subjects: Computer Science > Information retrieval
DCU Faculties and Centres: Research Initiatives and Centres > Centre for Digital Video Processing (CDVP); DCU Faculties and Schools > Faculty of Engineering and Computing > School of Computing
Published in: Evaluating Systems for Multilingual and Multimodal Information Access. Lecture Notes in Computer Science 5706. Springer-Verlag.
Publisher: Springer-Verlag
Official URL: http://www.springerlink.com/content/j1072rqu46852032/
Copyright Information: © 2009 Springer-Verlag. The original publication is available at www.springerlink.com
Use License: This item is licensed under a Creative Commons Attribution-NonCommercial-Share Alike 3.0 License.
ID Code: 16187
Deposited On: 16 Jun 2011 14:34 by Shane Harper. Last Modified 24 Aug 2011 09:59
