User variance and its impact on video retrieval benchmarking

Wilkins, Peter and Troncy, Raphael and Halvey, Martin and Byrne, Daragh and Amin, Alia and Punitha, P. and Smeaton, Alan F. and Villa, Robert (2009) User variance and its impact on video retrieval benchmarking. In: CIVR 2009 - ACM International Conference on Image and Video Retrieval, 8-10 July, 2009, Santorini, Greece. ISBN 978-1-60558-480-5

Full text available as: PDF (657KB)

Abstract

In this paper, we describe one of the largest multi-site interactive video retrieval experiments conducted in a laboratory setting. Interactive video retrieval performance is difficult to cross-compare because variables exist across users, interfaces and the underlying retrieval engine. Conducted within the framework of TRECVID 2008, we completed a multi-site, multi-interface experiment. Three institutes participated, involving 36 users: 12 each from Dublin City University (DCU, Ireland), University of Glasgow (GU, Scotland) and Centrum Wiskunde & Informatica (CWI, the Netherlands). Three user interfaces were developed, all of which used the same search service. Using a Latin square arrangement, each user completed 12 topics, leading to 6 TRECVID runs per site, 18 in total. This allowed us to isolate the factors of users and interfaces from retrieval performance. In this paper we present an analysis of both the quantitative and qualitative data generated from this experiment, demonstrating that for interactive video retrieval with "novice" users, performance can vary by up to 300% for the same system using different sets of users, whilst differences in the performance of interface variants were, in comparison, not statistically significant. Our results have implications for the manner in which interactive video retrieval experiments using non-expert users are evaluated. The primary focus of this paper is to highlight that non-expert users generate very large performance fluctuations, which may either mask or create system variability. A discussion of why this happens is beyond the scope of this paper.
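To illustrate the Latin square counterbalancing mentioned in the abstract, the following is a minimal sketch in Python. The cyclic construction, the topic labels and the 12-user setup are illustrative assumptions; the paper itself defines the actual topic-to-user rotation used in the experiment.

    # Minimal sketch of Latin-square counterbalancing (illustrative only;
    # not the authors' actual assignment procedure).

    def latin_square(n):
        """Return an n x n cyclic Latin square: each symbol appears
        exactly once in every row and every column."""
        return [[(row + col) % n for col in range(n)] for row in range(n)]

    # Hypothetical setup: 12 users, 12 topics; each row gives one user's
    # topic order, so order effects are balanced across the group.
    topics = ["topic_%02d" % i for i in range(12)]
    square = latin_square(12)

    for user, row in enumerate(square):
        order = [topics[t] for t in row]
        print("user %2d: %s ..." % (user, order[:3]))  # first 3 topics per user

Because every topic appears in every position exactly once across the group, any learning or fatigue effect is spread evenly over topics, which is what allows user and interface effects to be separated from retrieval performance.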

Item Type: Conference or Workshop Item (Paper)
Event Type: Conference
Refereed: Yes
Uncontrolled Keywords: TRECVID
Subjects: Computer Science > Interactive computer systems
Computer Science > Multimedia systems
Computer Science > Information retrieval
DCU Faculties and Centres: Research Initiatives and Centres > CLARITY: The Centre for Sensor Web Technologies
Publisher: Association for Computing Machinery
Official URL: http://dx.doi.org/10.1145/1646396.1646400
Copyright Information: Copyright 2009 ACM
Use License: This item is licensed under a Creative Commons Attribution-NonCommercial-Share Alike 3.0 License.
Funders: SFI 07/CE/I1147, Science Foundation Ireland
ID Code: 4584
Deposited On: 04 Jun 2009 11:17 by Peter Wilkins. Last Modified: 25 Nov 2009 14:52
