In this paper we attempt to characterize information resources complementary to audio-visual (A/V) streams and propose their use for enriching A/V data with semantic concepts, in order to bridge the gap between low-level video detectors and high-level analysis. Our aim is to extract cross-media feature descriptors from semantically enriched and aligned resources so as to detect finer-grained events in video. We introduce an architecture for complementary resource analysis and discuss domain-dependency aspects of this approach in our domain of soccer broadcasts.
Item Type:
Conference or Workshop Item (Paper)
Event Type:
Workshop
Refereed:
Yes
Additional Information:
Workshop held in conjunction with SAMT 2007 (2nd International Conference on Semantic and Digital Media Technologies), Genova, Italy, 5-7 December 2007.