Wang, Peng, Sun, Lifeng, Smeaton, Alan F. (ORCID: 0000-0003-1028-8389), Gurrin, Cathal (ORCID: 0000-0003-2903-3968) and Yang, Shiqiang (2018) Computer vision for lifelogging. In: Leo, Marco and Farinella, Giovanni Maria (eds.) Computer Vision for Assistive Healthcare. A volume in Computer Vision and Pattern Recognition. Academic Press, pp. 249-282. ISBN 9780128134467.
The rapid development of mobile devices capable of sensing our interaction with the environment has made it possible to assist humans in daily living, for example by helping patients with cognitive impairment or by providing customized food-intake plans for patients with obesity. All of this can be achieved through the passive gathering of detailed records of everyday behaviour, a practice termed lifelogging. Widely adopted smartphones and newly emerging consumer wearable devices such as Google Glass, Baidu Eye and the Narrative Clip are typically embedded with rich sensing capabilities, including a camera, accelerometer, GPS and digital compass, which can capture daily activity unobtrusively. Among these heterogeneous sensor readings, visual media carry the richest semantics for characterizing everyday activities, and visual lifelogging is the class of personal sensing which employs wearable cameras to capture image or video sequences of everyday activities. This chapter focuses on the most recent research methods for understanding visual lifelogs, including semantic annotation of visual concepts, utilization of contextual semantics, recognition of activities and visualization of activities. We also discuss research challenges which indicate potential directions for future research. The chapter is intended to support readers working in assistive living using wearable sensing and computer vision for lifelogging, as well as human behaviour researchers aiming at behavioural analysis based on visual understanding.