Recollecting details from lifelog data requires a higher level of granularity and reasoning than a conventional lifelog retrieval task. Investigating the task of Question Answering (QA) on lifelog data could aid human memory recollection, as well as improve traditional lifelog retrieval systems. However, there is not yet a standardised benchmark dataset for lifelog-based QA. To provide a first dataset and baseline benchmark for QA on lifelog data, we present a novel dataset, LLQA, an annotated 85-day lifelog collection with over 15,000 multiple-choice questions. We also provide several baselines for the evaluation of future work. The results show that lifelog QA is a challenging task that warrants further exploration. The dataset is publicly available at https://github.com/allie-tran/LLQA.
This work was supported by Science Foundation Ireland under grant agreement 13/RC/2106_P2 and by the Centre for Research Training in Digitally-Enhanced Reality (d-real) under Grant No. 18/CRT/6224.
ID Code: 27300
Deposited On: 05 Jul 2022 13:51 by Ly Duyen Tran
Last Modified: 05 Jul 2022 14:06