DCU's experiments in NTCIR-8 IR4QA task
Min, Jinming and Jiang, Jie and Leveling, Johannes and Jones, Gareth J.F. and Way, Andy (2010) DCU's experiments in NTCIR-8 IR4QA task. In: NTCIR-8 - The 8th NTCIR Workshop, 15-18 June 2010, Tokyo, Japan. ISBN 978-4-86049-053-9
We describe DCU's participation in the NTCIR-8 IR4QA task. This task is a cross-language information retrieval (CLIR) task from English to Simplified Chinese which seeks to provide relevant documents for later cross-language question answering (CLQA) tasks. For the IR4QA task, we submitted five official runs: two monolingual runs and three CLIR runs. For monolingual retrieval we tested two information retrieval models. The results show that the KL-divergence language model method performs better than the Okapi BM25 model for the Simplified Chinese retrieval task, which agrees with our previous CLIR experimental results at NTCIR-5. For the CLIR task, we compare query translation and document translation methods. In the query translation based runs, we tested a method for query expansion from an external resource (QEE) before query translation; our result for this run is slightly lower than the run without QEE. Our results show that the document translation method achieves 68.24% of the MAP performance of our best query translation run. For the document translation method, we found that the main issue is the lack of named entity translation in the documents, since we do not have a suitable parallel corpus as training data for the statistical machine translation system. Our best CLIR run comes from the combination of query translation using Google Translate and the KL-divergence language model retrieval method; it achieves 79.94% MAP relative to our best monolingual run.
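The abstract's best-performing retrieval method is the KL-divergence language model. As an illustrative sketch only (not the paper's implementation): with a maximum-likelihood query model, ranking documents by negative KL divergence reduces to scoring by query log-likelihood under a smoothed document language model. The Dirichlet smoothing parameter `mu`, the tokenization, and the toy collection below are assumptions for illustration, not the settings of the official runs.

```python
import math
from collections import Counter

def kl_divergence_score(query_terms, doc_terms, collection_tf, collection_len, mu=2000.0):
    """Rank score for one document under a KL-divergence retrieval model.

    Uses Dirichlet smoothing of the document model against collection
    statistics; higher score means a better match. Illustrative sketch,
    not the configuration used in the paper's runs.
    """
    doc_tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for term, qtf in Counter(query_terms).items():
        p_coll = collection_tf.get(term, 0) / collection_len
        if p_coll == 0:
            continue  # term unseen in the collection: skipped (simplification)
        # Dirichlet-smoothed document model probability for this term
        p_doc = (doc_tf.get(term, 0) + mu * p_coll) / (doc_len + mu)
        score += qtf * math.log(p_doc)
    return score
```

A document containing a query term receives a higher score than one that only matches through the collection background model, which is the ranking behavior the smoothed language model provides.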