Deep syntax language models and statistical machine translation
Graham, Yvette and van Genabith, Josef (2010) Deep syntax language models and statistical machine translation. In: SSST-4 - 4th Workshop on Syntax and Structure in Statistical Translation at COLING 2010, 28 August 2010, Beijing, China.
Hierarchical models increase the reordering capabilities of MT systems by introducing non-terminal symbols into phrases that map source language (SL) words/phrases to the correct position in the target language (TL) translation. However, building translations via discontiguous TL phrases increases the difficulty of language modeling, introducing the need for heuristic techniques such as cube pruning (Chiang, 2005). An additional way to aid language modeling in hierarchical systems is to use a language model that models the fluency of a word not via its local context in the string, as in traditional language models, but via the word's deeper syntactic context. In this paper, we explore the potential of deep syntax language models, providing an interesting comparison with the traditional string-based language model. We include an experimental evaluation that compares the two kinds of models independently of any MT system, to investigate the potential of integrating a deep syntax language model into hierarchical SMT systems.
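The core idea of conditioning a word on its syntactic head rather than its string neighbours can be sketched as follows. This is a minimal toy illustration, not the authors' model: the corpus, the dependency annotations, the add-alpha smoothing, and all function and variable names are invented here for exposition.

```python
import math
from collections import Counter

# Toy corpus of (words, heads) pairs, where heads[i] is the index of word i's
# syntactic head, or -1 for the root. All data is invented for illustration.
corpus = [
    (["she", "reads", "books"], [1, -1, 1]),
    (["he", "reads", "novels"], [1, -1, 1]),
]

# Count (head, dependent) pairs and head occurrences.
pair_counts = Counter()
head_counts = Counter()
for words, heads in corpus:
    for i, h in enumerate(heads):
        head = "ROOT" if h == -1 else words[h]
        pair_counts[(head, words[i])] += 1
        head_counts[head] += 1

def deep_syntax_logprob(words, heads, alpha=1.0, vocab_size=100):
    """Log-probability of a sentence in which each word is conditioned on
    its syntactic head (add-alpha smoothed), not on adjacent string words."""
    lp = 0.0
    for i, h in enumerate(heads):
        head = "ROOT" if h == -1 else words[h]
        num = pair_counts[(head, words[i])] + alpha
        den = head_counts[head] + alpha * vocab_size
        lp += math.log(num / den)
    return lp

# A sentence whose head-dependent pairs were seen in training scores higher
# than one built from unseen pairs, regardless of surface word order.
seen = deep_syntax_logprob(["she", "reads", "novels"], [1, -1, 1])
unseen = deep_syntax_logprob(["she", "eats", "books"], [1, -1, 1])
print(seen > unseen)
```

Note that, unlike a string n-gram model, this score is invariant to the surface order of the TL words given a fixed dependency structure, which is what makes such a model attractive when hierarchical rules produce discontiguous TL phrases.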