Consistency is a key requirement of high-quality translation. It is especially important to adhere to pre-approved terminology and adapt to corrected translations in domain-specific projects. Machine translation (MT)
has achieved significant progress in the
area of domain adaptation. However,
real-time adaptation remains challenging.
Large-scale language models (LLMs) have
recently shown interesting capabilities of
in-context learning, where they learn to
replicate certain input-output text generation
patterns, without further fine-tuning. By
feeding an LLM at inference time with a
prompt that consists of a list of translation
pairs, it can then simulate the domain and
style characteristics. This work aims to
investigate how we can utilize in-context
learning to improve real-time adaptive MT.
Our extensive experiments show promising
results at translation time. For example,
LLMs can adapt to a set of in-domain
sentence pairs and/or terminology while
translating a new sentence. We observe
that the translation quality with few-shot in-context learning can surpass that of strong
encoder-decoder MT systems, especially
for high-resource languages. Moreover,
we investigate whether we can combine
MT from strong encoder-decoder models
with fuzzy matches, which can further
improve translation quality, especially for
less supported languages. We conduct our
experiments across five diverse language
pairs, namely English-to-Arabic (EN-AR),
English-to-Chinese (EN-ZH), English-to-French (EN-FR), English-to-Kinyarwanda
(EN-RW), and English-to-Spanish (EN-ES).
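The prompting setup described in the abstract can be pictured with a minimal sketch (not the authors' released code): retrieve the most similar pre-approved translation pairs from an in-domain translation memory, assemble them into a few-shot prompt, and optionally append a draft from a strong encoder-decoder MT system for the LLM to adapt. The translation-memory entries, the mt_draft argument, and call_llm are illustrative placeholders, and simple string similarity stands in for fuzzy matching.

# Minimal sketch of real-time adaptive MT via in-context learning.
from difflib import SequenceMatcher

translation_memory = [  # pre-approved in-domain (source, target) pairs (illustrative)
    ("The patient reported mild dizziness.", "Le patient a signalé de légers vertiges."),
    ("Take one tablet twice daily.", "Prendre un comprimé deux fois par jour."),
]

def top_fuzzy_matches(source, memory, k=2):
    """Rank memory entries by surface similarity to the new source sentence."""
    scored = [(SequenceMatcher(None, source, src).ratio(), src, tgt)
              for src, tgt in memory]
    return sorted(scored, reverse=True)[:k]

def build_prompt(source, memory, mt_draft=None):
    """Assemble a few-shot prompt from fuzzy matches and an optional MT draft."""
    lines = []
    for _, src, tgt in top_fuzzy_matches(source, memory):
        lines.append(f"English: {src}\nFrench: {tgt}\n")
    if mt_draft:  # draft from an encoder-decoder MT system, to be adapted by the LLM
        lines.append(f"English: {source}\nDraft French: {mt_draft}\nFrench:")
    else:
        lines.append(f"English: {source}\nFrench:")
    return "\n".join(lines)

if __name__ == "__main__":
    prompt = build_prompt("Take two tablets once daily.", translation_memory)
    print(prompt)
    # translation = call_llm(prompt)  # placeholder for whichever LLM is used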
Proceedings of the 24th Annual Conference of the European Association for Machine Translation (EAMT 2023).
Publisher: European Association for Machine Translation (EAMT)
Funding: Science Foundation Ireland (SFI) Centre for Research Training in Digitally-Enhanced Reality (d-real) under Grant No. 18/CRT/6224, Science Foundation Ireland (SFI) Grant No. 13/RC/2106 P2, and Microsoft Research