Meta-evaluation of machine translation evaluation methods
Han, Lifeng (ORCID: 0000-0002-3221-2185)
(2021)
Meta-evaluation of machine translation evaluation methods.
In: Workshop on Informetric and Scientometric Research (SIG-MET), 23–24 Oct 2021, Salt Lake City/Online.
Starting from the 1950s, Machine Translation (MT) has been approached with different scientific solutions, ranging from rule-based methods, example-based models, and statistical models (SMT) to hybrid models and, in recent years, neural models (NMT). While NMT has achieved a huge quality improvement over conventional methodologies, by taking advantage of the huge amounts of parallel corpora available from the internet and of recently developed supercomputing power at an acceptable cost, it still struggles to achieve real human parity in many domains and most language pairs, if not all of them. Along the long road of MT research and development, quality evaluation metrics have played a very important role in MT advancement and evolution. In this tutorial, we overview traditional human judgement criteria, automatic evaluation metrics, unsupervised quality estimation models, as well as the meta-evaluation of these evaluation methods.
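The automatic evaluation metrics surveyed in the tutorial are typically n-gram overlap scores between a system output and a human reference. As a minimal sketch only (a simplified, single-reference BLEU-style score with naive whitespace tokenization and ad hoc smoothing; the function names are hypothetical and this is not the tutorial's own implementation), such a metric can be written as:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """Return a Counter over all n-grams (as tuples) in the token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def simple_bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU-style sketch: geometric mean of clipped
    n-gram precisions multiplied by a brevity penalty.

    NOTE: a toy illustration, not a replacement for standard toolkits.
    """
    cand, ref = candidate.split(), reference.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        ref_ngrams = ngrams(ref, n)
        # Clip each candidate n-gram count by its count in the reference,
        # so repeating a correct word cannot inflate the score.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        # Crude smoothing keeps the geometric mean defined when overlap is 0.
        log_prec_sum += math.log(max(overlap, 1e-9) / total)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec_sum / max_n)
```

A perfect match scores 1.0, while shorter or diverging candidates score lower; production work would instead use an established implementation with standardized tokenization.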
This item is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.
Funders:
ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.
ID Code:
26280
Deposited On:
22 Oct 2021 09:17 by Lifeng Han. Last Modified: 30 Jan 2023 12:38