
DORAS | DCU Research Repository


Measuring acceptability of machine translated enterprise content

Castilho, Sheila (ORCID: 0000-0002-8416-6555) (2016) Measuring acceptability of machine translated enterprise content. PhD thesis, Dublin City University.

This research measures end-user acceptability of machine-translated enterprise content. In cooperation with industry partners, the acceptability of machine-translated, post-edited and human-translated texts, as well as of the source text, was measured using a user-centred translation approach (Suojanen, Koskinen and Tuominen 2015). The source language was English and the target languages were German, Japanese and Simplified Chinese. Although translation quality assessment (TQA) is a key topic in the translation field, academia and industry differ greatly on how to measure quality. While academia is mostly concerned with the theory of translation quality, TQA in industry is mostly performed using arbitrary, one-size-fits-all error typology models. Both academia and industry largely disregard the end user of these translations when assessing translation quality, and so the acceptability of translated and untranslated content goes largely unmeasured. Measuring the acceptability of translated text is important because it allows one to identify the impact the translation might have on the end user, the final reader of the translation. Different stakeholders will have different acceptability thresholds for different languages and content types; some will want high-quality translation, while others may make do with faster turnaround and lower quality, or may even prefer non-translated content over raw MT. Acceptability is defined here in terms of usability, quality and satisfaction. Usability, in turn, is defined as effectiveness and efficiency in a specified context of use (ISO 2002) and is measured via tasks recorded with an eye tracker. Quality is evaluated via a TQA questionnaire answered by professional translators, and the source content is also evaluated via metrics such as readability and syntactic complexity. Satisfaction is measured via three different approaches: a web survey, a post-task questionnaire, and translators' ranking.
By measuring the acceptability of different post-editing levels for three target languages, as well as of the source content, this study aims to understand the different thresholds users may have in their tolerance of translation quality, taking content type and language into consideration. Results show that light post-editing directly and positively influences acceptability for German and Simplified Chinese, more so than for Japanese; moreover, the findings show that different languages have different thresholds for translation quality.
Item Type: Thesis (PhD)
Date of Award: November 2016
Supervisor(s): O'Brien, Sharon
Uncontrolled Keywords: machine translation; usability; acceptability; post-editing
Subjects: Humanities > Translating and interpreting
Humanities > Linguistics
DCU Faculties and Centres: DCU Faculties and Schools > Faculty of Humanities and Social Science
DCU Faculties and Schools > Faculty of Humanities and Social Science > School of Applied Language and Intercultural Studies
Research Institutes and Centres > ADAPT
Use License: This item is licensed under a Creative Commons Attribution-NonCommercial-No Derivative Works 3.0 License.
Funders: Centre for Next Generation Localisation
ID Code: 21342
Deposited On: 16 Nov 2016 10:45 by Sharon O'Brien. Last Modified: 20 Jan 2021 16:53

Full text available as:

PDF (PhD thesis) - Requires a PDF viewer such as GSview, Xpdf or Adobe Acrobat Reader
