
From CAT to HAT? One day in the life of the 2019 Machine Translation Summit.

September 6, 2019


Halfway through my summer holiday in Ireland, on 19 August 2019, I was lucky enough to find myself at the 2019 Machine Translation Summit, held at DCU's Glasnevin Campus, for a day-long Workshop on Human-Aided Translation. Chaired by Professor Andy Way, Deputy Director of the ADAPT Centre for Digital Content Technology at Dublin City University, the MT Summit sought to bring together not only MT researchers and developers but also translators, in one place, to discuss current work and future developments in this fast-growing field of AI. There has traditionally been a disconnect between the developers (scientists) and practitioners (translators) of machine translation, and one aim of this summit was "to bring translators and system developers closer together". Although I had only a one-day taste of this cooperation, it certainly did just that.


The Workshop on Human-Aided Translation consisted of seven talks and a panel discussion. Lucia Specia, of Imperial College London and the University of Sheffield, started proceedings with her talk Quality Estimation and Automatic Post-editing in the Neural Machine Translation Era. Neural machine translation (NMT) is now regarded as the best-performing automated translation technology, given sufficient data to build the translation model, and Lucia discussed the need to make NMT more accurate, exploring two research areas seeking to do this: quality estimation (QE) and automatic post-editing (APE). Markus Freitag, Senior Software Engineer at Google Translate, professed to suffering from jet lag, but there were no signs of this in his fast-moving talk APE at Scale and its Implications on MT Evaluation Biases. All translations, whether human or machine, may be biased by the source language structure, i.e. they are to some extent "translationese", and the aim of Google's APE is to convert this translationese into natural text. Jiayi Wang from Alibaba gave an excellent and enthusiastic talk on Quality Estimation Technology and its Applications in E-Commerce Machine Translation, presenting her work on automatic evaluation of MT output in a global e-commerce environment dealing with millions of queries every day. The final talk before lunch was presented by Fabio Kepler, Senior Research Scientist at Unbabel. In his talk, entitled Quality Estimation in Practice: From Implementation to State-of-the-Art, Fabio gave a concise but detailed presentation of Unbabel's QE system OpenKiwi, developed in consultation with translators and now available to try out as open source!


There were three talks after lunch. Marco Turchi, from Fondazione Bruno Kessler (FBK), presented Quality Estimation in Support of Automatic Post-editing, discussing how APE systems now face a new challenge: improved neural models produce such high-quality machine translations that APE systems may make unnecessary changes, lowering the quality of the original MT output. In response, FBK is looking into combining APE and QE in ways that benefit MT output. Tsz Kin Lam of Heidelberg University then discussed experiments with Siamese networks to assess the quality of machine-translated titles in e-commerce environments like eBay, in his presentation MT Quality Estimation for e-Commerce: Automatically Assessing the Quality of Machine Translated Titles. The final talk of the day was given by Dimitar Shterionov, a post-doctoral researcher in Professor Andy Way's ADAPT team. In his talk Neural Quality Estimation as a Bridge for Human-Computer Translation Symbiosis, Dimitar discussed how QE systems can be improved in terms of quality and performance using pre-training methods such as word2vec, BERT, ELMo and XLNet.


Then came, for me, the most interesting part of the day: the panel discussion, the overarching topic of which was as follows: is the significant improvement in the quality of machine translation resulting in a shift from computer-assisted translation to human-aided translation? Is there a shift from CAT to HAT? To say the discussion was lively is to understate the case by several orders of magnitude! There was broad consensus among the panellists that the situation is complex and that, good as machine translation systems now are, we are still a long way from a reality where machines could replace humans. However, many audience members disagreed, and these dissenting voices surprisingly came from the translation industry itself. Their argument was that translation memory and machine translation systems have been around for many years now and have slowly but surely reduced the translator's workload. From my own experience, a view reiterated at the workshop and echoed anecdotally by colleagues, the work translators now receive is most certainly more difficult and technically challenging, with ever shorter deadlines, and machine translation has played a part in this change. This is not necessarily a bad thing; as translators we welcome complexity and challenge, and as professionals we expect to work to a time limit. But machine translation is a reality, and it is having an impact.


Two very positive things emerged from the workshop. The first was that it was clear from the research presented, the questions asked and the discussions held that developers and translators alike share the goal of making the quality of translation output the very best it can be. The second was that the developers of machine translation systems want, and are inviting, translators to be involved in their research, and we translators need to accept that invitation if we want to be part of the future of translation. To return to Andy Way's Welcome Message to the summit: "the MT Summit is the place where researchers, developers, users and vendors all get together under one roof to discuss the issues of the day". That is a very good place to start.




Photo by Cathal mac an Bheatha on Unsplash
