
Thursday, November 10, 2016

Lilt: An Interactive & Adaptive MT-Based Translator Assistant or CAT Tool

Those who read my blog regularly know that I have been a fan of the Lilt MT technology for some time now. In my opinion, it is a significant leap forward in MT capability, especially for use in professional translation environments, even though the company seems to be struggling to communicate the value of the technology to its target customer base. (BTW, I could help with that.) Lilt is, in fact, the pioneer of what they call interactive, adaptive MT. This is the technological step beyond post-editing as we know it today, a step toward something we might call a virtual translator assistant.

So what is Lilt? It is a combination of all of the following:
  • An MT system that learns and improves in real-time (i.e. instantly) as translators provide corrections,
  • An interactive translation tool environment that responds dynamically to real-time translator edits and typing,
  • A translator environment that integrates TM, Terminology, and MT in the most seamless and effective way I have seen thus far.
It is something that can be used both by agencies and by individual translators who are looking to evolve beyond what is possible with desktop tools. It is something that I think is only possible in the cloud, as it draws on data and computing resources that are unlikely to exist on the desktop or simply do not make sense there. Consider the value of a search engine that only searches your desktop versus one that can search both the desktop and the open internet. As Lilt becomes more neural-network- and AI-based, this cloud-centric requirement will only grow. At this point in time (2016-17), I still believe that Adaptive MT is the most efficient and effective MT option available for professional translation use, even more so than Neural MT. I want to assure readers that my enthusiasm for the product is genuine and not linked to any kind of potential business relationship. But Lilt should be aware that others, especially SDL, are close behind, and they need to get their marketing act together quickly or lose the first-mover advantage they have now.

This guest post was written by John DeNero with some assistance from Spence Green. They are the founders of the company, and the post hopefully provides a clearer differentiation of Lilt from the other MT technology alternatives available in the market. Emphasis below is mine. For those who still live in the DIY dream, this technology overview might once and for all convince them that the requirements of leading-edge MT almost certainly close that door. I will also add some additional comments after the guest post.

-----------------



Lilt is an interactive and adaptive computer-aided translation tool that integrates machine translation, translation memories, and termbases into one interface that learns from translators. Using Lilt is an entirely different experience from post-editing machine translations — an experience that our users love, and one that yields substantial productivity gains without compromising quality. The first step toward using this new kind of tool is to understand how interactive and adaptive machine assistance is different from conventional MT, and how these technologies relate to exciting new developments in neural MT and deep learning.

Interactive MT doesn't just translate each segment once and leave the translator to clean up the mess. Instead, each word that the translator types into the Lilt environment is integrated into a new automatic translation suggestion in real time. While text messaging apps autocomplete words, interactive MT autocompletes whole sentences. Interactive MT actually improves translation quality: in conventional MT post-editing, the computer knows what segment will be translated but knows nothing about the phrasing decisions a translator will make. Interactive translations are more accurate because they can observe what the translator has typed so far and update the MT suggestions based on all available information. Even the first few words of a translation provide strong cues about the intended structure of a sentence, and the Lilt system provides real-time support to translators as they work through a translation, usually providing a significant productivity boost.

Lilt and Stanford researchers published the best method yet for making accurate interactive translation suggestions in their 2016 Association for Computational Linguistics paper, Models and Inference for Prefix-Constrained Machine Translation. Often when using interactive MT, correcting the first part of a translation is all that is required; the rest is corrected automatically by the system, which follows the lead of the translator.
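To make the idea concrete, here is a minimal sketch of prefix-constrained decoding in Python. The tiny bigram "model" and its vocabulary are invented for illustration, and greedy search stands in for a real decoder; the point is only the constraint mechanism: the suggestion must reproduce the translator's typed prefix and then continue from it.

    # Toy bigram "model": next-word log-probabilities (invented values).
    BIGRAM_LOGPROBS = {
        "<s>":   {"das": -0.4, "der": -1.2},
        "das":   {"haus": -0.3, "auto": -1.5},
        "der":   {"hund": -0.5},
        "haus":  {"ist": -0.2, "</s>": -1.0},
        "auto":  {"ist": -0.4, "</s>": -0.9},
        "hund":  {"</s>": -0.1},
        "ist":   {"gross": -0.6, "alt": -0.9},
        "gross": {"</s>": -0.1},
        "alt":   {"</s>": -0.1},
    }

    def complete(prefix, max_len=10):
        """Greedily complete a translation that must start with `prefix`."""
        tokens = ["<s>"] + prefix.split()
        score = 0.0
        # Force the decoder through the translator's prefix, scoring it as we go.
        for prev, word in zip(tokens, tokens[1:]):
            score += BIGRAM_LOGPROBS.get(prev, {}).get(word, -10.0)
        # Then continue freely from the last prefix token.
        while len(tokens) < max_len:
            choices = BIGRAM_LOGPROBS.get(tokens[-1], {})
            if not choices:
                break
            word = max(choices, key=choices.get)
            if word == "</s>":
                break
            tokens.append(word)
            score += choices[word]
        return " ".join(tokens[1:]), score

    # With no prefix, the system suggests its best full translation; once the
    # translator types "der", the whole suggestion is rebuilt around it.
    print(complete(""))     # -> ('das haus ist gross', ...)
    print(complete("der"))  # -> ('der hund', ...)

In a real system the continuation would come from beam search over a neural or phrase-based model, but the contract is the same: every keystroke re-anchors the search on the translator's prefix.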

Interactive MT vs Adaptive MT vs Custom MT


While interactive MT improves suggestions within a segment, adaptive MT works across segments. Lilt's adaptive assistance learns automatically from translators in real time as they work, so that errors made in one segment and corrected by the translator are typically not repeated in later segments. Interactive MT focuses on making suggestions for a single segment, updated after each word, and it does not involve multiple users. Adaptive MT learns as a user works through a document or project, updated after each sentence, and adaptation improvements can be shared across a whole team. Aside from the difference in granularity, there is a difference in how these two techniques learn: interactive MT incorporates everything typed, while adaptation only uses confirmed segments (which are more reliable). By using both techniques together, users get the freshest possible suggestions without having to worry that unconfirmed typos will influence future suggestions outside of the current segment.

By contrast, a conventional generic MT system such as Google Translate always returns the same translation for the same segment and often repeats mistakes, which must be corrected each time. Custom MT, built with a platform such as the Microsoft Translator Hub, learns from example translations that are specific to a project or client, and this additional training can improve quality substantially. However, custom MT does not continue to adapt dynamically as translators work; instead it must be retrained periodically when new projects are completed. Lilt's adaptive MT does not require retraining at all because it learns automatically from each segment that is confirmed by a translator.
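The update discipline described above can be sketched in a few lines. This toy engine is my own illustration, not Lilt's implementation: it treats adaptation as a layer of learned corrections over a frozen baseline, where suggestions consult the corrections first and only confirmed segments ever write to them.

    class AdaptiveEngine:
        """Toy adaptive engine: a correction memory updated only on confirmation."""

        def __init__(self, baseline):
            self.baseline = baseline   # segment -> generic MT output (frozen)
            self.corrections = {}      # learned live from confirmed segments

        def suggest(self, segment):
            # Adaptation: prefer what the team has already confirmed.
            return self.corrections.get(segment, self.baseline.get(segment, segment))

        def confirm(self, segment, translation):
            # Only confirmed segments update the engine, so in-progress typos
            # never influence suggestions outside the current segment.
            self.corrections[segment] = translation

    baseline = {"Free cancellation": "Cancelación libre"}
    engine = AdaptiveEngine(baseline)
    print(engine.suggest("Free cancellation"))   # generic: Cancelación libre
    engine.confirm("Free cancellation", "Cancelación gratuita")
    print(engine.suggest("Free cancellation"))   # adapted: Cancelación gratuita

Because there is no retraining step, the "model" is as fresh as the last confirmed segment; a real adaptive engine generalizes each correction to related phrases rather than matching whole segments.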

The Group Impact

When a group of translators collaborates on a large project, each confirmed segment is integrated into the translation engine, so that the suggestions given to all teammates improve each time anyone confirms a segment. Each update takes less than half a second. When using adaptive MT, translators find that suggestions improve as they progress through a document, because the system has learned from how they translated the first part of the document when suggesting translations for the rest. Much like interactive MT, adaptive MT uses all available information to make the best suggestions possible.

Here you see interactive, adaptive MT in action.
  


Machine translation, translation memories, and termbases have conventionally been three isolated sources of information that translators have to merge together manually. Lilt combines all three sources automatically. A translation memory provides exact and partial matches, but is also used to automatically customize machine translation suggestions. A termbase is used to ensure terminological consistency, even within the automatic suggestions generated by the interactive translation engine.

An integrated lexicon panel includes public bilingual dictionaries and project-specific terms, along with a concordance search that automatically merges examples from public corpora with translation memory matches. Lexicon and concordance matches are ranked for relevance using neural network models of word and sentence similarity. These neural models learn automatically from usage to detect which words in a document are inflected or derived forms of terms in a user's termbase, discovering patterns of both inflectional and derivational morphology. Integrated use of all the data relevant to translation not only improves accuracy and efficiency, but actually reduces the amount of configuration required to use the system. Translators simply add all relevant resources to Lilt when starting a project, and all of them are used together to optimize the interface and suggestions during translation, lexicon lookup, and concordance search.
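As a rough picture of the relevance ranking just mentioned, the sketch below orders candidate terms by cosine similarity between embedding vectors. The vectors are invented toy values; a real system would use learned neural representations of words and sentences in context.

    import math

    # Hypothetical toy embeddings; a real system learns these from data.
    EMBEDDINGS = {
        "booking":      [0.90, 0.10, 0.00],
        "reservation":  [0.85, 0.20, 0.05],
        "cancellation": [0.10, 0.90, 0.20],
        "tour":         [0.05, 0.10, 0.95],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm

    def rank_matches(query, candidates):
        """Order lexicon/concordance candidates by similarity to the query."""
        q = EMBEDDINGS[query]
        return sorted(candidates, key=lambda w: cosine(q, EMBEDDINGS[w]), reverse=True)

    print(rank_matches("booking", ["cancellation", "tour", "reservation"]))
    # -> ['reservation', 'cancellation', 'tour']

The same scoring idea extends to morphology: an embedding model that places "cancel", "cancelled", and "cancellation" close together can link inflected forms in a document back to the base term in a user's termbase.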


The Lilt Interface



One of the most exciting new developments in the field of machine translation is neural MT. For the first time, neural systems are able to discover similarities between related words and phrases, and whole sentences are generated coherently instead of being pieced together from small independent fragments. The quality of the best neural translation systems exceeds that of conventional statistical systems by a large margin, often cleaning up agreement and sentence structure errors that have persisted in MT systems for many years. These gains are available even in a conventional post-editing setting.

Quality gains from interactive and adaptive translation are often larger than the gains from switching to neural MT, which is not surprising—interactive and adaptive translation can utilize new information from the translator. However, the most exciting discovery is that the gains from neural MT and interactive MT can be combined. In 2016, Lilt and Stanford researchers showed that using a neural translation system to autocomplete partial translations interactively can be extremely effective—correctly predicting the next word that a translator would type 53-55% of the time in software and news documents translated from English to German. Our research suggests that interactive translation has even more to gain from neural MT than conventional post-editing.
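For readers curious how a figure like 53-55% is obtained, here is a hedged sketch of the evaluation: at every position in a reference translation, ask the system to predict the next word given the prefix, and count exact matches. The predict_next argument is a hypothetical stand-in for a real interactive MT system.

    def next_word_accuracy(references, predict_next):
        """Fraction of positions where the model guesses the next word exactly."""
        hits = total = 0
        for ref in references:
            words = ref.split()
            for i in range(len(words)):
                if predict_next(words[:i]) == words[i]:
                    hits += 1
                total += 1
        return hits / total if total else 0.0

    # A toy stand-in "model" scored against two toy references.
    refs = ["das haus ist gross", "das auto ist alt"]
    print(next_word_accuracy(refs, lambda prefix: "das" if not prefix else "ist"))
    # -> 0.5 (4 of 8 next words guessed correctly)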

The combination of interactive and adaptive machine translation has proven particularly effective when large teams of translators are working under a tight deadline. In May 2016, the European travel portal GetYourGuide contracted e2f, based in California, to localize 1.77 million words from their catalog into 6 languages within a two-week window before summer travel began. More than 100 experienced translators contributed to the effort, using a shared Lilt adaptive MT engine for each language pair. In every language pair, the translators achieved speeds far above the industry average of 335 words per hour, and the project was completed ahead of schedule.

Language Pair        Average Words / Hour
English-Dutch        1053
English-German       888
English-Portuguese   820
English-Spanish      701
English-Italian      608
English-French       464
Overall              731
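A quick back-of-the-envelope check on these figures (my own arithmetic, not numbers from the case study) shows they hang together:

    # 1.77M words at the overall rate of 731 words/hour across ~100 translators.
    total_words = 1_770_000
    overall_rate = 731        # words per hour, from the table above
    translators = 100

    total_hours = total_words / overall_rate      # ~2421 translator-hours
    per_translator = total_hours / translators    # ~24 hours each
    print(round(total_hours), round(per_translator, 1))
    # -> 2421 24.2, i.e. about 2.4 hours per working day over two weeks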

Lilt won the 2016 TAUS Innovation Excellence Award last month for replacing post-editing with interactive and adaptive machine assistance. This approach takes advantage of new discoveries in neural machine translation much more effectively than traditional MT. Moreover, it leaves translators in control of their workflow and provides a new integrated interface designed to make this advanced technology easy to use. We believe that the future is bright for translators who adopt Lilt's new way of combining machine translation, translation memories, and termbases: a future that is more enjoyable, more productive, and more focused on the aspects of translation that only humans can get right.

***

The Authors: Co-founders John DeNero and Spence Green met while working on Google Translate during the summer of 2011. They believed that better machine assistance could make translation more enjoyable for translators and more available for those who seek information. By 2014 a research prototype of an interactive, machine-assisted translation system called Predictive Translation Memory had been built. Lilt was incorporated in March 2015 with the mission of making fast, high-quality translation available to everyone.

 
John DeNero is Chief Scientist. He is also Assistant Professor of Computer Science at UC Berkeley. Before that he spent four years working on Google Translate. He received a Ph.D. in Computer Science from UC Berkeley in 2010. 

 
Spence Green is CEO. He received a Ph.D. in Computer Science from Stanford University in 2014 under the direction of Chris Manning and Jeff Heer.

---------------------------

I think it is worth pointing out that several translators have spoken openly about how dramatically different and more positive the Lilt MT experience is compared to any other MT they have interacted with. Jost Zetzsche has written about Lilt several times in his newsletter, which you can reach via the logo in the right column. He also prepared this video demo for those interested in hearing some specific comments and seeing the system in action.

I also saw this unsolicited comment from a Lilt user that illustrates one translator's experience, and I would suggest that translators explore and add this tool to their overall toolkit, as it is a definite productivity booster when used even semi-skillfully.


 Peace and God Bless America

4 comments:

  1. The approach from Lilt is indeed interesting! However, the productivity reported in the use case is far below what we typically achieve in the Travel & Hospitality vertical... So even if the technology is great, I think we need to take into account not only the technology, but also the process, and the people involved to bring MT to human quality. Everything counts :-)

  2. Certainly the overall process matters, as you say. We will soon be conducting direct experiments that compare productivity, accuracy, and consistency using different translation environments, in order to demonstrate the benefits of Lilt more directly.

  3. I keep trying Lilt, and SDL's version of adaptive MT (in Studio 2017), but the main reason they are just NO GOOD is the following: vanilla Google Translate output (accessed via the API in CAT tools) is always way and way better than the Lilt MT engine and SDL's cloud engines. Period. Make it work with vanilla GT and you might have a winner. Otherwise, I think I'll stick with Déjà Vu X3.

    Replies
    1. Michael, it is quite possible and even likely that Google will start off with a better-quality baseline. Adaptive MT requires active corrective feedback to improve, and theoretically it will improve for your specific content unless you are translating widely varying content across very disparate domains. This adaptation process is key to getting better quality, but it may be that for your very specific kind of content, Google simply does a really good job. You should use whatever works best for you, but I can only suggest that you give the Adaptive MT some time to learn before you dismiss it.
