
Monday, November 20, 2017

Post Editing - What does it REALLY mean?

While many people may assume that all post-editing is the same, there are definitely variations that are worth a closer look. This is a guest post by Mats Dannewitz Linder that digs into three very specific PEMT scenarios that a translator might view quite differently. Mats writes from a translator's perspective and, as the author of the Trados Studio Manual, brings, I think, a greater sensitivity to the issues that matter to translators.

From my perspective as a technology guy, this post is quite enlightening, as it provides real substance and insight into why there have been communication difficulties between MT developers and translator-editors. PEMT can cover quite a range of different editing experiences, as Mats describes here, and if we factor in the changes that Adaptive MT brings, we have even more variations on the final PEMT user experience.

I think a case can be made for both of the major PEMT modes I see from my vantage point: the batch chunk mode and the interactive TU-inside-the-CAT mode. Batch approaches can make it easier to apply multiple corrections in a single search-and-replace action, but interactive CAT interfaces may be preferred by many editors who have highly developed skills in a favorite CAT tool. Adaptive MT, I think, is a blend of both, and thus I continue to feel that it is especially well suited to any PEMT scenario described in this post. The kind of linguistic work done for very large data sets is quite different and focuses on correcting high-frequency word patterns in bulk data, as described in this post: The Evolution in Corpus Analysis Tools. That is not PEMT as we describe it here, but it is linguistic work that would be considered high value for eCommerce, customer support and service content, and the kind of customer review data that has become the mainstay of MT implementations today.

For those in the US, I wish you a Happy Thanksgiving holiday this week, and I hope that you enjoy your family time. I have pointed out previously, however, that for the indigenous people of the Americas, Thanksgiving is hardly a reason to celebrate. “Thanksgiving” has become a time of mourning for many Native People. Hopefully this changes, but it can only change when at least a few recognize the historical reality and strive to alter it in small and sincere ways.

The emphasis and images below are all my doing so please do not blame Mats for them.

==========


I have read – and also listened to – many articles and presentations and even dissertations on post-editing of machine translation (PEMT), and strangely, very few of them have made a clear distinction between the editing of a complete, pre-translated document and the editing of machine-translated segments during interactive translation in a CAT tool. In fact, in many of them, it seems as if the authors are primarily thinking of the latter. Furthermore, most descriptions or definitions of “post-editing” do not even seem to take into account any such distinction. All the more reason, then, to welcome the following definition in ISO 17100, Translation services – Requirements for translation services:

      post-edit

      edit and correct machine translation output

Note: This definition means that the post-editor will edit output automatically generated by a machine translation engine. It does not refer to a situation where a translator sees and uses a suggestion from a machine translation engine within a CAT (computer-aided translation) tool.

And yet… in ISO 18587, Translation services – Post-editing of machine translation output – Requirements, we are once again back in an uncertain state: the above note has been removed, and there are no clues as to whether the standard makes any distinction between the two ways of producing the target text to be edited.


This may be reasonable in view of the fact that the requirements on the “post-editor” are arguably the same in both cases. Still, that does not mean that the situation and conditions for the translator are the same, nor that the client – in most cases a translation agency, or language service provider (LSP) – sees them as the same. In fact, when I ask translation agencies whether they see the work done during interactive translation using MT as post-editing, they tell me that it is not.

But why should this matter, you may ask. And it really may not, as witnessed by the point of view taken by the authors of ISO 18587; that is, it may not matter to the quality of the work performed or the results achieved. But it matters a great deal to the translator doing the work. Basically, there are three possible job scenarios:
  1. Scenario A: The job consists of editing (“post-editing”) a complete document which has been machine-translated; the source document is attached. The editor (usually an experienced translator) can reasonably assess the quality of the translation and make an offer based on that assessment, which includes the time s/he believes the job will take, including any necessary adaptation of the source and target texts for handling in a CAT tool.
  2. Scenario B: The job is very much like a normal translation in a CAT tool, except that in addition to, or instead of, an accompanying TM, the translator is assigned an MT engine by the client (normally a translation agency). Usually, a pre-analysis showing the possible MT (and TM) matches is also provided. The translator is furthermore told that the compensation will be based on a post-analysis of the edited file and will depend on how much use has been made of the MT (and, as the case may be, the TM) suggestions. Still, it is not possible for the translator to assess either the time required or the final payment. Also, s/he does not know how the post-analysis is made, so the final compensation will be based on trust.
  3. Scenario C: The job is completely like a normal translation in a CAT tool, and the compensation is based on the translator’s offer (word price or package price); a TM and a customary TM match analysis may be involved (with the common adjustment of segment prices). However, the translator can also – of his or her own accord – use MT; depending on the need for confidentiality, it may be an in-house engine using only the translator’s own TMs, online engines with confidentiality guaranteed, or less (but still reasonably) confidential online engines. Whatever the case, the translator stands to save some time thanks to the MT resources without having to lower his or her pricing.
In addition to this, there are differences between scenarios A and B in how the work is done. For instance, in A you can use Find & Replace to make changes in all target segments; not so in B (unless you start by pre-translating the whole text using MT), but there you may have some assistance from various other functions offered by the CAT tool, and also from regular expressions. And if it’s a big job, it might be worthwhile, in scenario A, to create a TM based on the texts and then redo the translation using that TM plus any suitable CAT tool features (and regex).
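To make the batch idea concrete, here is a minimal sketch in Python of the kind of one-pass Find & Replace a post-editor might run over all target segments in scenario A. The correction rules themselves are invented purely for illustration; a real editor would build them from recurring errors observed in the MT output.

```python
import re

# Hypothetical correction rules for a one-pass batch edit over a fully
# machine-translated document (scenario A). Each rule is a
# (compiled regex, replacement) pair applied to every target segment.
RULES = [
    (re.compile(r"\bcolor\b"), "colour"),             # enforce target-locale spelling
    (re.compile(r"\s+([,.;:])"), r"\1"),              # drop stray space before punctuation
    (re.compile(r"\bMT engine\b"), "machine translation engine"),
]

def batch_postedit(segments):
    """Apply every rule to every target segment, like one global Find & Replace."""
    edited = []
    for seg in segments:
        for pattern, repl in RULES:
            seg = pattern.sub(repl, seg)
        edited.append(seg)
    return edited
```

Because the whole target text is available up front, one rule fixes a recurring error everywhere at once – exactly the advantage scenario A has over segment-by-segment editing in scenario B.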

Theoretically possibly, but practically not

There is also the difference between “light” and “full” post-editing: briefly, the former means that the resulting text is comprehensible and accurate, but the editor need not – in fact, should not – strive for a much “better” text than that, and should use as much of the raw MT version as possible. The purpose is to produce a reasonably adequate text with relatively little effort. The latter means that the result should be of “human” translation quality. (Interestingly, though, there are conflicting views on this: some sources say that stylistic perfection is not expected and that clients actually do not expect the result to be comparable to “human” translation.) Of course these categories are only the end-points on a continuous scale; it is difficult to objectively test whether a PEMT text fulfils the criteria of one or the other (is the light version really not above the target level? is the full version really up to the requirements?), even if such criteria are defined in ISO 18587 (and elsewhere).

Furthermore, jobs involving “light-edit” quality are likely to be avoided by most translators

Source: Common Sense Advisory

These categories mainly come into play in scenario A; I don’t believe any translation agency will be asking for anything but “full” quality in scenario B. Furthermore, jobs involving “light” quality are likely to be avoided by most translators. Not only does such work go against the grain of everything a translator finds joy in doing, i.e. the best job possible; experience also shows that the many decisions about which changes are necessary and which are not often take so much time that the total effort of “light” quality editing is not much less than that of “full” quality.

Furthermore, there are some interesting research results as to the efforts involved, insights which may be of help to the would-be editor. It seems that editing medium-quality MT (in all scenarios) takes more effort than editing poor MT: deciding what to keep is cognitively more demanding than simply discarding and rewriting the text. Also, the effort needed to detect an error and decide how to correct it may be greater than that of the rewriting itself, and reordering words and correcting mistranslated words take the longest time of all. Furthermore, it seems that post-editors differ more in terms of actual PE time than in the number of edits they make. Interestingly, it also seems that translators leave more errors in TM-matched segments than in MT-matched ones – and the mistakes are of different kinds.

These facts, plus the fact that MT quality today is taking great steps forward (not least thanks to the fast development of neural MT, even taking into account the hype factor), are likely to speed up the current trend, which according to Arle Lommel, senior analyst at CSA Research and an expert in the field, can be described thus:
"A major shift right now is that post-editing is being replaced by “augmented translation.” In this view, language professionals don't correct MT, but instead, use it as a resource alongside TM and terminology. This means that buyers will increasingly just look for translation, rather than distinguishing between machine and human translation. They will just buy “translation” and the expectation will be that MT will be used if it makes sense. The MT component of this approach is already visible in tools from Lilt, SDL, and others, but we're still in the early days of this change."

In addition, this will probably mean that we can do away with the “post-editing” misnomer – editing is editing, regardless of whether the suggestion presented in the CAT tool interface comes from a TM or an MT engine. Therefore, the term “post-editing” should be reserved only for the very specific case in scenario A; otherwise, the concept will be meaningless. This view is taken in, for instance, the contributions by a post-editor educator and an experienced post-editor in the recently published book Machine Translation – What Language Professionals Need to Know (edited by Jörg Porsiel and published by BDÜ Fachverlag).

Thus it seems that eventually we will be left with mainly scenarios B and C – which leaves translators with the matter of how to come to grips with B. This is a new situation, and it will likely take time and discussion to arrive at a solution (or solutions) palatable to everyone involved. Meanwhile, we translators should aim to make the best possible use of scenario C. MT is here and will not go away, even if some people wish it would.


-------------



Mats Dannewitz Linder has been a freelance translator, writer and editor for the last 40 years alongside other occupations, IT standardization among others. He has degrees in computer science and languages and is currently studying economics and political science. He is the author of the acclaimed Trados Studio Manual and for the last few years has been studying machine translation from the translator’s point of view, an endeavour which has resulted in several articles for the Swedish Association of Translators as well as an overview of Trados Studio apps/plugins for machine translation. He is self-employed at Nattskift Konsult.

Thursday, November 16, 2017

How Adaptive MT turns Post-Editing Janitors into Cultural Consultants

At the outset of this year, I felt that Adaptive MT technology would rapidly establish itself as a superior implementation of MT technology for professional and translator use, especially in those scenarios where extensive post-editing is a serious requirement. However, it has been somewhat overshadowed by all the marketing buzz and hype that floats around Neural MT's actual capabilities. Were I a translator, I would at least have experimented with Adaptive MT, even if I did not use it every day. For those who do the same type of (focused domain) translation work on a regular basis, I think the benefits are probably much greater. Jost Zetzsche has also written favorably about his experiences with Adaptive MT in his newsletter.

We have two very viable and usable Adaptive MT solutions available in the market that I have previously written about:

Lilt: An Interactive & Adaptive MT Based Translator Assistant or CAT Tool

and

A Closer Look at SDL's Adaptive MT Technology

 

I am told that MMT also offers a solution, but my efforts to gather more information about the product have not met with success, and I am loath to suggest that anybody seriously look at something I have little knowledge of. Given my unfortunate experience with MT development that never lived up to its promises, I think it is wiser to focus on clearly established and validated products that have already been examined by many.

We are now reaching the point where Neural MT and Adaptive MT come together: Lilt recently announced their Adaptive Neural MT, and I am also aware that SDL is exploring this combination and has beta versions of Adaptive Neural MT running as well.

The Lilt announcement stated:
"In a blind comparison study conducted by Zendesk, a Lilt customer, reviewers were asked to choose between Lilt’s new adaptive NMT translations and Lilt’s previous adaptive machine translation (MT) system. They chose NMT to be of superior or equal quality 71% of the time."
From all that we know about these technologies, it seems that Adaptive Neural MT should become a preferred technology for the "localization" content that receives a lot of post-editing attention. It is, however, not clear whether this approach makes sense for every type of content and MT use scenario, or whether custom NMT models may make more sense in some cases.

This is a guest post by Greg Rosner who assures me that he believes that human skills of authenticity, idea generation and empathy will only grow more important, even as we add more and more technology to our daily and professional lives.  

We should remember that as recently as 1980, official business correspondence was produced by typist pools, usually groups of women working on (IBM Selectric) typewriters who also knew something called shorthand. Often, these women were called secretaries. When word processor systems from a company called Wang reduced the need for these kinds of workers, a trend exacerbated by PC word processing software, many of them evolved into new roles. Secretaries became executive assistants who often have Office suite expertise and thus perform much more complex and, hopefully, more interesting work. Perhaps we will see similar patterns with translation, where translators will need to pay less attention to handling file format transformations and developing arcane TM software expertise, and will focus instead on real linguistic issue resolution and on developing more strategic approaches to language translation for ever-growing content volumes that can improve customer experience.

====== 

I saw the phrase “linguistic janitorial work” in this Deloitte whitepaper on “AI-augmented government, using cognitive technologies to redesign public sector work”, used to describe the drudgery of translation work that so many translators are required to do today through Post-editing of Machine Translation. And then it hit me what's really going on.

The sad reality of the past several years is that many professional linguists, people with decades of particular industry experience, expertise in professional translation, and degrees in writing, have had their jobs reduced to sentence-by-sentence clean-up of the translations that flood out of Google Translate or other Machine Translation (MT) systems.



The Deloitte whitepaper takes the translator's job as an example of how AI will help automate tasks through different approaches: relieving work, splitting up work, replacing work, and augmenting work.

THE FOUR APPROACHES APPLIED TO TRANSLATION

"...A relieve approach might involve automating lower-value, uninteresting work and reassigning professional translators to more challenging material with higher quality standards, such as marketing copy.

To split up, machine translation might be used to perform much of the work—imperfectly, given the current state of machine translation—after which professional translators would edit the resulting text, a process called post-editing. Many professional translators, however, consider this “linguistic janitorial work,” believing it devalues their skills.

With the replace approach, the entire job a translator used to do, such as translating technical manuals, is eliminated, along with the translator’s position.

And finally, in the augment approach, translators use automated translation tools to ease some of their tasks, such as suggesting several options for a phrase, but remain free to make choices. This increases productivity and quality while leaving the translator in control of the creative process and responsible for aesthetic judgments.”

Many translators hate translation technology because it has taken the enormous cultural understanding, language knowledge and industry expertise that they can offer organizations wanting to connect with global customers, and reduced their role to that of grammarians.



HOW IS ADAPTIVE MACHINE TRANSLATION DIFFERENT FROM STATISTICAL OR NEURAL MACHINE TRANSLATION?

 

Post-editing whatever comes out of the machine has been a process in use since the 1960s, when professional linguists would clean up poor translations output by the system. Sadly, this is still most of what happens today, in spite of the Adaptive systems available in the market. But more on why this might be in my next blog.

The biggest problem with the job of post-editing machine translation is having to make the same corrections again and again, since there is no feedback mechanism when translators make a change. This is true of the output of every Machine Translation system today, including Google Translate and Microsoft Translator. Training MT engines for a specific domain is time-consuming and costly, so it typically happens only once or twice a year. The effort results in a static system that will inevitably need to be trained again to create yet another static system.

Adaptive Machine Translation is a new category of AI software which is learning all the time. Training happens as the translator works, so there is never a separate re-training step. This side-by-side translation activity is poised to be the biggest revolution in the language translation industry since Translation Memory (statistical sentence matching) was introduced in the 1980s.
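As a rough illustration of that feedback loop, the toy sketch below caches a translator's confirmed edits and reuses them for later identical segments. This is only a conceptual stand-in: real adaptive systems such as Lilt or SDL's Adaptive MT incrementally update the underlying translation model itself, not a simple lookup table.

```python
class ToyAdaptiveMT:
    """Toy illustration of the adaptive feedback loop: the 'engine' is a
    fixed baseline lookup, but every confirmed correction is fed back
    immediately and reused for subsequent segments in the session."""

    def __init__(self, baseline):
        self.baseline = baseline   # static MT suggestions, source -> target
        self.corrections = {}      # feedback learned while the translator works

    def suggest(self, source):
        # Corrections made earlier in the session take priority over baseline MT.
        if source in self.corrections:
            return self.corrections[source]
        return self.baseline.get(source, source)

    def confirm(self, source, edited_target):
        # The translator's confirmed edit becomes training signal for later segments.
        self.corrections[source] = edited_target
```

The point of the sketch is the contrast with a static engine: here the second occurrence of a segment already reflects the translator's first correction, so the same fix never has to be made twice.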



(Example of Lilt Adaptive Machine Translation interface working in collaboration with the translator sentence by sentence.)

HOW DOES ADAPTIVE MACHINE TRANSLATION INCREASE THE VALUE OF A PROFESSIONAL TRANSLATOR?

 

There is an enormous amount of untapped value that professional translators can bring to an organization working in an Adaptive Machine Translation model rather than a post-editing model. Given that they are native linguists, familiar with the country and customs of the target market, there is a lot of human intelligence and understanding ready to be tapped right in the localization process. In addition, over time their familiarity with a product or service will make them a much more valuable asset for localizing content than a mere in-language grammarian.

As in other fields, AI will help remove those tasks from our jobs that can be replicated or made more efficient. It is sad that the mode of translation technology we have been working with for so long has put professional translators in the position of cleaning up the mess a machine makes. It seems it should be the other way around (cf. Grammarly). I'm optimistic that AI will help us become better translators, enabling us to spend more time being creative, to have more connected relationships, and to become more of what it means to be human.
“Chess grand master Garry Kasparov pioneered the concept of man-plus-machine matches, in which AI augments human chess players rather than competes against them. “Centaur,” which is the human/AI cyborg that Kasparov advocated, will listen to the moves suggested by the AI but will occasionally override them - much the way we use the GPS. Today the best chess player alive is a centaur. It goes by the name of Intagrand, a team of several humans and several different chess programs. AI can help humans become better chess players, better pilots, better doctors, better judges, better teachers.”

========




Greg started in the translation business in 1995, counting sentences to be translated with a chrome hand-tally counter and with a dream to help business go global. Since then, he's worked as a language solution advisor for the Global 5000 clients of Berlitz, LanguageLine, SDL and Lilt.

Greg Rosner



Wednesday, November 15, 2017

BabelNet - A Next Generation Dictionary & Language Research Tool

This is a guest post by Roberto Navigli of BabelNet, a relatively new "big language data" initiative that is currently a lexical-semantic research and analysis tool, which can do disambiguation and has been characterized by several experts as a next-generation dictionary. It is a tool where "concepts" are linked to the words used to express them. BabelNet can also function as a semantics-savvy, disambiguation-capable MT tool. The possibilities for use are still being explored and could expand as grammar-related big data is linked to this foundation. As Roberto says: "We are using the income from our current customers to enrich BabelNet with new lexical-semantic coverage, including translations and definitions. In terms of algorithms, the next step is multilingual semantic parsing, which means moving from associating meanings with words or multiword expressions to associating meanings with entire sentences in arbitrary languages. This new step is currently funded by the European Research Council (ERC)." The Babelscape startup already has several customers, among them Lexis Nexis, Monrif (a national Italian newspaper publisher), XTM (computer-assisted translation), and several European and national government agencies.

While the initial intent of the project was broader than being a next-generation dictionary, attention and interest from the Oxford UP have steered this initiative more in this direction. 

I expect we will see many new kinds of language research and analysis tools become available in the near future, as we begin to realize that the masses of linguistic data we have access to can be used for many different linguistically focused projects and purposes. The examples presented in the article below are interesting, and the Babelscape tools referenced here are easy to access and experiment with. I would imagine that these kinds of tools and built-in capabilities will be an essential element of next-generation translation tools, where this kind of super-dictionary would be combined and connected with MT, translation memory, grammar checkers and other linguistic tools that can be leveraged for production translation work.



=========


BabelNet: a driver towards a society without language barriers?


In 2014 the Crimean war broke out, and it is still going on, but no national media are talking about it anymore. Rachel has now been trying for an hour to find information about the current situation, but she can only find articles written in Cyrillic that she is not able to understand. She is about to give up when her sister says: “Have you tried BabelNet and its related technology? It is the best way to understand a text written in a language that you do not know!” So she tries it and gets the information from the article.

The widest multilingual semantic network


But what is BabelNet and how could it help Rachel in her research? 

We are talking about the largest multilingual semantic network and encyclopedic dictionary, created by Roberto Navigli, founder and CTO of Babelscape and full professor in the Department of Computer Science at the Sapienza University of Rome. BabelNet was born as a merger of two different resources, WordNet and Wikipedia. However, what makes it special is not the specific resources used, but how they interconnect: it is not the first system to exploit Wikipedia or WordNet, but it is the first to merge them, taking encyclopedic entries from Wikipedia and lexicographic entries from WordNet. Thus BabelNet is a combination of resources that people usually access separately.

Furthermore, one of the main features of BabelNet is its versatility, since its knowledge makes it possible to design applications that analyze text in multiple languages and extract various types of information. For example, Babelfy, a concept and entity extraction system based on BabelNet, is able to spot entities and extract terms and their meanings from the sentences in a text (an article, a tweet, or any other type of phrase), and as a result Rachel is able to understand what the article is talking about. She realizes, however, that Babelfy is not a translator but a tool to identify concepts and entities within text and get their meanings in different languages: when Rachel uses it, the network spots the entities in the article, finds the multiple definitions of each word, and matches their meanings with an image and their translations in other languages, so that she can get at the content of the text. In addition, Babelfy shows the key concepts related to any entity.
               
Let me show you two examples of how Babelfy works. First, look at the following statement. “Lebron and Kyrie have played together in Cleveland”.


In this case Babelfy has to disambiguate a text written in English and explain it in the same language: its task is to recognize concepts (highlighted in green) and named entities (highlighted in yellow) and to match the proper meaning to every concept according to the sentence; finally, it provides an information sheet, based on BabelNet’s knowledge, for every entity and concept. Thus, in the example above, Babelfy works first as a disambiguator, able to understand that “Cleveland” means the basketball team of the city and not the city itself, and then as an encyclopedia, providing information sheets about the various entities and concepts.

The second example shows how Babelfy faces Rachel’s problem. We have a text written in Spanish. Babelfy recognizes the concepts (es, abogado, político, mexicano) and the named entity (Nieto) and provides the information sheets in the selected language (English). Babelfy can, therefore, help you understand a text written in a language you do not speak.
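For readers who want to experiment programmatically, Babelfy also exposes an HTTP interface. The sketch below shows how such a call, and the extraction of annotated text fragments from the response, might look. The endpoint, parameter names, and response fields are my assumptions about the public Babelfy API, so verify them against the official documentation (and obtain your own API key) before relying on them.

```python
import json
from urllib import parse, request

def babelfy(text, lang="EN", key="YOUR_API_KEY"):
    """Send text to the Babelfy disambiguation service (assumed endpoint)."""
    params = parse.urlencode({"text": text, "lang": lang, "key": key})
    with request.urlopen("https://babelfy.io/v1/disambiguate?" + params) as resp:
        return json.load(resp)

def fragments(text, annotations):
    """Map each annotation back to the surface span it disambiguates.
    Assumes each annotation carries a charFragment with inclusive
    start/end offsets and a babelSynsetID, per the Babelfy response format."""
    out = []
    for ann in annotations:
        cf = ann["charFragment"]
        out.append((text[cf["start"]:cf["end"] + 1], ann["babelSynsetID"]))
    return out
```

Each returned pair links a span such as “Cleveland” to a BabelNet synset ID, which can then be looked up in BabelNet for definitions, images and translations in the selected language.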



This can be repeated in hundreds of languages: no one can guarantee linguistic coverage as wide as that of Navigli’s company – currently about 271 languages, including Arabic, Latin, Creole, and Cherokee. That is why BabelNet won the META Prize in 2015, with the jury’s motivation: “for groundbreaking work in overcoming language barriers through a multilingual lexicalized semantic network and ontology making use of heterogeneous data sources. The resulting encyclopedic dictionary provides concepts and named entities lexicalized in many languages, enriched with semantic relations”.


Roberto Navigli awarded the META Prize  (Photo of the META-NET 2015 prize ceremony in Riga, Latvia).


Make your life better: have fun with knowledge, get insights for your business


But we have to be practical. We know what this network does, but can it improve our lives? The answer is “of course!”, whether you are a computer scientist or just a user. If you are a user, you can follow BabelNet’s slogan, “Search, translate, learn!”, and enjoy your time exploring the network and dictionary. People can have fun discovering the interconnections among words, playing with the knowledge. But BabelNet is not just about this: in her article “Redefining the modern dictionary”, Katy Steinmetz, a journalist at Time Magazine, states that BabelNet is about to revolutionize current dictionaries and take them to the next level. According to Steinmetz, the merit of BabelNet is “going far beyond the ‘what’s that word mean’ use case”, because the multilingual network is organized by the meanings of words rather than by their spelling and the unhelpful alphabetical order of print dictionaries, and in addition it offers wider language coverage and an illustration for every term. Why use a common dictionary when you can have one with every entry matched to a picture and definitions in multiple languages? Thus BabelNet is a pioneer at the turning point from dictionaries to a semantic network structure with labeled relations, pictures, and multilingual entries, and it makes gaining knowledge and information easier for users.

Computer scientists, on the other hand, can exploit BabelNet to disambiguate a written text in one of the hundreds of covered languages. For example, BabelNet can be used to build a term extractor able to analyze tweets or any social media chat about the products of a company and spot the entities with the matched picture and concepts. In this way, the marketing manager can understand what a text is talking about regardless of language and can get insights to improve the business activities.


A revolution in progress


Even though its quality is already very high, the current BabelNet should be considered “a starting point” for much richer versions of the multilingual network to come, because new lexical knowledge is continuously added through daily updates (for example, when a new head of state is elected, this fact will be integrated into BabelNet as soon as Wikipedia is updated). The focus on upgrading the technology and the linguistic level comes from the background of Roberto Navigli (winner of the 2017 Prominent Paper Award from Artificial Intelligence, the most prestigious journal in the field of AI), who has put together a motivated operational team.

After the initial combination of Wikipedia and WordNet, new and different resources (Open Multilingual WordNet, Wikidata, Wiktionary, OmegaWiki, ItalWordNet, Open Dutch WordNet, FrameNet, Wikiquote, VerbNet, Microsoft Terminology, GeoNames, WoNeF, ImageNet) have been added in subsequent versions in order to provide more synonyms and meanings and to increase the available knowledge. The BabelNet team is not going to stop innovating, so who knows what other uses BabelNet could offer in the future: the revolution of our lives by BabelNet has just begun. Should we start thinking about a society without language barriers?





Wednesday, November 8, 2017

Taking Translation Metadata Beyond Translation Memory Descriptors

 This is a guest post on translation metadata by Luigi Muzii. Some may recall his previous post: The Obscure and Controversial Importance of Metadata. Luigi's view of translation metadata is much broader and more all-encompassing than most descriptions we see in the translation industry, which usually reference only TM descriptors. In addition to descriptors about the TM, metadata can also cover the various kinds of projects, the kinds of TM, the translators used, higher levels of ontological organization, client feedback, profitability, and other parameters that are crucial to developing meaningful key performance indicators (KPIs).

As we head into the world of AI-driven efficiencies, the quality of your data and the quality and sophistication of its management become significantly more strategic and important. I have observed over the years that LSPs struggle to gather data for MT engine training, and that for many, if not most, the data sits in an unstructured and unorganized mass on network drives, where one is lucky to even see intelligible naming conventions and consistent formats. Many experts now say the data is even more important than the ML algorithms, which will increasingly become commodities. Look at the current hotshot on the MT technology block, Neural MT, which already has four or more algorithmic foundations available for the asking (OpenNMT, TensorFlow, Nematus, and Facebook's Fairseq). I bet more will appear, and the success of NMT initiatives will be driven more by the data than by the algorithm.


Despite the hype, we should understand that deep learning algorithms are increasingly going to be viewed as commodities. It's the data where the real value is.


 Good metadata implementations will also help to develop meaningful performance indicators and, as Luigi says, could very well be elemental to disintermediation. Getting data right is of strategic value, and IMO that is why something like DeepL is such a formidable entrant. DeepL very likely has its data in a much more organized, metadata-rich structure that can be redeployed in many combinations with speed and accuracy. Data organization, I expect, will become a means for LSPs to develop strategic advantage as good NMT solution platforms become ubiquitous.
 
 ** ------------------- **


If you do not read about the subject of this post here, you are hardly likely to read about it elsewhere: many people love to talk and write about metadata, but few actually care about it. This is because, usually, no one is willing to spend much of their time filling out forms. Although this is undoubtedly a boring task, there is nothing trivial about assembling compact and yet comprehensive data to describe a job, however simple or complex, small or huge.

On the other hand, this monitoring and documenting activity is a rather common task for any project manager. In fact, in project management, a project charter must always be compiled stating scope, goals, stakeholders, and outlining roles and responsibilities. This document serves as a reference for the statement of work defining all tasks, timelines, and deliverables.

When it is part of a larger project, translation is managed as a specific task, but this does not exempt the team in charge from collecting and providing the data relevant to executing that task. This data ranges from working instructions to project running time, from the team members involved to costs, and so on. Yet even LSPs and translation buyers, who might benefit from this documentation of jobs and procedures, whatever the type and size of the project or task, often skip this step.

The data describing these projects and tasks is called metadata, and the information it provides can be used for ongoing or future discovery, identification, or management. Metadata can sometimes be captured automatically by computers, but more often it has to be created manually. Alas, translation project managers and translators often neglect to create this metadata, do not create enough of it, or create metadata that is not accurate enough; this leaves metadata scarce and partial, and soon totally irrelevant.


The Importance of Metadata


On the other hand, metadata is critical for extracting business intelligence from workflows and processes. In fact, to produce truly useful statistics and practical KPIs, automatically generated data alone is insufficient for any business inference, and the collation of relevant data is crucial for any measurement effort to be effective.

The objective of this measurement activity is all about reducing uncertainty, which is critical to business. Translation could well be a very tiny fraction of a project, and although it is small, no buyer is willing to put a project at risk on independent variables that are not properly understood. Therefore, to avoid guessing, buyers require factual data to assess their translation effort, to budget it, and to evaluate the product they will eventually receive.

Every LSP should then first be capable of identifying what is important from the customer’s perspective, to make its efforts more efficient, cost-effective, and insightful. In this respect, measurements enable a company to have an accurate pulse on the business.

Measurements should be taken against pre-specified benchmarks to derive indicators and align daily activities with strategic goals, and analytics are essential to unlocking relevant insights, with data being the lifeblood of analytics. At the same time, measurements allow buyers to assess vendor capability and reliability.

ERP and CRM systems are commonly used in most industries to gather, review and adjust measurements. TMSs are the lesser equivalent of those systems in the translation industry.
In a blog post dating back to 2011, Kirti Vashee asked why there are so many TMS systems (given the size of the industry and the average size of its players), each with a tiny installed base. The answer lay in the following observation: every LSP and corporate localization department thinks that its translation project management process is so unique that it can only be properly automated by creating a new TMS.

The Never-ending and Unresolved Standards Initiatives


More or less the same thing happens whenever the industry discusses standards: every initiative starts with an overstated claim and an effort to cover every single aspect of the topic addressed, no matter how vague or huge. This runs contrary to the spirit of standardization, which should result from a general consensus on straightforward, lean, and flexible guidelines.


In the same post, Kirti Vashee also reported Jaap van der Meer predicting, at LISA’s final standards summit, that GMS/TMS systems would disappear over time in favor of plug-ins to other systems. Apparently, he also said that TMs would be dead in five years or less. As the saying often attributed to Niels Bohr goes, prediction is very difficult, especially about the future.

While translation tools as we have known them for almost three decades have now lost their centrality, they are definitely not dead, and GMS/TMS systems have not disappeared either. Three years from now, we will see whether Grant Straker’s prediction that a third of all translation companies would disappear by 2020 due to technology disruption proves right.

Technology has been lowering costs, but it is not responsible for increasing margin erosion; people who cannot make the best use of technology are. The next big thing in the translation industry might, in fact, be the long-announced and long-awaited disintermediation. Having made a significant transition to the cloud, and having learned how to exploit and leverage data, companies in every industry are moving to API platforms. As usual, the translation industry is reacting slowly and haphazardly. This is essentially another consequence of the industry’s pulverization, which also leads industry players into the contradiction of considering their business too unique to be properly automated, owing to its creative and artistic essence, while at the same time trying to standardize every aspect of it.

In fact, ISO 17100, ASTM F2575-14, and even ISO 18587 on post-editing of machine translation each contain a special annex or a whole chapter on project specifications, registration, or parameters, while a technical specification, ISO/TS 11669, has been issued specifically on this topic.

Unfortunately, in most cases, all these documents reflect the harmful conflation of features with requirements that is typical of the translation industry. Another problem is the confusion arising from the lack of agreement on the terms used to describe the steps in the process. Standards have not solved this problem, thus proving essentially uninteresting to industry outsiders.

The overly grand ambitions of any new initiative are a primary reason they are doomed to irrelevance, while gains may be made by starting with smaller, less ambitious goals.

 Why Metadata is Important


Metadata is one of the pillars of disintermediation, along with protocols for how it is managed and exchanged between systems, and an exact exchange format.

In essence, metadata follows the partitioning of the translation workflow into its key constituents:
  • Project (the data that is strictly relevant to its management);
  • Production (the data that pertains to the translation process);
  • Business (the transaction-related data).
Metadata in each area can then be divided into essential and ancillary (optional). To determine which metadata is essential in each area, consider where and how it can be used.
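One way to make this partitioning concrete is to model a job ticket as three grouped records, one per area. This is a minimal sketch; the class and field names are illustrative, not any standard schema:

```python
from dataclasses import dataclass, field

# Illustrative partitioning of job-ticket metadata into the three areas
# described above; field names are examples, not a standard.
@dataclass
class ProjectMetadata:        # data strictly relevant to project management
    project_id: str
    client: str
    due_date: str

@dataclass
class ProductionMetadata:     # data pertaining to the translation process
    source_language: str
    target_languages: list = field(default_factory=list)

@dataclass
class BusinessMetadata:       # transaction-related data
    agreed_fee: float = 0.0
    currency: str = "EUR"

@dataclass
class JobTicket:
    project: ProjectMetadata
    production: ProductionMetadata
    business: BusinessMetadata

ticket = JobTicket(
    ProjectMetadata("P-001", "Acme", "2017-12-01"),
    ProductionMetadata("en", ["de", "fr"]),
    BusinessMetadata(1200.0, "EUR"),
)
assert ticket.production.target_languages == ["de", "fr"]
```

Keeping the three areas as separate records also makes it easy to mark individual fields as essential or ancillary later on.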

KPIs typically fall within the scope of metadata, especially project metadata, and their number depends on available and collectible data. Most of the data to “feed” a KPI dashboard can, in fact, be retrieved from a job ticket, and the more detailed a job ticket is, the more accurate the indicators are.
For example, from a combination of project, production and business metadata, KPIs can be obtained to better understand which language pair(s), customer(s), service and domain are most profitable. Cost-effectiveness can also be measured through cost, quality and timeliness indicators.
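For instance, profitability by language pair can be computed by joining the business metadata of several job tickets. A toy sketch (all field names and figures are invented for illustration):

```python
from collections import defaultdict

# Each record combines production metadata (language pair) with business
# metadata (fee invoiced, vendor cost) for one job; all values are invented.
jobs = [
    {"pair": "en-de", "fee": 1000.0, "cost": 700.0},
    {"pair": "en-de", "fee": 800.0,  "cost": 500.0},
    {"pair": "en-fr", "fee": 900.0,  "cost": 810.0},
]

totals = defaultdict(lambda: {"fee": 0.0, "cost": 0.0})
for job in jobs:
    totals[job["pair"]]["fee"] += job["fee"]
    totals[job["pair"]]["cost"] += job["cost"]

# Profit margin per language pair, as a fraction of the fees invoiced.
profitability = {
    pair: round(1 - t["cost"] / t["fee"], 2)
    for pair, t in totals.items()
}
assert profitability == {"en-de": 0.33, "en-fr": 0.1}
```

The same grouping works unchanged for customers, services, or domains: only the grouping key differs.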

A process quality indicator may be computed from other performance indicators, such as the rate of orders fulfilled in full and on time, the average time from order to customer receipt, the percentage of units coming out of a process with no rework, and/or the percentage of inspected items requiring rework.
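A minimal sketch of such a composite indicator (the equal weighting of components is an illustrative assumption, not a standard formula):

```python
# Composite process-quality indicator built from the component rates mentioned
# above. The equal weighting is an illustrative assumption, not a standard.
def process_quality(on_time_in_full: float,
                    first_pass_yield: float,
                    inspection_pass_rate: float) -> float:
    """Each argument is a rate in [0, 1]; returns their simple average."""
    components = [on_time_in_full, first_pass_yield, inspection_pass_rate]
    assert all(0.0 <= c <= 1.0 for c in components)
    return sum(components) / len(components)

# 92% of orders delivered in full and on time, 85% of units needing no rework,
# 95% of inspected items passing without rework:
score = process_quality(0.92, 0.85, 0.95)
assert 0.90 < score < 0.91
```

In practice the weights would be tuned to what the customer actually cares about, which is exactly why the underlying metadata has to be collected in the first place.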

The essential metadata allowing for the computation of basic translation KPIs might be the following:
  • Project
    • Unique identifier
    • Project name
    • Client’s name
    • Client’s contact person
    • Order date
    • Start date
    • Due date
    • Delivery date
    • PM’s name
    • Vendor name(s)
    • Status
    • Rework(s)
  • Production
    • Source language
    • Target language(s)
    • Scope of work (type of service(s))
    • TM
    • Percentage of TM used
    • MT
    • Term base
    • Style guide
    • QA results
  • Business
    • Volume
    • Initial quotation
    • Agreed fee
    • Discount
    • Currency
    • Expected date of payment
    • Actual date of payment

Although translation may be seen as a sub-task of a larger project, it may also be seen as a project in itself. This is especially true when a translation is broken down into chunks to be apportioned to multiple vendors for multiple languages, or even for a single language in the case of large assignments and limited time available.

In this case, the translation project is split into tasks and each task is allotted to a work package (WP). Each WP is then assigned a job ticket with a group ID, so that all job tickets pertaining to a project can eventually be consolidated for any computations.

This will allow for associating a vendor and the relevant cost(s) to each WP for subsequent processing.
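The consolidation step can be sketched as follows (a toy example; the ticket fields and values are invented):

```python
from collections import defaultdict

# Job tickets for work packages (WPs) of the same project share a group ID,
# so per-vendor costs can be consolidated per project. Values are invented.
tickets = [
    {"group_id": "PRJ-42", "wp": "de", "vendor": "V1", "cost": 400.0},
    {"group_id": "PRJ-42", "wp": "fr", "vendor": "V2", "cost": 350.0},
    {"group_id": "PRJ-43", "wp": "it", "vendor": "V1", "cost": 500.0},
]

project_cost = defaultdict(float)
for t in tickets:
    project_cost[t["group_id"]] += t["cost"]

assert project_cost["PRJ-42"] == 750.0  # two WPs consolidated
assert project_cost["PRJ-43"] == 500.0
```

Because each ticket also carries the vendor name, the same pass can aggregate spending per vendor, which is what makes the subsequent processing mentioned above possible.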

Most of the above metadata can be automatically generated by a computer system to populate the fields of a job ticket. This information might then be sent along with the processed job (in the background) as an XML, TXT, or CSV file, and stored and/or exchanged between systems.
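As a sketch of that exchange step (field names are illustrative), a job ticket can be exported to CSV and read back with nothing more than the standard library:

```python
import csv
import io

# Export a job ticket's fields to CSV so they can travel alongside the job
# and be re-imported by another system. Field names are illustrative.
ticket = {
    "project_id": "P-001",
    "client": "Acme",
    "source_language": "en",
    "target_language": "de",
    "agreed_fee": "1200.00",
}

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=list(ticket))
writer.writeheader()
writer.writerow(ticket)
payload = buffer.getvalue()

# Round trip: the receiving system reads the same metadata back.
restored = dict(next(csv.DictReader(io.StringIO(payload))))
assert restored == ticket
```

An XML serialization works the same way; the hard part, as noted below, is not the mechanics but agreeing on the field names and their meaning.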

To date, the mechanisms for compiling job tickets are not standardized across TMS systems, and metadata is often labeled differently too. And yet the many free Excel-based KPI tools available to process this kind of data basically confirm that this is not a complicated task.

To date, however, TMS systems do not seem to pay much attention to KPIs or to the processing of project data, focusing instead on language-related metadata. In fact, translation tools and TMS systems all add different types of metadata to every translation unit during processing. This is because metadata is used only for basic workflow automation: to identify and search translatable and untranslatable resources, route translatable files to suitable translators, identify which linguistic resources have been used, record what status a translation unit has, and so on. Also, the different approach every technology provider adopts in manipulating the increasingly common XLIFF format makes metadata exchange virtually impossible; indeed, data as well as metadata are generally stripped away when fully compliant XLIFF files are produced.
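For example, the per-segment metadata that XLIFF 1.2 itself standardizes, such as the `state` attribute on `<target>`, can be read with the standard library; the tool-specific metadata that gets lost in exchange typically lives in vendor namespaces outside this core. A minimal sketch:

```python
import xml.etree.ElementTree as ET

# Minimal XLIFF 1.2 snippet. The target "state" attribute is standard
# per-segment metadata; tool-specific extensions sit in vendor namespaces
# and are what typically get stripped when files move between systems.
XLIFF = """<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file source-language="en" target-language="de" datatype="plaintext" original="demo.txt">
    <body>
      <trans-unit id="1" approved="yes">
        <source>Hello</source>
        <target state="translated">Hallo</target>
      </trans-unit>
      <trans-unit id="2">
        <source>World</source>
        <target state="needs-review-translation">Welt</target>
      </trans-unit>
    </body>
  </file>
</xliff>"""

NS = {"x": "urn:oasis:names:tc:xliff:document:1.2"}
root = ET.fromstring(XLIFF)
states = {
    tu.get("id"): tu.find("x:target", NS).get("state")
    for tu in root.iterfind(".//x:trans-unit", NS)
}
assert states == {"1": "translated", "2": "needs-review-translation"}
```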

This article is meant as a position paper presenting my opinion on a topic that has recently risen to prominence and is now in the spotlight thanks to GALA’s TAPICC initiative, for which I am volunteering, in the hope of putting the debate on practical and factual tracks.

For advice on this and other topics related to authoring, translation, and the associated technologies, the author can be reached via email or Skype.


Appendix

This supplement to the post contains some excerpts from the annexes to the two major industry standards, ISO 17100 Translation Services — Requirements for translation services and ISO/TS 11669 Translation projects — General guidance.

The first excerpt comes from Annex B (Agreements and project specifications) and Annex C (Project registration and reporting) to ISO 17100. The second excerpt comes from clause 6.4 Translation parameters of ISO/TS 11669.

These data are perfectly suitable candidates for ancillary (optional) metadata.
All excerpts are provided for further investigation and comment.

ISO 17100

Annex B (Agreements and project specifications)

  1. scope,
  2. copyright,
  3. liability,
  4. confidentiality clauses,
  5. non-disclosure agreements (NDAs),
  6. languages,
  7. delivery dates,
  8. project schedule,
  9. quotation and currency used,
  10. terms of payment,
  11. use of translation technology,
  12. materials to be provided to the TSP by the client,
  13. handling of feedback,
  14. warranties,
  15. dispute resolution,
  16. choice of governing law.

Annex C (Project registration and reporting)

  1. unique project identifier,
  2. client’s name and contact person,
  3. dated purchase order and commercial terms, including quotations, volume, deadlines and delivery details,
  4. agreement and any ancillary specifications or related elements, as listed in Annex B,
  5. composition of the TSP project team and contact-person,
  6. source and target language(s),
  7. date(s) of receipt of source language content and any related material,
  8. title and description of source [language] content,
  9. purpose and use of the translation,
  10. existing client or in-house terminology or other reference material to be used,
  11. client’s style guide(s),
  12. information on any amendments to the commercial terms and changes to the translation project.

ISO/TS 11669

Translation parameters


  1. source characteristics
     a. source language
     b. text type
     c. audience
     d. purpose
  2. specialized language
     a. subject field
     b. terminology
  3. volume
  4. complexity
  5. origin
  6. target language information
     a. target language
     b. target terminology
  7. audience
  8. purpose
  9. content correspondence
  10. register
  11. file format
  12. style
     a. style guide
     b. style relevance
  13. layout
  14. typical production tasks
     a. preparation
     b. initial translation
     c. in-process quality assurance
        i. self-checking
        ii. revision
        iii. review
        iv. final formatting
        v. proofreading
  15. additional tasks
  16. technology
  17. reference materials
  18. workplace requirements
  19. permissions
     a. copyright
     b. recognition
     c. restrictions
  20. submissions
     a. qualifications
     b. deliverables
     c. delivery
     d. deadline
  21. expectations
     a. compensation
     b. communication

=======================

Luigi Muzii's profile photo


Luigi Muzii has been in the "translation business" since 1982 and, through his firm, a business consultant in the translation and localization industry since 2002. He focuses on helping customers choose and implement the best-suited technologies and redesign their business processes for the greatest effectiveness of translation and localization-related work.

This link provides access to his other blog posts.