Machine translation (MT) is the use of software to translate text or speech from one language into another, with the translation produced by computational methods rather than by a human.

Best Practices on how to use machine translation in a translation workflow

Collecting "Best Practices" for working with machine translation in existing (or not yet existing) environments should help developers to adapt their technology to match the expectation of translators.

Here is one: connect several MT engines to your translation environment simultaneously, don't look at the MT suggestions as a whole, and use them only when they match your typing through AutoSuggest/AutoWrite and similar features. The benefits are that you are not unduly influenced by sub-par suggestions (which can be a big problem), you speed up the translation process because the system completes "your thoughts", and you drive the process rather than the machine driving you.
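
To make the best practice concrete, here is a minimal sketch in Python; the function and variable names are illustrative assumptions, not the API of any real CAT tool or of AutoSuggest/AutoWrite themselves. Full suggestions from several engines stay in the background, and only those that continue what the translator has already typed are surfaced as short completions.

    # Illustrative only: offer MT output as short completions that continue the
    # translator's typing, instead of presenting whole suggestions up front.
    def autosuggest(typed: str, engine_outputs: list[str], max_words: int = 3) -> list[str]:
        """Return short completions from MT outputs that continue the typed prefix."""
        completions = []
        for suggestion in engine_outputs:
            if suggestion.lower().startswith(typed.lower()) and len(suggestion) > len(typed):
                remainder = suggestion[len(typed):].split()
                completions.append(" ".join(remainder[:max_words]))
        return completions

    # Two engines returned full-sentence suggestions; the translator has typed
    # "Der Vertrag muss" so far.
    engines = [
        "Der Vertrag muss von beiden Parteien unterzeichnet werden.",
        "Die Vereinbarung ist von beiden Seiten zu unterzeichnen.",
    ]
    print(autosuggest("Der Vertrag muss", engines))
    # ['von beiden Parteien'] -- only the engine that matches the typing contributes

In a real environment the completions would come from the engines' own prefix-constrained decoding rather than simple string matching, but the workflow is the same: the machine follows the translator instead of the translator following the machine.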

Cognitive load

Nowadays machine translation suggestions are dynamically generated and presented to translators and post-editors, and they are even adapted depending on the input of the translator/post-editor. These new methods seem to yield better output from MT, and they also seem to get the translator more "involved" in the post-editing process. Do these additional resources place additional cognitive load on the translator/post-editor, particularly when working on longer segments where the suggestions change frequently and rapidly?

Interactive exchange of MT suggestions, termbase hits and TM fragments

Arguably the three most important assets are MT suggestions, TM matches and fragments, and termbase hits. What we need is a deep integration of these retrieval features with one another (a small sketch follows the list below) so that

  • MT suggestions can be automatically fixed with termbase entries and TM fragments
  • TM matches can be fixed with MT fragments and termbase entries
  • MT suggestions can be validated via fuzzy TM matches
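
As a rough sketch of what such an integration could look like, the helper functions below (assumptions for illustration, not any existing tool's API) patch an MT suggestion with approved termbase targets and then score the result against fuzzy TM matches using only the Python standard library.

    # Illustrative only: enforce termbase terminology in an MT suggestion and
    # validate the result against existing TM targets with a fuzzy score.
    from difflib import SequenceMatcher

    def apply_termbase(mt_suggestion: str, termbase: dict[str, str]) -> str:
        """Replace non-preferred target terms with the approved termbase terms."""
        fixed = mt_suggestion
        for non_preferred, approved in termbase.items():
            fixed = fixed.replace(non_preferred, approved)
        return fixed

    def validate_against_tm(suggestion: str, tm_targets: list[str]) -> float:
        """Return the best fuzzy similarity between the suggestion and TM segments."""
        return max((SequenceMatcher(None, suggestion, t).ratio() for t in tm_targets),
                   default=0.0)

    mt = "Click the pushbutton to start the engine."
    termbase = {"pushbutton": "push button"}                    # approved terminology
    tm_targets = ["Click the push button to start the motor."]  # fuzzy TM matches

    patched = apply_termbase(mt, termbase)
    print(patched, round(validate_against_tm(patched, tm_targets), 2))

A production system would of course need proper tokenisation, casing and morphology handling, but the three interactions listed above all reduce to variations of these two operations: patching one resource with another, and scoring one resource against another.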

From SMT to Neural MT to DeepL

Neural machine translation is currently a hot topic in the industry, mainly due to claims that its output can reach the quality level of human translation. Unlike statistical machine translation, where translations are produced from statistical models, neural machine translation uses neural networks and machine learning technology to transfer the meaning of the source text into the target language. Research has shown that neural MT produces better output in terms of fluency, while SMT gives better results in terms of adequacy. However, NMT post-editing experiments have shown that fluent output does not necessarily mean a correct translation.
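
For readers who want to see what "using neural networks" looks like in practice, here is a minimal sketch that queries an openly available neural MT model through the Hugging Face Transformers library; the Helsinki-NLP Opus-MT English-German model is only an illustrative choice (DeepL's own models are not publicly downloadable, so an open model stands in here).

    # Illustrative only: translating one sentence with an open neural MT model.
    from transformers import MarianMTModel, MarianTokenizer

    model_name = "Helsinki-NLP/opus-mt-en-de"   # example English->German model
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    batch = tokenizer(["The contract must be signed by both parties."],
                      return_tensors="pt", padding=True)
    output = model.generate(**batch)
    print(tokenizer.batch_decode(output, skip_special_tokens=True))

The fluency-versus-adequacy point above is exactly why the output of such a call still needs post-editing: it usually reads well, but reading well is not the same as being correct.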

Proposed sub-topics for discussion:

  • The impact of NMT on the translation industry in general
  • The impact of NMT on the post-editing process:
    • Since NMT behaves like a "black box", how should post-editors correct the NMT output? What strategies should they learn?
    • Issues in quality evaluation

Post-editing interfaces

The way post-editing is carried out has been changing over the last few years. Traditionally, post-editing was done on a "half-finished", machine-translated (static) document without much interactivity (a typical example might be an MS Word document with tracked changes). Later, the unit of post-editing was broken down into segments as provided in CAT tools, and the CAT tools themselves started to serve as post-editing interfaces. Currently, post-editing has become more interactive: post-editors are provided with several suggestions from MT (and possibly other sources) while post-editing segments, and in some cases the suggestions are dynamically generated depending on what input is provided or confirmed by the post-editors.
