Machine translation engines are nothing new. In fact, the vision behind machine translation – fully automated translation of texts from one language to another – goes all the way back to the beginnings of the computer age. But it has taken a quantum leap in technology for that 1930s vision to become today’s reality.
It all started with an idea
Machine translation was an idea ahead of its time when it was first mooted in the 1930s – the technology required to produce fully automated translations simply didn’t exist. When the world’s first programmable, digital computer was built in 1936, it was the size of a double bed and had a processor that wasn’t exactly fast.
Then came the technology
Nonetheless, by the late 1940s the first rule-based algorithms were being developed that allowed texts to be translated by the revolutionary new computers without any human input. But after the initial euphoria of the Georgetown experiment of 1954, in which IBM and Georgetown University publicly demonstrated that a computer could translate 60 carefully prepared sentences from Russian into English, research into machine translation seemed to tread water for a long time.
That was until the 1970s and 1980s, when two factors combined to produce a renaissance in machine translation:
- The growing demand for language services as a result of an increasingly digital and global economy.
- The disruptive innovations in technology which enabled the first intelligent translation systems to be developed.
These factors revitalized research and development of machine translation engines – but even so, it’s only in the last three or four years that the big breakthrough has been made in the form of neural engines.
Machine translation engines
1. Rule-based engines
The most rudimentary machine translation engines use rule-based systems, which means they work on the basis of the grammar, word order and vocabulary rules of a language. The sentence in the source language is broken down into its grammatical components, translated into the target language and put back together again. It’s fair to say this approach has not proven to be particularly successful.
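To make the idea concrete, here is a deliberately tiny, hypothetical sketch of the rule-based approach in Python. The dictionary, the single reordering rule and the example sentence are all invented for illustration; real rule-based systems rely on full morphological analysis and thousands of hand-written rules.

```python
# A deliberately simplified, rule-based "engine" for an invented toy language pair.
# Real systems parse the source sentence, apply large rule sets and reassemble it.

# Toy bilingual dictionary (source word -> target word)
LEXICON = {
    "the": "la",
    "red": "roja",
    "house": "casa",
    "is": "es",
    "big": "grande",
}

# Toy grammar rule: in the target language, adjectives follow the noun,
# so swap any (adjective, noun) pair found in the source sentence.
ADJECTIVES = {"red", "big"}
NOUNS = {"house"}

def translate(sentence: str) -> str:
    tokens = sentence.lower().split()
    # Step 1: apply the reordering rule to the source-language structure.
    reordered = tokens[:]
    for i in range(len(reordered) - 1):
        if reordered[i] in ADJECTIVES and reordered[i + 1] in NOUNS:
            reordered[i], reordered[i + 1] = reordered[i + 1], reordered[i]
    # Step 2: substitute each word using the dictionary.
    return " ".join(LEXICON.get(tok, tok) for tok in reordered)

print(translate("The red house is big"))  # -> "la casa roja es grande"
```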
2. Statistical engines
Statistical translation engines represent the next stage in the evolution of rule-based engines. They analyse bilingual texts (in the source and target language) to determine the likelihood of a particular phrase being correct in the given context. The quality of the translations produced by statistical engines therefore depends heavily on how many texts have previously been fed into the engine.
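As a rough illustration of that idea, the following Python sketch estimates how likely a target phrase is as a translation of a source phrase, using relative frequencies from a miniature, invented “corpus” of aligned phrase pairs. Production statistical engines do the same kind of counting over millions of aligned sentences, combined with further models for word order and fluency.

```python
from collections import Counter

# Invented miniature "bilingual corpus": pairs of aligned phrases.
# Real statistical engines learn from millions of aligned sentences.
aligned_phrases = [
    ("machine translation", "traduction automatique"),
    ("machine translation", "traduction automatique"),
    ("machine translation", "traduction mécanique"),
    ("neural network", "réseau neuronal"),
]

# Count how often each target phrase was seen as a translation of each source phrase.
counts = Counter(aligned_phrases)
source_totals = Counter(src for src, _ in aligned_phrases)

def translation_probability(source: str, target: str) -> float:
    """Relative frequency estimate of P(target | source)."""
    return counts[(source, target)] / source_totals[source]

print(translation_probability("machine translation", "traduction automatique"))  # ~0.67
print(translation_probability("machine translation", "traduction mécanique"))    # ~0.33
```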
3. Neural engines
Like statistical engines, neural machine translation engines rely on analysis of bilingual texts. These texts are used to train the neural network, which stores a variety of contextual and linguistic information. This means neural systems are better at learning than statistical systems, and it also makes them more flexible in the translation process.
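The sketch below (Python with NumPy) illustrates the encoder–decoder structure at the heart of neural machine translation: an encoder compresses the source sentence into a context vector, and a decoder generates target words one at a time. All vocabularies and weights here are invented and untrained, so the output is meaningless; a real engine learns these parameters from huge bilingual corpora.

```python
import numpy as np

# Toy, untrained illustration of the encoder-decoder idea behind neural MT.
rng = np.random.default_rng(0)

src_vocab = ["the", "red", "house"]                 # invented source vocabulary
tgt_vocab = ["<s>", "la", "casa", "roja", "</s>"]   # invented target vocabulary
dim = 8                                             # embedding size

src_emb = rng.normal(size=(len(src_vocab), dim))    # source word embeddings
tgt_emb = rng.normal(size=(len(tgt_vocab), dim))    # target word embeddings
W = rng.normal(size=(dim, dim))                     # decoder projection

def encode(sentence):
    """Encoder: compress the source sentence into one context vector."""
    ids = [src_vocab.index(w) for w in sentence]
    return src_emb[ids].mean(axis=0)

def decode(context, max_len=5):
    """Decoder: greedily pick the most likely next target word, step by step."""
    output = []
    state = context
    for _ in range(max_len):
        scores = tgt_emb @ (W @ state)          # score every target word
        next_id = int(np.argmax(scores))
        word = tgt_vocab[next_id]
        if word == "</s>":
            break
        output.append(word)
        state = state + tgt_emb[next_id]        # feed the chosen word back in
    return output

print(decode(encode(["the", "red", "house"])))  # untrained, so the output is arbitrary
```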
When should you use machine translation?
Speed and cost are clearly the two major benefits of machine translation, but then you have to factor in the risk of contextual errors, grammatical mistakes and unsuitable style. So good candidates are technical documentation that follows strict rules and style guides, content written in carefully checked language, and texts that are only used for internal information purposes.
Machine translation is suitable for
- very long texts that need to be translated quickly,
- texts where quality is not the biggest priority,
- and temporary solutions until the translation can be delivered by an experienced human translator.
As humans still understand linguistic features such as context, humour, irony and idioms better than machines, they should be the first port of call when high-quality translations are needed. Sales and marketing texts, legal documents and safety instructions are just some examples of content that needs the full attention of an experienced translator.
Machine translation is not suitable for
- sales and marketing texts, legal documents or safety instructions,
- texts where errors could lead to liability claims,
- or content crucial in establishing a brand identity.
Have human translators become obsolete?
Absolutely not. Both as part of the translation process itself and in managing all the accompanying aspects of a translation project, human beings have not yet been replaced. One issue is that the growing variety and complexity of translation systems has increased the need for consulting services. It’s not easy to decide which translation technology is right for you, especially when switching between machine translation engines – it can take time to train the new engine to the same level as the old one, which can cost you a considerable amount of money in the meantime. So make sure you choose a language service provider who knows what you need and can recommend (and, if required, set up) solutions compatible with your systems.

Another issue is that managing everything surrounding the translation itself is becoming more and more important. Key questions need to be answered, such as how best to optimize workflows, how to go about choosing and training post-editors, and who’s responsible for managing and maintaining term bases (if your response to that question is “What’s a term base?”, then click here).
Here to stay
It’s clear that machine translation engines are here to stay. Although the technology has taken a long time to catch up with the original vision of machine translation, we’re now in a situation where computing power keeps doubling roughly every two years, in line with Moore’s law. And software is becoming more and more intelligent at a similar pace – which means machine translation engines are getting better and better every day. But if machine translation is here to stay, then so are we. You can talk to us to find out whether and how machine translation might be the right choice for you. And so far we’ve always done that the old-fashioned way: face to face.