It’s usually not easy to judge how good or bad a translation is – it can be hard to get away from subjective opinions. That’s where the SAE-J2450 standard comes in. Originally developed for the objective evaluation of workshop documentation, it can now be used much more widely. The standard helps identify potential risks resulting from poor-quality translations, and it can also highlight the benefits of working with a core team of translators.
For most businesses it’s practically impossible to know whether they got the high-quality translation they paid for. The staff responsible for managing the various languages their business uses almost certainly won’t speak all of them, which means the only feedback on translations often comes from subsidiaries or end customers. And this feedback may well simply be “The translation is bad”, without any examples: if you dig deeper, you’ll often find that the criticism is based on the stylistic preferences of the one person who looked at it.
And this is precisely the problem with conventional reviews: the reviewer may change sentences that were actually correct but didn’t match their stylistic preferences. Style is very subjective, after all, which makes it difficult to evaluate how good or bad a translation really is. The clear errors that need to be fixed are terminology, mistranslations, punctuation, spelling and grammar – but even here there’s a big difference in quality between a 200-word translation with two errors and a 20,000-word translation with two errors.
SAE-J2450 was developed by the automotive industry as a standard for evaluating the quality of translations of workshop literature. The aim was to create a standard that could apply to all source and target languages and be used to assess both human and machine translations. So SAE-J2450 isn’t a quality standard that defines and governs processes and sequences – it’s a standardized method for measuring the linguistic quality of a translation.
The system is based on four elements: seven primary error categories, two sub-categories (“serious” and “minor” errors), four meta-rules, and numerical weighting for the primary and sub-categories. Rather than simply correcting errors in the translation, the reviewer – a specialist translator familiar with the SAE-J2450 standard – highlights each error and assigns it to one of the seven primary error categories:
[Table: the seven primary error categories defined by SAE-J2450] © MEINRAD
As you can see, stylistic errors are not covered by an SAE-J2450 revision. This means it’s best suited to technical texts, or texts containing lots of specialist terminology – and it’s less likely to help with marketing or advertising texts.
Each error is then categorized as “serious” or “minor” in order to estimate its impact on the end user. A serious error is one which produces the wrong meaning, confusing the user and creating a risk (however small) of them doing the wrong thing. A minor error will cause only slight confusion or none at all. The quality of the translation is then calculated using a point system for the respective categories and the frequency of errors in relation to the length of the overall text (see diagram). Depending on where the “pass mark” is set, the translation either passes or fails the test.
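To make the arithmetic concrete, here is a minimal sketch of how such a score can be computed. The category weights below are the ones commonly published with SAE J2450, but they should be checked against the current version of the standard, and the pass mark is a purely illustrative assumption – in practice it’s agreed between the client and the translation provider.

```python
# Minimal sketch of SAE-J2450-style scoring. The weights are the ones
# commonly published with the standard (verify against your copy); the
# pass mark below is an illustrative assumption, not part of the standard.

# Weighted points per primary error category and sub-category.
WEIGHTS = {
    "wrong term":                 {"serious": 5, "minor": 2},
    "syntactic error":            {"serious": 4, "minor": 2},
    "omission":                   {"serious": 4, "minor": 2},
    "word structure / agreement": {"serious": 4, "minor": 2},
    "misspelling":                {"serious": 3, "minor": 1},
    "punctuation":                {"serious": 2, "minor": 1},
    "miscellaneous":              {"serious": 3, "minor": 1},
}

def j2450_score(errors, word_count, weights=WEIGHTS):
    """Normalized score: total weighted error points per source word.

    errors: (category, severity) pairs marked up by the reviewer.
    word_count: length of the source text, so that two errors in
    200 words weigh far more heavily than two errors in 20,000 words.
    """
    points = sum(weights[category][severity] for category, severity in errors)
    return points / word_count

# Two errors: a serious wrong term (5 points) and a minor punctuation
# error (1 point), in a short text and in a long one.
errors = [("wrong term", "serious"), ("punctuation", "minor")]
print(j2450_score(errors, 200))     # 0.03   – 6 points over 200 words
print(j2450_score(errors, 20_000))  # 0.0003 – same errors, longer text

# Pass/fail against a project-specific quality gate (assumed value).
PASS_MARK = 0.005
print("pass" if j2450_score(errors, 20_000) <= PASS_MARK else "fail")
```

Note how the normalization directly captures the 200-word vs. 20,000-word difference described above: the same two errors produce a score a hundred times worse in the short text.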
SAE-J2450 is therefore an excellent method for objectively evaluating the linguistic quality of both human translation and machine translation plus post-editing – it makes the reviewer’s subjective opinions (and the problems they can cause) irrelevant. SAE-J2450 revision is ideal whenever you need a repeatable, objective measure of translation quality.
It’s also easy to adapt the error categories to suit the needs of a particular business, as sketched below. If the results are unsatisfactory, the necessary quality management measures can be put in place to identify and eliminate risks before they cause problems.
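Building on the sketch above, such an adaptation might look like this – the adjusted weights and the extra category are invented values, purely for illustration:

```python
# Hypothetical adaptation of the weight table for one business:
# terminology errors are penalized more heavily, and a custom category
# is added. All values here are made up for illustration.
custom = dict(WEIGHTS)
custom["wrong term"] = {"serious": 8, "minor": 3}
custom["units / measurements"] = {"serious": 6, "minor": 2}

errors = [("wrong term", "serious"), ("units / measurements", "minor")]
print(j2450_score(errors, 500, weights=custom))  # (8 + 2) / 500 = 0.02
```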
If you have existing translations which you think would benefit from an SAE-J2450 revision, get in touch!
Main Image: © Storyblocks