Optimizing Language QA: Sampling Translations for Quality and Alternatives to Screenshot Reviews

Our recent webinar on optimizing the process of language quality assurance (recording available) generated far more questions than we were able to answer during the live session or immediately afterwards. To pay back our debt, we’re publishing the remaining answers in this post.

Question: Sample checks are good – what do you recommend to achieve confidence in the finalized product though? 

Sample checks are only effective if they are performed on randomly selected elements (strings, segments, or other parts) of the localized content.

The best practice is to review 100% of the content. If this is not possible (e.g. due to content size and/or turnaround times), pick the sections to be checked from the whole deliverable submitted by the translators, so that translators never know in advance what will be sampled. When doing so, we recommend weighting the selection by how visible and important the content is to end users, as in the sketch below.
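
To make the idea of an unpredictable, importance-weighted sample more concrete, here is a minimal Python sketch. The segment records, field names, and weights are hypothetical placeholders for illustration, not part of any specific tool; the point is simply that the sample is drawn from the full deliverable, after delivery, weighted by how visible each string is to end users.

```python
import random

# Hypothetical segment records from a delivered localization job:
# an identifier, the translated text, and a visibility weight
# (e.g. strings on frequently seen screens get a higher weight).
segments = [
    {"id": "menu.file.open", "target": "Open...", "weight": 3},
    {"id": "dialog.error.disk_full", "target": "The disk is full.", "weight": 2},
    {"id": "help.appendix.note_17", "target": "Note...", "weight": 1},
    # ... the rest of the deliverable
]

def draw_sample(segments, sample_size, seed=None):
    """Draw a weighted random sample of segments for an LQA check.

    The draw happens only after the translators have delivered,
    so they cannot know in advance which strings will be reviewed.
    """
    rng = random.Random(seed)
    weights = [s["weight"] for s in segments]
    k = min(sample_size, len(segments))
    # random.choices samples with replacement, so de-duplicate by id
    # and keep drawing until k distinct segments have been picked.
    picked = {}
    while len(picked) < k:
        candidate = rng.choices(segments, weights=weights, k=1)[0]
        picked[candidate["id"]] = candidate
    return list(picked.values())

for segment in draw_sample(segments, sample_size=2, seed=42):
    print(segment["id"], "->", segment["target"])
```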

At the same time, the localized content needs to be assessed within its context, i.e. as logical sequences of strings, sections of help or documentation, etc.

At Moravia, we have had excellent results reviewing a random selection of UI strings in projects where an in-context reference was provided (e.g. screenshots, or links to pre-release source-language builds/websites). Such a sampling model reveals many more terminology and context-related issues than a review of a logical sequence of strings.

It is important to define a QA (i.e. sampling) strategy upfront, since different project types call for different sampling frequencies and scopes. We also recommend performing as many interim sampling checks as possible, so that you have continuous visibility into translation quality and can define and take appropriate actions early.

Question: Do you see an alternative to screenshot reviews to assess the in-context quality of software?

Ideally, in-context references are available at the time of translation: links to screenshots or pre-release builds are attached to the strings, separate context-viewing tools are provided, or translators have remote access to the build. One simple way of carrying such references with the strings is sketched below.
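
As a rough illustration of the first option, the sketch below assumes a simple JSON-style string catalog in which each entry carries a link to a screenshot or pre-release build page; the format, keys, and URLs are hypothetical placeholders, not any real localization standard. A handoff script can then flag strings that would reach translators without any in-context reference.

```python
import json

# Hypothetical string catalog: each translatable entry carries links
# to in-context references (a screenshot and/or a pre-release build
# page) so translators can see where the string appears.
catalog = {
    "login.button.submit": {
        "source": "Sign in",
        "screenshot": "https://screens.example.com/login_submit.png",
        "build_url": "https://builds.example.com/nightly/login.html",
    },
    "settings.title": {
        "source": "Settings",
        "screenshot": None,  # no reference captured yet
        "build_url": None,
    },
}

# Flag entries that would go out without any in-context reference.
missing_context = [
    key
    for key, entry in catalog.items()
    if not entry.get("screenshot") and not entry.get("build_url")
]

print(json.dumps(catalog, indent=2))
print("Strings without in-context references:", missing_context or "none")
```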

One alternative is standard linguistic testing to verify UI translation quality in context. But besides screenshot reviews performed by linguists recruited from the translation team, we have recently seen scenarios where the screenshot review was done by a community of potential end users selected by the software producer.

The feedback they provide according to a pre-defined scenario is moderated by a linguist appointed by the LSP and triaged either by the LSP or by the producer. Such a screenshot review should be scheduled as soon as the first usable set of screenshots of the localized content is available.

A much more efficient alternative to the community-run screenshot review in the early phase of the localization cycle is a pre-release build review executed according to the same scenario.

Adobe is one company leading the way in this regard with its Adobe Localized Prerelease Program. The program uses volunteer testers from around the world to test localized pre-release versions of several Adobe products, in order to feed the end-user perspective into the localization cycle early on. More on the process and results can be found in the Localization World 2011 Barcelona presentation “A6: International User Outreach and Prerelease Program at Adobe Systems.”

Needless to say, a screenshot/build review early in the localization cycle reduces the number of bugs reported during the standard in-context linguistic and functional testing applied in later phases. It also helps improve quality at source for ongoing, long-term, or batch-based localization projects, since localizers can rely on the translations already in the file being correct in the given context.

To learn more, please view the recording of our webinar "Optimizing the Process of Language Quality Assurance". 
