For a long time, multilingual communication followed a familiar routine. As Language Service Providers, we worked smoothly most of the time. Still, there were some unclear areas, especially when it came to workflows, roles, and accountability.
Sometimes, things went wrong and customers complained. There were many possible reasons: the source text might be unclear, the translator could misunderstand the message, a reviewer might make things worse, or a client could insist on using old, imperfect terminology.
Responsibility was spread out, and that seemed normal. But now, with AI everywhere, more content to handle, and tighter deadlines, that sense of comfort is disappearing.
As organizations produce multilingual content faster than ever, often using AI, a new problem is becoming clear. The issue isn’t quality; it’s accountability.
Multilingual workflows often involve more people and steps than you might expect. Some steps are handled by humans, others by AI. Usually, translation alone includes at least three steps: translation, review, and quality assurance. In regulated industries, the process is even longer.
For example (and starting with content creation), a subject-matter expert drafts the content, sometimes with AI. Marketing adapts it, possibly with AI as well. Legal reviews it, ideally without AI. AI might translate the text, then humans post-edit it. In-country teams may request changes, and either humans or AI make the updates, which then flow back into translation memories and terminology databases for future use. Finally, compliance gives approval. That’s a lot of steps.
When something goes wrong and the translation is seen as low quality or just not 'good enough,' the real question isn’t just about accuracy. After so many decisions throughout the workflow, it’s better to ask: Who made the key decision, and why? In other words, was that decision defensible?
We should stop seeing translation as just a simple task or a commodity. Multilingual workflows involve many decisions that can impact patient safety, legal risks, business operations, and public trust.
Among the questions to ask are:
What source version was used, and how was it created?
Which terminology was enforced or relaxed?
What was machine-generated versus human-validated?
Who approved deviations or local adaptations, and on what basis?
What evidence exists if decisions are later questioned?
If organizations can’t answer these questions clearly, the issue isn’t with translation; it’s with governance.
Here’s a hint: AI isn’t to blame. It’s easy to point fingers at AI because it’s new, disruptive, and always changing. But while AI didn’t create the accountability gap, it did help reveal it.
Even before neural machine translation, we assumed that whoever handled a step in the workflow was responsible. But this was never clearly defined. Decisions were spread across teams and vendors, with the hope that someone was in charge. But were they really?
This approach worked as long as workloads and deadlines were reasonable, and there was time to catch mistakes during reviews. But when volume grows and timelines get shorter, informal and unclear processes no longer work.
In this situation, AI acts like a stress test, and I don’t think we’re passing it. AI didn’t break the system; it just showed that we’ve been relying on old habits instead of building strong governance.
The real question isn’t whether AI can be trusted with translation. It’s whether trust alone, without structure, ever counted as true accountability in the first place.
New clients often ask, “What does your QA or review process look like?” Many people think that having a formal review means the process is under control. But review and control aren’t the same. Reviews are reactive. As W. Edwards Deming argued, quality can’t be inspected into a product; it has to be built into the system from the start.
A reviewer can:
Catch errors but cannot retroactively define intent.
Flag risk but cannot reconstruct why a decision was made.
Approve a version but cannot explain why it exists.
Reviews take place at the end, focusing on the final product. Accountability, on the other hand, is about the whole process.
People who worry about risk rarely ask who reviewed the content. They want to know what system made sure the right decisions were made.
I believe the main risk in translation isn’t just mistakes. More often, it’s the workflow or the whole system that creates risk. Here are some examples:
Inconsistent terminology across the organization. For example, marketing and clinical research use different vendors, each with its own terminology.
Unclear ownership of localized adaptations introduced without authority or documented rationale, such as when in-country reviewers, who are not trained translators, make changes.
Silent edits by well-intentioned bilingual staff without translation expertise or traceability. No explanation needed here.
Machine outputs taken at face value and approved without human validation.
Missing documentation explaining why changes were made, leaving no audit trail.
These are failures of governance, not language, and they’re almost impossible to justify after the fact.
Like any system, accountability needs structure. This includes:
Clear ownership of multilingual decision-making, including the source content
Defined rules for when and how AI may be used
Documented escalation paths for ambiguity
Version control tied to rationale, not just text
Recognition that some decisions are business decisions, not linguistic ones
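To make the items above concrete, here is a minimal sketch of what "version control tied to rationale" could look like in practice. Everything in it is illustrative: the `DecisionRecord` class, its field names, and the sample entries are assumptions for the sake of the example, not a standard schema or an existing tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical minimal "decision record" for one step in a
# multilingual workflow. Field names are illustrative only.
@dataclass
class DecisionRecord:
    step: str          # e.g. "terminology", "post-editing", "in-country review"
    actor: str         # who (or what system) made the decision
    ai_involved: bool  # was AI used at this step?
    decision: str      # what was decided
    rationale: str     # why -- the part most workflows never capture
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# An audit trail is then just an ordered log of such records.
audit_trail = [
    DecisionRecord("terminology", "lead-terminologist", False,
                   "relaxed client glossary for marketing copy",
                   "client approved informal register in writing"),
    DecisionRecord("post-editing", "linguist-042", True,
                   "accepted MT output with two edits",
                   "MT matched approved TM; edits fixed number formatting"),
]

def ai_touched_steps(trail):
    """Return the steps where AI was involved -- often the first audit question."""
    return [r.step for r in trail if r.ai_involved]

print(ai_touched_steps(audit_trail))
```

The design choice that matters here is not the data structure but the moment of capture: the rationale is recorded when the decision is made, not reconstructed months later when someone questions the translation.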
Accountability isn’t really about how well individuals perform. It’s built into the systems we create for multilingual communication.
When we look at multilingual communication from a risk perspective, it changes. Translation isn’t just about accuracy anymore. It affects whether organizations meet compliance requirements and calls for clear rules about who can make decisions and why. In this way, AI becomes a tool that supports the process, not one that leads it.
The organizations that do best are those that set up clear frameworks for using AI and make sure everyone knows who is responsible for what. It would be easier if multilingual communication were only about being understood, but it’s more complex than that. We want to be understood, while also being able to stand behind what we say in every language, especially when it matters most.