For decades, quality in translation revolved around one thing: accuracy. If a translation faithfully reflected the source text and was linguistically sound, it was considered quality work.
That definition served us well for a long time. Today, given the technology available to us, it no longer feels sufficient.
In regulated and high-stakes environments, language now sits squarely at the intersection of compliance, risk, accountability, and technology. A text can be linguistically accurate and still fail operationally, legally, or ethically. As organizations scale globally and rapidly adopt AI-assisted workflows, the definition of “quality” has to expand.
Consider regulatory documentation, informed consent materials, safety communications, or employee policies. These texts do not exist simply to be read. They exist to be relied upon.
A translation can be accurate word-for-word and still obscure legal responsibility, drift away from regulatory intent, fall short of readability or accessibility requirements, introduce ambiguity where none existed, or create false confidence in automated workflows.
Consider a simple example. A source instruction reads:

Do not use if the packaging seal is broken.

An AI-generated translation may be technically accurate. The terminology is correct, and the sentence is grammatically sound. Yet the phrasing can shift from a clear prohibition to a conditional description:
The device should not be used in cases where the packaging seal is broken.
On the surface, the meaning appears unchanged. Functionally, it has shifted. The translation no longer signals a non-negotiable condition of use. The manufacturer’s intent is softened, and liability exposure increases.
This may seem like a minor shift, but the effect is cumulative. If an airplane’s heading is off by a single degree, the error grows with every mile flown; over a long flight, you land far from where you intended to go.
In the same way, small changes in modality, tone, and emphasis compound across a document. A set of instructions for use (IFU) that is accurate at the sentence level can become, as a whole, a materially different and riskier document than the original.
AI-driven translation has dramatically increased speed and lowered cost. That efficiency is real and valuable. But it has also changed how risk enters the system.
In regulated environments, plausibility is not a sufficient safeguard. Fluent output can conceal errors that would be obvious in clumsier prose. Quality can no longer be assessed only at the sentence level; it must be evaluated at the system level.
Modern multilingual quality requires clear answers to questions that were once assumed rather than articulated. Who is accountable for language decisions? What standards govern the use of AI and human review? How are errors detected, escalated, corrected, and prevented from recurring? What documentation exists to demonstrate due diligence? How are linguists trained to work responsibly alongside AI, and by whom?
If these questions cannot be answered, even otherwise excellent linguistic output becomes fragile.
The assumption that AI will replace translators misunderstands both translation and AI. In practice, linguists are shifting from producing translations to evaluating and correcting machine output against regulatory intent, context, and consequence. This work requires more skill, not less. But it also requires different training, clearer expectations, and stronger professional signals than the industry currently provides.
Organizations are making long-term decisions about language workflows, staffing, and technology based on short-term assumptions; several of our clients, for example, are building proprietary AI solutions. At the same time, public narratives about AI are discouraging new talent from entering the language profession altogether.
The result is a widening gap between the complexity of multilingual risk and the capacity of systems and people to manage it. Quality, as traditionally defined, cannot close that gap.
In regulated environments, multilingual quality can no longer be reduced to linguistic correctness alone. Accuracy remains necessary, but it is only the starting point.
Language must also align with regulatory intent, not merely replicate wording. A translation that technically reflects the source text but fails to meet regulatory expectations, readability standards, or accessibility requirements introduces risk rather than mitigating it.
Quality also depends on whether a document can be used as intended by the people who rely on it. Instructions that are difficult to follow, consent language that obscures rights, or policies that blur obligations undermine the function of the document, even when the translation itself is defensible. In this sense, usability is a compliance concern.
As AI becomes embedded in multilingual workflows, quality increasingly hinges on governance. Language service providers must be able to demonstrate how language decisions are made, who is accountable for them, and what safeguards exist when automation is involved. Without documented accountability, it becomes difficult to distinguish efficiency from negligence after the fact.
Responsible AI integration does not mean avoiding automation. It means understanding where AI adds value and where it introduces risk, and designing workflows accordingly.
Taken together, these elements redefine what quality means in regulated multilingual communication. It is not simply about getting the words right. It is about ensuring that language performs its intended function under scrutiny, at scale, and over time.
This does not mean rejecting AI. It means understanding it well enough to use it effectively while retaining responsibility.
That, I think, is the work ahead.