
What is the translator’s duty in an AI-driven world?
Today, AI is ushering in sweeping changes across virtually all the knowledge professions, raising new questions about our role, about human versus artificial intelligence, and about what we might call AI responsibility versus human intelligence responsibility. One especially interesting aspect of this sea change is professional responsibility and liability for translators in the AI era. Let’s examine three components of our duty as translators and how they might relate to AI.

By Joachim Lépine, Certified Translator
Professional Ethics and Conduct
When our clients experiment with ChatGPT and similar tools, they are often bamboozled by a halo effect: the text they write or translate suddenly sounds incredibly humanlike, natural, and authoritative. This “wow reaction” is a testament to the power of large language models, which use deep, multi-layered neural networks to statistically predict the next word in a sequence.
However, as many professionals across different sectors are learning today, “wow” does not equate to task accuracy. If you haven’t heard the term before, “task accuracy” refers to how well AI actually performs a specific job. An example would be producing a contextually and culturally appropriate translation that will have a specific effect on the reader.
AI has a knack for producing humanlike and “accurate-ish” content. However, if you’re building a house, you can’t rely on an “accurate-ish” blueprint whipped up by ChatGPT in a few seconds. Likewise, if you need a professional translation, you can’t expect generative AI output to be accurate and faithful to the source text in any meaningful way. It’s not what AI is designed for. Generative AI engines are first and foremost trained to produce humanlike content, not to thoughtfully craft accurate translations.
As a side note, there are, of course, translator-specific tools such as TAIGR that cleverly leverage generative AI and previous translations to produce better and better drafts. They’re a very promising way forward—with benefits for both the translator and the client (in the form of added value).
Another interesting aspect of professional ethics and conduct is confidentiality. Translators are required to uphold confidentiality and preserve their clients’ trade secrets. While it’s normal for professionals to use a range of software to best meet the needs of their clients, there’s a big difference between a human carefully choosing a confidential, task-appropriate tool and a client throwing text into a general-purpose AI tool such as ChatGPT, Gemini, or Claude, some of which have already been targeted by hackers and subjected to data leaks.
Finally, let’s talk about one more aspect of ethics: cultural sensitivity. Professional translators don’t just transpose words from one language into another; they must be culturally sensitive and empathetic enough to gauge the impact of the text and weigh that up against the message the writer was trying to convey—even if said writer didn’t always do a stellar job at it.
Experienced translators frequently receive image- and mission-critical text to translate that features not just a specific level of language, but registers that are best described as idiolects or sociolects—in other words, ways of speaking or writing that are specific to a person or a group and that AI will almost never master or render appropriately.
Civil and Professional Liability
Translators are obviously expected to hand in quality work. This is true whether they did the work themselves or hired a subcontractor. By the time the text reaches the client, it is expected to be error-free and fit for purpose. In contrast, anyone who turns to AI knows that these platforms accept no liability for outcomes arising from their use.
Another aspect of liability is scope. When clients submit a text to a translator, they can expect the professional to tell them whether or not they are able to produce an accurate, professional, and sensitive translation that will achieve the client’s desired outcome.
But AI will often do the opposite, acting as a yes-man that professes to deliver authoritative answers and translations, often without asking any questions to understand the context, or even considering whether it should have been entrusted with the job in the first place.
Finally, let’s talk risk management. Members of professional associations, such as OTTIAQ, typically take out errors and omissions insurance to protect the client against losses of up to $1 million. AI is unlikely to offer any guarantees, let alone insurance that protects the client.
The Duty of Competence
Professional translators have a moral obligation to engage in continuous professional development to update their knowledge and continue serving their clients well in a changing world.
Marketing-savvy translators will display their professional development credentials in their email signatures, online portfolios, and LinkedIn profiles, for instance, to proudly showcase their added value. In contrast, little is known about the exact training processes for the major LLMs on the market today, since they are covered by trade secrets, and we only get a small glimpse into the workings of the machine.
Another aspect of competence is self-assessment. A professional is expected to have their competencies assessed periodically, whether through inspection or revision by other qualified translators who can point out blind spots and help the translator continue to improve, just like any professional in any field would be expected to do.
To our knowledge, AI performs no such self-assessment or self-inquiry; rather, its training responds to the imperatives of the tech giants who make billions of dollars off software intended to serve purposes about which little is ultimately known.
Finally, in terms of competence, a human is expected to master their industry landscape and its technological evolution so they can choose the best tool for each job and provide maximum added value to the client, whether in terms of faster turnaround or better quality assurance. But this requires real judgment and expertise with a variety of tools. AI can scrape data from websites, but it does not have the human experience of trying things out, pivoting, and iterating to better serve clients in a specific niche. For all its glamour and “wow,” AI falls short here too.
There are many more ways that professional responsibility and liability differ between humans and machines. We could go on to discuss the pitfalls of AI with respect to inclusivity, safety, environmental footprint and audience suitability, or the fact that AI just flat-out makes stuff up some percentage of the time. By the way, it is likely to continue doing so in the foreseeable future, since its text creations draw on a mixed bag of sources—some highly credible and others consisting of a hodgepodge of opinions.
What’s the takeaway?
The intent of this article has not been to bash AI. After all, we are now living in an AI-driven world, and we will have to get used to it to continue to survive and thrive. But taking an honest look at the difference between how humans and machines handle the components of professional responsibility and liability can be quite eye-opening… and even shocking.
AI is incredibly useful for everyday applications, including translation-adjacent tasks and even translation drafting in many cases. But the contrast between what we might call AI Responsibility and Human Intelligence Responsibility could not be starker.
When it comes to ethics, professionalism, and competence, humans can be counted on to hold up their end of the bargain and serve their clients as dedicated professionals and partners end to end. AI is best understood as a second brain or assistant that can research, spark ideas, and generate “sometimes helpful” content.
The key is to thoroughly grasp the responsibility of each and to be able to tell clients what they are getting into when they use AI versus when they hire a human who may—or may not—choose to use AI in pursuit of the client’s desired outcome.



