AI uses and risks: Benefits of human translators and subtitlers

The use of generative AI has increased in the translation industry. Tools such as an AI subtitle generator are being used in audiovisual translation.

Generative AI is a subset of narrow AI that learns from large datasets, recognising patterns in the data and generating new content based on them.

Many companies have been using these tools for their translations, believing them to be a quicker and cheaper solution.

Even TED, which once relied solely on volunteer translations, is now taking TED Talks “one step further” with AI adaptation.

But not everyone is fully on board with AI, as it has many risks and limitations.

Its rise in audiovisual translation has also taken a toll on the profession, with fewer jobs and lower pay.

Keep reading to learn about the uses and risks of Generative AI for translation, and the benefits of human translators, especially in subtitling.

[Image: A hand reaching out towards a robot hand. Photo by Cash Macanaya on Unsplash]

AI use cases in translation

Depending on the case, artificial intelligence can be handy for technical translation, repetitive materials, or texts that use very specific terminology.

In episode 14 of the #NoExcuse podcast, Language: the power to heal or harm, Ellie Kemp, strategic partnership director of CLEAR Global, and the World Health Organization’s Gunanga Dias discussed how the way we use language reinforces power dynamics, and how this is even more evident in the aid sector.

Right now, it is very easy for commercial AI language tools to learn dominant languages, as they are readily available on digital platforms.

Ellie said that these technologies have the potential to improve communication by increasing the amount of digitised data that can serve as input for machine learning in marginalised languages.

She mentioned that such initiatives could be helpful in the prevention and reporting of sexual misconduct by survivors speaking in those less powerful languages.

In that sense, AI could be used to reduce language exclusion in the humanitarian sector.

Another way AI can be helpful is in identifying human errors. Humans inevitably make mistakes, and we can automate processes to eliminate some of them.

But whichever way you look at it, it is important to use AI responsibly and manage its risks.

Risks of AI in translation 

There are countless risks in using generative AI to translate documents: ethical, legal, contextual, and many more.

Even in areas where AI could be helpful, it is not risk-free.

In the worst-case scenario, the harm could be disastrous.

You could, for example, use AI to translate legal texts, because it can quickly find related terms and information.

But one unnoticed error could contradict what a contract was intended to say, causing significant damage.

Not to mention issues around privacy, as data might be stored and used without explicit consent.

It is important to be realistic about what AI can and cannot do.

There need to be safeguards in place. AI can’t be allowed to roam free, as in the Wild West.

Generative AI needs regulation

The use of AI to generate text raises ethical and legal questions about its impact on society and about copyright.

Although AI can be used for good causes, e.g. advancing the humanitarian sector, it has also been used negatively, as with deepfakes (images or recordings altered and manipulated to misrepresent someone).

In a recent legal dispute, Scarlett Johansson was left “shocked and angered” after OpenAI used a voice “eerily similar” to hers for their new GPT-4o chatbot.

This occurred after she rejected a proposal from OpenAI’s CEO, Sam Altman, to use her voice.

In another court case, The New York Times sued OpenAI and Microsoft for copyright infringement, based on the use of the Times’ articles and content to train their large language models (LLMs).

All these issues show that AI companies need to be more transparent about how they are getting their data.

In subtitling, apart from the need for regulations preventing companies from infringing authors’ rights when training their systems, there are other concerns to keep in mind when using the technology.

Let’s look at some of them below.

AI Subtitle Generator limitations

The limitations of AI are very apparent in audiovisual translation.

In April this year, AVTE (Audiovisual Translators of Europe) issued a Statement on AI regulation, discussing issues such as the suitability of AI in the industry, the theft of human work and the spread of misinformation.

They are not against new technologies per se. They agree AI can make our work more efficient when used for terminology management and translation memory, for example.

But they have concerns regarding the use of generative AI in machine translation (MT), as these MT engines rely on large language models (LLMs) trained on massive amounts of data to predict words based on the context provided by preceding words.
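To make that concern concrete, here is a minimal, purely illustrative sketch of next-word prediction from preceding context, using a toy bigram model; the training sentence and function names are invented for the example, and real MT engines are of course vastly more sophisticated.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which,
# then always pick the most frequent continuation.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" (picked by frequency, not by meaning)
```

Real LLMs replace frequency counts with huge neural networks, but the underlying principle is similar: the output is a statistical continuation of the training data, not a creative interpretation of the source material.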

Audiovisual translation involves creative and original work, both in the source audiovisual content and in the produced translations.

“And yet, in the audiovisual translation industry, it [the use of generative AI in MT engines] is increasingly seen as an adequate substitute for human translation. Or, rather, it is deemed sufficient for the human to act as a mere supervisor who patches up AI-generated machine-translated audiovisual contents (machine translation post-editing or MTPE) in order to lower costs and raise production volume.” (AVTE Statement on Generative AI)

Turning AI-generated subtitle output into satisfactory results is very time- and energy-consuming for translators.

Still, because the task is framed as fixing noticeable errors rather than producing quality subtitles, professional translators end up being paid even less.

Moreover, AI has technical limitations, struggling with text length, timing, line breaks, and so on.

On a transcription project that I am currently working on for TED, these struggles are obvious. But, at least, they are using AI for a good cause.
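To make those technical constraints concrete, here is a minimal sketch that checks one subtitle block against two common guidelines, line length and reading speed. The limits used (42 characters per line, 17 characters per second, two lines) are assumptions for the example; real style guides vary by client and language.

```python
from dataclasses import dataclass

# Illustrative limits only; actual style guides differ per client/language.
MAX_CHARS_PER_LINE = 42
MAX_CHARS_PER_SECOND = 17.0
MAX_LINES = 2

@dataclass
class Subtitle:
    start: float        # start time in seconds
    end: float          # end time in seconds
    lines: list[str]    # displayed text lines

def check_subtitle(sub: Subtitle) -> list[str]:
    """Return a list of guideline violations for one subtitle block."""
    issues = []
    if len(sub.lines) > MAX_LINES:
        issues.append(f"too many lines: {len(sub.lines)}")
    for i, line in enumerate(sub.lines, start=1):
        if len(line) > MAX_CHARS_PER_LINE:
            issues.append(f"line {i} too long: {len(line)} chars")
    duration = sub.end - sub.start
    chars = sum(len(line) for line in sub.lines)
    if duration > 0 and chars / duration > MAX_CHARS_PER_SECOND:
        issues.append(f"reading speed too high: {chars / duration:.1f} cps")
    return issues

# Example: the kind of block an automatic generator might happily produce.
print(check_subtitle(Subtitle(0.0, 1.5, ["This automatically generated line is far too long to read comfortably"])))
```

The formal constraints themselves are easy to state, yet automatic output routinely violates them; a human subtitler applies these checks instinctively while also condensing and rephrasing the text.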

Remember that using an AI Subtitle Generator to lower costs and increase production volume often leads to inferior results.

Prioritise human talent.

Professional subtitles require the human touch

Even with the increased use of AI in many industries, including audiovisual translation, professional subtitles will always require human work.

Besides the legal and ethical issues mentioned above, remember that it is the human capacity to understand context that makes high-quality subtitles possible.

As I mentioned in Human Audiovisual Translator vs. Machine Translation, human translators:

  • Adapt the text to the target audience. 
  • Choose expressions that are used in the target culture.
  • Translate considering the context.
  • Compress information while keeping the meaning of the original text.

Finally, human subtitle translators can identify meaningful errors in the source text, which an AI subtitle generator cannot, and they can ensure proper timing, formatting, and line breaks to provide the best viewing experience.

Conclusion

Generative AI has its use cases in the translation industry, including the potential to make an impact on humanitarian work. However, one of the biggest issues with LLM-based technologies is their misuse.

Leveraged for terminology and glossary management, they can help human translators work more efficiently. But they also have many limitations and risks, which often lead to unfair working conditions and lower-quality translations.

While the use of AI for translation is hotly debated, one thing is for sure: Human translators will always be needed, especially for creative translations such as subtitles.

What are your thoughts on the matter? How could AI be used efficiently in the translation industry? Let me know.

If you need high-quality English to Brazilian Portuguese translations or subtitles, please contact me at tatiane@rochatranslations.com.
