The image that accompanies this article was created by artificial intelligence (AI) in response to the prompt “communication, spoken language.” It is unsettling, confusing, curious, and maybe even a little scary.
Any mention of AI within the professional translation and interpretation (T&I) community seems to trigger many of these same responses. Among some who use T&I services, recent developments in artificial intelligence have unleashed a range of unfortunate expectations. So it behooves us, as professionals, to learn everything we can about AI and how it can, or already does, intersect with language services. Indeed, there will be numerous presentations on the topic at the 64th annual ATA conference (ATA#64) taking place in Miami later this week. The following article first appeared in Multilingual Magazine on March 4, 2023. Reposting on the ATA Interpreters Division blog was delayed by website issues, but it is a pleasure to bring it to you now, with the kind permission of the author.
Artificial intelligence (AI) has made significant advances in the field of language translation in recent years, with machine-learning algorithms able to accurately translate large volumes of text from one language to another. These systems use techniques such as tokenization, part-of-speech tagging, dependency parsing, named entity recognition, and sentiment analysis to analyze and interpret the input text and generate equivalent output text in the target language. In some cases, the output may be post-edited by a human translator to ensure that it reads naturally in the target language. But what about AI interpretation?
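To give a sense of what the first of these analysis steps looks like in practice, here is a deliberately simplified sketch in Python. The tokenizer and the capitalized-word "entity spotter" below are toy heuristics invented for illustration only; real translation systems rely on trained statistical or neural models for tokenization and named entity recognition.

```python
import re

def tokenize(text):
    """Split text into word and punctuation tokens (a toy tokenizer)."""
    return re.findall(r"\w+|[^\w\s]", text)

def spot_entities(tokens):
    """Naively flag capitalized alphabetic tokens as candidate named entities.

    Real NER models use context and training data; this heuristic is
    only meant to show what the task looks like.
    """
    return [t for t in tokens if t[:1].isupper() and t.isalpha()]

tokens = tokenize("The United Nations employs interpreters in Geneva.")
print(tokens)          # ['The', 'United', 'Nations', ..., 'Geneva', '.']
print(spot_entities(tokens))
```

Even this tiny example hints at why context matters: the heuristic flags "The" as an entity simply because it is capitalized, the kind of ambiguity that genuine systems must resolve with far more sophisticated models.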
While AI may be able to perform some tasks traditionally carried out by translators, it is unlikely to fully replace the role of human interpreters in the near future. One of the main reasons for this is the complexity of interpreting spoken language, which requires more than just a thorough understanding of grammar and vocabulary. Interpreters must also be able to listen to and understand spoken language in real time, accurately conveying the meaning and intent of the original message. This requires a range of skills that are difficult for AI to replicate, including the ability to understand and interpret local dialects, idioms, and cultural norms.
For example, when working with international organizations such as the United Nations (UN) in the Middle East, interpreters may be called upon to work with local dialects such as Iraqi Arabic, Egyptian Arabic, or Syrian Arabic. These dialects can vary significantly from standard Arabic and may include unique idioms and expressions that are specific to the region. In addition, Arabic dialects often have a high degree of regional and social variation, which means that the same word or phrase can have multiple meanings depending on the context in which it is used. Disambiguating these nuances of meaning can be particularly challenging for machine translation systems, as they may not have access to the necessary contextual information. An interpreter who is familiar with these dialects and their cultural context is far better placed to resolve them.
In addition to understanding local dialects and idioms, interpreters in the Middle East may also need to be familiar with the socio-economic status of the people they are working with. For example, in some areas, there may be significant disparities in wealth and access to education, which can impact the way people speak and the words they use. An interpreter who is aware of these differences will be better able to accurately convey the message to the target audience, as they will be able to understand the context in which the spoken language is being used.
Interpreting spoken language is a complex and active area of research in the field of artificial intelligence. Automatic speech recognition (ASR) systems, which are designed to transcribe spoken language into written text, use techniques such as acoustic modeling, language modeling, dialect identification, and dialect adaptation to analyze and interpret spoken input. However, interpreting spoken language in more complex and varied environments, such as conversations between multiple people with different accents and speaking styles, or in noisy or crowded settings, remains a challenge for ASR systems. In these situations, they may struggle to accurately transcribe the spoken input, or may require additional context or guidance to disambiguate the meaning of the words.
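One way ASR systems disambiguate acoustically similar utterances is language-model rescoring: several transcription hypotheses that sound alike are scored by a language model, and the most probable one wins. The sketch below illustrates the idea with a toy bigram model; the probabilities, vocabulary, and the classic "recognize speech" example are invented for illustration and bear no relation to any real system's parameters.

```python
import math

# Toy bigram log-probabilities (invented numbers, illustration only).
BIGRAM_LOGPROB = {
    ("recognize", "speech"): math.log(0.08),
    ("wreck", "a"): math.log(0.01),
    ("a", "nice"): math.log(0.02),
    ("nice", "beach"): math.log(0.005),
}
UNSEEN = math.log(1e-6)  # back-off score for bigrams the model never saw

def score(sentence):
    """Sum bigram log-probabilities over a tokenized hypothesis."""
    words = sentence.lower().split()
    return sum(BIGRAM_LOGPROB.get(pair, UNSEEN)
               for pair in zip(words, words[1:]))

def rescore(hypotheses):
    """Pick the hypothesis the language model finds most probable."""
    return max(hypotheses, key=score)

candidates = ["recognize speech", "wreck a nice beach"]
print(rescore(candidates))  # prints "recognize speech"
```

The longer hypothesis accumulates more low-probability bigrams, so the language model prefers the shorter, more plausible phrase. In noisy or multi-speaker settings the acoustic evidence is weaker, the hypothesis list grows, and this kind of statistical tie-breaking becomes both more necessary and more error-prone, which is exactly where human interpreters still excel.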
To conclude, while AI may be able to perform some tasks traditionally carried out by translators, the complex nature of interpreting spoken language means that it is unlikely to fully replace the role of human interpreters in the near future. In order to accurately convey the meaning and intent of spoken language, interpreters must be able to understand and interpret local dialects, idioms, and cultural norms, something that is difficult for AI to do.
Nasseer J. Cannan is an accomplished freelance English<>Arabic translator and researcher based in Iraq. With a B.Sc. in Agricultural Sciences from the University of Baghdad, he has demonstrated expertise in various fields, including agriculture, psychology, and sociology. A certified member of the Arab Professional Translators Society, he has completed online certificates in Feminism and Social Justice, Energy Technology and Policy, and Philosophy from prominent American schools. As a versatile translator and interpreter, Nasseer has collaborated with numerous UN organizations, providing services such as simultaneous interpretation and research assistance.
In addition to our email forum and the range of services on our website, such as the Blog and a Resources page, the ATA Interpreters Division invites members to connect with us on social media. Join the conversations on LinkedIn, Twitter, and Facebook!