{"id":1794,"date":"2023-12-05T11:01:18","date_gmt":"2023-12-05T11:01:18","guid":{"rendered":"https:\/\/www.ata-divisions.org\/AVD\/?p=1794"},"modified":"2023-12-05T11:01:18","modified_gmt":"2023-12-05T11:01:18","slug":"ai-and-its-implementation-in-the-dubbing-process","status":"publish","type":"post","link":"https:\/\/www.ata-divisions.org\/AVD\/ai-and-its-implementation-in-the-dubbing-process\/","title":{"rendered":"AI and Its Implementation in the Dubbing Process"},"content":{"rendered":"<p>by Sebasti\u00e1n Arias<\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Translated from Spanish <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">by Luc\u00eda Hern\u00e1ndez<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">AI&#8217;s popularity has grown in recent years, and one notable example of its potential is AlphaGo, software developed by DeepMind Technologies. This software uses a neural network to learn to play the game Go by analyzing moves made by professional players. 
If you&#8217;re unfamiliar with Go, it\u2019s a strategic board game developed in China over 2,500 years ago, played with black and white stones on a grid.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Once AlphaGo understood the game mechanics, its abilities improved so much that it began to play moves that followed the logic of the game but had never been considered by a human, ultimately leading it to beat the South Korean world champion, Lee Sedol, in 2016.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Other Applications of AI<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">While AlphaGo offers an early example of AI&#8217;s potential, the ability of neural networks to use large amounts of data to learn independently is now being used to solve increasingly complex problems in diverse arenas. This has enabled innovative changes and previously unimagined applications, leading to meaningful advances in task automation and real-time decision making.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Over the past two years, many use cases for AI have been developed. AI technology is advancing so quickly that it\u2019s likely the information in this article will be outdated by the time you finish reading it.<\/span><\/p>\n<p><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">We\u2019ve all heard of ChatGPT, which has captured headlines the world over for months, but what&#8217;s it all about? 
Rather than defining it, let\u2019s let GPT-3 describe itself:<\/span><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-large wp-image-1795\" src=\"https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/12\/gpt-1024x332.jpeg\" alt=\"\" width=\"1024\" height=\"332\" srcset=\"https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/12\/gpt-1024x332.jpeg 1024w, https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/12\/gpt-300x97.jpeg 300w, https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/12\/gpt-768x249.jpeg 768w, https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/12\/gpt.jpeg 1253w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">In addition to its ability to converse, AI has been used to generate hyperrealistic imagery, design graphics from scratch, edit videos, write research and journalism, write code, and more. The list is constantly growing.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Despite its varied use cases, this article will set aside machine translation and post-editing, which have been discussed at length, and focus on recent innovations for dubbing.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">The Dubbing Process<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Before diving in, let\u2019s review the main features of the dubbing process, as compared with subtitling. Subtitling only requires the creation of a text file that is superimposed onto original content. 
While production time can vary depending on the content&#8217;s runtime and complexity, subtitling can be carried out by a single translator.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Dubbing, on the other hand, is more involved, as it requires the recording of voice actors working closely with a dubbing director, a translator, and a sound engineer. In practice, any audiovisual content can be dubbed in voice-over (aka UN style).<\/span><\/p>\n<p><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">In this type of dubbing, commonly used in documentaries and reality shows, the original language track is played at a low volume and overlaid with a target-language track.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Synchronized dubbing is a bit different. Here the aim is to ensure that the translation is synchronized to on-screen characters\u2019 lip movements. Dubbing doesn\u2019t require reading, thus increasing its accessibility, especially on mobile devices. Due to its complex nature, dubbing usually takes longer and is more costly than subtitling. Well, that was the case until recently.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Things Are Changing<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Papercup, a London-based company, uses text-to-speech technology with a focus on voice modelling to make synthetic voices sound more natural and expressive. 
This means that, when dubbing UN style, rather than using a group of actors to record dialogue, the software \u201creads\u201d the translated lines using surprisingly natural-sounding synthetic voices, creating a target-language audio track that is superimposed on the original voices.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">This naturalness is achieved through deep learning. Just as AlphaGo \u201clearned\u201d to play Go from professional players, Papercup used thousands of hours of people speaking to learn prosody, intonation, tone, accents, etc. Interestingly, the data used consisted of more than 47,000 hours of podcasts that Spotify made available for research and development. Source: <\/span><a class=\"OYPEnA text-decoration-underline text-strikethrough-none\" draggable=\"false\" href=\"https:\/\/engineering.papercup.com\/posts\/interspeech2022-tts-overview\/\" target=\"_blank\" rel=\"noopener\">https:\/\/engineering.papercup.com\/posts\/interspeech2022-tts-overview\/<\/a><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">This innovative production model has enabled unprecedented turnaround times. Conventional dubbing of 100 minutes of video, including translation, adaptation, recording, and mixing, takes three to four weeks. Papercup can do it in one week.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">But that&#8217;s not the only difference. It costs significantly less. In the previous example, conventional dubbing would cost approximately $20,000, while Papercup can do it for 80% less. 
Source: <\/span><a class=\"OYPEnA text-decoration-underline text-strikethrough-none\" draggable=\"false\" href=\"https:\/\/www.papercup.com\/blog\/why-now-is-the-time-to-adopt-ai-dubbing\" target=\"_blank\" rel=\"noopener\">https:\/\/www.papercup.com\/blog\/why-now-is-the-time-to-adopt-ai-dubbing<\/a><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">While the technology is still emerging, its results can already be seen on two YouTube channels: Bloomberg en espa\u00f1ol, a news channel, and chef Jamie Oliver&#8217;s Spanish-language channel.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Does it really sound natural? While there\u2019s room for improvement, it seems only a matter of time before it does.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Another company using deep learning and AI algorithms to disrupt the dubbing process is DeepDub.ai. Their focus is on making dubbed voices keep the same timbre as the original, even across languages. So, for example, you can make Pedro Pascal speak Croatian without having him learn the language and record the lines. <\/span><a class=\"OYPEnA text-decoration-underline text-strikethrough-none\" draggable=\"false\" href=\"https:\/\/youtu.be\/59cDtx-XFPw\" target=\"_blank\" rel=\"noopener\">deepdub | Global entertainment experience, reimagined.<\/a><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Other Related Apps<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">In a recent post, GPT itself declared, \u201cAI will not steal your job. 
A person who is comfortable using AI will!\u201d So, the least we could do is familiarize ourselves with this new technology. To this end, we\u2019re sharing a brief list of game-changing apps that can streamline the dubbing production workflow in ways previously unimagined.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">TrueSync<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Dubbing is a post-production sound service, meaning we \u201cwork\u201d on the soundtrack and adjust it as required by the image. For example, so that on-screen characters actually look like they\u2019re saying what you hear, we change the order of utterances or modify the grammar of the translation to achieve synchronicity with on-screen mouth movements. Now, this is changing.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">TrueSync goes about this another way. Their technology modifies video so that lips sync with a soundtrack in any language. The software digitally changes mouth movements so that Pedro Pascal&#8217;s lips sync perfectly to Greek dubbing, for example. The result is a convincing image of an actor speaking the language of your choosing. 
<\/span><a class=\"OYPEnA text-decoration-underline text-strikethrough-none\" draggable=\"false\" href=\"https:\/\/www.youtube.com\/watch?v=iQ1OPpj8gPA\" target=\"_blank\" rel=\"noopener\">https:\/\/www.youtube.com\/watch?v=iQ1OPpj8gPA<\/a><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Lalal.ai<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">When content reaches a dubbing studio, its audio is usually on two separate tracks: (1) dialogue and (2) music and effects (M&amp;E). M&amp;E contains music, ambient sounds, and Foley (sounds made by footsteps, objects, clothing, etc.). Dubbing replaces the original dialogue track with one recorded by target-language voice actors. This track is then mixed with the M&amp;E track so that when an actor says, \u201cPrepare to die,\u201d a gunshot can be heard right on cue.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">For recent big-budget movies, a separate M&amp;E track is usually created during audio post-production so the film can be dubbed into several languages. While being provided this separate track is ideal, for low-budget films, or films where dubbing wasn&#8217;t considered during production, it is often unavailable. This complicates matters.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">But this is also changing. If we don&#8217;t have access to two separate tracks, Lalal.ai can separate and erase the dialogue from the soundtrack, leaving something like a non-musical karaoke track. 
Then we can mix target-language dialogue over this track to achieve a professional dub.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Vocalremover.org<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Vocalremover.org is similar to Lalal.ai, but for music and songs. This free online application allows you to remove vocals from a song to create a karaoke track.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">You upload a song from your computer. Then AI separates the vocals from the instrumentals. You get two tracks: a karaoke version of your song (no vocals) and an a cappella version (isolated vocals). This process usually takes about 10 seconds.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Respeecher<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">While Papercup uses text-to-speech technology to disrupt the conventional dubbing process, Respeecher converts speech to speech, transforming spoken content so that it takes on different vocal characteristics. It applies a filter to the voice, not unlike Instagram filters that rejuvenate and beautify images. This technology takes a voice and makes it sound younger, older, like another gender, or even another person. 
This technology could disrupt documentary dubbing production, since a single actor could voice all the lines and voice filters could round out the rest of the cast.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><a class=\"OYPEnA text-decoration-underline text-strikethrough-none\" draggable=\"false\" href=\"https:\/\/www.youtube.com\/watch?v=ReqUnEi74yQ&amp;t=272s\" target=\"_blank\" rel=\"noopener\">Abigail Savage works ADR magic with Respeecher\u2019s Voice Marketplace<\/a><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">This technology has been used on The Mandalorian to create a younger version of Luke Skywalker. Before the voice was \u201crejuvenated,\u201d it had to be \u201ccloned.\u201d On their website, they explain that their system can \u201clearn\u201d a voice with one or two hours of high-quality recordings. But clearly, progress has been made, because Anna Bulakh, Head of Ethics and Partnerships at Respeecher, stated at Media &amp; Entertainment Services Alliance (MESA)\u2019s ITS: Localisation! event that a 30-minute or even two-minute sample was sufficient. 
<\/span><a class=\"OYPEnA text-decoration-underline text-strikethrough-none\" draggable=\"false\" href=\"https:\/\/www.respeecher.com\/case-studies\/respeecher-synthesized-younger-luke-skywalkers-voice-disneys-mandalorian\" target=\"_blank\" rel=\"noopener\">https:\/\/www.respeecher.com\/case-studies\/respeecher-synthesized-younger-luke-skywalkers-voice-disneys-mandalorian<\/a><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Synthesia<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">This app creates videos of hyperrealistic AI avatars whose lips synchronize to text in more than 120 languages, as if they were really reading your script. With a script of your own making, you can make a video with an AI speaker saying whatever you need to communicate. This technology is ideal for how-to videos and e-learning. It&#8217;s best understood by seeing it, so check it out at: <\/span><a class=\"OYPEnA text-decoration-underline text-strikethrough-none\" draggable=\"false\" href=\"https:\/\/www.youtube.com\/watch?v=G-7jbNPQ0TQ\" target=\"_blank\" rel=\"noopener\">How are Synthesia AI Avatars created?<\/a><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Another company, SignAll, uses similar technology to generate sign language interpretation.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Reverso<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">While this app doesn\u2019t work with voices, it could be invaluable to a dubbing translator. 
Reverso is a multilingual dictionary and thesaurus that contains all the usual reference resources. Using AI, its Rephraser tool also offers analogies, reformulates phrases, suggests alternatives, and improves the flow of poorly written sentences.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Is the future synthetic?<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Deep Blue&#8217;s victory over Kasparov in 1997 made us question the limits of professional chess players. Nevertheless, more than 25 years later, computers are used to train players to get better and be more creative, and chess is more popular than ever.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Perhaps dubbing will go through similar changes. In some genres, the use of synthetic voices is no longer science fiction, but reality. The fact remains, though, that for film, while some aspects can be improved, voice acting still cannot be replaced by a machine. Nonetheless, with the speed at which technology is advancing, this could change. For example, in the time it took to write this article, Vox News, Sky News, and even the BBC, The Guardian and The Washington Post started to localize their content using Papercup.<\/span><\/p>\n<p><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">AI is a powerful tool that can change the world. When integrated into content production, it can streamline workflows and overcome many of the limitations inherent to the multilingual dubbing process. 
By automating tedious tasks so we can focus our time and resources on our strengths as humans (creativity, problem solving, and teamwork), AI can help dubbing better deliver on its goal: to make information more accessible to larger and more diverse audiences.<\/span><\/p>\n<p>&nbsp;<\/p>\n<hr \/>\n<h5 class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Sebasti\u00e1n Arias<\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\"> is a freelance dubbing director. He teaches adaptation for dubbing at <\/span><a class=\"OYPEnA text-decoration-underline text-strikethrough-none\" draggable=\"false\" href=\"http:\/\/tallerestav.com\/\" target=\"_blank\" rel=\"noopener\">TalleresTAV.com<\/a><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">, <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">which he founded, <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">and Sof\u00eda E. Broquen <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">de Spangenberg <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Higher Education <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">School. He holds a <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">degree in Audiovision <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">and from 2006 to <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">2018, he worked as a <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">dubbing director for <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Civisa Media, where he recorded over 1,700 hours of documentaries, films, and series. He also works as a project advisor and QC specialist in LatAm Spanish dubbing for different studios. 
<\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Instagram: @sebastian_arias_doblaje. <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Contact: <\/span><a class=\"OYPEnA text-decoration-underline text-strikethrough-none\" draggable=\"false\" href=\"mailto:doblaje.arias@gmail.com\" target=\"_blank\" rel=\"noopener\">doblaje.arias@gmail.com<\/a><\/h5>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>by Sebasti\u00e1n Arias Translated from Spanish by Luc\u00eda Hern\u00e1ndez AI&#8217;s popularity has grown in recent years, and one notable example of its potential is AlphaGo, software developed by DeepMind Technologies. This software uses a neural network to learn to play the game Go by analyzing moves made by professional players. If you&#8217;re unfamiliar with Go, [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","footnotes":""},"categories":[650,12],"tags":[378,681,688,680,683,677,670,42,672,45,685,682,271,669,684,674,676,221,679,666,38,675,687,689,671,673,686,678,573],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/posts\/1794"}],"collection":[{"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"ht
tps:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/comments?post=1794"}],"version-history":[{"count":1,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/posts\/1794\/revisions"}],"predecessor-version":[{"id":1796,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/posts\/1794\/revisions\/1796"}],"wp:attachment":[{"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/media?parent=1794"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/categories?post=1794"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/tags?post=1794"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}