{"id":1374,"date":"2021-12-13T16:27:53","date_gmt":"2021-12-13T16:27:53","guid":{"rendered":"https:\/\/www.ata-divisions.org\/AVD\/?p=1374"},"modified":"2021-12-13T16:27:53","modified_gmt":"2021-12-13T16:27:53","slug":"revisiting-translation-environment-tools-tents-for-subtitling-can-we-use-tms-and-tbs-to-translate-tv-series","status":"publish","type":"post","link":"https:\/\/www.ata-divisions.org\/AVD\/revisiting-translation-environment-tools-tents-for-subtitling-can-we-use-tms-and-tbs-to-translate-tv-series\/","title":{"rendered":"Revisiting Translation Environment Tools (TEnTs) for Subtitling: Can We Use TMs and TBs to Translate TV Series?"},"content":{"rendered":"<p><strong>by Dami\u00e1n Santilli<\/strong><\/p>\n<p>I\u2019ve always loved technology, and during my years at university studying translation back in the early 2000s, I quickly realized that I would always be able to link my passion for computers with translation. I have also always been a movie buff, which is why I knew I had to become a subtitler. Luckily, I have been able to experiment with and teach translation technology to students and colleagues, and to work as a professional subtitler since 2007.<\/p>\n<p>When I started subtitling and dealing with my first clients, I was surprised to learn that they would not demand that I use any specific software. Nonetheless, I decided to dive into Subtitle Workshop and the like and spent several years subtitling with free software. At the same time, I was working as a technical translator using CAT tools and teaching them at university and elsewhere, so it always baffled me that there wasn\u2019t a subtitling-specific CAT tool. 
In recent years, as we moved towards translation environment tools (TEnTs), it became more and more apparent that we needed to integrate translation memories (TMs), termbases (TBs), and all the other tools these systems offer into subtitling environments.<\/p>\n<p>In 2019, I decided to check the state of play of translation tools for my presentation at HispaTAV in Barcelona, and I was glad to see that the leading translation software companies offered some alternatives using TMs and TBs for subtitling. Today, I\u2019m revisiting the subject for <em>Deep Focus<\/em>.<\/p>\n<p><strong>We have the technology. Let\u2019s use it!<\/strong><\/p>\n<p>As a subtitler and technical translator myself, I find it hard to believe that we spent so many years not using TMs and TBs in subtitling projects. Luckily, that\u2019s changing in some areas, particularly for projects where a timed text file is provided. I\u2019ve been able to translate some subtitling projects using Trados Studio, and I imagine that some LSPs are also using memoQ. But my main concern is that, as freelance subtitlers, we don\u2019t always deal with timed text files. When you\u2019re working with direct clients, like an independent movie director or a production company, there\u2019s seldom a need to create a timed text file in the original language, so you usually produce an SRT file directly in the target language. This is where we fall short, as no available translation tool offers a combination of speech recognition technology, TMs, and TBs to create translation units when working directly from source audio rather than from a timed text file. 
Nonetheless, I encourage anyone reading this article to use the available tools when subtitling into your target language from a timed text file.<\/p>\n<p>I will talk a bit more about possibilities for the future at the end of this article, but for now I want to focus on what we can do with the tools currently available.<\/p>\n<p><strong>Using TMs and TBs for Your Subtitling Projects<\/strong><\/p>\n<p>Traditionally, you could always add an SRT filter to a CAT tool to translate these files, but it wasn\u2019t such a good idea to do so, because you had no control over simple things such as segmentation or timecode editing, and you couldn\u2019t even preview the video in the same software. All of that changed a couple of years ago with the introduction of Studio Subtitling in Trados Studio and the Video Preview tool in memoQ. So, let\u2019s say you have a series of webinars or a whole season of a TV series to translate, and you\u2019ll need a team of three or four subtitlers to make the deadline. If\u2013and this is a big if\u2013the client provides timed text SRTs, you should probably stop using Subtitle Edit, or even EZTitles and the like, and jump into Trados Studio or memoQ and carry out the project using translation memories and termbases, which, by the way, you can easily share in real time with your colleagues without having to buy a professional version of the software. Both the Trados Studio Freelance version and memoQ translator pro will suffice.<\/p>\n<p>Currently, Trados Studio offers, in my opinion, the most comprehensive utility for subtitling projects with its Studio Subtitling app. While memoQ only allows you to translate SubRip (SRT) files and custom-made XLSX files out of the box, it offers a module for creating custom filters for formats like Advanced SubStation Alpha (ASS), SubStation Alpha (SSA), Spruce Subtitle File (STL), WebVTT (VTT), and other common subtitle file types. 
Trados Studio, on the other hand, supports ASS, SRT, VTT, STL, and YouTube (SBV) files. So, while you\u2019re welcome to try memoQ if you\u2019ve never used a translation environment tool for subtitling, I will quickly walk you through Trados\u2019s utility.<\/p>\n<p>The first thing you need to do is visit the <a href=\"https:\/\/appstore.sdl.com\/\">SDL AppStore website<\/a> and log in to your account; alternatively, if you have the latest version of Trados Studio, you will find the AppStore embedded right in the tool, accessible from the Welcome screen. Once you\u2019re in the AppStore, download and install the Studio Subtitling app and the filetypes you\u2019ll need. I suggest you install them all, just in case. Once the app is installed in Trados Studio, you just need to open a file for translation as you normally would, and then, in the Editor, click on View &gt; Subtitling Preview to activate the video preview window, and then View &gt; Subtitling Data to see information like the start and end times of subtitles, character and word counts, words per minute (WPM), characters per second (CPS), and characters per line (CPL), which you can change if you need to.<\/p>\n<p>Then, you can create as many translation memories and termbases as you want, share them among your team, and take advantage of the same features you would use while translating any technical document. This can be extremely helpful, given the right working conditions, as you can reduce terminology problems, and depending on the subject matter, you might even save a lot of time using Trados Studio\u2019s upLIFT technology (TM matching based on fragments). This is also true, obviously, when working with memoQ, but remember: only with SRT files.<\/p>\n<p><strong>Okay, but I don\u2019t work with timed text files. 
What can I do?<\/strong><\/p>\n<p>I\u2019ve been passionate about using TMs and TBs for subtitling for a while now, mainly because I have plenty of direct clients, and, as I mentioned before, I don\u2019t usually get timed text files. Recently, I have had projects where my client wanted me to translate their videos into English but also needed a closed caption file in Spanish. In those cases, I was able to use a translation environment tool, but in most situations, I\u2019ve been dealing with my files in Subtitle Edit and Ooona. At least they offer robust QA options and even machine translation.<\/p>\n<p>So, I have to say that there\u2019s not much you can do if you\u2019re like me and want to integrate TMs and TBs into all your workflows. And here the question is: what would it take to use TMs and TBs for all of my projects? Well, the answer is fairly simple, and there\u2019s no need to reinvent the subtitling wheel. We need software, whether it\u2019s a translation tool like memoQ or a subtitling tool like Subtitle Edit, that adds a speech recognition feature to help you create a timed text source file. It\u2019s not that far-fetched, since YouTube, Facebook, and others are already creating timed closed caption files automatically. You just need this feature in your subtitling software, and then you\u2019ll be able to create translation units, even if the speech recognition quality isn\u2019t the best.<\/p>\n<p><strong>So, Is the Future Almost Here?<\/strong><\/p>\n<p>Well, the problem is that we need three things: excellent subtitling software, a speech recognition feature, and a robust translation tool that can manage translation memories, termbases, and other common features. 
But those three things are usually sold by three different companies, and it might not be as easy as it seems to create one tool that combines Google\u2019s speech recognition technology with Ooona\u2019s professional tools for subtitling and captioning and Trados\u2019 or memoQ\u2019s translation software suite.<\/p>\n<p>Sure, some streaming companies are already using translation memories, termbases, and machine translation, and Ooona may soon incorporate speech recognition technology, but there\u2019s always going to be something missing, right? Maybe it\u2019s a missing feature, or maybe you can only access those features if you work for a particular company.<\/p>\n<p>Maybe the future is almost here, and developers will surprise us soon. I can only hope that, if this technology arrives in the next couple of years, it is accessible to freelance translators and not only to big companies.<\/p>\n<hr \/>\n<h5>Dami\u00e1n Santilli is an English&lt;&gt;Spanish translator specialized in subtitling, software localization, information technology (IT), engineering, and mechanics. He\u2019s a lecturer in translation technology and audiovisual translation at UBA, UMSA, and Lenguas Vivas. In 2018, he was in charge of creating the Latin American version of Netflix\u2019s Hermes test. He\u2019s co-author of the <em>Manual de inform\u00e1tica aplicada a la traducci\u00f3n<\/em>, the first comprehensive book on translation technology in Spanish, and he co-hosts <em>En sincron\u00eda<\/em>, an AVT podcast, with Guillermo Parra and Blanca Arias Badia.<\/h5>\n","protected":false},"excerpt":{"rendered":"<p>by Dami\u00e1n Santilli I\u2019ve always loved technology, and during my years at university studying translation back in the early 2000s, I quickly realized that I would always be able to link my passion for computers with translation. 
Additionally, I have always also been a movie buff, so that\u2019s why I knew I had to become [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","footnotes":""},"categories":[283,12],"tags":[93,41,50,307,305,306,91,303,52,240,304,38,95,301,280,302,308],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/posts\/1374"}],"collection":[{"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/comments?post=1374"}],"version-history":[{"count":1,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/posts\/1374\/revisions"}],"predecessor-version":[{"id":1375,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/posts\/1374\/revisions\/1375"}],"wp:attachment":[{"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/media?parent=1374"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/categories?post=1374"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/tags?post=1374"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}