{"id":1749,"date":"2023-10-13T10:36:55","date_gmt":"2023-10-13T10:36:55","guid":{"rendered":"https:\/\/www.ata-divisions.org\/AVD\/?p=1749"},"modified":"2023-10-13T10:36:55","modified_gmt":"2023-10-13T10:36:55","slug":"eye-tracking-research-on-subtitling-what-do-we-know-and-what-else-is-there-to-find-out","status":"publish","type":"post","link":"https:\/\/www.ata-divisions.org\/AVD\/eye-tracking-research-on-subtitling-what-do-we-know-and-what-else-is-there-to-find-out\/","title":{"rendered":"Eye-Tracking Research on Subtitling &#8211; What Do We Know and What Else is There to Find Out?"},"content":{"rendered":"<p>by Agnieszka Szarkowska and \u0141ukasz Dutka<\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">How Can We Learn More About How You Watch Subtitles?<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">As researchers, we&#8217;re always trying to learn more. When it comes to subtitling or captioning (we&#8217;ll be talking about subtitles here, but it all applies to captions as well), what we can do is collect subtitles and analyze them. We can calculate how many words there are, how many characters are displayed per second or how cultural references are translated from one language to another.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">And when we want to learn more about how people watch subtitled content and whether subtitles are fit for purpose, we can meet with a group of viewers and conduct interviews or send out an online questionnaire. While all of this can be useful, there are some challenges. Some people are not very keen on filling out questionnaires (\u201cAnother questionnaire?\u201d). 
And when asked about subtitles, viewers can be biased in their opinions or misrepresent their experience (because they don&#8217;t remember it well). We could ask you questions immediately after you have seen a subtitle, but imagine that you\u2019re watching an episode of your favorite TV series, and we\u2019re pausing it every few seconds to <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">ask you whether you understood or if you had enough time to read all of the words. You would probably be furious at us for ruining your viewing experience. And it wouldn\u2019t be a good representation of how you normally watch content anyway.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Our aim is to collect information that is as objective as possible and gives us insight into how you watch subtitles as a viewer and how your brain processes the video, the audio and the text. In an ideal world, we would love to have a peek into your brain as you watch subtitled content. (But if you&#8217;re like us, you&#8217;re probably not that excited at the prospect of having your skull opened, are you?) So how can we get insight into what is happening in your brain as you binge-watch another series?<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Luckily, we now have other, less invasive ways of looking into your brain with technologies such as EEG (placing electrodes on your scalp to measure the electrical activity of your brain). And then there&#8217;s our favorite toy: eye tracking. 
We&#8217;ll focus on this one today.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">A Window into the Brain<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Eye tracking doesn&#8217;t actually let us look into your brain. It allows us to record your eye movements so that we know how your eyes move and which areas attract your attention. Perhaps you&#8217;re wondering: but wasn&#8217;t our objective to look into the brain? Have you heard the saying that eyes are a window to the soul? While we don&#8217;t know about the soul, as researchers, we certainly believe that the eyes are a window to the brain. In fancy research terms, we call it the eye-mind hypothesis. We assume that when your eyes are looking at something, your brain is processing it more or less at the same time. And if your eyes stay on a certain area longer, this means your brain is taking longer to process it &#8211; possibly because it\u2019s more difficult (or more interesting!). This way, by analyzing the movements of your eyes, we can learn more about how your brain processes subtitles (and your skull can remain intact).<\/span><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-medium wp-image-1750\" src=\"https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/comput-300x225.jpeg\" alt=\"\" width=\"300\" height=\"225\" srcset=\"https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/comput-300x225.jpeg 300w, https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/comput.jpeg 705w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/p>\n<h6 class=\"cvGsUA direction-ltr align-start para-style-title\" style=\"text-align: center;\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Fig. 1. 
An EyeLink desktop eye tracker with a PC set-up. <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Photo: Agnieszka Szarkowska.<\/span><\/h6>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">So, what we do is use an eye tracker (see Fig. 1), which is essentially an advanced infrared camera that records your eye movements. We usually put it on a desk in front of you as you watch some content on a screen (there is also a head-mounted version but we prefer not to ruin your hair). Based on this recording, we know which areas attracted your attention and how your eyes moved from one area to another. <\/span><\/p>\n<h6><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-medium wp-image-1751\" src=\"https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/face-300x220.jpeg\" alt=\"\" width=\"300\" height=\"220\" srcset=\"https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/face-300x220.jpeg 300w, https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/face.jpeg 667w\" sizes=\"(max-width: 300px) 100vw, 300px\" \/><\/h6>\n<h6 style=\"text-align: center;\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Fig. 2. A scanpath of viewers watching a subtitled video (\u201cJoining the dots\u201d, directed by Pablo Romero-Fresco). Photo: Agnieszka Szarkowska, use authorized by Pablo Romero-Fresco.<\/span><\/h6>\n<p><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">The picture above illustrates how three viewers looked at this shot and how they read the accompanying subtitle. Each viewer is represented by a different color. Each little dot that you can see on the images corresponds to <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">a fixation, that is, a moment when a viewer fixates (stays relatively still) on an area. 
In other words, the eyes move to this place and stay there for a while. Then the eyes move somewhere else. This movement, shown here by a continuous line, is called a saccade. There&#8217;s a fixation, a saccade and a new fixation. So now you know how people watch subtitled videos.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Eye-Tracking Can Help Us Verify Long-Standing <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Assumptions About Subtitling<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Eye-tracking research into subtitling goes back to the early 1980s, when Gery d\u2019Ydewalle, Professor of Psychology, and his research team in Belgium set out to study how viewers engage with subtitled videos. Their research set the stage for later studies and some of their findings remain valid today. So, what do eye-tracking studies tell us about the reading of subtitles?<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Automatic Reading Behavior<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Subtitles attract viewers\u2019 attention, regardless of whether viewers depend on them for understanding or not. So as a viewer you are very likely to be reading subtitles even if you can understand the language of the soundtrack or if you don\u2019t know the language of the subtitles! To people who are accustomed to subtitles, their reading is \u201cautomatic\u201d and \u201ceffortless\u201d, as stated by Prof. 
D\u2019Ydewalle, provided, of course, that the subtitles are of good quality.<\/span><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter  wp-image-1752\" src=\"https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/maps-225x300.jpeg\" alt=\"\" width=\"272\" height=\"363\" srcset=\"https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/maps-225x300.jpeg 225w, https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/maps.jpeg 450w\" sizes=\"(max-width: 272px) 100vw, 272px\" \/><\/p>\n<h6 style=\"text-align: center;\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Fig. 3. Subtitles are great gaze attractors. By quickly appearing and disappearing on screen, they attract viewers\u2019 attention. Compare the screenshot on top with a heatmap visualization below in the first picture showing a news anchor in a TV news bulletin. Viewers\u2019 attention, visualized as red blobs (the redder the area, the more attention it attracted), is mostly on her face. However, the moment a subtitle appears (the screenshot below), viewers shift their attention to read it, at the cost of looking at the presenter. Images: Royalty-free stock photos from Canva and graphic elements by Canva.<\/span><\/h6>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">What Attracts Your Attention the Most?<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Perhaps you noticed that it&#8217;s human faces that attract a lot of attention. As human beings, we&#8217;re very much interested in other humans and especially their faces. It&#8217;s interesting that people who can hear well tend to look at the eyes, while if you have hearing loss, you will tend to look more at the mouth. 
That&#8217;s because looking at lip movements might help you understand speech better.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Thanks to research by Prof. d\u2019Ydewalle and his team, we know that subtitles are a bit like faces and they attract a lot of attention, too. That&#8217;s not surprising because we need subtitles to understand the scene. But did you know that your eyes will try to read subtitles even when you don&#8217;t need to or if they are in a language that you don&#8217;t understand?<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">For instance, even if we don&#8217;t understand Chinese, when a subtitle in Chinese appears, our eyes will move to this area, trying to read the subtitle. You have to consciously decide to ignore subtitles. That&#8217;s because when subtitles appear and disappear, this creates something that looks like movement on screen. And any sort of movement is something that attracts our eyes a lot.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Not All Words Are Read in the Same Way<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Despite what many people may think, when reading subtitles, viewers do not focus on all words in the same way (see Fig. 4). In fact, they don\u2019t even read every single word in the subtitle: about 30% of words are skipped during reading. 
Most of them are short, grammatical words such as articles, pronouns, prepositions or conjunctions.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">What viewers tend to focus on more are long words, especially those that are not very common (linguists call them \u201clow-frequency words\u201d). What does it mean for you as a subtitler? If you have long, rare words in the video you are translating, make sure you give viewers enough time to read them, as it is very likely that people will focus on these words and will gaze at them longer than usual.<\/span><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter  wp-image-1753\" src=\"https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/animal-234x300.jpeg\" alt=\"\" width=\"301\" height=\"385\" srcset=\"https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/animal-234x300.jpeg 234w, https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/animal.jpeg 443w\" sizes=\"(max-width: 301px) 100vw, 301px\" \/><\/p>\n<h6 style=\"text-align: center;\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Fig. 4. Heatmaps showing viewers looking at different words in the subtitles. Note that in this subtitle the words such as \u201cdugongs\u201d and \u201cherbivores\u201d are much more likely to attract viewers\u2019 attention than more common words, such as \u201cin\u201d, \u201cthe\u201d or \u201csea\u201d. 
Images: Royalty-free stock photos from Canva and graphic elements by Canva.<\/span><\/h6>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Watching or Reading?<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">The moment a subtitle appears, it attracts your attention. Usually, you read it in a number of fixations depending on how many words there are in the subtitle. Once you have finished reading, you go back to the center of the screen, exploring the images (see Fig. 2). And then another subtitle appears, and your eyes move again to this subtitle, and you read it, and go back to looking at the images. In other words, you switch from the images to the subtitle and then back to the images.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Now, if there are consecutive subtitles, one after another, without longer gaps in between, and all these subtitles have high reading speeds, once you have read one subtitle, you might have very little or no time left to look at the images because another subtitle appears. In such a case, a lot of your attention will be on subtitles, and you might miss what&#8217;s happening in the images (see Fig. 5).<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">When this happens, instead of watching the film, you&#8217;re actually just reading. And while reading is great, if you just wanted to read, you would probably sit down with a book in your lap, right? And then, in the worst-case scenario, if reading speeds are very high, you might not have enough time to fully read subtitles. 
Before you get to fixate on the last words in the current subtitle, it will disappear, replaced by a new one. In such a case, you will start missing bits of dialogue, you might struggle to understand what&#8217;s going on, or you&#8217;ll be confused. Of course, you can pause the video to reread the subtitle, or you can rewind the video if you missed something, and you can watch the scene again <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">(unless you\u2019re in the cinema). But this is not a comfortable viewing experience, so at some point you will probably just quit watching.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Reading Speed<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">A great deal of eye-tracking research on subtitling was done to investigate the impact of subtitle speed on viewers. Searching for the ideal reading speed to be included in subtitling guidelines is like looking for the Holy Grail. Reading speed depends on so many factors that it\u2019s not really possible to say what the best reading speed is (but if you really want to know, some recent research led by Prof. Jan-Louis Kruger in Australia has shown that 28 cps is way too fast). 
The factors that affect the way we read subtitles include the complexity of the scene (how much is going on visually?), how fast the characters are talking, what they are talking about (rocket science or just chit-chat?), what words they are using (high or low frequency?), whether we understand the language of the soundtrack, and so much more.<\/span><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-medium wp-image-1754\" src=\"https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/speed-239x300.jpeg\" alt=\"\" width=\"239\" height=\"300\" srcset=\"https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/speed-239x300.jpeg 239w, https:\/\/www.ata-divisions.org\/AVD\/wp-content\/uploads\/2023\/10\/speed.jpeg 475w\" sizes=\"(max-width: 239px) 100vw, 239px\" \/><\/p>\n<h6 style=\"text-align: center;\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Fig. 5. When subtitle speed is too high, viewers may spend a lot of time reading the subtitle and as a result they may not have enough time to follow the image. Images: Royalty-free stock photos from Canva and graphic elements by Canva.<\/span><\/h6>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">How Much Time Do Viewers Spend on Reading Subtitles vs. Watching The Images?<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Thanks to eye-tracking research, we know that the higher the subtitle speed, the more time viewers spend gazing at subtitles. This typically ranges from about a third to about half of the time when the subtitle is on screen. Why does it matter? 
As subtitles are never viewed on their own, but are always shown together with a video, we subtitlers need to allow viewers sufficient time to read the text and follow the images.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Shot Changes<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">When you start learning subtitling, you are taught one of the most fundamental rules related to timing subtitles: they should not straddle shot changes. One of the reasons often given to explain this rule is that when a subtitle crosses a shot change, viewers will go back with their gaze to re-read this subtitle. Having tested this assumption with eye tracking a few years ago, however, we weren\u2019t able to confirm it. In the study, we showed viewers clips with subtitles which straddled shot changes for at least 20 frames on each side of the shot change. We found that the vast majority of viewers simply continued reading the subtitles or gazing at the on-screen action, and they in fact did not return with their eyes to re-read the subtitles. Yet, this is not to say that subtitles can now cross shot changes in every possible way. Anecdotally, viewers find the flickering effect &#8211; when subtitles are displayed over a shot change for just a few frames &#8211; very irritating. But we have no hard research-based evidence to support it.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Subtitle Display Mode<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Testing live subtitles displayed in the scrolling mode, Prof. 
Pablo Romero-Fresco noted that scrolling subtitles are much more difficult to read than subtitles displayed in blocks, as words appear and disappear erratically. He metaphorically compared the subtitle reading process to walking on quicksand.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">The conclusion from his study was that subtitles should be displayed in blocks (as is the case with pre-recorded subtitling) rather than in the scrolling\/roll-up mode. However, block display results in a longer delay, which is problematic in live subtitling, so it is unlikely that broadcasters will decide to change the scrolling display any time soon.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">How Subtitlers Work<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Last but not least, thanks to eye-tracking research, we have also been able to study differences between how professionals and trainee subtitlers work and use subtitling software. For instance, novices had more rounds of spotting-translating-revision and they relied much more on the mouse. They also tended to watch the entire video first, before proceeding to subtitling. Professionals, on the other hand, used keyboard shortcuts much more (probably with less wrist pain!) 
and they completed the task faster than novices.<\/span><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><strong><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">What\u2019s Next?<\/span><\/strong><\/p>\n<p class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Having reached the end of this article, you can see that eye tracking has helped us better understand how we engage with subtitled content (without having to cut viewers\u2019 skulls). But this is just the beginning! There are plenty of other questions that remain unanswered. If you have any ideas on what else should be studied with eye tracking when it comes to subtitling, don\u2019t hesitate to drop us a line!<\/span><\/p>\n<hr \/>\n<h5 class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Agnieszka Szarkowska<\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\"> is a researcher, academic teacher, <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">translator trainer, <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">and audiovisual <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">translation <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">consultant. 
<\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">She works <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">as University <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Professor <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">at the University <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">of Warsaw, where <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">she is the Head of AVT Lab, a research group <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">that works on audiovisual translation. She is one of the founders of AVT Masterclass, an online platform that provides training in subtitling and media localization. <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Contact: <\/span><a class=\"OYPEnA text-decoration-underline text-strikethrough-none\" draggable=\"false\" href=\"mailto:a.szarkowska@uw.edu.pl\" target=\"_blank\" rel=\"noopener\">a.szarkowska@uw.edu.pl<\/a><\/h5>\n<h5 class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">\u0141ukasz Dutka<\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\"> is an audiovisual translator, subtitling <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">trainer <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">and expert <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">in multilingual <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">media <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">localization <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">workflows <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">and media <\/span><span 
class=\"OYPEnA text-decoration-none text-strikethrough-none\">accessibility. <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">He&#8217;s one of the <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">founders of AVT Masterclass, a director of the Global Alliance of Speech-to-Text Captioning, and a member of the Management Board of Dostepni.eu, an accessibility services provider. He\u2019s experienced in interlingual subtitling, subtitle template creation, postproduction captioning, live captioning, theatre surtitling and live events accessibility. <\/span><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">Contact: <\/span><a class=\"OYPEnA text-decoration-underline text-strikethrough-none\" draggable=\"false\" href=\"mailto:lukasz@lukaszdutka.pl\" target=\"_blank\" rel=\"noopener\">lukasz@lukaszdutka.pl<\/a><\/h5>\n<h5 class=\"cvGsUA direction-ltr align-start para-style-title\"><span class=\"OYPEnA text-decoration-none text-strikethrough-none\">\u00a0<\/span><\/h5>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>by Agnieszka Szarkowska and \u0141ukasz Dutka How Can We Learn More About How You Watch Subtitles? As researchers, we&#8217;re always trying to learn more. When it comes to subtitling or captioning (we&#8217;ll be talking about subtitles here, but it all applies to captions as well), what we can do is collect subtitles and analyze them. 
[&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_genesis_hide_title":false,"_genesis_hide_breadcrumbs":false,"_genesis_hide_singular_image":false,"_genesis_hide_footer_widgets":false,"_genesis_custom_body_class":"","_genesis_custom_post_class":"","_genesis_layout":"","footnotes":""},"categories":[588,12],"tags":[36,617,622,50,623,612,613,611,614,618,616,624,37,619,296,110,620,615,621,423,38],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/posts\/1749"}],"collection":[{"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/comments?post=1749"}],"version-history":[{"count":1,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/posts\/1749\/revisions"}],"predecessor-version":[{"id":1755,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/posts\/1749\/revisions\/1755"}],"wp:attachment":[{"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/media?parent=1749"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/categories?post=1749"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.ata-divisions.org\/AVD\/wp-json\/wp\/v2\/tags?post=1749"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}