By Ernest Niño-Murcia
This is an adapted and updated version of a piece originally published in NAJIT Proteus.
When much of the country locked down in the spring of 2020, interpreters faced the challenge of moving their work online. Adapting to remote proceedings involved learning to use new programs, investing in new equipment and, perhaps most difficult of all, navigating hybrid in-person/remote proceedings (in which some of the parties were located in the same space while others appeared by phone or video). While a lot has been written about interpreting in a purely virtual space (including some pieces by my partners at T.E.A. Language Solutions and me), the unique challenges of hybrid in-person/remote proceedings require their own discussion. We’d like to share some of our experience in this setting, discuss the challenges of such a setup, and reflect on solutions we’ve found to mitigate common issues.
In an out-of-court setting, a hybrid proceeding would look like this:
[Photo: parties gathered around a conference table for a hybrid proceeding, with remote participants shown on a large monitor. Photo by an unknown author, licensed under CC BY-NC-SA; monitor image provided by the author with permission from the parties shown.]
Everybody at the table looks happy. The room has a big screen so everyone can see each other, and what looks like some sort of omnidirectional microphone sits on the table. What could possibly go wrong, right? While we admit that the setup pictured could be viable (though still far from ideal) under perfect circumstances (all speakers within a tight radius of the microphone and speaking clearly), the pandemic complicated matters: social distancing requirements pushed parties farther from the microphone, and masks made their speech less clear.
When done right, hybrid in-person/remote proceedings can offer a good solution when a party in a proceeding being held in person is unable to appear for some reason. Organizing an in-person proceeding during a pandemic or a virtual proceeding at any time is a challenge in its own right, but combining the two increases the degree of difficulty exponentially. Doing these proceedings the right way requires thoughtful planning and execution – otherwise, you risk getting the worst of both worlds: all of the health risk of working in person plus all of the audiovisual inconvenience of working remotely (poor audio and its attendant fatigue and/or vocal strain).
My initial experience with hybrid proceedings was entirely pleasant thanks to good work and planning by my local federal court. With the local jail on full lockdown, parties able to appear in person were in the courtroom, while defendants appeared by video from the jail. In this setting, I really benefited from some very good technology infrastructure: every party present spoke into microphones that fed into a headset the court provided me. I was able to interpret simultaneously for the defendant first over the court’s video connection and later through a dedicated phone line in parallel with the court’s video connection, always using a professional headset with a boom microphone to ensure the defendant heard me clearly. When interpreting into English for the Court, I spoke into the gooseneck microphone on the table in front of me and my sound would come out loud and clear through the courtroom’s built-in speakers. So, the essential elements I needed to work effectively were covered: being able to hear and be heard clearly, plus social distancing and masks to mitigate health risks. One enormous luxury was having IT staff on hand to help troubleshoot the technology setup in general and as it related to interpretation. It was only when I later ventured out to do hybrid events in lawyers’ offices that I truly appreciated how good I’d had it.
The flip side to the excellent experience I had in my local court is the hybrid horror stories, like a proceeding in which all participants seated at a 15-foot-long conference table were expected to speak into the microphone on a webcam at one end of the room. In another, the interpreter was present in a conference room along with the witness, but had to almost yell to be heard because the only microphone provided was a speakerphone that had to be positioned too far from the interpreter for it to also capture audio from the attorney asking the questions. In yet another, the interpreter was the one appearing remotely, battling through muddy audio from a defendant speaking into a speakerphone that was too far away to capture their speech clearly. The common thread in these anecdotes is that technology was deployed that someone assumed would be adequate but that, in practice, was not up to the demands of the task at hand. We didn’t wait for these clients to magically see the light. We devised some solutions and began to deploy them where we could.
The first opportunity to deploy some technology to mitigate the worst problems encountered in hybrid proceedings came in a private deposition on a civil matter. The (defense) witness, plaintiff, plaintiff’s attorney, the court reporter and a videographer were present in the same location, while the interpreter and defense counsel appeared remotely by video. Here’s the setup that made the interpreter appearing remotely say the sound was “Better than in person.”
Let’s break this down piece by piece to understand how the hardware and software support good audio in this hybrid setting. The stars of the show are the microphones: USB table mics, all plugged into the same computer via a USB hub. The program running on that computer (Voicemeeter Potato) allows all of those mics to be live at the same time without causing echo or feedback. Think of it as a digital version of the kind of sound mixer you might see at a concert. The sound that passes through this mixer can be channeled to a video platform using what are called virtual audio cables – a digital version of the cable you would have used to play your iPod through your car stereo in the mid-2000s. In this case, that cable takes the sound from the digital mixer and sends it to another program (Zoom, Webex, Teams or whatever platform you are using for the proceeding).

Portable USB speakers are connected to the laptop so everyone present in the room can hear the audio coming from the remote parties loud and clear. Although the laptop has a built-in webcam, I set up an external webcam so that the laptop could be turned to face the court reporter (who wanted to be able to see the interpreter) while the camera itself stayed on the witness, keeping them visible to the attorney appearing remotely.

One limitation of the setup pictured is that, with only one screen in play on the laptop, parties other than the court reporter can’t see the parties appearing remotely. This has a simple solution, however: any party in the room is free to connect to the proceeding from their own device to see the remote participants, as long as they do not connect to the meeting audio. Because the laptop pictured is already sending and receiving the meeting’s sound, having a microphone or speakers live on any other device in the same space would cause screeching feedback. Finally, the long cable running the length of the table is an audio feed into the videographer’s mixer (not pictured). It gives me great pleasure to share that the Polycom speakerphone visible at the end of the table is only there for decoration (!).
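For the technically inclined, here is a minimal sketch of a pre-call sanity check that is not part of the setup described above, just an illustration: a short Python script using the sounddevice library to confirm that the operating system actually sees the Voicemeeter/VB-Audio virtual devices before you select them as the microphone and speaker in Zoom, Webex or Teams. The device-name substrings it searches for are assumptions; the exact labels vary by Voicemeeter version and platform.

```python
# Illustrative sketch only (not the author's tooling): list the audio
# devices the OS reports and flag anything that looks like a
# Voicemeeter / VB-Audio virtual cable, so you know the virtual mixer
# is installed and running before joining the meeting.
# Requires: pip install sounddevice
import sounddevice as sd

# Substrings that commonly appear in Voicemeeter/VB-Audio device names;
# these are assumptions -- adjust them to match what your system shows.
VIRTUAL_HINTS = ("voicemeeter", "vb-audio", "cable")

def main() -> None:
    devices = sd.query_devices()  # sequence of device-info dicts
    found_virtual = False
    for index, dev in enumerate(devices):
        name = dev["name"]
        is_virtual = any(hint in name.lower() for hint in VIRTUAL_HINTS)
        found_virtual = found_virtual or is_virtual
        kinds = []
        if dev["max_input_channels"] > 0:
            kinds.append("input")
        if dev["max_output_channels"] > 0:
            kinds.append("output")
        tag = "VIRTUAL" if is_virtual else "       "
        print(f"[{tag}] #{index:2d} {name} ({'/'.join(kinds)})")

    if not found_virtual:
        print("\nNo Voicemeeter/VB-Audio device found -- the virtual "
              "mixer may not be installed or running.")

if __name__ == "__main__":
    main()
```

Running this before the parties arrive is a quick way to catch a missing driver or an unplugged hub while there is still time to fix it, rather than discovering the problem once the remote attorney has already joined.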
The scenario above benefits from certain advantages: it takes place in a smaller space with a limited number of people. And while this can be hard to replicate in a courtroom (a larger space with more people in it), the principles behind good sound are the same: every party speaking is mic’d and their audio flows through a single device. One major advantage of a courtroom is that it is likely already wired for sound from multiple microphones. The challenge is taking that sound and channeling it onto a video platform (and the speakers in the courtroom simultaneously). Navigating a court’s administrative layers to advocate for such a solution is perhaps a bigger challenge than making the technology itself work. That is a fertile topic for another day.
About the Author
As a state and federally certified court interpreter based in Des Moines, Iowa, Ernest Niño-Murcia has interpreted legal proceedings and prepared translations, transcriptions and expert witness reports/testimony for clients in the private and public sectors. Outside of court, he has interpreted for public figures such as Newt Gingrich, Bernie Sanders, Elizabeth Warren and Iowa Governor Kim Reynolds. In 2020, he co-founded T.E.A. Language Solutions to provide remote interpreting training, consulting and live technical support to interpreters and clients. Additionally, Ernest is a Jeopardy! Champion (2012), whose greatest achievement on the show was beating an attorney to the buzzer to answer “co-defendant” in the “11 letter words” category.