They’re not closed captions and they’re not subtitles – they’re something else altogether. If you work in translation & localization or provide video subtitling services, you’ve probably heard of subtitles for the deaf and hard-of-hearing, usually referred to by the acronym SDH. You’ve probably also wondered why this deliverable exists at all – after all, don’t captions already provide accessibility for the deaf and hard-of-hearing? There’s a lot of confusion about SDH, but it’s crucial to understand this service, since it’s quickly becoming a staple of video localization.
This post will explain what SDH is exactly, how it makes content more accessible, and what translation & localization professionals must know to provide it.
[Average read time: 3 minutes]
First – what are they?
SDH are a relatively recent innovation in video & film accessibility. To understand what they are, it’s good to understand the difference between captioning and subtitling – if you haven’t already, check out our previous post, Video Translation 101: Are subs & captions the same thing? (No.)
Very briefly, closed captions are intended for viewers who are deaf or hard-of-hearing, and provide text for any audible information in the film or video, also known as “audibles.” Most of this information in videos is what characters or presenters say – in short, the dialogue.
Subtitles, on the other hand, are intended for audiences that don’t speak a show or video’s language, and translate linguistic content. Most of this is dialogue or voice-over as well, of course, but it also includes things like signs, newspaper headlines, or on-screen titles, which wouldn’t be covered by captions since they’re not audible information. And, of course, subs are a different language from the video, while captions are the same language.
SDH combines all this information – both audible and linguistic – into a single file. That file can then be translated for foreign-language audiences, making shows accessible to the deaf and hard-of-hearing in those locales. As an example, we've taken a still from Night of the Living Dead (1968, directed by George A. Romero, in the public domain) and added SDH in Spanish:
Note that both the background music and the street sign are included in the text.
How did SDH come about?
Aside from having different content from captions & subs, SDH also came about out of technical necessity. The history is complex, but basically, early digital delivery authoring systems (DVDs, essentially) often didn’t support standard captioning formats, so developers created their own, leveraging subtitle technical specs and visual formats – thus the new name. Initially, SDH were really just captions for English-language movies & shows by another name. With the multimedia translation boom driven by online distribution, they’ve effectively become a new localization deliverable with expanded accessibility.
Can I just translate a captions file to make SDH?
No – remember that traditional captions don’t include non-audible content, like newspaper headlines, forced narratives or titles, so you can’t get SDH just by translating a captions file. Likewise, captioning treats content slightly differently (for example, by naming speakers), so you can’t just add audible information to an existing subs file either – it requires more work than that.
Does it cover an entire audience accessibility-wise?
No. Remember that there are blind or sight-impaired users as well, and you’ll need audio description to reach them. JBI provides this service – in multiple languages, of course.
Any production challenges?
Yes – two of them.
- The aforementioned confusion with captions. Remember, SDH must be done from a linguistic as well as an accessibility perspective. Make sure that your SDH is done by an experienced audio & video translation provider – again, like JBI Studios.
- Extra steps if also dubbing content. Many providers dub their video content and provide SDH as well. If so, the translations in the SDH and the dubbing must contain the same linguistic content, and they must be synchronized as well. This means, first, that SDH time-codes can’t be finalized until the dubbing is locked – otherwise, there will be discrepancies in timing and linguistic segmentation. Second, while ideally the SDH would be translated separately from the dubbing to maximize the quality of each deliverable (especially since the latter can require heavy editing for lip-sync), in practice SDH is often used by hard-of-hearing viewers in conjunction with the same-language audio track, as a support for phrases or sound effects they miss. In these cases, discrepancies between the actual audio and the SDH are quite jarring to the viewing experience – therefore, the content must match.
Technically, SDH can be provided in just about any text format, like SRT, STL & WebVTT, making it ideal for online media streaming.
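To make this concrete, here’s a sketch of what a few Spanish SDH cues might look like in SRT format – the time-codes, speaker label and cue text are illustrative only, showing how a sound-effect description, an on-screen sign and identified dialogue sit side by side in one file:

```
1
00:00:12,000 --> 00:00:15,500
[música tensa]

2
00:00:16,000 --> 00:00:18,200
(LETRERO) CEMENTERIO

3
00:00:19,000 --> 00:00:22,400
JOHNNY: Vienen a por ti, Barbara.
```

Note that a plain subtitle file would carry only the dialogue and the sign, while a plain captions file would carry only the dialogue and the music cue – SDH carries all three.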
Will you see this service soon?
Yes – if you haven’t already seen a request for it.
Many countries around the world already have accessibility requirements, and many more are drafting or expanding them. In the US, the ADA requires that more and more online content become accessibility-compliant. Beyond the legal requirements, however, SDH just makes good sense. For a relatively small investment, corporations, e-Learning authors and studios can make their content accessible to a larger audience. That's a win-win in the localization world.