Multi-language subtitling – displaying subtitles in two languages at once – is an increasingly common deliverable for video localization projects. Why? Because many clients have multilingual workforces or audiences who will watch a video together, often at a live event or on an in-store display. However, these subtitle projects are tricky to pull off – and it’s critical to know what you’re doing.
This post lists five tips multimedia localization professionals must keep in mind to produce high-quality multi-language subtitled videos.
[Average read time: 3 minutes]
First, what exactly is multi-language subtitling?
The term refers specifically to a video that is subtitled in two or more languages at once, as opposed to the standard convention of displaying only one language at a time. The video screenshot that follows has multi-language Spanish and French subtitles.
Note that the two languages are in different colors – this helps viewers distinguish between them more quickly.
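As an illustration, one common way to achieve this in a text-based format is to stack both languages in a single cue and tag each with its own color. The hypothetical SRT fragment below shows Spanish and French lines in one cue – note that the `<font>` tag is widely, but not universally, supported by players, which is one reason burned-in delivery (see tip 5) is usually safer:

```
1
00:00:01,000 --> 00:00:04,000
<font color="#FFFF00">Bienvenidos a nuestra tienda.</font>
<font color="#00FFFF">Bienvenue dans notre magasin.</font>
```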
When are they an ideal video localization solution?
Multi-language subtitles are perfect for videos with a multilingual audience – specifically, when the members of that audience can’t customize their language settings.
For example, customers watching an in-store display in a multilingual locale won’t be able to change the video language – in that scenario, it’d be essential to provide subtitles in at least the major languages for that locale. Same for locations with heavy multinational traffic, like airports, where videos or other information is usually localized into various languages. And of course, they’re ideal for corporate events or presentations with large multilingual audiences. In all of these cases, multi-language subtitling ensures that the videos are accessible to the largest audience possible.
So what do you need to know for multimedia localization?
These projects require a slightly different video localization workflow, with a few critical requirements. Here’s what you need to know.
1. Can’t really accommodate more than two languages.
Multi-language subtitles cover up more of the video frame than single-language ones. On top of that, viewers have to scan the titles to find their language, and this can be particularly tricky when the subtitled languages look alike on screen – for example, on a project with both Urdu and Arabic subtitling, since the two languages use very similar scripts. For this reason, multi-language subs can really only accommodate two languages at once and still be readable.
2. Source time-coded template text should be shorter.
In general, it’s good to limit subtitle length for readability. However, this is particularly important when two languages share the screen. When creating the time-coded translation template – a process known as spotting – keep the source text on the shorter side whenever possible.
3. Better to keep each language to one line.
Along with keeping the text segments shorter, it’s best to keep each language on a single line if at all possible. This is tricky, however – it means strictly limiting subtitle text segments to a certain length. It may also require lowering font sizes slightly to fit all the text comfortably in the frame, which can be an issue for videos at lower resolutions. Finally, it means making sure that linguists don’t add line breaks during translation – yes, the breaks can be deleted, but it’s always best to alter translated texts as little as possible. This is especially important to remember for Japanese subtitles – since the language requires manual line breaks, linguists often default to adding them.
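Checks like these can be automated before implementation. The sketch below flags segments that would violate tips 2 and 3; the 42-characters-per-line limit is an illustrative assumption based on a common single-language convention – multi-language projects may need a tighter cap, so adjust it to the project spec:

```python
# Minimal subtitle-segment checker. The limit is an illustrative assumption,
# not a universal standard; adjust per project spec.
MAX_CHARS = 42  # common per-line convention; tighten for dual-language subs

def check_segment(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Return a list of problems found in one translated segment."""
    problems = []
    if "\n" in text:
        # Tip 3: linguists shouldn't add manual line breaks
        problems.append("contains a manual line break")
    for line in text.splitlines() or [text]:
        # Tip 2/3: each language should fit on one short line
        if len(line) > max_chars:
            problems.append(f"line exceeds {max_chars} chars ({len(line)})")
    return problems

# Example: a segment where the translator added a break
print(check_segment("Bienvenue dans notre magasin.\nNous sommes ouverts."))
```

A script like this can run over an exported translation template before post-production, so problem segments go back to the linguists rather than getting silently reflowed during implementation.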
4. Not manageable for videos with many on-screen titles.
Subtitles don’t work very well for videos with a large number of on-screen titles like supers or lower-thirds, since the texts compete for space on the screen. As you can imagine, the problem is even worse for multi-language projects. For this reason, multi-language subtitling is only a good solution for videos with very minimal titling.
5. Requires a burned-in video delivery.
It’s best practice to burn multi-language subs into the picture to avoid issues with formatting or text size. While most subtitle text formats like SRT, WebVTT and TTML support Unicode, viewers can still run into glitches. For example, a user may raise the player font size for readability, adding line breaks that push the subtitle content outside the frame. Likewise, languages with different text directions may not display well together, depending primarily on the user’s player and browser setup. Burning to picture avoids these issues and provides a deliverable that will display all of the languages properly.
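Burning is typically handled in the post-production tool itself, but as a sketch, it can also be scripted – for example, with FFmpeg’s subtitles filter (the file names here are placeholders):

```shell
# Burn a subtitle file into the picture; re-encodes the video, copies the audio.
ffmpeg -i source.mp4 -vf "subtitles=multilang.srt" -c:a copy burned.mp4
```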
Multi-language subtitles require a more involved workflow
Aside from the tweaks during time-coding/spotting and implementation, these projects require a customized post-production workflow to ensure the readability of the final videos. For example, beyond the different colors, a client may request further font and format specifications for each language. Likewise, since accessibility may be an issue, clients may request a final QA pass with a speaker of each language who doesn’t know the content, to confirm that the text can be read in real time. And of course, pairing two languages will almost always mean tweaking the visual specifications so that they work well together in the frame – and often testing them before final implementation. In short, multi-language subtitle projects almost always have special requirements, so it’s critical to set them up properly during pre-production. As with video localization in general, that’s the only way to avoid delivery delays and costly re-work.