Subtitling

  • Source language file edited to perfection
  • Source language files annotated for translation
  • We work only with certified translators who have several years’ subtitling experience
  • Each translation is edited by an ATA certified translator
  • Create a low-resolution proxy video for the client to review
  • Subtitle in over 200 languages
  • A great team of over 500 vetted, experienced and certified translators in target language countries

Working with some of the best-trained linguists in almost every spoken language in the world, we learned a strange thing: experience did not matter as much as proper training did. Consider this: how many times have you met someone who was born in one country, has lived in another for over 20 years, and still speaks the adopted country’s language incorrectly?

During the vetting process for our translators, we were shocked to learn that some of the most confident translators were not as good as they claimed to be. In fact, they were shocked to see the amount of red ink (the editor’s corrections) in their work. The lesson: if you speak incorrectly, you will continue to speak incorrectly for the rest of your life unless you consciously learn the correct use of the language. And to learn to speak correctly, one must first unlearn speaking incorrectly.

Our association with over 300 language specialists who are among the best in their language combinations has resulted in some of the best translations for hundreds of television shows, operas, musicals, and movies. Each of our translators goes through a rigorous vetting program, and only the very best make it onto our hallowed list of translators whom we use for our clients’ subtitling jobs. Each job is double edited (single edited if your budget is limited), to perfection.

We consider ourselves a boutique subtitling house. Give us a try is all we can say: nothing will be lost in translation!

English <> German
English <> French
English <> Québécois French
English <> Italian
English <> Latin American Spanish
English <> Castilian Spanish (European)
English <> Brazilian Portuguese
English <> Danish
English <> Dutch
English <> Japanese
English <> Korean
English <> Hindi
English <> Arabic
English <> Chinese (Simplified)
English <> Chinese (Traditional)
English <> Polish
English <> Norwegian
English <> Swedish
English <> Icelandic
English <> Turkish
English <> Finnish

And many more…at unbeatable prices!

• 1-2 lines per caption placed at bottom center

• No more than 32 characters per line

• For spoken dialog, line breaks should follow the natural rhythm of speech for maximum readability.

• Captions should be timed to when the speaker begins, and disappear once the speaker is finished and before a camera change, unless that causes the caption to be on screen for less than one second. The maximum time a caption should remain onscreen is 7 seconds.

• Timing and sentence breaks:
Time the text according to the lyrics, meaning the English should read as the opera is being sung. In ordinary dialog, this would be the sentence break:

“He saw the red car
and ran after it.”

But for operas and musicals, lines have to be broken as they are sung. In the above instance, if the singer holds the word ‘saw’, you’d break it like this:

“He saw
the red car and ran after it.”

In short, the line break has to follow the singing.

• Please fill in any missing lyrics/dialog/spoken words. All spoken and sung words must be present in the file.

• Speaker identification is required only when necessary for comprehension. (Example: when someone is off screen but it is still apparent who is speaking, speaker identification is not required.)

• When speaker identification is required, the speaker’s name should be in all capital letters, with a colon, and a space. (Example: JOHN: I went to the library.)

• Sound effects will only be required when plot pertinent, and when included, should be bracketed and formatted in all capital letters. (Example: [PHONE RINGS])

• Italicization will only be required in the case of narration/voiceover speech, dialogue from on-screen television or radio, or when a character is speaking over a phone and is not physically present in the scene.

• Numerals 1-12 should be written out. All other numbers should be written as digits. (Example: I bought five books, so now I have a total of 15.)

• Speaker trailing off should be formatted with ellipses, and abrupt pauses or interruptions formatted with a long dash. (Example: “Yes…” or “I was going – ”)
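The readability rules above (1-2 lines per caption, no more than 32 characters per line, 1-7 seconds onscreen) are mechanical enough to check automatically. Below is a minimal sketch of such a checker; the `Caption` class and its field names are illustrative assumptions, not part of any particular subtitling tool:

```python
# Sketch of a caption checker enforcing the readability rules above.
from dataclasses import dataclass

@dataclass
class Caption:
    start: float       # onscreen time in seconds
    end: float         # offscreen time in seconds
    lines: list        # the caption's text lines

MAX_LINES = 2         # 1-2 lines per caption
MAX_CHARS = 32        # no more than 32 characters per line
MIN_SECONDS = 1.0     # avoid captions shorter than one second
MAX_SECONDS = 7.0     # maximum onscreen duration

def check_caption(cap):
    """Return a list of rule violations for one caption (empty if OK)."""
    problems = []
    if not 1 <= len(cap.lines) <= MAX_LINES:
        problems.append(f"{len(cap.lines)} lines (expected 1-{MAX_LINES})")
    for i, line in enumerate(cap.lines, 1):
        if len(line) > MAX_CHARS:
            problems.append(f"line {i} has {len(line)} chars (max {MAX_CHARS})")
    duration = cap.end - cap.start
    if duration < MIN_SECONDS:
        problems.append(f"onscreen only {duration:.2f}s (min {MIN_SECONDS}s)")
    if duration > MAX_SECONDS:
        problems.append(f"onscreen {duration:.2f}s (max {MAX_SECONDS}s)")
    return problems
```

A compliant caption such as `Caption(0.0, 3.0, ["He saw the red car", "and ran after it."])` returns an empty list; anything flagged comes back as a human-readable violation.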

Talking Type’s Subtitling process

1. We first QC the English STL file that you send to us.
2. We export a time code and subtitle file. This file has the English subtitles with time codes.
3. We create an Excel file from this exported text, with time code in, time code out, and the two lines of subtitles in individual columns.
4. These are sent to our vetted translators, who add the translations alongside the English text in a separate column. The time codes are maintained throughout.
5. After inspecting the translation, we import it into our captioning software. Since these subtitles use exactly the same time codes as the English text, all the time codes are perfect.
6. We create a proxy video with subtitles and send it either to our client to check (if that’s what we’re asked to do) or to another editor, who watches the subtitled video and makes any corrections to the Excel file we received from the first translator.
7. We incorporate the changes into the imported subtitle file.
8. After the final QC, we export the final deliverable subtitle file and send it off to the client.
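Step 3 above, flattening the exported subtitles into a spreadsheet with time code in, time code out, and the two subtitle lines in individual columns, can be sketched as follows. The column layout is an illustrative assumption, not the actual Talking Type template, and the function takes simplified tuples rather than parsing a real STL export:

```python
# Sketch of step 3: writing timecoded captions to a CSV (openable in
# Excel), one row per caption, with a blank column for the translator.
import csv
import io

def captions_to_csv(captions):
    """captions: list of (tc_in, tc_out, line1, line2) tuples."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    # Translators fill in the last column alongside the English text;
    # the time codes are carried through untouched.
    writer.writerow(["TC In", "TC Out", "Line 1", "Line 2", "Translation"])
    for tc_in, tc_out, line1, line2 in captions:
        writer.writerow([tc_in, tc_out, line1, line2, ""])
    return buf.getvalue()
```

Because the translated column reuses the same rows, and therefore the same time codes, as the English text, reimporting the finished file needs no retiming, which is the point of step 5.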

SDH stands for Subtitles for the Deaf or Hard-of-Hearing.

Closed captioning is made specifically for deaf or hard-of-hearing viewers, so it includes information such as sound effects, music symbols or music descriptions, character IDs when the character is not seen on screen, and pertinent descriptions of the dialog, such as:

Where are you?
(whispers)

For a deaf or hard-of-hearing person, the same information is needed to make the content easier to enjoy. So SDH subtitles not only translate the foreign language into the viewer’s native language; they also provide the enhanced descriptions of closed captioning, with important non-dialog information added, as well as speaker identification, useful when the viewer cannot otherwise visually tell who is saying what.

In the words of Neil Hunt, Netflix Chief Product Officer (text copied from Quora):

“Forced narrative is a jargon term that means text in the picture that is not part of the primary language dialog – “A long time ago, in a galaxy far far away…”, “9 months earlier”, “London, 2012”, “15:30h”, as well as dialog by speakers not in the primary language of the film (e.g. Spanish spoken by a Mexican as part of a US film whose primary language is English).

When a show or film is exhibited in a different country, the primary language dialog is translated into that country’s language, either as dubs, subs, or both. However non-primary-language dialog, as well as place-names, explanations, times, backstories etc., are all forced to be displayed in the translated language, even if subtitles are off (e.g. because the viewer is fluent in the primary language, or is enjoying the content in dubbed form).”

Avid Media Composer:

  • Open your video file in Avid Media Composer.
  • Drag SubCap onto the video track in your timeline.
  • Next, go to Tools > Effect Editor. Navigate to Caption Files > Import Caption Data and select your caption file from your computer.

Adobe Premiere

  • Import your subtitle file into your project in Adobe Premiere Pro.
  • Open your video project file in Adobe Premiere Pro.
  • From the top navigation bar, select File > Import. Select your .scc/.stl file and hit Open.