
3. Centralized translation memory

Centralized translation memory systems store TM on a central server. They work together with desktop TM and can increase TM match rates by 30-60% over the leverage attained by desktop TM alone. They export prebuilt "translation kits", or "t-kits", to desktop TM tools. A t-kit contains the content to be translated, pre-segmented on the central server, together with a subset of the TM containing all applicable TM matches. Centralized TM is usually part of a globalization management system (GMS), which may also include a centralized terminology database (or glossary), a workflow engine, cost estimation, and other tools.
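
As a rough illustration of what a t-kit carries, the Python sketch below models it as the pre-segmented source content plus only those TM entries that match it. All names here (TranslationUnit, TKit, build_tkit) and the crude similarity score are invented for this example; real systems typically exchange such content in standard formats such as XLIFF or TMX and use far more sophisticated fuzzy matching.

```python
from dataclasses import dataclass, field
from difflib import SequenceMatcher

@dataclass
class TranslationUnit:
    # One source/target pair stored in the central translation memory.
    source: str
    target: str

@dataclass
class TKit:
    # Pre-built "translation kit" exported to a desktop TM tool:
    # content segmented on the server plus the subset of the TM
    # containing the matches that apply to those segments.
    segments: list
    tm_subset: dict = field(default_factory=dict)

def fuzzy_score(a, b):
    # Crude stand-in for a TM fuzzy-match score in the range 0-100.
    return round(SequenceMatcher(None, a.lower(), b.lower()).ratio() * 100)

def build_tkit(raw_text, central_tm, threshold=75):
    # Hypothetical server-side step: segment the content, then attach
    # every TM unit that matches a segment at or above the threshold.
    segments = [s.strip() + "." for s in raw_text.split(".") if s.strip()]
    kit = TKit(segments=segments)
    for seg in segments:
        scored = [(fuzzy_score(seg, tu.source), tu) for tu in central_tm]
        kit.tm_subset[seg] = sorted(
            (m for m in scored if m[0] >= threshold),
            key=lambda m: m[0],
            reverse=True,
        )
    return kit

if __name__ == "__main__":
    tm = [
        TranslationUnit("Click the Save button.", "Нажмите кнопку «Сохранить»."),
        TranslationUnit("Click the Print button.", "Нажмите кнопку «Печать»."),
    ]
    kit = build_tkit("Click the Save button. Restart the application.", tm)
    for seg in kit.segments:
        print(seg, "->", [(score, tu.target) for score, tu in kit.tm_subset[seg]])
```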


Lecture 4. Speech translation

Speech translation is the process by which conversational spoken phrases are instantly translated and spoken aloud in a second language. Speech translation technology enables speakers of different languages to communicate, and is thus of great value to humankind in science, cross-cultural exchange and global business.

A speech translation system typically integrates three software technologies: automatic speech recognition (ASR), machine translation (MT) and speech synthesis (text-to-speech, TTS).

The speaker of language A speaks into a microphone, and the speech recognition module recognizes the utterance. It compares the input with a phonological model built from a large corpus of speech data from multiple speakers. The input is then converted into a string of words, using a dictionary and the grammar of language A, based on a massive corpus of text in language A.

The machine translation module then translates this string. Early systems replaced every word with the corresponding word in language B; current systems do not translate word for word, but take the entire context of the input into account to generate an appropriate translation. The translated utterance is sent to the speech synthesis module, which estimates the pronunciation and intonation matching the string of words on the basis of a corpus of speech data in language B. Waveforms matching the text are selected from this database, and the speech synthesis module concatenates and outputs them.
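
To make the pipeline above concrete, the sketch below chains the three stages as plain Python functions. The asr, translate and tts functions are hypothetical stand-ins (a hard-coded phrase table instead of real recognition, translation and synthesis models); the sketch only illustrates the ASR-to-MT-to-TTS hand-off described above, not any real system.

```python
def asr(audio):
    # Stand-in for automatic speech recognition: a real module compares the
    # audio against phonological and language models of language A and
    # returns the recognized word string.
    return "where is the station"

def translate(text):
    # Stand-in for the MT module. Early systems substituted word for word;
    # current systems translate the whole utterance in context. Here a tiny
    # hard-coded phrase table fakes an English-to-Japanese translation.
    phrase_table = {"where is the station": "eki wa doko desu ka"}
    return phrase_table.get(text, text)

def tts(text):
    # Stand-in for speech synthesis: a real module selects waveforms matching
    # the target-language text, concatenates them and plays them back.
    return text.encode("utf-8")  # pretend these bytes are audio

def speech_translate(audio_in):
    # The ASR -> MT -> TTS hand-off described above.
    recognized = asr(audio_in)
    translated = translate(recognized)
    return tts(translated)

if __name__ == "__main__":
    print(speech_translate(b"<recorded utterance in language A>"))
```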

1. History of speech translation

In 1983, NEC Corporation (Nippon Electric Company) demonstrated speech translation as a concept exhibit at ITU Telecom World, the exhibition of the International Telecommunication Union.

The first individual generally credited with developing and deploying a commercialized speech translation system capable of translating continuous free speech is Robert Palmquist, who released a large-vocabulary English-Spanish system in 1997. The effort was funded in part by the Office of Naval Research. To further develop and deploy speech translation systems, he founded SpeechGear in 2001, which holds broad patents covering speech translation systems.

In 1999, the C-Star-2 consortium demonstrated speech-to-speech translation among five languages: English, Japanese, Italian, Korean and German.

In 2003, SpeechGear developed and deployed the world's first commercial mobile device with on-board Japanese-to-English speech translation.

One of the first translation systems using a mobile phone, "Interpreter", was released by SpeechGear in 2004.

In 2006, NEC developed another mobile device with on-board Japanese-to-English speech translation.

Another speech translation service using a mobile phone, "shabette honyaku", was released by ATR-Trek in 2007.

In 2009, SpeechGear released version 4.0 of its Compadre:Interact speech translation product. This version provides instant translation of conversations between English and approximately 35 other languages.

Today, there are a number of speech translation applications for smartphones, e.g. Jibbigo, which offers a self-contained mobile app in eight language pairs through Apple's App Store and the Android Market.