{
"paper_id": "2005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:22:09.639020Z"
},
"title": "Rapid Development of an Afrikaans-English Speech-to-Speech Translator",
"authors": [
{
"first": "Herman",
"middle": [
"A"
],
"last": "Engelbrecht",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Stellenbosch",
"location": {
"country": "South Africa"
}
},
"email": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Schultz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we investigate the rapid deployment of a two-way Afrikaans-English speech-to-speech translation system. We discuss the approaches and the amount of work involved in porting a system to a new language pair, i.e. the steps required to rapidly adapt the ASR, MT and TTS components to Afrikaans under limited time and data constraints. The resulting system represents the first prototype built for Afrikaans-English speech translation.",
"pdf_parse": {
"paper_id": "2005",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we investigate the rapid deployment of a two-way Afrikaans-English speech-to-speech translation system. We discuss the approaches and the amount of work involved in porting a system to a new language pair, i.e. the steps required to rapidly adapt the ASR, MT and TTS components to Afrikaans under limited time and data constraints. The resulting system represents the first prototype built for Afrikaans-English speech translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper we describe the rapid deployment of a two-way Afrikaans to English Speech-to-Speech Translation system. This research was performed as part of a collaboration between the University of Stellenbosch and Carnegie Mellon University. Using speech and text data supplied by the University of Stellenbosch, a native Afrikaans speaker developed the Afrikaans automatic speech recognition (ASR), machine translation (MT) and text-to-speech synthesis (TTS) components over a period of 2.5 months. The components were built using existing software tools created by the Interactive Systems Laboratories (ISL). The prototype is designed to run on a laptop or desktop computer using a close-talking headset microphone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Afrikaans is a Dutch derivative and one of the 11 official languages of the Republic of South Africa. The 11 languages consist of 2 Germanic languages, English and Afrikaans, and 9 Ntu (or Bantu) languages: isiNdebele, Sepedi, SeSotho, Swazi, Xitsonga, Setswana, Tshivenda, isiXhosa and isiZulu. The majority of the population speaks two of the 11 languages: their native mother tongue and a second language, most often English. English can therefore be regarded as the pivot language in South Africa and is the most natural choice to translate to and from. Afrikaans was chosen for the following three reasons: (i) Of the remaining 10 official languages, Afrikaans has the longest written history and therefore the most available text data. (ii) Unlike the Ntu languages, Afrikaans shares a language root with English, and these similarities should help in developing Afrikaans-English translation. (iii) The developer is fluent in both Afrikaans and English, but does not speak any of the Ntu languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The paper is organised into four parts. In the first part we will discuss some of the characteristics of Afrikaans. In the second part we will present the system architecture of the prototype and discuss the different development strategies that were chosen for each component of the system. The third part will discuss the Afrikaans data resources that were available and the last part will discuss the implementation details and performance of the prototype system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The following discussion of the characteristics of Afrikaans has been obtained from [1] .",
"cite_spans": [
{
"start": 84,
"end": 87,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Characteristics of Afrikaans",
"sec_num": "2."
},
{
"text": "Afrikaans is linguistically closely related to 17th-century Dutch and, by extension, to modern Dutch. Dutch and Afrikaans are mutually intelligible. Other, less closely related languages include the Low Saxon spoken in northern Germany and the Netherlands, German, and English. Cape Dutch vocabulary diverged over time from the Dutch spoken in the Netherlands as Cape Dutch was influenced by European languages (Portuguese, French and English), East Indian languages (Indonesian languages and Malay), and native African languages (isiXhosa and Khoi and San dialects). The first Afrikaans grammars and dictionaries were published in 1875.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "History",
"sec_num": "2.1."
},
{
"text": "Besides vocabulary, the most striking difference from Dutch is the much more regular grammar of Afrikaans, which is likely the result of mutual interference with one or more Creole languages based on the Dutch language spoken by the relatively large number of non-Dutch speakers (Khoisan, Khoikhoi, German, French, Malay, and speakers of different African languages) during the formation period of the language in the second half of the 17th century.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "History",
"sec_num": "2.1."
},
{
"text": "Grammatically, Afrikaans is very analytic. Compared to most other Indo-European languages, verb paradigms in Afrikaans are relatively simple. With a few exceptions, there is no distinction for example between the infinitive and present forms of verbs. Unlike most other Indo-European",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar",
"sec_num": "2.2."
},
{
"text": "Consonants: p b t tS d dZ k g P m n \u00f1 N r \u00f6 f v w T s S z Z H j l",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammar",
"sec_num": "2.2."
},
{
"text": "Short vowels: i y u e \u00f8 E oe O a @ ae. Long vowels: i: y: u: e: \u00f8: o: E: oe: 3: O: a: ae:. Diphthongs: iu ia ui eu oi Oi ai aU a:i @i @u aey. Table 1 : Afrikaans phone set (IPA).",
"cite_spans": [],
"ref_spans": [
{
"start": 123,
"end": 130,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Short vowels",
"sec_num": null
},
{
"text": "languages, verbs do not conjugate differently depending on the subject e.g. \"ek is, jy is, hy is, ons is\" = Eng. \"I am, you are, he is, we are\". Unlike in Dutch, Afrikaans nouns do not have grammatical gender, but there is a distinction between the singular and plural forms of nouns. The most common plural marker is the suffix -e, but several common nouns form their plural instead by adding a final -s. No grammatical case distinction exists for nouns, adjectives and articles, with the universal definite article being \"die\" = Eng. \"the\" and the universal indefinite article being \" 'n \" = Eng. \"a/an\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Short vowels",
"sec_num": null
},
{
"text": "Vestiges of case distinction remain for certain personal pronouns. No case distinction is made, though, for the plural forms of personal pronouns, i.e. \"ons\" means both \"we\" and \"us\"; \"julle\" means \"you\"; and \"hulle\" means both \"they\" and \"them\". There is often no distinction either between objective pronouns and possessive pronouns when used before nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Short vowels",
"sec_num": null
},
{
"text": "In terms of syntax, word order in Afrikaans follows broadly the same rules as in Dutch. A particular feature of Afrikaans is its use of the double negative, something that is absent from the other West Germanic standard languages, e.g. \"Hy kan nie Afrikaans praat nie\" = Eng. \"He cannot Afrikaans speak not\" (literally). It is assumed that either French or San is the origin of the double negation in Afrikaans. The double negative construction has been fully grammaticalized in standard Afrikaans and its proper use follows a set of fairly complex rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Short vowels",
"sec_num": null
},
{
"text": "Like English, Afrikaans is written using the Roman alphabet, and words are separated by spaces. Written Afrikaans differs from Dutch in that the spelling reflects a phonetically simplified language, and so many consonants are dropped. The spelling is also considerably more phonetic than that of Dutch. Notable features include the use of 's' instead of 'z'; hence South Africa in Afrikaans is written as \"Suid-Afrika\", whereas in Dutch it is \"Zuid-Afrika\". The Dutch letter combination 'ij' is written as 'y', except where it replaces the Dutch suffix -lijk, as in \"waarskynlik\" = Dutch \"waarschijnlijk\". The letters 'c', 'q' and 'x' are rarely seen in Afrikaans, and words containing them are almost exclusively borrowings from English, Greek or Latin. This is usually because words with 'c' or 'ch' in Dutch are transliterated with 'k' or 'g' in Afrikaans. The following special letters are used in Afrikaans: \u00e8, \u00e9, \u00ea, \u00eb, \u00ee, \u00ef, \u00f4, \u00fb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Orthography",
"sec_num": "2.3."
},
{
"text": "The Afrikaans phoneme set is shown in Table 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phone Set",
"sec_num": "2.4."
},
{
"text": "The target platform of the Afrikaans-English speech translation prototype is a desktop or laptop computer. Speech input is obtained using a standard PC sound card and a close-talking PC headset microphone. The demonstration prototype consists of 3 main components: ASR, MT and TTS. Each component was developed separately and then integrated into the prototype. The breakdown of the prototype system is shown in Fig. 1 . The operation of the speech translation prototype is divided into three actions:",
"cite_spans": [],
"ref_spans": [
{
"start": 403,
"end": 409,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "3."
},
{
"text": "1. Conversion of source language speech into source language text (ASR).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "3."
},
{
"text": "2. Translation of source language text into target language text (MT). 3. Conversion of target language text into target language speech (TTS).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Architecture",
"sec_num": "3."
},
{
"text": "The choices of the recognition, translation and synthesis strategies were heavily influenced by the amount of labor-intensive work and time required to implement each strategy. Data-driven techniques were preferred over knowledge-based techniques, as they would enable the prototype to be developed more rapidly. The following strategies were therefore chosen:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversion of target language text into target language speech (TTS).",
"sec_num": "3."
},
{
"text": "\u2022 For speech recognition, a statistical n-gram language-model-based recognition strategy was chosen, as this does not involve the labor-intensive task of writing recognition grammars.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversion of target language text into target language speech (TTS).",
"sec_num": "3."
},
{
"text": "\u2022 For the translation strategy a statistical machine translation (SMT) approach was chosen instead of an Interlingua-based approach. An Interlingua-based approach would require the development of a part-of-speech tagger, an analysis grammar and a generation grammar. The SMT approach only requires the development of a translation model (TM) and a statistical language model (SLM), both of which can be learned directly from text data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversion of target language text into target language speech (TTS).",
"sec_num": "3."
},
{
"text": "\u2022 For the synthesis strategy a concatenative speech synthesis approach was chosen as a first implementation. Concatenative speech synthesis requires the construction of databases of natural speech for the target domain. A new utterance in the target domain is synthesized by selection and concatenation of appropriate subword units. The disadvantage of unit-selection concatenative speech synthesis is that it requires large amounts of memory. For each of the main components it was necessary to develop the following subcomponents:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversion of target language text into target language speech (TTS).",
"sec_num": "3."
},
{
"text": "\u2022 ASR: Acoustic Models, Language Models and Pronunciation Dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversion of target language text into target language speech (TTS).",
"sec_num": "3."
},
{
"text": "\u2022 SMT: Translation Models and Language Models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversion of target language text into target language speech (TTS).",
"sec_num": "3."
},
{
"text": "\u2022 TTS: Pronunciation Dictionary and Letter-To-Sound Rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversion of target language text into target language speech (TTS).",
"sec_num": "3."
},
{
"text": "The main components were finally integrated by simply using the output of each component as the input to the next: the first-best ASR output was used as input for the SMT component, and the best SMT translation was used as input for the TTS component. No effort was made to compensate for recognition errors or speech disfluencies, e.g. by using word lattices as input, techniques that are sometimes used to reduce the impact on SMT performance of using recognised speech rather than text as input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversion of target language text into target language speech (TTS).",
"sec_num": "3."
},
{
"text": "The biggest challenge in developing the system was the limited amount of available Afrikaans speech and text data. Over the past 100 years Afrikaans has developed a rich literature, which has resulted in the accumulation of a large amount of text data. In contrast, very little effort has so far been undertaken to record and transcribe spoken Afrikaans (suitable for speech recognition). In the rest of this section we describe the data resources in more detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Data Resources",
"sec_num": "4."
},
{
"text": "The text data consists of multilingual parliament sessions that were translated into both Afrikaans and English. The data consists of 39 parliamentary sessions from the years 2000-2001, for a total of 43k parallel sentences. The sentences were aligned using Koehn's Europarl sentence alignment tool, which is based on the Gale and Church algorithm [2] . Sentence lengths range from single words to more than 100 words; the average sentence length is 17.13 words with a standard deviation of 14.36. Translated parliamentary sessions are commonly referred to as Hansards, and in the rest of the paper we will refer to the parliamentary domain as the Hansard domain.",
"cite_spans": [
{
"start": 333,
"end": 336,
"text": "[2]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Data",
"sec_num": "4.1."
},
{
"text": "The Afrikaans speech data was collected during a period of 3 years ending in March 2004 by a consortium known as African Speech Technology (AST) [3, 4] . The AST speech corpus consists of 5 languages for a total of 11 dialects. The data was collected over the telephone and cellphone networks and each participant had to read a datasheet containing 40 utterances. This included a phonetically balanced sentence consisting of 40 words for each dialect. The AST data are orthographically and phonetically transcribed. Speech and non-speech utterances have also been marked and the phonetic transcriptions have been corrected by hand. Only the mother-tongue Afrikaans speech data was used in this research (referred to as the AA data). The AA speech data consists of a total of 265 speakers, 113 male and 152 female, for a total of 10768 utterances. 191 of the recordings were made using landlines and 74 of the recordings were made using the cell phone network for a total of about 6 hours of transcribed Afrikaans speech data.",
"cite_spans": [
{
"start": 145,
"end": 148,
"text": "[3,",
"ref_id": "BIBREF2"
},
{
"start": 149,
"end": 151,
"text": "4]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AST data",
"sec_num": "4.2.1."
},
{
"text": "As the prototype was designed to be used with a close-talking PC headset microphone, a channel mismatch would have occurred if only the available Afrikaans speech had been used for training the acoustic models. In order to reduce the channel mismatch it was decided to collect a limited amount of Afrikaans speech under the same acoustic conditions as the target application. This would also enable the evaluation of the complete demonstration prototype (excluding the synthesis). As there were only two native Afrikaans speakers, it was decided to record 1,000 utterances (500 utterances per speaker). The utterances were recorded at a sampling frequency of 16kHz using a laptop and a close-talking PC headset microphone (Andrea Anti-noise NC-61). The utterances were recorded in a medium-sized room with low to medium noise levels. The 1,000 sentences were chosen from the parallel text data so that the distribution of sentence lengths in the evaluation data would be representative of the distribution found in the parallel text corpus (up to a sentence length of 40 words per utterance). The utterances are classified as read speech, as they were recorded by prompting the speaker. The utterances were only orthographically transcribed, and no manual time-alignment of the speech signal and transcription was performed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hansard data",
"sec_num": "4.2.2."
},
{
"text": "As the AST speech data had been orthographically and phonetically aligned, a pronunciation dictionary containing 5,361 entries could be extracted from the transcriptions. The AST pronunciation dictionary has a vocabulary size of 3,795 words and an average of 1.41 pronunciation variants per word (rounded to the second decimal). Another, syllable-annotated pronunciation dictionary, developed by the University of Stellenbosch, was also available. The Stellenbosch dictionary has a vocabulary size of 36,783 words and does not contain any pronunciation variants. By combining the AST dictionary and the Stellenbosch dictionary, a new dictionary was formed with a vocabulary size of 38,960 words and an average of 1.08 pronunciation variants per word (which roughly means that most entries have only one pronunciation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronunciation Dictionaries",
"sec_num": "4.2.3."
},
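{
"text": "As a concrete illustration of this combination step, the following minimal Python sketch (using invented toy entries rather than the real AST or Stellenbosch data) merges two pronunciation dictionaries and reports the vocabulary size and the average number of pronunciation variants per word:

```python
# Minimal sketch: merge pronunciation dictionaries and compute the
# average number of pronunciation variants per vocabulary word.
# The entries below are illustrative toy data, not real AST entries.
from collections import defaultdict

def merge_dictionaries(*dicts):
    # Each dictionary maps a word to a set of pronunciation strings.
    merged = defaultdict(set)
    for d in dicts:
        for word, prons in d.items():
            merged[word].update(prons)
    return dict(merged)

def variants_per_word(pron_dict):
    # Average number of distinct pronunciations per vocabulary entry.
    return sum(len(p) for p in pron_dict.values()) / len(pron_dict)

ast = {'appel': {'a p @ l', 'A p @ l'}, 'boom': {'b u@ m'}}
extra = {'appel': {'a p @ l'}, 'kat': {'k a t'}}
combined = merge_dictionaries(ast, extra)
print(len(combined))                          # vocabulary size: 3
print(round(variants_per_word(combined), 2))  # 1.33
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronunciation Dictionaries",
"sec_num": "4.2.3."
},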
{
"text": "In order to be able to evaluate the complete prototype as well as each component separately, the same evaluation set was used for all evaluations. As previously mentioned, 1,000 utterances were selected from the parallel text data and recorded using a close-talking microphone. The 16kHz Hansard utterances were downsampled to 8kHz in order to match the acoustic models. The 200 longest utterances were used for adaptation of the recogniser and the remaining 800 utterances were used for evaluation purposes (referred to as the Hansard evaluation set). The remaining 41k sentences were used for the development of the translation models. In Table 3 information regarding the Afrikaans and English parallel text data is shown. Although the Afrikaans text data only has a vocabulary size of 25k words and the pronunciation dictionary consists of 39k words, not all the words in the Afrikaans text data were covered by the pronunciation dictionary. The following three constraints were used when selecting the 1,000 sentences to be recorded:",
"cite_spans": [],
"ref_spans": [
{
"start": 668,
"end": 675,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Partitioning of data sets",
"sec_num": "5.1."
},
{
"text": "1. Every word in a recorded sentence had to be covered by the pronunciation dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partitioning of data sets",
"sec_num": "5.1."
},
{
"text": "2. The distribution of words per sentence had to be representative of the distribution in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partitioning of data sets",
"sec_num": "5.1."
},
{
"text": "3. No sentence containing more than 40 words was recorded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partitioning of data sets",
"sec_num": "5.1."
},
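{
"text": "The three constraints above can be sketched in a few lines of Python. This is a minimal illustration, not the actual selection script: constraints 1 and 3 are implemented directly, while constraint 2 (matching the length distribution) would need an extra sampling step and is omitted here. The dictionary and sentences are invented toy data:

```python
# Minimal sketch of constraints 1 and 3 for selecting recordable
# sentences: every word must be covered by the pronunciation
# dictionary, and no sentence may exceed 40 words.
def selectable(sentence, pron_dict, max_words=40):
    words = sentence.lower().split()
    return len(words) <= max_words and all(w in pron_dict for w in words)

pron_dict = {'die', 'huis', 'is', 'groot'}
sentences = ['Die huis is groot', 'Die kat is groot']
chosen = [s for s in sentences if selectable(s, pron_dict)]
print(chosen)  # ['die kat ...' is excluded: 'kat' is out of vocabulary
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partitioning of data sets",
"sec_num": "5.1."
},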
{
"text": "The Hansard evaluation set has an average sentence length of 24.39 words with a standard deviation of 14.34. The AST speech data was divided into training, development and evaluation sets, which respectively consist of 70%, 15% and 15% of the AST data. The AST training data contains 187 speakers and 7696 utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partitioning of data sets",
"sec_num": "5.1."
},
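{
"text": "A 70/15/15 split of this kind can be sketched as below. This is an assumed implementation for illustration only (the speaker IDs are synthetic and the paper does not state how the split was drawn); splitting by speaker rather than by utterance keeps evaluation speakers unseen during training:

```python
# Minimal sketch of a speaker-level 70/15/15 train/dev/eval split.
import random

def split_speakers(speakers, seed=0):
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = speakers[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.70)
    n_dev = int(n * 0.15)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_dev],
            shuffled[n_train + n_dev:])

speakers = ['spk%03d' % i for i in range(265)]  # synthetic IDs
train, dev, evl = split_speakers(speakers)
print(len(train), len(dev), len(evl))  # 185 39 41
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partitioning of data sets",
"sec_num": "5.1."
},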
{
"text": "The Afrikaans acoustic models were bootstrapped from the GlobalPhone [5, 6] MM7 multilingual acoustic models using a web-based tool called SPICE [7] . The MM7 phones did not cover all the Afrikaans phones, and it was decided to reduce the 62-phone set to 39 phones by splitting the diphthongs into two separate phones and by not distinguishing between long and short vowels. The impact of this large reduction of the phone set on ASR performance is unknown. Another possibility would have been to bootstrap unknown Afrikaans phones from neighboring phones, but unfortunately time did not permit the development of an Afrikaans system with a larger phone set. CMU's Janus JrTk [8, 9] was used to train the acoustic models on 4.2 hours of the AST speech data.",
"cite_spans": [
{
"start": 69,
"end": 72,
"text": "[5,",
"ref_id": "BIBREF4"
},
{
"start": 73,
"end": 75,
"text": "6]",
"ref_id": "BIBREF5"
},
{
"start": 145,
"end": 148,
"text": "[7]",
"ref_id": "BIBREF6"
},
{
"start": 704,
"end": 707,
"text": "[8,",
"ref_id": "BIBREF7"
},
{
"start": 708,
"end": 710,
"text": "9]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Speech Recognition",
"sec_num": "5.2."
},
{
"text": "As the recogniser is used with a close-talking headset microphone, a channel mismatch exists between the evaluation conditions and the training conditions. There is also a domain mismatch, as the AST data covers various tasks (as described in section 4.2.1) while the Hansard data covers parliamentary debates. In an attempt to adapt to the acoustic environment and the domain, the acoustic models were further trained on 200 utterances of Hansard speech data. The acoustic models were adapted by simply continuing training on the Hansard speech data, not by using MLLR or MAP adaptation. However, as the Hansard speech data consists of only two speakers, this further training probably adapted the models to the test speakers rather than to the evaluation conditions. The Afrikaans recogniser is a fully-continuous 3-state HMM recogniser with 500 triphone models (tied using decision trees). Each state consists of a mixture of 128 Gaussians. The frontend uses 13 MFCCs, power, and the first and second time derivatives of these features, which are reduced to 32-dimensional feature vectors using LDA. Both vocal tract length normalisation (VTLN) and constrained MLLR speaker adaptive training (SAT) were employed during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Speech Recognition",
"sec_num": "5.2."
},
{
"text": "The Afrikaans and English language models were trained using SRI's statistical language modelling toolkit SRILM [10] . The in-domain Afrikaans SLM is a trigram language model with a perplexity of 103.71 and an OOV rate of 0.0% on the Hansard evaluation set. It was trained on 694,455 words with a vocabulary of 25,623 words.",
"cite_spans": [
{
"start": 102,
"end": 106,
"text": "[10]",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Speech Recognition",
"sec_num": "5.2."
},
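{
"text": "The two figures reported for each SLM, the OOV rate and the perplexity, can be illustrated with the toy computation below. This is a sketch, not the SRILM recipe: for simplicity it uses an add-one smoothed unigram model (with one extra count reserved for unseen words), whereas the actual system used SRILM trigram models:

```python
# Minimal illustration of OOV rate (fraction of evaluation tokens
# outside the training vocabulary) and perplexity, using a unigram
# model on a toy corpus.
import math
from collections import Counter

train = 'die huis is groot die huis is klein'.split()
test = 'die huis is groot'.split()

vocab = set(train)
oov_rate = sum(w not in vocab for w in test) / len(test)

counts = Counter(train)
total = len(train)

def prob(w):
    # Add-one smoothing, reserving one count for unseen words.
    return (counts[w] + 1) / (total + len(vocab) + 1)

log_prob = sum(math.log2(prob(w)) for w in test)
perplexity = 2 ** (-log_prob / len(test))
print(oov_rate)             # 0.0 (every test token is in-vocabulary)
print(round(perplexity, 2))
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Speech Recognition",
"sec_num": "5.2."
},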
{
"text": "Both the Hansard-adapted acoustic models and the unadapted acoustic models were evaluated on the Hansard evaluation set, which consists of 15,259 words and has a vocabulary size of 2.45k words. The results are shown in Table 2 . It can be seen that the unadapted acoustic models have a fairly poor performance of 46.5% WER. The acoustic models that were adapted to the Hansard evaluation conditions have a WER of only 20.0%, a relative improvement of 54.3%. Thus the channel and domain mismatch between the training conditions and the evaluation conditions is partially addressed by adapting on the Hansard data. The speaker independence of the Afrikaans recogniser could not be determined (as a result of the limited number of available Afrikaans speakers), but because the Hansard adaptation data contains only two native Afrikaans speakers, the Afrikaans recogniser is quite possibly very speaker-dependent. It can also be seen that the ASR performs significantly better for the male speaker than for the female speaker. Unadapted: 46.5% WER; Hansard-adapted: 20.0% WER. Table 2 : ASR evaluation results on the Hansard set.",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 225,
"text": "Table 2",
"ref_id": null
},
{
"start": 1061,
"end": 1068,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Automatic Speech Recognition",
"sec_num": "5.2."
},
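{
"text": "Word error rate, the metric reported above, is the word-level edit distance between the reference transcription and the recogniser output, divided by the reference length. The following minimal sketch (with an invented toy sentence pair, not the Hansard data) computes it:

```python
# Minimal sketch: word error rate via Levenshtein distance over
# words (substitutions, insertions and deletions all cost 1).
def wer(ref, hyp):
    r, h = ref.split(), hyp.split()
    d = list(range(len(h) + 1))  # one-row dynamic programming table
    for i, rw in enumerate(r, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(h, 1):
            prev, d[j] = d[j], min(d[j] + 1,        # deletion
                                   d[j - 1] + 1,    # insertion
                                   prev + (rw != hw))  # substitution
    return d[len(h)] / len(r)

# One substitution out of four reference words -> 25% WER.
print(round(wer('die huis is groot', 'die huis is klein'), 2))  # 0.25
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Speech Recognition",
"sec_num": "5.2."
},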
{
"text": "The total development time for the ASR component is estimated at 8 weeks; it was the most difficult and time-consuming component to develop.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Speech Recognition",
"sec_num": "5.2."
},
{
"text": "According to [11] , statistical machine translation defines the task of translating a source language sentence (f = f 1 . . . f J ) into a sentence (e = e 1 . . . e I ) of the target language. The SMT approach is based on Bayes' decision rule and the noisy-channel approach, in which the best translation sentence is given by the equation below, where P (e) is the language model of the target language and P (f |e) is the translation model. The arg max denotes the search algorithm, which finds the best target sentence given the language and translation models. For a detailed discussion of CMU's statistical machine translation system refer to [12] . The system contains an IBM1 lexical transducer, a phrase transducer and a class-based transducer. Only the IBM1 lexical transducer, which is a one-to-one lexicon mapper, is used in this research. The language model is n-gram based, using up to trigrams. The decoder is a beam search based on dynamic programming combined with pruning. As words are separated by spaces in written Afrikaans, it is not necessary to use a segmentor to determine word boundaries in sentences (as is required for languages such as Chinese).",
"cite_spans": [
{
"start": 13,
"end": 17,
"text": "[11]",
"ref_id": "BIBREF10"
},
{
"start": 636,
"end": 640,
"text": "[12]",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "5.3."
},
{
"text": "\u00ea = arg max_e P (e)P (f |e)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "5.3."
},
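{
"text": "The noisy-channel decision rule can be made concrete with the toy rescoring sketch below. This is an illustration only: the lexicon and language model probabilities are invented, a real decoder searches over a hypothesis space rather than two hand-written candidates, and the real system used an IBM1 lexical transducer with a trigram SLM:

```python
# Minimal sketch of e* = argmax_e P(e) * P(f|e) with a toy unigram
# 'language model' and an IBM1-style one-to-one word lexicon.
lexicon = {  # P(source word | target word), invented toy values
    ('hy', 'he'): 0.9, ('praat', 'speaks'): 0.8,
    ('hy', 'it'): 0.1, ('praat', 'talks'): 0.2,
}
lm = {'he speaks': 0.6, 'it talks': 0.1}  # toy P(e)

def translation_prob(source, target):
    # IBM1-style product of word-to-word lexical probabilities.
    p = 1.0
    for f, e in zip(source.split(), target.split()):
        p *= lexicon.get((f, e), 1e-6)
    return p

def decode(source, candidates):
    # arg max over candidate target sentences.
    return max(candidates,
               key=lambda e: lm.get(e, 1e-6) * translation_prob(source, e))

print(decode('hy praat', ['he speaks', 'it talks']))  # he speaks
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "5.3."
},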
{
"text": "As the intention was to develop a two-way speech translation demonstration prototype, both Afrikaans-English and English-Afrikaans translation systems were developed. The translation models were trained on the 42k Hansard parallel data and were evaluated using the same 800 Hansard sentences that were used to evaluate the ASR component. The same Afrikaans SLM was used as was trained for the ASR component. The English SLM is also a trigram language model, with a perplexity of 86.62 and an OOV rate of 0.0% on the Hansard evaluation set. It was trained on 687,154 words with a vocabulary of 17,898 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "5.3."
},
{
"text": "The influence of punctuation on SMT performance was investigated. In the first case all punctuation was removed from the parallel text before training, and in the second case the punctuation was left in the data. Separate SLMs were also trained for the systems with and without punctuation, and the SLM perplexities were measured on the evaluation set. Care was taken to ensure that the SLM without punctuation had to predict the evaluation material with the punctuation removed, while the SLM with punctuation had to predict the evaluation material with punctuation. Table 3 summarizes the information regarding the Afrikaans and English text data. It is interesting to note that the Afrikaans vocabulary size is 43% larger than the English vocabulary size. Although Afrikaans is much less inflected than English, Afrikaans has less rigid spelling rules regarding the formation of compound words. Afrikaans compound words can be written in three different ways: (i) as a single word, (ii) as separate words or (iii) as separate words connected with dashes. When preparing the text data, no effort was made to force the Afrikaans text to conform to a single method of forming compound words. It has also been noticed that the Hansard domain contains a large number of compound words, which results in the large vocabulary size for Afrikaans. Table 3 : Parallel Corpus Statistics.",
"cite_spans": [],
"ref_spans": [
{
"start": 571,
"end": 578,
"text": "Table 3",
"ref_id": null
},
{
"start": 1336,
"end": 1343,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "5.3."
},
{
"text": "In Table 4 the results of the SMT experiments are shown for both Afrikaans-English and English-Afrikaans translation. It can be seen that Afrikaans-English translation does benefit from the use of punctuation as both the NIST and the BLEU metric increase slightly. For English-Afrikaans translation the NIST metric is degraded slightly by the use of punctuation although the BLEU metric is increased. This would seem to indicate that the fluency of the translation benefits from punctuation although the accuracy is not significantly affected.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "5.3."
},
{
"text": "It was decided to compare the two-way Afrikaans-and-English translation results with the Europarl two-way Dutch-and-English results, as the domains and language pairs are similar (ideally, the comparison should also use a similarly sized parallel corpus). Using a Dutch-English parallel corpus of 743,880 sentence pairs, Koehn reports a BLEU score of 26.35 for Dutch-English translation and a BLEU score of 22.85 for English-Dutch translation [13] . The Afrikaans-English and English-Afrikaans (BLEU 34.81, NIST 7.73) translation results are very encouraging when compared to the results obtained by Koehn, as the Afrikaans-English results were obtained using a smaller corpus. Furthermore, there is still much scope for improvement, as only the simplest of translation models were applied. The total development time for the SMT component is estimated at 1 week, which was relatively easy when compared to the ASR development.",
"cite_spans": [
{
"start": 453,
"end": 457,
"text": "[13]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation",
"sec_num": "5.3."
},
{
"text": "A limited domain Afrikaans voice is built using the Festival Speech Synthesis System [14] . A male Afrikaans unitselection voice was built following the techniques for building synthetic voices in new languages developed by CMU [15] . The same phone set is used for synthesis as was used for the recogniser. The 500 Hansard utterances that was used for adaptation and evaluation of the recogniser were used for building the unit-selection voice. We were also fortunate to obtain a syllable annotated pronunciation lexicon of 36,783 Afrikaans words. It was therefore not necessary to build a pronunciation lexicon for Afrikaans.",
"cite_spans": [
{
"start": 85,
"end": 89,
"text": "[14]",
"ref_id": "BIBREF13"
},
{
"start": 228,
"end": 232,
"text": "[15]",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Synthesis",
"sec_num": "5.4."
},
{
"text": "A statistical letter-to-sound rule model was trained on 90% of the pronunciation dictionary and evaluated on the remaining 10% [16] . The evaluation pronunciations were chosen by selecting every 10th word in the alphabetically sorted pronunciation dictionary. The results of the letter-to-sound rules are shown in Table 5 . The letter-to-sound rules managed to correctly predict 85.24% of the words which is to be expected as Afrikaans spelling reflects a phonetically simplified language. These results are comparable to the results of German (89.38% word correct) [16] .",
"cite_spans": [
{
"start": 127,
"end": 131,
"text": "[16]",
"ref_id": "BIBREF15"
},
{
"start": 566,
"end": 570,
"text": "[16]",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 314,
"end": 321,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Speech Synthesis",
"sec_num": "5.4."
},
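The 90/10 split described above can be reproduced deterministically. A sketch, under the assumption that "every 10th word" means positions 10, 20, ... of the sorted word list (the paper's reported counts of 33,121/3,680 are pronunciations, which can exceed the 36,783 words when a word has variants):

```python
def split_lexicon(words):
    """Deterministic 90/10 split of a pronunciation lexicon: every 10th
    entry of the alphabetically sorted word list becomes test material,
    the rest is training material."""
    ordered = sorted(words)
    test = ordered[9::10]                                   # 10th, 20th, ...
    train = [w for i, w in enumerate(ordered) if (i + 1) % 10 != 0]
    return train, test
```

Because the selection is positional rather than random, the evaluation set is reproducible and spread evenly across the alphabet.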
{
"text": "Testset pronunciations 3,680 Phones correct 97.92% Words correct 85.24% Table 5 : Evaluation of Letter-To-Sound rules.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Trainset pronunciations 33,121",
"sec_num": null
},
{
"text": "As only two Afrikaans speakers were available it was not possible to formally evaluate the performance and quality of the Afrikaans speech synthesis. In all cases the Afrikaans pronunciations were understandable, but the following informal observation can be made regarding the quality of the synthesis:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trainset pronunciations 33,121",
"sec_num": null
},
{
"text": "\u2022 The Afrikaans phone set made no distinction between long and short versions of the same vowel. Consequently some pronunciation errors were made when words contained long vowels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trainset pronunciations 33,121",
"sec_num": null
},
{
"text": "\u2022 The lack of diphthongs in the phone set resulted in some incorrect pronunciation of words containing diphthongs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trainset pronunciations 33,121",
"sec_num": null
},
{
"text": "Both of these problems can be corrected by simply using a larger phone set which includes the diphthongs and models both long and short vowels. The total development time of the synthesis component is estimated to have been one week. The availability of a 37k Afrikaans pronunciation dictionary shortened the development of the synthesis component considerably. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Trainset pronunciations 33,121",
"sec_num": null
},
{
"text": "For the development of the prototype we used the \"one4all demonstrator system platform\" as described in [17] and essentially the same software framework was used. The follow- Table 7 : Prototype evaluation results. ing was done to develop the prototype: (i) the recogniser was replaced with an Afrikaans recogniser; (ii) the SMT transducers were replaced with Afrikaans-English and English-Afrikaans transducers; and (iii) the speech synthesis voice was replaced with an Afrikaans voice. The integration, adaptation and evaluation of the prototype system is estimated to have taken one week. Figure 2 shows the interface of the demonstration prototype system.",
"cite_spans": [
{
"start": 104,
"end": 108,
"text": "[17]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 175,
"end": 182,
"text": "Table 7",
"ref_id": null
},
{
"start": 592,
"end": 600,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Description",
"sec_num": "6.1."
},
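The component swap in (i)-(iii) reflects a cascaded architecture in which ASR, SMT and TTS are interchangeable stages. A schematic sketch of that wiring; the function names and types are illustrative and not the one4all platform's actual API:

```python
from typing import Callable

def make_s2st(asr: Callable[[bytes], str],
              mt: Callable[[str], str],
              tts: Callable[[str], bytes]) -> Callable[[bytes], bytes]:
    """Cascade ASR -> SMT -> TTS into one speech-to-speech function;
    porting to a new language pair then means swapping the three stages."""
    def translate_speech(audio: bytes) -> bytes:
        hypothesis = asr(audio)        # 1-best source-language transcript
        translated = mt(hypothesis)    # source -> target text translation
        return tts(translated)         # synthesised target-language speech
    return translate_speech
```

Because each stage only sees its neighbour's text/audio interface, the Afrikaans recogniser, the Afrikaans-English transducers and the Afrikaans voice could be dropped in without changing the framework itself.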
{
"text": "The complete prototype was evaluated in order to determine the influence of an imperfect recogniser on the translation. Only the Afrikaans-English speech-to-speech translation was evaluated by using the single best recognition result of the recogniser as input to the SMT engine. The results are shown in Table 7 . The best result of 6.12 on the NIST metric and 25.45 on the BLUE metric is obtained when not using punctuation. As expected the translation performance of the best results is significantly affected as the translation accuracy drops by 20.0% relative and the fluency of the translation drops by 25.4% (as respectively measure by the NIST and BLEU metric). Overall, the use of punctuation results in worse translation performance than not using punctuation. This is to be expected as the ASR component does not add punctuation to the recognition output. It seems that there is a correlation between the WER of the recogniser and the degree by which the translation accuracy is affected, but further experiments are required in order to confirm this theory. A few translation examples of the best Afrikaans-English translation system (Adapted AMs and TMs without punctu-ation) is shown below. The first example is of an utterance with a few recognition errors: One can see the machine translation of the recognised sentence is shorter as a result of the deletion errors. The Afrikaans word \"bly\" can mean \"glad\" or \"remain\" (depending on the context). In this instance the wrong meaning was translated. The second example is of an utterance with a no recognition errors: In the second example the translation is mostly correct except that the person to which \"formally\" applies to is changed.",
"cite_spans": [],
"ref_spans": [
{
"start": 305,
"end": 312,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "6.2."
},
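The relative degradation quoted above is the usual (text-input score - ASR-input score) / text-input score ratio. Using the NIST scores reported in this paper (roughly 7.66 on clean text input versus 6.12 on ASR 1-best input) reproduces the ~20% relative accuracy drop; the exact per-condition text-input scores are in Table 4, so the numbers here are illustrative:

```python
def relative_drop(text_score: float, asr_score: float) -> float:
    """Relative degradation of an MT metric when the input moves from
    clean text to 1-best ASR output."""
    return (text_score - asr_score) / text_score

# NIST ~7.66 on text input vs 6.12 on ASR input -> roughly a 20% relative drop
nist_drop = relative_drop(7.66, 6.12)
```

The same formula applied to the BLEU scores yields the 25.4% relative fluency drop cited in the text.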
{
"text": "In this paper we presented the rapid 2.5-month development of an Afrikaans-English speech-to-speech translation demonstration system. The recognition component is still the most challenging component to develop as can be seen by the 20% word-error-rate performance of the Afrikaans recogniser. Also, the use of a small in-domain corpus to adapt the acoustic models should only be considered as base-line solutions as dedicated adaptation techniques can be used instead.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "The Afrikaans-English translation results (BLEU 36.11, NIST 7.66) is very encouraging when compared to the result obtained for Dutch-English on the Europarl parallel corpus. As only the most simple statistical translation models were used there is much scope for improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "The evaluation of the complete demonstration prototype shows that errors in the recognition output degrades the translation results, as expected. There seems to be a correlation between the WER of the recogniser and the accuracy of the translation (as measured by the NIST metric) but further experiments are required to confirm this theory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "The development scenario is somewhat idealised, as there was access to all the necessary development tools (for ASR, SMT and TTS) and the appropriate speech and text material was already available. In most cases it would be necessary to collect speech and text material as part of the development of speech-to-speech translation for new language pairs. Having developed this speech translation system, the authors expect that developing a similar system for a new language pair would be faster, if the necessary speech and text material is already available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
}
],
"back_matter": [
{
"text": "The authors wish to thank the following persons for their contributions: Paisarn Charoenpornsawat, Alan Black, Matthias Eck, Bing Zhao, Szu-Chen Jou, Susanne Burger and Thomas Schaaf.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": "8."
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Afrikaans -Wikipedia, the free encyclopedia",
"authors": [
{
"first": "",
"middle": [],
"last": "Wikipedia",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "27",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wikipedia, \"Afrikaans -Wikipedia, the free encyclo- pedia,\" 2005, [Online; accessed 27-June-2005]. [On- line]. Available: http://en.wikipedia.org/wiki/Afrikaans",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Program for Aligning Sentences in Bilingual Corpora",
"authors": [
{
"first": "W",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
}
],
"year": 1991,
"venue": "Meeting of ACL",
"volume": "",
"issue": "",
"pages": "177--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gale, W.A. and Church, K.W., \"A Program for Align- ing Sentences in Bilingual Corpora,\" in Meeting of ACL, 1991, pp. 177-184.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Developing a Multilingual Telephone Based Information System in African Languages",
"authors": [
{
"first": "J",
"middle": [
"C"
],
"last": "Roux",
"suffix": ""
},
{
"first": "E",
"middle": [
"C"
],
"last": "Botha",
"suffix": ""
},
{
"first": "J",
"middle": [
"A"
],
"last": "Du Preez",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of 2nd Intl. Language Resources and Evaluation Conf",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roux, J.C, Botha, E.C. and Du Preez, J.A., \"De- veloping a Multilingual Telephone Based Information System in African Languages,\" in Proc. of 2nd Intl. Language Resources and Evaluation Conf., Athens, Greece, June 2000.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "GlobalPhone: A Multilingual Speech and Text Database developed at Karlsruhe University",
"authors": [
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schultz, T., \"GlobalPhone: A Multilingual Speech and Text Database developed at Karlsruhe University,\" Proc. of ICSLP, Sept. 2002.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Language-independent and language adaptive acoustic modelling for speech recognition",
"authors": [
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2001,
"venue": "Speech Communication",
"volume": "35",
"issue": "",
"pages": "31--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schultz, T., Waibel, A., \"Language-independent and language adaptive acoustic modelling for speech recog- nition,\" Speech Communication, vol. 35, pp. 31-51, 2001.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards Rapid Language Portability of Speech Processing Systems",
"authors": [
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2004,
"venue": "Conference on Speech and Language Systems for Human Communication",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schultz, T., \"Towards Rapid Language Portability of Speech Processing Systems,\" in Conference on Speech and Language Systems for Human Communication, Delhi, India, Nov. 2004.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The Karlsruhe Verbmobil Speech Recogntion Engine",
"authors": [
{
"first": "M",
"middle": [],
"last": "Finke",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Geutner",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hild",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kemp",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Westphal",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of ICASSP",
"volume": "4",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finke, M., Geutner, P., Hild, H., Kemp, T. Ries, K. and Westphal, M., \"The Karlsruhe Verbmobil Speech Recogntion Engine,\" in Proc. of ICASSP, vol. 4, Mu- nich, Germany, 1997.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A onepass decoder based on polymorphic linguistic context assignment",
"authors": [
{
"first": "H",
"middle": [],
"last": "Soltau",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Metze",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "F\u00fcgen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of the IEEE Automatic Speech Recognition and Understanding Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soltau, H., Metze, F., F\u00fcgen, C. and Waibel, A., \"A one- pass decoder based on polymorphic linguistic context assignment,\" in Proc. of the IEEE Automatic Speech Recognition and Understanding Workshop, Madonna di Campiglio, Italy, 2001.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SRILM -An Extensible Language Modeling Toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ICSLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, A., \"SRILM -An Extensible Language Mod- eling Toolkit,\" in Proc. of ICSLP, Denver, Colorado, Sept. 2002.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Mathematics of Statistical Machine Translation: Parameter Estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P.F., Della Pietra, S.A., Della Pietra, V.J. and Mercer, R.L., \"The Mathematics of Statistical Ma- chine Translation: Parameter Estimation,\" Computa- tional Linguistics, vol. 19, no. 2, 1993.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The CMU Statistical Machine Translation System",
"authors": [
{
"first": "S",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Tribble",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Venugopal",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of the MT Summit IX",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vogel, S., Zhang, Y., Huang, F., Tribble, A., Venugopal, A., Zhao, B. and Waibel, A., \"The CMU Statistical Ma- chine Translation System,\" in Proc. of the MT Summit IX, New Orleans, USA, Sept. 2003.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Multilingual Corpus for Evaluation of Machine Translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, P., \"A Multilingual Corpus for Evaluation of Machine Translation,\" Dec. 2002.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The Festival Speech Synthesis System",
"authors": [
{
"first": "A",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Caley",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Black, A., Taylor, P. and Caley, R., \"The Festival Speech Synthesis System,\" 1999. [Online]. Available: http://festvox.org/festival",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Building Voices in the Festival Speech Synthesis System",
"authors": [
{
"first": "A",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lenzo",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Black, A. and Lenzo, K., \"Building Voices in the Festival Speech Synthesis System,\" 2000. [Online].",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Issues in Building General Letter to Sound Rules",
"authors": [
{
"first": "A",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lenzo",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Pagel",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of 3rd ESCA Workshop on Speech Synthesis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Black, A., Lenzo, K. and Pagel, V., \"Issues in Build- ing General Letter to Sound Rules,\" in Proc. of 3rd ESCA Workshop on Speech Synthesis, Jenolan Caves, Australia, 1998.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Thai Automatic Speech Recognition",
"authors": [
{
"first": "S",
"middle": [],
"last": "Suebvisai",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Charoenpornsawat",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Woszczyna",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Schultz",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ICASSP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suebvisai, S., Charoenpornsawat, P., Black, A., Woszczyna, M., and Schultz, T., \"Thai Automatic Speech Recognition,\" in Proc. of ICASSP, Philadelphia, USA, 2005.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "The system architecture of the Afrikaans-English speech translation prototype.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "[P (e|f )] = arg max e [P (f |e)P (e)] (1)",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "An example of Afrikaans-English translation prototype.",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Source sentence: 'ten eerste bly die gebrek aan verpleegpersoneel 'n probleem' Recognised sentence: 'ten eerste by gebrek aan verpleegpersoneel probleem' Machine translation of recognised sentence: 'firstly at the lack of nurses problem' Machine translation of source sentence: 'firstly i am glad the lack of nurses a problem' Reference translation: 'firstly the lack of nursing staff remains a problem'",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"content": "
",
"text": "consists of 27 consonants, 23 vowels and 12 diphthongs for a total of 62 phones. Vowels are further subdivided into 11 short vowels and 12 long vowels.",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF4": {
"content": "",
"text": "SMT evaluation results on the Hansard test.",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF5": {
"content": "",
"text": "the estimate of the total system development time.",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF6": {
"content": "",
"text": "Estimate of system development time.",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF8": {
"content": "Machine translation: 'i ask him again formally after the ruling to |
withdraw that' |
Reference translation: 'i ask him to do so again formally after |
this ruling' |
",
"text": "Source sentence: 'ek vra hom om dit weer formeel na hierdie beslissing terug te trek' Recognised sentence: 'ek vra hom om dit weer formeel na hierdie beslissing terug te trek'",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}