{
"paper_id": "O07-5005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:08:17.950475Z"
},
"title": "Multilingual Spoken Language Corpus Development for Communication Research",
"authors": [
{
"first": "Toshiyuki",
"middle": [],
"last": "Takezawa",
"suffix": "",
"affiliation": {
"laboratory": "ATR Spoken Language Communication Research Laboratories",
"institution": "Keihanna Science City",
"location": {
"addrLine": "2-2-2 Hikaridai",
"postCode": "619-0288",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "toshiyuki.takezawa@atr.jp"
},
{
"first": "Genichiro",
"middle": [],
"last": "Kikui",
"suffix": "",
"affiliation": {
"laboratory": "ATR Spoken Language Communication Research Laboratories",
"institution": "Keihanna Science City",
"location": {
"addrLine": "2-2-2 Hikaridai",
"postCode": "619-0288",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "kikui.genichiro@lab.ntt.co.jp"
},
{
"first": "Masahide",
"middle": [],
"last": "Mizushima",
"suffix": "",
"affiliation": {
"laboratory": "ATR Spoken Language Communication Research Laboratories",
"institution": "Keihanna Science City",
"location": {
"addrLine": "2-2-2 Hikaridai",
"postCode": "619-0288",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "mizushima.masahide@lab.ntt.co.jp"
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": "",
"affiliation": {
"laboratory": "ATR Spoken Language Communication Research Laboratories",
"institution": "Keihanna Science City",
"location": {
"addrLine": "2-2-2 Hikaridai",
"postCode": "619-0288",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "eiichiro.sumita@nict.go.jp@atr.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Multilingual spoken language corpora are indispensable for research on areas of spoken language communication, such as speech-to-speech translation. The speech and natural language processing essential to multilingual spoken language research requires unified structure and annotation, such as tagging. In this study, we describe an experience with multilingual spoken language corpus development at our research institution, focusing in particular on speech recognition and natural language processing for speech translation of travel conversations. An integrated speech and language database, Spoken Language DataBase (SLDB) was planned and constructed. Basic Travel Expression Corpus (BTEC) was planned and constructed to cover a variety of situations and expressions. BTEC and SLDB are designed to be complementary. BTEC is a collection of Japanese sentences and their translations, and SLDB is a collection of transcriptions of bilingual spoken dialogs. Whereas BTEC covers a wide variety of travel domains, SLDB covers a limited domain, i.e., hotel situations. BTEC contains approximately 588k utterance-style expressions, while SLDB contains about 16k utterances. Machine-aided Dialogs (MAD) was developed as a development corpus, and both BTEC and SLDB can be used to handle MAD-type tasks. Field Experiment Data (FED) was developed as the evaluation corpus. We conducted an experiment, and based on analysis of our follow-up questionnaire, roughly half the subjects of the",
"pdf_parse": {
"paper_id": "O07-5005",
"_pdf_hash": "",
"abstract": [
{
"text": "Multilingual spoken language corpora are indispensable for research on areas of spoken language communication, such as speech-to-speech translation. The speech and natural language processing essential to multilingual spoken language research requires unified structure and annotation, such as tagging. In this study, we describe an experience with multilingual spoken language corpus development at our research institution, focusing in particular on speech recognition and natural language processing for speech translation of travel conversations. An integrated speech and language database, Spoken Language DataBase (SLDB) was planned and constructed. Basic Travel Expression Corpus (BTEC) was planned and constructed to cover a variety of situations and expressions. BTEC and SLDB are designed to be complementary. BTEC is a collection of Japanese sentences and their translations, and SLDB is a collection of transcriptions of bilingual spoken dialogs. Whereas BTEC covers a wide variety of travel domains, SLDB covers a limited domain, i.e., hotel situations. BTEC contains approximately 588k utterance-style expressions, while SLDB contains about 16k utterances. Machine-aided Dialogs (MAD) was developed as a development corpus, and both BTEC and SLDB can be used to handle MAD-type tasks. Field Experiment Data (FED) was developed as the evaluation corpus. We conducted an experiment, and based on analysis of our follow-up questionnaire, roughly half the subjects of the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "experiment felt they could understand and make themselves understood by their partners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Various kinds of corpora developed for analysis of linguistic phenomena and statistical information gathering are now accessible via electronic media and can be utilized for the study of natural language processing. Since these include written-language and monolingual corpora, however, they are not necessarily useful for research and development of multilingual spoken language processing. A multilingual spoken language corpus is indispensable for research on areas of spoken language communication such as speech-to-speech translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Many projects on speech-to-speech translation began at that time [Rayner et al. 1993; Roe et al. 1992; Wahlster et al. 2000] . SRI International and Swedish Telecom developed a prototype speech translation system that could translate queries from spoken English to spoken Swedish in the domain of air travel information systems [Rayner et al. 1993] . AT&T Bell Laboratories and Telef\u00f3nica Investigaci\u00f3n y Desarrollo developed a restricted domain spoken language translation system called Voice English/Spanish Translator (VEST) [Roe et al. 1992] . In Germany, Verbmobil [Wahlster 2000] , was created as a major speech-to-speech translation research project. The Verbmobil scenario assumes native speakers of German and of Japanese who both possess at least a basic knowledge of English. The Verbmobil system supports them by translating from their mother tongue, i.e. Japanese or German, into English.",
"cite_spans": [
{
"start": 65,
"end": 85,
"text": "[Rayner et al. 1993;",
"ref_id": "BIBREF11"
},
{
"start": 86,
"end": 102,
"text": "Roe et al. 1992;",
"ref_id": "BIBREF12"
},
{
"start": 103,
"end": 124,
"text": "Wahlster et al. 2000]",
"ref_id": "BIBREF22"
},
{
"start": 328,
"end": 348,
"text": "[Rayner et al. 1993]",
"ref_id": "BIBREF11"
},
{
"start": 528,
"end": 545,
"text": "[Roe et al. 1992]",
"ref_id": "BIBREF12"
},
{
"start": 570,
"end": 585,
"text": "[Wahlster 2000]",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In the 1990s, speech recognition and synthesis research shifted from a rule-based to a corpus-based approach such as HMM and N -gram. However, machine translation research still depended mainly on a rule-based or knowledge-based approach. In the 2000s, wholly corpus-based projects such as European TC-STAR [H\u00f6ge 2002; Lazzari 2006] and DARPA GALE [Roukos 2006 ] began to deal with monologue speeches such as broadcast news and",
"cite_spans": [
{
"start": 307,
"end": 318,
"text": "[H\u00f6ge 2002;",
"ref_id": "BIBREF0"
},
{
"start": 319,
"end": 332,
"text": "Lazzari 2006]",
"ref_id": "BIBREF7"
},
{
"start": 348,
"end": 360,
"text": "[Roukos 2006",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "European Parliament plenary speeches. In this paper, we report corpus construction activities for translation of spoken dialogs of travel conversations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Communication Research",
"sec_num": null
},
{
"text": "There are a variety of requirements for every component technology, such as speech recognition and language processing. A variety of speakers and pronunciations may be important for speech recognition, and a variety of expressions and information on parts of speech may be important for natural language processing. The speech and natural language processing essential to multilingual spoken language research requires unified structure and annotation, such as tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Communication Research",
"sec_num": null
},
{
"text": "In this paper, we introduce an interpreter-aided spoken dialog corpus and discuss corpus configuration. Next, we introduce the basic travel expression corpus developed to train machine translation of spoken language among Japanese, English, and Chinese speakers. Finally, we discuss the Japanese, English, and Chinese multilingual spoken dialog corpus that we created using speech-to-speech translation systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Communication Research",
"sec_num": null
},
{
"text": "We first planned and constructed an integrated speech and language database called Spoken Language DataBase (SLDB) [Morimoto et al. 1994; Takezawa et al. 1998 ]. The task involved travel conversations between a foreign tourist and a front desk clerk at a hotel; this task was selected because people are familiar with it and because we expect it to be included in future speech translation systems. All of the conversations for this database take place in English and Japanese through interpreters because the research at that time concentrated on Japanese and English. The interpreters serve as the speech translation system. One remarkable characteristic of the database is its integration of speech and linguistic data. Each conversation includes data on recorded speech, transcribed utterances, and their correspondences. This kind of data is very useful because it contains transcriptions of spoken dialogs between speakers who speak different mother tongues. However, the cost of collecting spoken languages is too high to expand the size.",
"cite_spans": [
{
"start": 115,
"end": 137,
"text": "[Morimoto et al. 1994;",
"ref_id": "BIBREF10"
},
{
"start": 138,
"end": 158,
"text": "Takezawa et al. 1998",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of Approach",
"sec_num": "2."
},
{
"text": "There are three important points to consider in designing and constructing a corpus for dialog-style speech communication such as speech-to-speech translation. The first is to have a variety of speech samples with a wide range of pronunciations, speaking styles, and speakers. The second point is to have data for a variety of situations. A \"situation\" means one of various limited circumstances in which the system's user finds him-or herself, such as an airport, a hotel, a restaurant, a shop, or in transit during travel; it also involves various speakers' roles, such as communication with a middle-aged stranger, a stranger wearing jeans, a waiter or waitress, or a hotel clerk. The third point is to have a variety of expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of Approach",
"sec_num": "2."
},
{
"text": "According to our previous study [Takezawa et al. 2000] , human-to-machine conversational speech data shared characteristics with human-to-human indirect communication speech data such as spoken dialogs between Japanese and English speakers through human interpreters. Moreover, human-to-human indirect communication data had an intermediate characteristic, i.e., it was positioned somewhere between direct communication data, that is, Japanese monolingual conversations, and speech data from conversational text. If we assume that a speaker would accept a machine-friendly speaking style, we could take a great step forward: a clear separation of speech data collection and multilingual data collection. In the following, we focus on multilingual data collection. In order, Basic Travel Expression Corpus (BTEC) [Takezawa et al. 2002; Kikui et al. 2003 ] was planned to cover the varieties of situations and expressions.",
"cite_spans": [
{
"start": 32,
"end": 54,
"text": "[Takezawa et al. 2000]",
"ref_id": "BIBREF16"
},
{
"start": 812,
"end": 834,
"text": "[Takezawa et al. 2002;",
"ref_id": "BIBREF17"
},
{
"start": 835,
"end": 852,
"text": "Kikui et al. 2003",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of Approach",
"sec_num": "2."
},
{
"text": "Machine-aided Dialogs (MAD) was planned as a development corpus to handle the differences between the target utterance with which speech translation systems must deal and the following two corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of Approach",
"sec_num": "2."
},
{
"text": "SLDB contains no recognition/translation errors because the translations between people speaking different languages are done by professional human interpreters. However, even a state-of-the-art speech translation system cannot avoid recognition/translation errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of Approach",
"sec_num": "2."
},
{
"text": "BTEC contains edited colloquial travel expressions, which are not transcriptions, so some people might not express things in the same way, and the frequency distribution of expressions might be different from actual dialogs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of Approach",
"sec_num": "2."
},
{
"text": "Field Experiment Data (FED) was planned as the evaluation corpus. Table 1 shows an overview of the corpora. In the table, S2ST stands for speech-to-speech translation, MT stands for machine translation, J, E, and C stand for Japanese, English, and Chinese, respectively. ",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 73,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Overview of Approach",
"sec_num": "2."
},
{
"text": "SLDB contains data from dialog spoken between English and Japanese speakers through human interpreters [Morimoto et al. 1994; Takezawa et al. 1998 ]. All utterances in SLDB have been translated into Chinese. The content is entirely travel conversations between a foreign tourist and a front desk clerk at a hotel. Human interpreters serve as the speech translation system. Table 2 is an overview of the corpus, and Table 3 shows its basic characteristics. One remarkable characteristic of SLDB is its integration of speech and linguistic data. Each conversation includes recorded speech data, transcribed utterances, and the correspondences between them.",
"cite_spans": [
{
"start": 103,
"end": 125,
"text": "[Morimoto et al. 1994;",
"ref_id": "BIBREF10"
},
{
"start": 126,
"end": 146,
"text": "Takezawa et al. 1998",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 373,
"end": 380,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 415,
"end": 422,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Interpreter-Aided Spoken Dialog Corpus (SLDB)",
"sec_num": "3."
},
{
"text": "The transcribed Japanese and English utterances are tagged with morphological information. This kind of tagged information is crucial for natural language processing as well as for speech recognition language modeling. The recorded speech signals and transcribed utterances in the database provide both examples of various phenomena in bilingual conversations, and input data for speech recognition and machine translation evaluation purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreter-Aided Spoken Dialog Corpus (SLDB)",
"sec_num": "3."
},
{
"text": "Data can be classified into the following three major categories. Figure 1 has been transcribed into Romanized Japanese for the convenience of readers who do not understand Japanese hiragana, katakana, and kanji (Chinese characters). The original text was transcribed in Japanese characters hiragana, katakana, and kanji. Interjections are bracketed. J, E, JE, or EJ at the beginning of a line denotes a Japanese speaker, an English speaker, a Japanese-to-English interpreter, or an English-to-Japanese interpreter, respectively. \" | \" denotes a sentence boundary. A blank line between utterances shows that the utterance's right was transferred.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 74,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Interpreter-Aided Spoken Dialog Corpus (SLDB)",
"sec_num": "3."
},
{
"text": "The Japanese text is produced by extracting the utterances of a Japanese speaker and an English-to-Japanese interpreter, while the English text is produced by extracting the utterances of an English speaker and a Japanese-to-English interpreter. These two kinds of data are utilized for such monolingual investigations as morphological analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreter-Aided Spoken Dialog Corpus (SLDB)",
"sec_num": "3."
},
{
"text": "The tagged data consists of the following. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Interpreter-Aided Spoken Dialog Corpus (SLDB)",
"sec_num": "3."
},
{
"text": "The Basic Travel Expression Corpus (BTEC) [Takezawa et al. 2002; Kikui et al. 2003 ] was designed to cover utterances for possible travel conversations topic and their translations. Since it is practically impossible to collect them by transcribing actual conversations or simulated dialogs, we decided to use sentences provided by bilingual travel experts based on their experience. We started by looking at phrasebooks that contain bilingual sentence pairs (in this case Japanese/English) that the editors consider useful for tourists traveling abroad. Such sentence pairs were collected and rewritten to make translation as context-independent as possible and to comply with the speech transcription style of our research institution. Sentences that were outside of the travel domain or have very special meanings were removed. Table 4 lists the basic statistics of the BTEC collections, called BTEC1, 2, 3, 4, and 5. Each collection was created using the same procedure in a different time period or using a different translation direction from the source language to target languages. Strictly speaking, morphemes are used as the basic linguistic unit for Japanese (instead of words), since morpheme units are more stable than word units. The aims of the BTEC corpus are for translation and language modeling for automatic speech recognition. For translation, one of the key points to cover is the translation direction from the source language to target languages. For automatic speech recognition in the travel domain, one of the key points to cover is multiple sub-domains such as airport-related dialogs, hotel-related dialogs, and so on.",
"cite_spans": [
{
"start": 42,
"end": 64,
"text": "[Takezawa et al. 2002;",
"ref_id": "BIBREF17"
},
{
"start": 65,
"end": 82,
"text": "Kikui et al. 2003",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 831,
"end": 838,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Basic Travel Expression Corpus (BTEC)",
"sec_num": "4."
},
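The paragraph above notes that BTEC statistics count morphemes rather than words for Japanese, since Japanese text is written without spaces and morpheme units are more stable. A minimal sketch of morpheme-based token counting follows, assuming the open-source MeCab analyzer (mecab-python3 package); this is an illustrative stand-in, not the in-house morphological tagger used at the authors' institution.

# Minimal sketch: count morpheme tokens and vocabulary size for Japanese
# corpus statistics. Assumes the open-source MeCab analyzer; NOT the
# tagger actually used for BTEC, which is not specified here.
import MeCab

def count_morphemes(sentences):
    """Return (total morpheme tokens, vocabulary size) for a corpus."""
    tagger = MeCab.Tagger("-Owakati")  # space-separated morpheme output
    total, vocab = 0, set()
    for s in sentences:
        morphemes = tagger.parse(s).split()
        total += len(morphemes)
        vocab.update(morphemes)
    return total, len(vocab)

if __name__ == "__main__":
    # Hypothetical BTEC-style utterance, not a real corpus line.
    total, vocab_size = count_morphemes(["バス停はどこですか"])
    print(total, vocab_size)

Counting over such morpheme tokens (rather than whitespace words) is what makes the Japanese figures in Table 4 comparable across collections.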
{
"text": "For translation, the BTEC collections cover both translation directions. BTEC1, BTEC2, and BTEC3 contain expressions for Japanese tourists visiting the USA, UK, or Australia. The translation direction is from Japanese to English and Chinese. BTEC4 mainly contains expressions for American tourists who visit Japan. The translation direction is from English to Japanese and Chinese. BTEC5 contains various expressions, such as those for American tourists who go to Korea. The translation direction is from English to Japanese and Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Travel Expression Corpus (BTEC)",
"sec_num": "4."
},
{
"text": "For automatic speech recognition, BTEC covers multiple domains. Domain information is given for BTEC1, BTEC2, and BTEC3. Table 5 shows an overview. BTEC sentences, as described above, did not come from actual conversations but were generated by experts as reference materials. This approach enabled us to efficiently create a broad corpus; however, it may have two problems. First, this corpus may lack utterances that occur in real conversation. For example, when people ask the way to a bus stop, they often use a sentence like (1). However, in BTEC this is expressed more directly, as in (2).",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Basic Travel Expression Corpus (BTEC)",
"sec_num": "4."
},
{
"text": "(1) I'd like to go downtown. Where can I catch a bus?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Travel Expression Corpus (BTEC)",
"sec_num": "4."
},
{
"text": "(2) Where is a bus stop (to go downtown)?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Travel Expression Corpus (BTEC)",
"sec_num": "4."
},
{
"text": "We will discuss this issue in the section on MAD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Travel Expression Corpus (BTEC)",
"sec_num": "4."
},
{
"text": "The second problem is that the frequency distribution of this corpus may be different from the actual distribution. In this corpus, the frequency of an utterance most likely reflects the best trade-off between usefulness in real situations and compactness of the collection. Therefore, it is possible to think of this frequency distribution as a first approximation of reality, but this is an open question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Travel Expression Corpus (BTEC)",
"sec_num": "4."
},
{
"text": "A part of BTEC was distributed to the participants in the International Workshop on Spoken Language Translation (IWSLT) [IWSLT 2006 ]. ",
"cite_spans": [
{
"start": 120,
"end": 131,
"text": "[IWSLT 2006",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Travel Expression Corpus (BTEC)",
"sec_num": "4."
},
{
"text": "The approach exemplified by BTEC focuses on maximizing the coverage of the corpus rather than creating an accurate sample of reality. Users may use different wording when they speak to the system. In addition, there may be differences between the target utterance with which speech translation systems must deal and the following two corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation-Aided Spoken Dialog Corpus (MAD)",
"sec_num": "5."
},
{
"text": "SLDB contains no recognition/translation errors because the translations between people speaking different languages are done by professional human interpreters. However, even a state-of-the-art speech translation system cannot avoid recognition/translation errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation-Aided Spoken Dialog Corpus (MAD)",
"sec_num": "5."
},
{
"text": "BTEC contains edited colloquial travel expressions, which are not transcriptions, so some people might not express things in the same way and the frequency distribution of expressions might be different from actual dialogs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation-Aided Spoken Dialog Corpus (MAD)",
"sec_num": "5."
},
{
"text": "Therefore, MAD is intended to collect representative utterances that people will input into S2ST systems. For this purpose, simulated dialogs (i.e., role play) were carried out between two native speakers of different mother tongues with a Japanese/English bi-directional S2ST system, instead of using human interpreters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation-Aided Spoken Dialog Corpus (MAD)",
"sec_num": "5."
},
{
"text": "During the first half of the research program, human typists were used instead of speech recognizers to ensure that we collected good quality data. During the second half of the research program, the S2ST system between English and Japanese was used. Figure 2 is an overview of the data collection environment. An English typist transcribes an English utterance and inputs it into a machine translation system from English to Japanese. The translated Japanese text and its synthesized speech are sent to a Japanese speaker. Likewise, a Japanese typist transcribes a Japanese utterance and inputs it into a machine translation system from Japanese to English. The translated English text and its synthesized speech are sent to an English speaker. By repeating this process, an MT-aided bilingual dialog continues. Speech waves, transcriptions, and translated texts are stored in log files.",
"cite_spans": [],
"ref_spans": [
{
"start": 251,
"end": 259,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Machine Translation-Aided Spoken Dialog Corpus (MAD)",
"sec_num": "5."
},
{
"text": "Five sets of simulated dialogs (MAD1 through MAD5) have so far been developed, changing parameters such as system configurations, complexity of dialog tasks, instructions to speakers, and so on. Table 6 shows a summary of the five experiments, MAD1-MAD5. In this table, the number of utterances includes both Japanese and English.",
"cite_spans": [],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Collecting Spoken Dialog Data Using Typists",
"sec_num": "5.1"
},
{
"text": "The first set of dialogs (MAD1) was collected to see whether conversation through a machine translation system is feasible. The second set (MAD2) focused on task achievement by assigning complex tasks to participants. The third set (MAD3) contains carefully recorded speech data of medium complexity. MAD4 and MAD5 aim to investigate how utterances change based on a change in setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting Spoken Dialog Data Using Typists",
"sec_num": "5.1"
},
{
"text": "It is very likely that people would speak differently to a spoken language system based on the instructions given to them. Instructions were conveyed to subjects for all sets other than MAD1 using instructional movies to ensure that the same instructions were given to each subject. Before starting the experiments, subjects were asked to watch these movies and then try the system with test dialogs. Instructions and practice took about 30 minutes. We gave different types of instructions for the fourth set (MAD4). Reference ] [ Kikui 2003] [Takezawa et al. 2003 ] [ Kikui 2004] [Mizushima et al. 2004] Number S2ST presupposes that each user understands the translated utterances of the other. However, the dialog environment described so far allows the user to access other information, such as translated text displayed on a PDA. We tried to control the extra information in MAD5 to see how utterances would be affected.",
"cite_spans": [
{
"start": 531,
"end": 564,
"text": "Kikui 2003] [Takezawa et al. 2003",
"ref_id": "BIBREF4"
},
{
"start": 569,
"end": 604,
"text": "Kikui 2004] [Mizushima et al. 2004]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting Spoken Dialog Data Using Typists",
"sec_num": "5.1"
},
{
"text": "Part of the MAD corpus has been translated into Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2. Data collection environment of MAD",
"sec_num": null
},
{
"text": "Spoken dialog data was collected using the S2ST system for English and Japanese. This data collection experiment is called MAD6 because five data collection experiments were carried out using typists. The system was configured as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting Spoken Dialog Data Using Speech Translation Systems",
"sec_num": "5.2"
},
{
"text": "\u2022 Acoustic model for Japanese speech recognition: Speaker-adapted models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting Spoken Dialog Data Using Speech Translation Systems",
"sec_num": "5.2"
},
{
"text": "\u2022 Language model for Japanese speech recognition: Vocabulary size 52,000 words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting Spoken Dialog Data Using Speech Translation Systems",
"sec_num": "5.2"
},
{
"text": "\u2022 Acoustic model for English speech recognition: Speaker-adapted models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting Spoken Dialog Data Using Speech Translation Systems",
"sec_num": "5.2"
},
{
"text": "\u2022 Language model for English speech recognition: Vocabulary size 15,000 words. Table 7 is an overview of MAD6. Data collected by typists (MAD1 through MAD5) contains some translation errors but very few recognition errors. However, MAD6 data contains both recognition errors and translation errors. We found that translation errors caused by recognition errors sometimes caused great confusion. That is, users need many more turns to recover from translation errors caused by recognition errors than to recover from mere translation errors. Moreover, we found that the user's speaking style changed similar to read speech when using speech recognizers. This was because users could confirm their recognition results using a PC display. Experienced users soon understood that they were confused by translation errors caused by recognition errors and adopted strategies to avoid recognition errors. As a result, their speaking style seemed to change from a natural dialog style to a read speech style. ",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 7",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Collecting Spoken Dialog Data Using Speech Translation Systems",
"sec_num": "5.2"
},
{
"text": "BTEC and SLDB are designed to be complementary. BTEC is a collection of Japanese sentences and their translations, and SLDB is a collection of transcriptions of bilingual spoken dialogs. Whereas BTEC covers a wide variety of travel domains, SLDB covers a limited domain, i.e., hotel situations. BTEC contains approximately 588k utterance-style expressions, while SLDB contains about 16k utterances. Thus, we can hypothesize that BTEC and SLDB together cover the same content as MAD. This hypothesis is partly validated by the cross-perplexity shown in Table 8 . In this table, BTEC1 + SLDB combines two language Communication Research models trained on BTEC1 and SLDB with linear interpolation. Similarly, BTEC1 + Extra combines BTEC1 and a corpus called Extra, which is a sample of a BTEC-type extra corpus of about the same size as SLDB. This clearly shows that both BTEC1 and SLDB are required for handling MAD-type tasks. Further discussion is available in [Kikui et al. 2006] . ",
"cite_spans": [
{
"start": 961,
"end": 980,
"text": "[Kikui et al. 2006]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 552,
"end": 559,
"text": "Table 8",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Comparative Analysis and Discussion",
"sec_num": "5.3"
},
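The paragraph above validates corpus complementarity by cross-perplexity of linearly interpolated language models. Below is a minimal sketch of that computation, assuming unigram models with add-one smoothing and an interpolation weight of 0.5; the actual N-gram order, smoothing method, and weights used for Table 8 are not specified here, and the toy data is purely illustrative.

# Minimal sketch of a "BTEC1 + SLDB"-style combination: two language
# models interpolated linearly, then scored by cross-perplexity on
# held-out text. Unigram models and lambda=0.5 are assumptions made
# for brevity; they are not the paper's actual configuration.
import math
from collections import Counter

def unigram_lm(corpus_tokens, vocab):
    """Add-one-smoothed unigram probabilities over a shared vocabulary."""
    counts = Counter(corpus_tokens)
    total = len(corpus_tokens)
    return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

def cross_perplexity(lm_a, lm_b, lam, test_tokens):
    """Perplexity of test_tokens under lam*P_a(w) + (1-lam)*P_b(w)."""
    log_sum = 0.0
    for w in test_tokens:
        p = lam * lm_a[w] + (1 - lam) * lm_b[w]
        log_sum += math.log(p)
    return math.exp(-log_sum / len(test_tokens))

if __name__ == "__main__":
    # Toy stand-ins for BTEC1, SLDB, and a MAD-style test set.
    btec = "where is the bus stop".split()
    sldb = "i would like to check in".split()
    test = "where is the check in".split()
    vocab = set(btec) | set(sldb) | set(test)
    lm_btec = unigram_lm(btec, vocab)
    lm_sldb = unigram_lm(sldb, vocab)
    print(cross_perplexity(lm_btec, lm_sldb, 0.5, test))

A lower perplexity for the interpolated model than for either component alone is the pattern Table 8 uses to argue that both corpora are needed for MAD-type tasks.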
{
"text": "An ideal approach to applying a system to real utterances is to let people use the system in real world settings to achieve real conversational goals (e.g., booking a package tour). This approach, however, has at least two problems. First, it is difficult to back up the system when it makes errors because current technology is not perfect. Second, it is difficult to control tasks and conditions to do meaningful analysis of the collected data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Field Experiment Data (FED)",
"sec_num": "6."
},
{
"text": "The new experiment reported here was still in the role-play style but its dialog situations were designed to be more natural. The S2ST system for travel conversation was set up at tourist information centers in an airport and a train station, and non-Japanese-speaking people were asked to talk with the Japanese staff at information centers using the S2ST system. Figure 3 is a diagram of the overall experimental system. The system includes two PDAs, one for each language, and several PC servers. The PC servers are controlled by a special controller called the gateway for component engines, consisting of automatic speech recognition (ASR) [Itoh et al. 2004] , machine translation (MT) [Sumita et al. 2004] , and speech synthesis (SS) [Kawai et al. 2004] PCs for each language and each language-pair. The gateway is responsible for controlling information flow between PDAs and engines. It is also responsible for mediating messages from the ASR and MT engines to PDAs. Each PDA is connected to the gateway with a wireless LAN. The gateway and component engines are wired. Headset microphones were used in the FED experiment.",
"cite_spans": [
{
"start": 645,
"end": 663,
"text": "[Itoh et al. 2004]",
"ref_id": "BIBREF1"
},
{
"start": 691,
"end": 711,
"text": "[Sumita et al. 2004]",
"ref_id": "BIBREF14"
},
{
"start": 740,
"end": 759,
"text": "[Kawai et al. 2004]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 365,
"end": 373,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Field Experiment Data (FED)",
"sec_num": "6."
},
{
"text": "An utterance spoken into a PDA is sent to the gateway server, which calls the ASR, MT, and SS engines in this order to have the utterance translated. Finally, the gateway sends the translated utterance to the other PDA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3. Overview of experimental system",
"sec_num": null
},
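The paragraph above describes the gateway's control flow: call ASR, MT, and SS in a fixed order, then deliver the result to the other PDA. A minimal sketch of that flow follows; the engine functions and the Utterance type are hypothetical placeholders, not the actual ATRASR, MT, or XIMERA interfaces.

# Minimal sketch of the gateway control flow: ASR -> MT -> SS, then
# delivery to the other PDA. The engine functions are hypothetical
# stand-ins; the real ATR engine interfaces are not described here.
from dataclasses import dataclass

@dataclass
class Utterance:
    audio: bytes
    src_lang: str   # e.g. "en"
    tgt_lang: str   # e.g. "ja"

def asr(audio: bytes, lang: str) -> str:
    return "hello"            # placeholder recognizer

def mt(text: str, src: str, tgt: str) -> str:
    return "konnichiwa"       # placeholder translator

def ss(text: str, lang: str) -> bytes:
    return text.encode()      # placeholder synthesizer

def gateway(utt: Utterance) -> tuple[str, bytes]:
    """Call the component engines in order and return what is sent
    to the other PDA: translated text plus synthesized speech."""
    source_text = asr(utt.audio, utt.src_lang)
    target_text = mt(source_text, utt.src_lang, utt.tgt_lang)
    target_audio = ss(target_text, utt.tgt_lang)
    return target_text, target_audio

if __name__ == "__main__":
    text, audio = gateway(Utterance(b"...", "en", "ja"))
    print(text, len(audio))

Centralizing the engine calls in one gateway, as the paper describes, also lets it mediate intermediate ASR and MT messages back to the PDAs over the wireless LAN.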
{
"text": "Speaker-adapted acoustic models were used for Japanese speech recognition because only a few Japanese staff at the tourist office agreed to participate in the FED experiment. A few proper names that were deemed necessary to carry out the planned conversations were added to the lexicon. These included names such as those of stations near the locations of the experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3. Overview of experimental system",
"sec_num": null
},
{
"text": "Data collection was conducted near two tourist information centers. One was in Kansai International Airport (hereafter, KIX), and the other was at Osaka City Air Terminal (hereafter, OCAT) in the center of Osaka. The former is in the main arrival lobby of the airport, which many tourists pass as they emerge from customs. The latter is a semi-enclosed area of about 40 2 m enclosed by glass walls (but with two open doors).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locations",
"sec_num": "6.2"
},
{
"text": "The English speech recognizer was trained on North American English. Again, however, it was difficult to find volunteer subjects who speak North American English. We expected to recruit many individual tourists, and most of the English-speaking volunteer subjects were indeed tourists arriving at or leaving the airport during the experiment. In addition to these volunteers, Osaka prefecture provided nine subjects who were working in Japan as English teachers. The resulting 39 subjects were not all North Americans, as shown in Table 9 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 531,
"end": 538,
"text": "Table 9",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "English Speakers",
"sec_num": "6.4.3"
},
{
"text": "First, we set up the S2ST system and asked the Japanese subjects (i.e., service personnel at the tourist information centers) to stand by at the experimental sites.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting Data",
"sec_num": "6.5"
},
{
"text": "When an English or Chinese speaking subject visited a center, he or she was asked to fill out the registration form. Then, the staff explained for 2-3 minutes how to use the S2ST system and asked the subject to try very simple utterances like \"hello\" or \"thank you.\" After the trial utterances, we had the subject try two dialogs: one dialog for practice using a level 1 scenario, and the other for the \"main\" dialog, which was a scenario chosen randomly from level 1 through level 3. Finally, the subject was asked to answer a questionnaire.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collecting Data",
"sec_num": "6.5"
},
{
"text": "The average time from registration to filling out the questionnaire was 15-20 minutes. Since we conducted 4-5 hours of experiments each day, excluding system setup, we were able to obtain dialog data for 15 subjects per day. Table 10 is an overview of FED data. Table 11 shows results of English-Japanese translation. About 50% of the utterances were translated into the target language with their original meaning preserved (e.g., at rank B or above). The ultimate goal of speech translation systems is to help users achieve their conversational goals. Instead of evaluating \"goal achievement,\" we asked them to subjectively evaluate during the course of conversations to what extent 1) they could understand their partner's utterances, and 2) they felt that their utterances were correctly understood. Table 12 shows the questionnaire results on these issues. Note that, although the number of subjects (i.e., samples) is limited, the table does show that roughly half the subjects felt they could almost understand and make themselves understood by their partners. The result seems to coincide with the overall performance shown in Table 11 .",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 233,
"text": "Table 10",
"ref_id": "TABREF0"
},
{
"start": 262,
"end": 270,
"text": "Table 11",
"ref_id": "TABREF0"
},
{
"start": 804,
"end": 812,
"text": "Table 12",
"ref_id": "TABREF0"
},
{
"start": 1135,
"end": 1143,
"text": "Table 11",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Collecting Data",
"sec_num": "6.5"
},
{
"text": "This paper described our experience with multilingual spoken language corpus development at our research institution, focusing in particular on speech recognition and natural language processing for speech translation of travel conversations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "First, we introduced an interpreter-aided spoken dialog corpus called SLDB, and mentioned corpus configuration. Next, we introduced BTEC, which was built to train machine translation of spoken language among Japanese, English, and Chinese speakers. BTEC and SLDB are designed to be complementary. BTEC is a collection of Japanese sentences and their translations, and SLDB is a collection of transcriptions of bilingual spoken dialogs. Whereas BTEC covers a wide variety of travel domains, SLDB covers a limited domain, i.e., hotel situations. BTEC contains approximately 588k utterance-style expressions, and SLDB contains about 16k utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "Finally, we discussed a multilingual spoken dialog corpus between Japanese, English, and Chinese created using speech-to-speech translation systems. MAD was developed as a development corpus and we presented both BTEC and SLDB can be used to handle with MAD-type tasks. FED was planned as the evaluation corpus. According to analysis of the questionnaire, roughly half the subjects felt they could understand and make themselves understood by their partners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "In the future, we plan to expand our activities to multilingual spoken language communication research and development involving both verbal and nonverbal communication. Information is available at the following URL: http://www.atr.jp.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "Communication Research",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The work reported here was mainly conducted at ATR Spoken Language Communication Research Laboratories. The authors are grateful to Prof. Seiichi Yamamoto, Dr. Satoshi Nakamura, and all other staff who helped construct corpora.The FED corpus was collected within the framework of \"Social experiments on supporting non-Japanese-speaking tourists using information technologies\", which was carried out by the Osaka prefectural government and related offices in the winter of 2004. In these experiments, Osaka's prefectural government negotiated with management of facilities frequented by foreign tourists, such as airports and bus terminals, to provide the necessary assistance (e.g., use of public space and electricity). The government also gathered the volunteer subjects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Environmental noise was 60-65 dBA in both places but rose to 70 dBA when the public address system was in use.The language pairs were English-Japanese/Japanese-English and Chinese-Japanese/Japanese-Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Communication Research",
"sec_num": null
},
{
"text": "A good method of collecting real utterances is to just let subjects talk freely without using predetermined scenarios. Analyzing uncontrolled dialog, however, is very difficult. In the FED experiment, eight dialog scenarios were prepared. These scenarios, listed below, are categorized by expected number of turns for each speaker into three levels of complexity.Level-1 : Requires one or two turns per speaker plus greetings. E.g., \"Please ask where the bus stop for Kyoto station is.\"Level-2 : Requires three or four turns per speaker plus greetings. E.g., \"Please ask the way to Kyoto station.\"Level-3 : Free discussion. E.g., \"Please ask anything related to traveling in the Osaka area.\"Real dialogs included many clarification sub-dialogs necessitated by incomprehensible output from the system. This means that the number of turns was actually larger than we expected or planned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scenario",
"sec_num": "6.3"
},
{
"text": "We asked staff at the tourist information centers to participate in the experiments, and six people at KIX and three at OCAT agreed to take part.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Japanese Speakers",
"sec_num": "6.4.1"
},
{
"text": "Since the Chinese speech recognizer was trained on Mandarin speech, we needed to recruit subjects from the Beijing region of China. It was, however, difficult to find tourists from China who had time to participate in the experiment because most of them came to Osaka as members of tightly scheduled group tours. Therefore, we relied on 36 subjects gathered by the Osaka prefectural government. These subjects are college students from China majoring in non-technical areas such as foreign studies and tourism. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chinese Speakers",
"sec_num": "6.4.2"
},
{
"text": "We collected questionnaires from all subjects. As mentioned above, all of the Chinese-speaking subjects were college students. Therefore, they had at least a basic understanding of Japanese because they attend lectures given in Japanese. Therefore, in the following, we will focus on the English side.First, overall performance is measured in terms of subjective scores from A to D, defined as follows. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Evaluation",
"sec_num": "6.6"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Project Proposal TC-STAR: Make Speech to Speech Translation Real",
"authors": [
{
"first": "H",
"middle": [],
"last": "H\u00f6ge",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "136--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H\u00f6ge, H., \"Project Proposal TC-STAR: Make Speech to Speech Translation Real,\" Proc. of International Conference on Language Resources and Evaluation, 2002, pp. 136-141.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Summary and Evaluation of Speech Recognition Integrated Environment ATRASR",
"authors": [
{
"first": "G",
"middle": [],
"last": "Itoh",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Ashikari",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Jitsuhiro",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of Autumn Meeting of the",
"volume": "",
"issue": "",
"pages": "221--222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Itoh, G., Ashikari, Y., Jitsuhiro, T., and Nakamura, S., \"Summary and Evaluation of Speech Recognition Integrated Environment ATRASR,\" Proc. of Autumn Meeting of the Acoustical Society of Japan, 1-P-30, 2004, pp. 221-222.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Proc. of International Workshop on Spoken Language Translation",
"authors": [],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IWSLT, Proc. of International Workshop on Spoken Language Translation, Kyoto, Japan, 2006.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "XIMERA: a New TTS from ATR Based on Corpus-based Technologies",
"authors": [
{
"first": "H",
"middle": [],
"last": "Kawai",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Toda",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Tsuzaki",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Tokuda",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of 5th ISCA Speech Synthesis Workshop",
"volume": "",
"issue": "",
"pages": "179--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kawai, H., Toda, T., Ni, J., Tsuzaki, M., and Tokuda, K., \"XIMERA: a New TTS from ATR Based on Corpus-based Technologies,\" Proc. of 5th ISCA Speech Synthesis Workshop, 2004, pp. 179-184.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Creating Corpora for Speech-to-Speech Translation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Kikui",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "381--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kikui, G., Sumita, E., Takezawa, T., and Yamamoto, S., \"Creating Corpora for Speech-to-Speech Translation,\" Proc. of European Conference on Speech Communication and Technology, 2003, pp. 381-382.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Comparative Study on Corpora for Speech Translation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Kikui",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE Trans. on Audio, Speech, and Language Processing",
"volume": "14",
"issue": "5",
"pages": "1674--1682",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kikui, G., Yamamoto, S., Takezawa, T., and Sumita, E., \"Comparative Study on Corpora for Speech Translation,\" IEEE Trans. on Audio, Speech, and Language Processing, 14 (5), 2006, pp. 1674-1682.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "JANUS-III: Speech-to-speech Translation in Multiple Language",
"authors": [
{
"first": "A",
"middle": [],
"last": "Lavie",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Finke",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Gates",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gavald\u00e0",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Zeppenfeld",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Zhan",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing",
"volume": "",
"issue": "",
"pages": "99--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lavie, A., A. Waibel, L. Levin, M. Finke, D. Gates, M. Gavald\u00e0, T. Zeppenfeld, and P. Zhan, \"JANUS-III: Speech-to-speech Translation in Multiple Language,\" Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing, 1997, pp. 99-102.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "TC-STAR: a Speech to Speech Translation Project",
"authors": [
{
"first": "G",
"middle": [],
"last": "Lazzari",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "xiv--xv",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lazzari, G., \"TC-STAR: a Speech to Speech Translation Project,\" Proc. of International Workshop on Spoken Language Translation, 2006, pp. xiv-xv.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Effects of Audibility of Partner's Voice and Visibility of Translated Text in Machine-Translation-Aided Bilingual Spoken Dialogues",
"authors": [
{
"first": "M",
"middle": [],
"last": "Mizushima",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kikui",
"suffix": ""
}
],
"year": 2004,
"venue": "IPSJ SIG Technical Reports",
"volume": "",
"issue": "74",
"pages": "99--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mizushima, M., T. Takezawa, and G. Kikui, \"Effects of Audibility of Partner's Voice and Visibility of Translated Text in Machine-Translation-Aided Bilingual Spoken Dialogues,\" IPSJ SIG Technical Reports, 2004 (74), 2004-HI-109-19/2004-SLP-52-19, 2004, pp. 99-106.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "ATR's Speech Translation System: ASURA",
"authors": [
{
"first": "T",
"middle": [],
"last": "Morimoto",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Yato",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Sagayama",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Tashiro",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kurematsu",
"suffix": ""
}
],
"year": 1993,
"venue": "Proc. of European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "1291--1294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morimoto, T., T. Takezawa, F. Yato, S. Sagayama, T. Tashiro, M. Nagata, and A. Kurematsu, \"ATR's Speech Translation System: ASURA,\" Proc. of European Conference on Speech Communication and Technology, 1993, pp. 1291-1294.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A Speech and Language Database for Speech Translation Research",
"authors": [
{
"first": "T",
"middle": [],
"last": "Morimoto",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Uratani",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Furuse",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sobashima",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Iida",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sagisaka",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Higuchi",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yamazaki",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. of International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "1791--1794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morimoto, T., N. Uratani, T. Takezawa, O. Furuse, Y. Sobashima, H. Iida, A. Nakamura, Y. Sagisaka, N. Higuchi, and Y. Yamazaki, \"A Speech and Language Database for Speech Translation Research,\" Proc. of International Conference on Spoken Language Processing, 1994, pp. 1791-1794.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Spoken Language Translation with Mid-90's Technology: a Case Study",
"authors": [
{
"first": "M",
"middle": [],
"last": "Rayner",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Bretan",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Carter",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Digalakis",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kaja",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Karlgren",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Lyberg",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Pulman",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Price",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Samuelsson",
"suffix": ""
}
],
"year": 1993,
"venue": "Proc. of European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "1299--1302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rayner, M., I. Bretan, D. Carter, M. Collins, V. Digalakis, B. Gamb\u00e4ck, J. Kaja, J. Karlgren, B. Lyberg, S. Pulman, P. Price, and C. Samuelsson, \"Spoken Language Translation with Mid-90's Technology: a Case Study,\" Proc. of European Conference on Speech Communication and Technology, 1993, pp. 1299-1302.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Spoken Language Translator for Restricted-domain Context-free Languages",
"authors": [
{
"first": "D",
"middle": [
"B"
],
"last": "Roe",
"suffix": ""
},
{
"first": "P",
"middle": [
"J"
],
"last": "Moreno",
"suffix": ""
},
{
"first": "R",
"middle": [
"W"
],
"last": "Sproat",
"suffix": ""
},
{
"first": "F",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
},
{
"first": "M",
"middle": [
"D"
],
"last": "Riley",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Macarr\u00f3n",
"suffix": ""
}
],
"year": 1992,
"venue": "Speech Communication",
"volume": "11",
"issue": "",
"pages": "311--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roe, D. B., P. J. Moreno, R. W. Sproat, F. C. N. Pereira, M.D. Riley, and A. Macarr\u00f3n, \"A Spoken Language Translator for Restricted-domain Context-free Languages,\" Speech Communication, 11, 1992, pp. 311-319.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Recent Results on MT Evaluation in the GALE Program",
"authors": [
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "xvi--xvii",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roukos, S., \"Recent Results on MT Evaluation in the GALE Program,\" Proc. of International Workshop on Spoken Language Translation, 2006, pp. xvi-xvii.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Corpus-Based Translation Technology for Multi-lingual Speech-to-Speech Translation",
"authors": [
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Nakaiwa",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of Spring Meeting of the",
"volume": "",
"issue": "",
"pages": "57--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumita, E., H. Nakaiwa, and S. Yamamoto, \"Corpus-Based Translation Technology for Multi-lingual Speech-to-Speech Translation,\" Proc. of Spring Meeting of the Acoustical Society of Japan, 1-8-26, 2004, pp. 57-58.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Speech and Language Databases for Speech Translation Research in ATR",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Morimoto",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Sagisaka",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of Oriental COCOSDA Workshop",
"volume": "",
"issue": "",
"pages": "148--155",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takezawa, T., T. Morimoto, and Y. Sagisaka, \"Speech and Language Databases for Speech Translation Research in ATR,\" Proc. of Oriental COCOSDA Workshop, 1998, pp. 148-155.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A Comparative Study on Acoustic and Linguistic Characteristics Using Speech from Human-to-Human and Human-to-Machine Conversations",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sugaya",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Naito",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "522--525",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takezawa, T., F. Sugaya, M. Naito, and S. Yamamoto, \"A Comparative Study on Acoustic and Linguistic Characteristics Using Speech from Human-to-Human and Human-to-Machine Conversations,\" Proc. of International Conference on Spoken Language Processing, III, 2000, pp. 522-525.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Toward a Broad-Coverage Bilingual Corpus for Speech Translation of Travel Conversations in the Real World",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sugaya",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Yamamoto",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "147--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takezawa, T., E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto, \"Toward a Broad-Coverage Bilingual Corpus for Speech Translation of Travel Conversations in the Real World,\" Proc. of International Conference on Language Resources and Evaluation, 2002, pp. 147-152.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Collecting Machine-Translation-Aided Bilingual Dialogues for Corpus-Based Speech Translation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kikui",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "2757--2760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takezawa, T. and G. Kikui, \"Collecting Machine-Translation-Aided Bilingual Dialogues for Corpus-Based Speech Translation,\" Proc. of European Conference on Speech Communication and Technology, 2003, pp. 2757-2760.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "An Experimental System for Collecting Machine-Translation-Aided Dialogues",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nishino",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Takashima",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Matsui",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kikui",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of Forum on Information Technology, E-036",
"volume": "",
"issue": "",
"pages": "161--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takezawa, T., A. Nishino, K. Takashima, T. Matsui, and G. Kikui,, \"An Experimental System for Collecting Machine-Translation-Aided Dialogues,\" Proc. of Forum on Information Technology, E-036, 2003, pp. 161-162.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A Comparative Study on Human Communication Behaviors and Linguistics Characteristics for Speech-to-Speech Translation",
"authors": [
{
"first": "T",
"middle": [],
"last": "Takezawa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kikui",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "1589--1592",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takezawa, T. and G. Kikui, \"A Comparative Study on Human Communication Behaviors and Linguistics Characteristics for Speech-to-Speech Translation,\" Proc. of International Conference on Language Resources and Evaluation, 2004, pp. 1589-1592.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Verbmobil: Foundations of Speech-to-Speech Translation",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wahlster",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wahlster, W. (Ed.), \"Verbmobil: Foundations of Speech-to-Speech Translation,\" Springer, Germany, 2000.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Conversation between an American tourist and a Japanese front desk clerk.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "shows an example of transcribed conversations. The Japanese text in",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "(d) Japanese morphological data (e) English morphological data SLDB is available to outside research institutions and can be accessed at the following URL: http://www.atr.jp.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td/><td>SLDB</td><td>BTEC</td><td>MAD</td><td>FED</td></tr><tr><td>Name</td><td>Spoken Language DataBase</td><td>Basic Travel Expression Corpus</td><td>Machine-Aided Dialogs</td><td>Field Experiment Data</td></tr><tr><td>Purpose</td><td colspan=\"4\">Developing S2ST Training MT Developing S2ST Evaluation of S2ST</td></tr><tr><td>Domain</td><td>Hotel</td><td>Travel</td><td>Travel</td><td>Travel</td></tr><tr><td>Languages</td><td>J E (C)</td><td>J E C</td><td>J E (C)</td><td>J E C</td></tr><tr><td>Speaker Participants</td><td>71 (+23 Interpreters)</td><td>Not spoken</td><td>45</td><td>84</td></tr><tr><td>Size</td><td>16k</td><td>588k</td><td>13k</td><td>2k</td></tr></table>",
"type_str": "table"
},
"TABREF1": {
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td>Number of collected dialogs</td><td>618</td><td/></tr><tr><td>Speaker participants</td><td>71</td><td/></tr><tr><td>Interpreter participants</td><td>23</td><td/></tr><tr><td>Table 3. Basic characteristics of SLDB</td><td/><td/></tr><tr><td/><td>Japanese</td><td>English</td></tr><tr><td>Number of utterances</td><td>16,084</td><td>16,084</td></tr><tr><td>Number of sentences</td><td>21,769</td><td>22,928</td></tr><tr><td>Number of word tokens</td><td>236,066</td><td>181,263</td></tr><tr><td>Number of word types</td><td>5,298</td><td>4,320</td></tr><tr><td>Average number of words per sentence</td><td>10.84</td><td>7.91</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"text": "The recorded bilingual conversations are transcribed into a text file. The bilingual text contains descriptions of the situations in which a speech translation system is used.",
"num": null,
"html": null,
"content": "<table><tr><td>Transcribed data consists of the following.</td></tr><tr><td>(a) Bilingual text</td></tr><tr><td>(b) Japanese text</td></tr><tr><td>(c) English text</td></tr><tr><td>1. Transcribed data</td></tr><tr><td>2. Tagged data</td></tr><tr><td>3. Speech data</td></tr></table>",
"type_str": "table"
},
"TABREF3": {
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"5\">BTEC1 BTEC2 BTEC3 BTEC4 BTEC5</td></tr><tr><td colspan=\"2\">Number of utterance-style expressions 172k</td><td>46k</td><td>198k</td><td>74k</td><td>98k</td></tr><tr><td>Number of Japanese word tokens</td><td colspan=\"5\">1,174k 341k 1,434k 548k 1,046k</td></tr><tr><td>Number of Japanese word types</td><td>28k</td><td>20k</td><td>43k</td><td>22k</td><td>28k</td></tr><tr><td>Languages (Source:Targets)</td><td>J:EC</td><td>J:EC</td><td>J:EC</td><td>E:JC</td><td>E:JC</td></tr></table>",
"type_str": "table"
},
"TABREF4": {
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td>Domain</td><td>Frequency</td></tr><tr><td>Communication</td><td>20.4%</td></tr><tr><td>Basic</td><td>19.1%</td></tr><tr><td>Trouble</td><td>8.7%</td></tr><tr><td>Shopping</td><td>7.9%</td></tr><tr><td>Stay</td><td>6.9%</td></tr><tr><td>Sightseeing</td><td>6.6%</td></tr><tr><td>Transfer</td><td>6.6%</td></tr><tr><td>Restaurant</td><td>5.9%</td></tr><tr><td>Business</td><td>3.8%</td></tr><tr><td>Airport</td><td>3.6%</td></tr><tr><td>Contact</td><td>3.3%</td></tr><tr><td>Airplane</td><td>2.3%</td></tr><tr><td>Drink</td><td>1.0%</td></tr><tr><td>Home stay</td><td>1.0%</td></tr><tr><td>Exchange</td><td>0.8%</td></tr><tr><td>Snack</td><td>0.8%</td></tr><tr><td>Beauty</td><td>0.5%</td></tr><tr><td>Study overseas</td><td>0.5%</td></tr><tr><td>Go home</td><td>0.3%</td></tr><tr><td>Total</td><td>100.0%</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td>Subset ID</td><td>MAD1</td><td>MAD2</td><td>MAD3</td><td>MAD4</td><td>MAD5</td></tr></table>",
"type_str": "table"
},
"TABREF7": {
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td/><td>MAD6</td></tr><tr><td>Purpose</td><td>Spoken dialog data collection using S2ST system</td></tr><tr><td>Task</td><td>Simple as in MAD1</td></tr><tr><td>Number of utterances</td><td>2,507</td></tr><tr><td>Number of dialogs</td><td>139</td></tr></table>",
"type_str": "table"
},
"TABREF8": {
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"3\">. Cross-perplexity for MAD (Japanese)</td><td/><td/></tr><tr><td/><td/><td colspan=\"2\">Training corpus</td><td/></tr><tr><td/><td colspan=\"4\">BTEC1 SLDB BTEC1 + SLDB BTEC1 + Extra</td></tr><tr><td>Size (Number of utterances)</td><td>162k</td><td>12k</td><td>174k</td><td>174k</td></tr><tr><td>Cross-perplexity</td><td>38.2</td><td>94.9</td><td>30.7</td><td>35.7</td></tr></table>",
"type_str": "table"
},
"TABREF9": {
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td>Origin</td><td>Number of subjects</td></tr><tr><td>USA</td><td>15</td></tr><tr><td>UK</td><td>6</td></tr><tr><td>Australia</td><td>5</td></tr><tr><td>Canada</td><td>4</td></tr><tr><td>New Zealand</td><td>2</td></tr><tr><td>Denmark</td><td>2</td></tr><tr><td>Other</td><td>5</td></tr></table>",
"type_str": "table"
},
"TABREF10": {
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td>Rank</td><td>J to E (%)</td><td>E to J (%)</td></tr><tr><td>A</td><td>37.1</td><td>36.2</td></tr><tr><td>B</td><td>10.2</td><td>18.2</td></tr><tr><td>C</td><td>10.9</td><td>5.7</td></tr><tr><td>D</td><td>41.4</td><td>24.5</td></tr></table>",
"type_str": "table"
},
"TABREF11": {
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">Make the hearer understood (%) Understood what the partner said (%)</td></tr><tr><td>Complete</td><td>8.3</td><td>22.2</td></tr><tr><td>Almost</td><td>41.6</td><td>50.0</td></tr><tr><td>Half</td><td>33.3</td><td>22.2</td></tr><tr><td>Little</td><td>16.7</td><td>5.6</td></tr></table>",
"type_str": "table"
}
}
}
}