{
"paper_id": "1994",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:06:36.463491Z"
},
"title": "Technical Evaluation of MT Systems from the Developer's Point of View: Exploiting Test-Sets for Quality Evaluation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Isahara",
"suffix": "",
"affiliation": {
"laboratory": "Electrotechnical Laboratory",
"institution": "MITI)",
"location": {
"settlement": "Uchino"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a method of evaluating quality for developers of machine translation systems to easily check imperfections in their own systems. This evaluation method is a systematic, objective method along with test example sets in which we clarified the evaluation procedure by adding yes/no questions and explanations to the example sentences for evaluation.",
"pdf_parse": {
"paper_id": "1994",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a method of evaluating quality for developers of machine translation systems to easily check imperfections in their own systems. This evaluation method is a systematic, objective method along with test example sets in which we clarified the evaluation procedure by adding yes/no questions and explanations to the example sentences for evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Since 1992, we have been developing a method of evaluating quality for the developers of machine translation (MT) systems to easily check imperfections in their own systems [1, 2, and 3] . In this paper, we would like to describe this systematic, objective method along with the test example sets in which we have clarified the evaluation procedure by adding questions and explanations to the examples for the evaluation 1 .",
"cite_spans": [
{
"start": 173,
"end": 186,
"text": "[1, 2, and 3]",
"ref_id": null
},
{
"start": 421,
"end": 422,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We will first describe how our evaluation method surpasses previous methods, with reference to the following 2 types of objectivity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) Objectivity in the evaluation process (2) Objectivity in judgment of the evaluation results *isahara@etl.go.jp 1 The work described in this paper is being developed by the Special Interest Group on Machine Translation (Chief: Hitoshi ISAHARA, Electrotechnical Laboratory) in the Natural Language Processing System Research Committee (Chairman: Prof. Hozumi TANAKA, Tokyo Institute of Technology) which is a subcommittee of the Natural Language Processing Technology Committee (Chairman: Prof. Makoto NAGAO, Kyoto University) of JEIDA (Japan Electronic Industry Development Association). JEIDA has formulated three criteria for evaluating MT systems: 1) technical and 2) economical evaluations for system users, and 3) technical evaluation for the system developers. For more information on these criteria, please refer to references 1 and 4.",
"cite_spans": [
{
"start": 42,
"end": 45,
"text": "(2)",
"ref_id": "BIBREF1"
},
{
"start": 115,
"end": 116,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In an evaluation method such as the one proposed in the ALPAC report, \"fidelity\" and \"intelligibility\" are employed as evaluation measures, though they are dependent on human, subjective judgment. Consequently, the results may differ according to who has made the evaluations, that is, they do not satisfy the objectivity criterion (1) . Theoretically, the evaluation method in the ALPAC report satisfies criterion (2) since the evaluation results are given as numbers. The system developers, however, fail to recognize which items cannot be handled in their own system. This is because the test example in question covers various kinds of grammatical items. So, their interpretation of the evaluation result for further improvement of their system must still be subjective. Therefore, for all practical purposes, this evaluation method does not satisfy criterion (2) .",
"cite_spans": [
{
"start": 332,
"end": 335,
"text": "(1)",
"ref_id": "BIBREF0"
},
{
"start": 864,
"end": 867,
"text": "(2)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, we have been preparing test-sets that can satisfy both objectivity criteria (1) and (2) . There, we have clarified how to evaluate individual examples posing yes/no questions which enable the system developers to make an evaluation just by answering them. With our method, everyone can evaluate MT systems equally, for his/her answer requires only a simple yes or no. Even for imperfect translation results, judgment will not vary widely among evaluators. In addition, we have assigned to each example an explanation which gives the relationship of the translation mechanism to the linguistic phenomenon, thus enabling the system developer to know why the linguistic phenomenon in question was not analyzed correctly. Consequently, with our test-set method, the evaluation results can be utilized for improving MT systems.",
"cite_spans": [
{
"start": 103,
"end": 106,
"text": "(2)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is another proposed method where example evaluation sentences are collected. Each example sentence relates to a linguistic phenomenon subject to evaluation [5, 6, and 7] . With these test-sets, if a system is evaluated as incapable of properly translating an example, the system developer can immediately recognize that his/her system cannot handle the linguistic phenomenon in question. Therefore, we can conclude that this method satisfies the objectivity criterion (2) . At present, however, this method has the following two problems:",
"cite_spans": [
{
"start": 162,
"end": 175,
"text": "[5, 6, and 7]",
"ref_id": null
},
{
"start": 474,
"end": 477,
"text": "(2)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) The procedure for evaluating the translation output has not been clarified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Learning deficiencies of the MT system via the evaluation results is dependent on the linguistic intuition of the evaluator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As long as it is based on the example sentences simply collected as the test-sets, this method can be used for ad hoc evaluation only, and cannot be established as an evaluation method. Moreover, to enable evaluation results to be used for improving MT systems, the listing of various linguistic phenomena is not enough; it is also necessary to clarify the positioning of each linguistic phenomenon within the grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our test-sets, we have systematically sampled the grammatical items that ought to be taken up, and listed some examples for each item. The test-sets clearly describe what linguistic phenomenon should be evaluated in each example so that the developers can easily understand the problems they need to solve in their systems. The system developer can identify causes of translation failures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Chapter 2, we will describe our method of quality evaluation, i.e., what information should be provided to system developers as a result of quality evaluation of MT systems. Chapter 3 describes how the test examples were collected and should be evaluated. Chapters 4 and 5 give some examples to show the test-sets for English-to-Japanese MT systems and Japanese-to-English MT systems, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The method we propose here is a quality evaluation method which is totally independent of the MT system design. Therefore, the system developer can use this method regardless of his/her system type, i.e., whether the relevant MT system is rule-based or example-based. Conversely, in this method, if it becomes clear that a specific linguistic phenomenon cannot be processed on the relevant MT system, no solution common to the various system types is indicated, so the solution is entrusted to the developer according to the specific system type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standpoint for the Evaluation Method",
"sec_num": "2"
},
{
"text": "In our test-sets, we give no information on how often the linguistic phenomenon in each test-set appears in general usage. This is because the frequency of appearance of the relevant linguistic phenomenon might differ according to the type of document to be translated. If specific linguistic phenomena regularly appear in the documents handled on a specific MT system, the evaluator needs only to select the test-set which corresponds to the linguistic phenomena in question. Wrong evaluations could be made if scoring was based merely on the frequency of individual linguistic phenomenon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standpoint for the Evaluation Method",
"sec_num": "2"
},
{
"text": "To sum up, this evaluation method is designed in such a way that the system developers, irrespective of their system type, can precisely understand linguistic phenomena which cannot be handled by their systems and thus should be taken into account when improving the system performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standpoint for the Evaluation Method",
"sec_num": "2"
},
{
"text": "The test-sets employed in our evaluation method consist of example sentences for evaluation, their model translations (human translations), and the questions by which MT outputs should be evaluated. With the test-sets, the MT system developers can make objective judgments on the translation quality just by preparing the system output and answering the question assigned to each example sentence. This chapter describes how the example sentences were collected for the test-sets, and how the actual evaluation is made using the test-sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Characteristics of the Test-Sets for Quality Evaluation",
"sec_num": "3"
},
{
"text": "The example sentences in the test-sets were collected by researchers and engineers who have actually dealt with the development of MT systems and/or natural language processing systems. During the collection of the examples, we emphasized the following two points:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collection of Example Sentences for Evaluation",
"sec_num": "3.1"
},
{
"text": "(1) Coverage of all the basic linguistic phenomena",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collection of Example Sentences for Evaluation",
"sec_num": "3.1"
},
{
"text": "(2) Selection of examples with linguistic phenomena that are difficult to handle with MT systems, especially those with ambiguity problems",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collection of Example Sentences for Evaluation",
"sec_num": "3.1"
},
{
"text": "In other words, (1) refers to a systematic specification of the grammatical phenomena to be evaluated (top-down approach) and collecting examples according to these phenomena. On the other hand, (2) refers to a collection of examples that are difficult to translate on MT systems (bottom-up approach). In particular, we concentrated on those linguistic phenomena whose processing difficulties may be solved in the near future. Then, we systematized the examples for evaluation of MT systems. Furthermore, we repeated the translation evaluation tests on those examples using some commercial systems, and improved the test-sets focusing on the following points. All of them are important factors for maintaining objectivity during the evaluation process. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collection of Example Sentences for Evaluation",
"sec_num": "3.1"
},
{
"text": "Evaluation of the translation results is conducted as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": "3.2"
},
{
"text": "(1) Translating the example sentences in the test-sets with MT systems (2) Checking the translation results of (1), and answering each example's individual question",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": "3.2"
},
{
"text": "We specified the judging points in the questions (e.g. which part of the example plays the grammatical role in question, and how that part should be translated), and we posed the questions in a yes/no style, thus avoiding varying judgments among the evaluators. Moreover, sample answers were also assigned to each test-set which were based on the translation results of five types of existing 1 commercial MT systems (at present, in \"the Test-Sets for English-to-Japanese MT Systems\" only). 1 By referring to them, judgment can be easily made on each question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Method",
"sec_num": "3.2"
},
{
"text": "As the initial step in constructing the test example sets for English-to-Japanese MT systems, we selected mainly simple English sentences as test items. We studied and evaluated 309 examples of basic sentences, and compiled them in our \"1993 Test Sets.\" Our current work is to extend the test examples to complex sentences. The test-sets will be entirely completed by the end of March, 1995. We have also been evaluating the test-sets with 5 different English-to-Japanese MT systems in order to examine their practicability, rewriting the questions in the test-sets if necessary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test-Sets for English-to-Japanese MT Systems",
"sec_num": "4"
},
{
"text": "Each test-set consists of: an ID number, an example, a model translation, a yes/no question, translation sample(s) by MT systems, a sentence or sentences with similar syntax, ID number(s) of the related example(s), and explanation (See Fig. 1) . In this chapter, the Quality Evaluation Process, Object's Linguistic Phenomena, and the Simulation on MT systems are described.",
"cite_spans": [],
"ref_spans": [
{
"start": 236,
"end": 243,
"text": "Fig. 1)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Test-Sets for English-to-Japanese MT Systems",
"sec_num": "4"
},
{
"text": "Evaluation of the quality of English-to-Japanese MT systems is conducted as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Process",
"sec_num": "4.1"
},
{
"text": "\u2022 To translate [Example] in each test-set with English-to-Japanese MT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Process",
"sec_num": "4.1"
},
{
"text": "\u2022 To answer \"yes\" or \"no\" (O or X) to a question on each example by referring to the translation result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Process",
"sec_num": "4.1"
},
{
"text": "\u2022 To check the distribution of \"yes'\" and \"no's\" in the test-sets and evaluate the system performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Process",
"sec_num": "4.1"
},
{
"text": "With the yes/no distribution, the system developer can easily pinpoint the items which his/her system did not translate properly. In the test-sets, however, differences in significance and frequency among the examples are not taken into consideration. Therefore, it is meaningless to simply count the number of \"yes\" answers to compare the performance of various MT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Process",
"sec_num": "4.1"
},
{
"text": "The test-sets consist of 309 basic, mainly simple English sentences as follows. As shown above, the quality evaluation items were collected from the following perspectives: \"Structural Analysis\" and \"Structural Selection.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Phenomena as Test Object",
"sec_num": "4.2"
},
{
"text": "In the \"Analysis\" part, MT systems are checked as to whether they can correctly analyze the sentence structure of the test example. This is a top-down approach in which the comprehensiveness of MT systems is checked. In a word, this part is intended to judge whether the MT system in question meets the requirements for an MT with good performance. Grammatical phenomena essential for English were classified into 4 groups, referring to some grammar books (see [8,9 and 10] ): (1) Sentence Pattern, (2) Temporal Information, (3) Auxiliary Verbs and (4) Sentence Type. Sentence Patterns were selected based on Hornby's classification. In doing so, some patterns were intentionally omitted because they were judged to be unnecessary for quality evaluation of MT systems. In addition, some usages of auxiliary verbs were omitted because they were considered to rarely appear in the documents for MT systems.",
"cite_spans": [
{
"start": 461,
"end": 473,
"text": "[8,9 and 10]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Analysis Part",
"sec_num": null
},
{
"text": "In the \"Selection\" part, on the other hand, MT systems are checked as to whether they can identify the correct structure syntactically and/or semantically when example sentences provide ambiguity problems. This is a bottom-up approach in which the disambiguating ability of MT systems should be checked. Thus example sentences were classified into two groups: (1) Structural Disambiguation and (2) Semantic Disambiguation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structural Analysis Part",
"sec_num": null
},
{
"text": "In order to examine the practicability of the test-sets, we conducted a translation simulation on the five MT systems. The correct answer rates of the five systems differed greatly: from 53 to 80 percent. Though these rates alone do not have any significance, they do indicate that the five systems are quite different in performance both in the \"Analysis\" part and in the \"Selection\" part. That is to say, our test-sets have successfully revealed that the range of linguistic phenomena which each MT system can handle is quite different. Therefore, the method that we have proposed here allows an efficient quality evaluation of MT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test-Set Simulation on MT Systems",
"sec_num": "4.3"
},
{
"text": "In order to evaluate the ability of Japanese-to-English MT systems, two kinds of proposals have been made so far. The first one focused on the difference in the way of perception between Englishspeaking people and Japanese people and thus classified Japanese expressions so that they can be used as test examples [5 and 6] . On the other hand, the second one focused on the structure of Japanese expressions and proposed example sentences for evaluation which typically represent the structural characteristics of Japanese expressions [7] .",
"cite_spans": [
{
"start": 313,
"end": 322,
"text": "[5 and 6]",
"ref_id": null
},
{
"start": 535,
"end": 538,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test-Sets for Japanese-to-English MT Systems",
"sec_num": "5"
},
{
"text": "Our test-sets for Japanese-to-English MT systems began to be constructed in 1993. Like those for English-to-Japanese MT systems, they are intended to clarify what is insufficient in their systems by answering the questions. However, we have constructed the test-sets for Japanese-to-English MT systems from a slightly different perspective than we have done for English-to-Japanese MT systems. Fig. 2 shows a sample test-set for Japanese-to-English MT systems.",
"cite_spans": [],
"ref_spans": [
{
"start": 394,
"end": 400,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Test-Sets for Japanese-to-English MT Systems",
"sec_num": "5"
},
{
"text": "In our approach, we have not only employed the test-sets which enables an objective evaluation of MT systems but also established an evaluation method which enables the developers of Japanese processing systems to identify the correspondence between the linguistic phenomena and the processing modules. That is to say, in addition to example sentences and their evaluation procedure, questions have been assigned to each test-set so that the evaluator can check how his/her system handles the linguistic phenomenon in question. In this way, the system developer can evaluate ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test-Sets for Japanese-to-English MT Systems",
"sec_num": "5"
},
{
"text": "the processing ability of his/her system as a whole and also recognize the performance of each processing module of his/her system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample of the Test-Sets for Japanese-to-English MT Systems",
"sec_num": null
},
{
"text": "In our test-sets, linguistic phenomena in Japanese were classified into 40 categories. To each category, a question has been given so as to check how the linguistic phenomenon in question is handled. If necessary, additional questions have been assigned to clarify the knowledge in use and how to deal with the output of the process. Each linguistic phenomenon is exemplified in test sentences and provided with a model translation in English and an explanation about the key factors in translation. So far, 350 technical sentences have been selected as test sentences. These test sentences are currently under examination via translation experiments with commercial MT software. Explanations described in the test-sets are also to be modified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample of the Test-Sets for Japanese-to-English MT Systems",
"sec_num": null
},
{
"text": "Moreover, a check list is available in our test-sets. This list can be used by the system developers to check the correspondence between the linguistic phenomenon in question and the processing module to be engaged in handling it. This makes it possible to judge which processing module is responsible for the inadequacy of the system output. It is also possible for the evaluator to modify this check list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample of the Test-Sets for Japanese-to-English MT Systems",
"sec_num": null
},
{
"text": "Our test-sets for Japanese-to-English MT systems will be completed by March, 1995, along with those for English-to-Japanese MT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample of the Test-Sets for Japanese-to-English MT Systems",
"sec_num": null
},
{
"text": "In this paper, we have proposed systematic and objective methods for evaluating the translation quality of the MT system from the developer's point of view.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our method employs test-sets in which example sentences, their model translations, questions for evaluating the system output, similar examples (if any), and grammatical explanations have been systematically aligned. The example sentences have been collected focusing on wide coverage of (1) basic linguistic phenomena and (2) linguistic phenomena problematic to MT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The questions in the test-sets are designed to clarify the evaluation viewpoints. Given the system outputs corresponding to the example sentence in question, the system developer needs only to answer the question assigned to the example sentence. This judgment does not vary among the evaluators, thus enabling an objective evaluation. Furthermore, with our test-sets, the system developer can precisely recognize which linguistic phenomena cannot be handled by his/her own system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our future plans are (1) to solve existing problems revealed by evaluation experiments with some commercial MT systems and (2) to increase the number of example sentences so as to cover more linguistic phenomena. When our two kinds of test-sets (English-to-Japanese and Japaneseto-English test-sets) are completed next March, we will make them available to the public. Finally, we hope our evaluation method can play a useful role in the development of MT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Japan Electronic Industry Development Association (JEIDA)",
"authors": [],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\"Survey Report on Machine Translation Systems\" (in Japanese), Japan Electronic Industry Develop- ment Association (JEIDA), 1993.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Japan Electronic Industry Development Association (JEIDA)",
"authors": [],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\"Survey Report on the Natural Language Processing Technology\" (in Japanese), Japan Electronic Industry Development Association (JEIDA), 1994.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "JEIDA's Proposed Method for Evaluating Machine Translation (Translation Quality) -A Proposed Standard Method and Corpus",
"authors": [
{
"first": "H",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 1993,
"venue": "IPSJ SIG Report",
"volume": "",
"issue": "",
"pages": "96--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Isahara, et al. : \"JEIDA's Proposed Method for Evaluating Machine Translation (Translation Qual- ity) -A Proposed Standard Method and Corpus -\" (in Japanese), IPSJ SIG Report, NL96-11 ,1993.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "JEIDA's Criteria on Machine Translation Evaluation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Nomura",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 1992,
"venue": "International Symposium on Natural Language Understanding and AI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Nomura and H. Isahara : \"JEIDA's Criteria on Machine Translation Evaluation\", International Symposium on Natural Language Understanding and AI, 1992.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Function Test System for Japanese to English Machine Translation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ikehara",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Shirai",
"suffix": ""
}
],
"year": 1990,
"venue": "IEICE SIG Report",
"volume": "",
"issue": "",
"pages": "90--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Ikehara and S. Shirai : \"Function Test System for Japanese to English Machine Translation\" (in Japanese), IEICE SIG Report, NLC90-43, 1990.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Criteria for Evaluating the Linguistic Quality of Japanese to English Machine Translations",
"authors": [
{
"first": "S",
"middle": [],
"last": "Ikehara",
"suffix": ""
}
],
"year": 1994,
"venue": "J. of Japanese Society for Artificial Intelligence",
"volume": "9",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Ikehara et al. : \"Criteria for Evaluating the Linguistic Quality of Japanese to English Machine Translations\" (in Japanese), J. of Japanese Society for Artificial Intelligence, Vol. 9, No. 4, 1994.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An Criteria of Processing Ability for Sentence Structure",
"authors": [
{
"first": "H",
"middle": [],
"last": "Narita",
"suffix": ""
}
],
"year": 1988,
"venue": "IPSJ SIG Report",
"volume": "",
"issue": "",
"pages": "69--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Narita : \"An Criteria of Processing Ability for Sentence Structure\" (in Japanese), IPSJ SIG Report, NL69-1, 1988.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Guide to Patterns and Usage in English",
"authors": [
{
"first": "A",
"middle": [
"S"
],
"last": "Hornby",
"suffix": ""
}
],
"year": 1975,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. S. Hornby : \"Guide to Patterns and Usage in English, Second edition\", Oxford Univ. Press, 1975.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The Wonder Book of English Grammar",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Ogawa",
"suffix": ""
}
],
"year": 1991,
"venue": "Obun-sha",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Ogawa, et al. : \"The Wonder Book of English Grammar\" (in Japanese), Obun-sha , Tokyo, 1991.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Explanation on the English Grammar",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Egawa",
"suffix": ""
}
],
"year": 1964,
"venue": "Kaneko Shobo",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Egawa : \"Explanation on the English Grammar\" (in Japanese), Kaneko Shobo, 1964.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "No ambiguity in the questions \u2022 No unnecessary complexity in any example \u2022 No ambiguity in the translation of any example"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Sample Test-Sets for English-to-Japanese MT systems"
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Fig. 2"
}
}
}
}