{
"paper_id": "M98-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:16:18.301767Z"
},
"title": "NYU: DESCRIPTION OF THE JAPANESE NE SYSTEM USED FOR MET-2",
"authors": [
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "New York University",
"location": {
"addrLine": "715 Broadway"
}
},
"email": "sekine@cs.nyu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "M98-1019",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "In this paper, experiments o n t h e Japanese Named Entity t ask are reported. We employed a supervised learning m echanism. Recently, s e v eral systems have been proposed for this task, but m any o f t h em use hand-coded patterns. Creating t h ese patterns is laborious work, and w h en we a d apt these systems to a n ew domain or a new de nition of named entities, it is likely to n eed a large amount of additional work. On the other hand, in a supervised learning system, what i s n eeded to a d apt the system is to m ake n ew training d a t a and m aybe additional small work. While this is also not a very easy task, it would be easier than creating complicated patterns. For example, based on our experience, 100 training articles can be created in a day.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "There also have been several machine learning systems applied to t his task. However, these either 1 partially need hand-made rules, 2 have parameters which m ust be adjusted by h and 3 do not perform well by fully automatic means or 4 need a huge training d a t a. Our system does not work fully automatically, b u t performs well with a s m all training corpus and does not have parameters to be adjusted by h and. We will discuss one o f t h e related systems later..",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "In this section, the algorithm of the system will be presented. There are two p h ases, one for creating t h e d ecision tree from training d a t a training p h ase and t h e o t h er for generating t h e t agged output based on the d ecision tree running p h ase. We use a Japanese morphological analyzer, JUMAN 6 a n d a program package for decision trees, C4.5 7 . We u s e t hree kinds of feature sets i n t h e d ecision tree: Part-of-speech t agged by JUMAN We d e ne t h e set of our categories based on its m ajor category and minor category. Character type information Character type, like Kanji, Hiragana, Katakana, alphabet, number or symbol, etc. and some combinations of these. Special Dictionaries List of entities created based on JUMAN dictionary entries, lists distributed by SAIC for MUC, lists found o n t h e W e b or based on human knowledge. Tab l e 1 s h o ws the n u m ber of entities in each dictionary 1 . Organization name h as two t ypes of dictionary; one for proper names and t h e o t h er for general nouns which s h ould be tagged when they co-occur with proper names. Also, we h a v e a special dictionary which contains words written in Roman alphabetbut most likely these are not an organization e.g. TEL, FAX. We m ade a list of 93 such w ords. ",
"cite_spans": [
{
"start": 928,
"end": 929,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ALGORITHM",
"sec_num": null
},
{
"text": "First, the training s e n t ences are segmented and part-of-speech t agged by JUMAN. Then each t oken is analyzed by i t s c h aracter type and i s m a t c h ed against entries in the special dictionaries. One t oken can match e n tries in several dictionaries. For example, Matsushita\" could match t h e organization, person and location dictionaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training P h ase",
"sec_num": null
},
{
"text": "Using t h e training d a t a, a decision tree is built. It learns about t h e o pening a n d closing o f n amed entities based on the t hree kinds of information o f t h e previous, current a n d following t okens. The t hree types of information are the part-of-speech, character type and special dictionary information described above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training P h ase",
"sec_num": null
},
{
"text": "If we just use the d eterministic decision created by t h e tree, it could cause a problem in the r u nning p h ase. Because the d ecisions are made locally, t h e system could make an inconsistent sequence of decisions overall. For example, one t oken could be tagged as the o pening of an organization, while the n ext token might b e t agged as the closing of person name. We can think of several strategies to solve t his problem for example, the m ethod by 2 will be described in a later section, but w e used a probabilistic method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training P h ase",
"sec_num": null
},
{
"text": "The instances in the training corpus corresponding t o a leaf of the d ecision tree may not all have t h e same t ag. At a leaf we don't just record the most probable tag; rather, we k eep the probabiliti e s o f t h e all possible tags for that leaf. In this way we can salvage cases where a tag is part of the most probable globally-consistent t agging o f t h e t ext, even though it is not the most probable tag for this token, and s o w ould be discarded if we m ade a d eterministic decision at each t oken.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training P h ase",
"sec_num": null
},
{
"text": "subsectionRunning P h ase In the r u nning p h ase, the rst three steps, token segmentation and part-of-speech t agging b y JUMAN, analysis of character type, and special dictionary look-up, are identical to t h a t i n t h e training p h ase. Then, in order to n d t h e probabilities of opening a n d closing a n amed entity for each t oken, the properties of the previous, current a n d following t okens are examined against the d ecision tree. Figure 2 shows two example paths in the d ecision tree. For each t oken, the probabilities of`none' and t h e four combinations of answer pairs for each n amed entity t ype are assigned. For instance, if we h a v e 7 n amed entity t ypes, then 29 probabilities are generated.",
"cite_spans": [],
"ref_spans": [
{
"start": 449,
"end": 457,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Training P h ase",
"sec_num": null
},
{
"text": "Once the probabilities for all the t okens in a sentence are assigned, the remaining t ask is to discover the most probable consistent p a t h t hrough the s e n t ence. Here, a consistent p a t h m eans that for example, a path can't have org-OP-CN and date-OP-CL in a row, but can have loc-OP-CN and loc-CN-CL. T h e o u t put i s generated from the consistent sequence with t h e highest probability for each s e n t ence. The Viterbi algorithm is used in the search; this can be run i n t ime linear in the length o f t h e input. EXAMPLE Figure 1 shows an example sentence along with t hree types of information, part-of-speech, character type and special dictionary information, and given information of opening a n d closing o f n amed entities. Figure 2 s h o ws two example paths in the d ecision tree. For the purpose of demonstration, we used the rst and second t oken of the example sentence in Figure 1 . Each line corresponds to a question asked by t h e tree nodes along t h e p a t h. The last line s h o ws the probabilities of named entity information which h a v e more than 0.0 probability. This instance demonstrates how t h e probability m ethod works. As we can see, the probability of none for the rst token Isuraeru = Israel is higher than that for the o pening of organization 0.67 to 0.33, but i n t h e second t oken Keisatsu = P olice, the probability of closing organization is much higher than none 0.86 to 0.14. The combined probabilities of the t w o consistent p a t hs are calculated. One of these paths makes the t w o t okens an organization entity while along t h e o t h er path, neither token is part of a named entity. T h e probabilities are higher in the rst case 0.28 than that i n t h e l a t t er case 0.09, So the t w o t okens are tagged as an organization entity. ",
"cite_spans": [],
"ref_spans": [
{
"start": 543,
"end": 551,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 753,
"end": 762,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 908,
"end": 916,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Training P h ase",
"sec_num": null
},
{
"text": "We will report results o f v e experiments d escribed in Table 3 . Here, Training d a t a\", Dry run d a t a\" and F ormal run d a t a\" are the d a t a distributed by SAIC, and seefu data\" is the d a t a created by Oki, NTT data a n d NYU available through 8 . Note t h a t all Training, Dry run a n d seefu data are in the t o pic of The results o f F ormal run a n d t h e best in-house dry-run are shown in Table 4 . We can clearly tell that the recall of Named Entities person, organization and l o c a t ion are bad. This is caused by t h e c h ange of the topic. For example, there are very few foreign person names written in Katakana i n t h e training d a t a, as a foreign person would hardly be a victim of a crash in Japan. However, in the space craft launch, there are many foreign person names written in Katakana. This is the reason why t h e recall of persons is so low. Also, in the t est documents, planet names, the S u n\",\"the Earth\" or Saturn\" are tagged as locations, which could not be predicted from the training t o pic. We missed all such n ames in the formal test.",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 64,
"text": "Table 3",
"ref_id": null
},
{
"start": 408,
"end": 415,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "RESULTS",
"sec_num": null
},
{
"text": "The best in-house Dry run result w as achieved before the formal run without looking a t t h e t est data. So it should be regarded as an example of the performance if we know t h e t o pic of the m a t erial. We t hink this is satisfactory, considering t h a t t h e e ort we m ade w as just preparing dictionaries and n o p a t t erns. Table 5 shows three experiments performed after the formal run. As the t o pic change may degrade o f t h e performance, we conducted experiments in which t h e training d a t a includes documents i n t h e same t o pic. The rst experiment used 75 of the formal run d a t a for training a n d t h e r e s t o f t h e d a t a for testing. Four such experiments w ere made t o obtain the result for the e n t ire corpus. The second experiment includes the training d a t a used in the formal run in addition to t h e 75 of the formal run d a t a. The t a ble shows about 1 improvement o v er the formal run. This is an encouraging result, the b e t t er performance was achieved with only 75 articles on the same t o pic compared with 294 articles on a di erent t o pic used in the formal run. The result o f t h e second experiment also shows a good sign that d o c u m ents in a di erent t o pic helped to improve t h e performance. This result suggests a n i d ea of domain adaptation scheme\". ",
"cite_spans": [],
"ref_spans": [
{
"start": 338,
"end": 345,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "RESULTS",
"sec_num": null
},
{
"text": "There have been several e orts t o a p ply machine learning t echniques to t h e same t ask 4 3 5 2 . In this section, we will discuss a system which i s o n e o f t h e most advanced and which closely resembles our own 2 . A good review of most of the o t h er systems can be found i n t h eir paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RELATED WORK",
"sec_num": null
},
{
"text": "Their system uses the d ecision tree algorithm and almost the same features. However, there are signi cant di erences between the systems. The m ain di erence is that t h ey have more than one d ecision tree, each of which d ecides if a particular named entity s t arts ends at t h e current t oken. In contrast, our system has only one d ecision tree which produces probabilities of information a bout t h e n amed entity. I n t his regard, we are similar to 3 , which also uses a probabilistic method in their N-gram based system. This is a crucial di erence which also has important consequences. Because the system of 2 m akes multiple decisions at each t oken, they could assign multiple, possibly inconsistent t ags. They solved the problem by i n troducing two somewhat idiosyncratic methods. One o f t h em is the d i s t ance score, which i s u s e d t o n d a n o pening and closing pair for each n amed entity m ainly based on distance information. The o t h er is the t ag priority scheme, which c h ooses a named entity among di erent t ypes of overlapping candidates based on the priority order of named entities. These methods require parameters which m ust be adjusted when they are applied to a n ew domain. In contrast, our system does not require such m ethods, as the m ultiple possibilities are resolved by t h e probabilistic method. This is a strong advantage, because we don't need manual adjustments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RELATED WORK",
"sec_num": null
},
{
"text": "The result t h ey reported is not comparable to our result, because the t ext and d e nition are di erent. But t h e t otal F-score of our system is similar t o t h eirs, even though the size of our training d a t a i s m u c h smaller.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RELATED WORK",
"sec_num": null
}
],
"back_matter": [
{
"text": "First, we h a v e t o consider topic or domain dependency of the t ask. It is clear that i n o r d er to a c hieve good performance in the framework, we h a v e t o i n v estigate dictionary entries for the t ask. It may or may not easy to modify the dictionary. F or example, a list of foreign person name w r i t t en in Katakana is not so easy to create, whereas a list of planet names is easy to n d. This di culty also exists i n p a t t ern-based methods, but in our framework it is not necessary to create domain dependent p a t t erns.Currently creating dictionaries is done b y h and. One possibility t o a u t omatize the process is to u s e a bootstrapping m ethod. Starting with core dictionaries, we can run t h e system on untagged texts, and increase the e n t ities in the dictionaries.Another issue is aliases. In newspaper articles, aliases are often used. The full name is used only the rst time t h e company i s m entioned Matsushita Denki Sangyou Kabushiki Kaisya = M a t sushita Electric Industrial Co. Ltd. and t h en aliases Matsushita or Matsushita Densan = M a t sushita E.I. are used in the l a t er sections of the article. Our system cannot handle these aliases, unless the aliases are registered in the dictionaries.Also, lexical information should help the accuracy. F or example, a name, possibly a person or an organization, in a particular argument slot of a verb can be disambiguated by t h e v erb. For example, a name i n t h e object slot of the v erb`hire' might be a person, while a name i n t h e s u bject slot of verb`manufacture' might be an organization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DISCUSSION",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Proceedings of Workshop on Tipster Program Phase II",
"authors": [],
"year": 1996,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Defense Advanced Research Projects Agency, Proceedings of Workshop on Tipster Program Phase II\" Morgan Kaufmann Publishers 1996",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning t o T ag Multilingual Texts Through Observation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bennett",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1997,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bennett, S., Aone, C. and L o v ell, C., Learning t o T ag Multilingual Texts Through Observation\" Con- ference on Empirical Methods in Natural Language Processing 1997",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Nymble: a High-Performance Learning Namender",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bikel",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bikel, D., Miller, S., Schwartz, R. and W eischedel, R., Nymble: a High-Performance Learning Name- nder\" Proceedings of the Fifth Conference on Applied Natural Language Processing 1997",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Description of the CRL NMSU Systems Used for MUC-6",
"authors": [
{
"first": "J",
"middle": [],
"last": "Cowie",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of Sixth Message Understanding Conference MUC-6",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cowie, J., Description of the CRL NMSU Systems Used for MUC-6\" Proceedings of Sixth Message Understanding Conference MUC-6 1995",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning t o Recognize Names Across Languages",
"authors": [
{
"first": "A",
"middle": [],
"last": "Gallippi",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th International Conference on Computational Linguistics COLING-96",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gallippi, A., Learning t o Recognize Names Across Languages\" Proceedings of the 16th International Conference on Computational Linguistics COLING-96 1996",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Japanese morphological analyzing System: JUMAN",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Yamaji",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Taeki",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matsumoto, Y., Kurohashi, S., Yamaji, O., Taeki, Y. and Nagao, M., Japanese morphological analyzing System: JUMAN\" Kyoto University and Nara Institute of Science and Technology 1997",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "C4.5: Program for Machine Learning",
"authors": [
{
"first": "R",
"middle": [],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quinlan, R., C4.5: Program for Machine Learning\" Morgan Kaufmann Publishers 1993",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Homepage of data related Japanese Named Entity",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sekine, S., Homepage of data related Japanese Named Entity\" http: cs.nyu.edu cs projects proteus met2j 1997",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Five t ypes of Output will use two di erent s e t s o f t erms in order to a v oid the confusion between positions relative t o a t oken and regions of named entities. The t erms beginning and ending are used to i n dicate positions, whereas opening and closing are used to i n dicate t h e s t art and e n d o f n amed entities. Note t h a t t h ere is no overlapping o r embedding o f n amed entities. An example of real data i s s h o wn in Figure 1.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Sentence Example",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Decision Tree Path Example vehicle crash, and only the F ormal run d a t a i s o n t h e t o pic of space craft launch. Numbers in the brackets indicate t h e n u m ber of articles. Experiment Training D a t a T est Data 1 Formal run Training d a t a114, seefu data150, Formal run d a t a Dry run d a t a30 2 Best in-house Dry run Training d a t a114, seefu data150 Dry Run d a t a 3 75 25 experiment 75 of Formal run D a t a75 25 of Formal run d a t a 4 All training + 75 25 Training d a t a114, seefu data150, Dry 25 of Formal run d a t a run d a t a30,75 of Formal run d a t",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td>Special Dictionary Entries</td></tr></table>",
"num": null
},
"TABREF2": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table/>",
"num": null
},
"TABREF5": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td colspan=\"2\">Result o f F ormal Run a n d t h e best in-house Dry run</td></tr><tr><td colspan=\"2\">large general corpus of tagged documents a s t h e basis, and t o add small domain speci c documents t o h a v e a domain speci c system. Lastly, i n t h e t hird experiment, we added the planet names in the l o c a t ion dictionary. From the formal run result, it was clear that o n e o f t h e m ain reasons of the performance degradation is the lack o f t h e planet names. The addition improves 3.5 which i s b e t t er than the o t h er trials. Although there are several other obvious reasons to be xed, the F-measure 86.34 is comparable to t h e best in-house Dry run experiment d escribed before Experiment 2;F-measure = 88.62.</td></tr><tr><td>Experiment 3 75 25 experiment 4 All training + 75 25 5 Add planet names</td><td>F-measure 80.46 82.73 86.34</td></tr></table>",
"num": null
},
"TABREF6": {
"type_str": "table",
"html": null,
"text": "Comparative Results",
"content": "<table/>",
"num": null
}
}
}
}