{
    "paper_id": "A88-1028",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T02:03:55.968218Z"
    },
    "title": "COMPUTATIONAL TECHNIQUES FOR IMPROVED NAME SEARCH",
    "authors": [
        {
            "first": "Beatrice",
            "middle": [
                "T"
            ],
            "last": "Oshika",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Filip",
            "middle": [],
            "last": "Machi",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Janet",
            "middle": [],
            "last": "Tom",
            "suffix": "",
            "affiliation": {},
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper describes enhancements made to techniques currently used to search large databases of proper names. Improvements included use of a Hidden Markov Model (HMM) statistical classifier to identify the likely linguistic provenance of a surname, and application of language-specific rules to generate plausible spelling variations of names. These two components were incorporated into a prototype front-end system driving existing name search procedures. HMM models and sets of linguistic rules were constructed for Farsi, Spanish and Vietnamese surnames and tested on a database of over 11,000 entries. Preliminary evaluation indicates improved retrieval of 20-30% as measured by number of correct items retrieved.",
    "pdf_parse": {
        "paper_id": "A88-1028",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper describes enhancements made to techniques currently used to search large databases of proper names. Improvements included use of a Hidden Markov Model (HMM) statistical classifier to identify the likely linguistic provenance of a surname, and application of language-specific rules to generate plausible spelling variations of names. These two components were incorporated into a prototype front-end system driving existing name search procedures. HMM models and sets of linguistic rules were constructed for Farsi, Spanish and Vietnamese surnames and tested on a database of over 11,000 entries. Preliminary evaluation indicates improved retrieval of 20-30% as measured by number of correct items retrieved.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "This paper describes enhancements made to current name search techniques used to access large databases of proper names. The work focused on improving name search algorithms to yield better matching and retrieval performance on data-bases containing large numbers of non-European 'foreign' names. Because the linguistic mix of names in large computer-supported databases has changed due to recent immigration and other demographic factors, current name search procedures do not provide the accurate retrieval required by insurance companies, state motor vehicle bureaus, law enforcement agencies and other institutions. As the potential consequences of incorrect retrieval are so severe (e.g., loss of benefits, false arrest), it is necessary that name name search techniques be improved to handle the linguistic variability reflected in current databases.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": "1.0"
            },
            {
                "text": "Our specific approach decomposed the name search problem into two main components: \u2022",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": "1.0"
            },
            {
                "text": "Language classification techniques to identify the source language for a given query name, and",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": "1.0"
            },
            {
                "text": "Name association techniques, once a source language for a name is known, to exploit language-specific rules to generate variants of a name due to spelling variation, bad transcriptions, nicknames, and other name conventions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": "1.0"
            },
            {
                "text": "A statistical classification technique based on the use of Hidden Markov Models (HMM) was used as a language discriminator. The test database contained about 11,000 names, including about 2,000 each from three target languages, Vietnamese, Farsi and Spanish, and 5,000 termed 'other' to broadly represent general European names. The decision procedures assumed a closed-world situation in which a name must be assigned to one of the four classes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": "1.0"
            },
            {
                "text": "Language-specific rules in the form of context-sensitive, string rewrite rules were used to generate name variants. These were based on linguistic analysis of naming conventions, pronunciations and common misspellings for each target language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": "1.0"
            },
            {
                "text": "These two components were incorporated into a front-end system driving existing name search procedures. The front-end system was implemented in the C language and runs on a VAX-11/780 and Sun 3 workstations under Unix 4.2. Preliminary tests indicate improved retrieval (number of correct items retrieved) by as much as 20-30% over standard SOUNDEX and NYSIIS (Taft 1970) techniques.",
                "cite_spans": [
                    {
                        "start": 359,
                        "end": 370,
                        "text": "(Taft 1970)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "INTRODUCTION",
                "sec_num": "1.0"
            },
            {
                "text": "In current name search procedures, a search request is reduced to a canonical form which is then matched against a database of names also reduced to their canonical equivalents. All names having the same canonical form as the query name will be retrieved. The intent is that similar names (e.g., Cole, Kohl, Koll) will have identical canonical forms and dissimilar names (e.g., Cole, Smith, Jones) will have different canonical forms. Retrieval should then be insensitive to simple transformations such as spelling variants. Techniques of this type have been reviewed by Moore et al. (1977) .",
                "cite_spans": [
                    {
                        "start": 571,
                        "end": 590,
                        "text": "Moore et al. (1977)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "CURRENT NAME SEARCH PROCEDURES",
                "sec_num": "2.0"
            },
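            {
                "text": "To make the canonical reduction idea concrete, the following minimal Python sketch computes a Soundex-style key; it is an illustrative simplification written for this description, not the exact SOUNDEX variant evaluated here.\n\nCODES = {c: str(d) for d, group in enumerate(['bfpv', 'cgjkqsxz', 'dt', 'l', 'mn', 'r'], start=1) for c in group}\n\ndef soundex(name):\n    # Keep the first letter, encode later consonants by class, drop vowels,\n    # collapse adjacent duplicate codes, and pad the key to four characters.\n    name = name.lower()\n    key, prev = name[0].upper(), CODES.get(name[0])\n    for c in name[1:]:\n        code = CODES.get(c)\n        if code and code != prev:\n            key += code\n        prev = code\n    return (key + '000')[:4]\n\n# Kohl and Koll both reduce to K400 and therefore match; Smith reduces to S530.\n# Note the vowel deletion: Li, Lee and Lu all collapse to L000, the weakness discussed below.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "CURRENT NAME SEARCH PROCEDURES",
                "sec_num": "2.0"
            },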
            {
                "text": "However, because of spelling variation in proper names, the canonical reduction algorithm may not always have the desired characteristics. Sometimes similar names are mapped to different canonical forms and dissimilar names mapped to the same forms. This is especially true when 'foreign' or non-European names are included in the database, because the canonical reduction techniques such as SOUNDEX and NYSIIS are very language-specific and based largely on Western European names. For example, one of the SOUNDEX reduction rules assumes that the characteristic shape of a name is embodied in its consonants and therefore the rule deletes most of the vowels. Although reasonable for English and certain other languages, this rule is less applicable to Chinese surnames which may be distinguished only by vowel (e.g., Li, Lee, Lu) .",
                "cite_spans": [
                    {
                        "start": 818,
                        "end": 830,
                        "text": "Li, Lee, Lu)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "CURRENT NAME SEARCH PROCEDURES",
                "sec_num": "2.0"
            },
            {
                "text": "In large databases with diverse sources of names, other name conventions may also need to be handled, such as the use of both matronymic and patronymic in Spanish (e.g., Maria Hernandez Garcia) or the inverted order of Chinese names (e.g., Li-Fang-Kuei, where Li is the surname).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "CURRENT NAME SEARCH PROCEDURES",
                "sec_num": "2.0"
            },
            {
                "text": "As mentioned in section 1.0, the approach taken to improve existing name search techniques was to first classify the query name as to language source and then use language-specific rewrite rules to generate plausible name variants. A statistical classifier based on Hidden Markov Models (HMM) was developed for several reasons. Similar models have been used successfully in language identification based on phonetic strings (House and Neuburg 1977, Li and Edwards 1980) and text strings (Ferguson 1980) . Also, HMMs have a relatively simple structure that make them tractable, both analytically and computationally, and effective procedures already exist for deriving HMMs from a purely statistical analysis of representative text.",
                "cite_spans": [
                    {
                        "start": 424,
                        "end": 434,
                        "text": "(House and",
                        "ref_id": null
                    },
                    {
                        "start": 435,
                        "end": 455,
                        "text": "Neuburg 1977, Li and",
                        "ref_id": null
                    },
                    {
                        "start": 456,
                        "end": 469,
                        "text": "Edwards 1980)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 487,
                        "end": 502,
                        "text": "(Ferguson 1980)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LANGUAGE CLAS SIFICATION",
                "sec_num": "3.0"
            },
            {
                "text": "HMMs are useful in language classification because they provide a means of assigning a probability distribution to words or names in a specific language. In particular, given an HMM, the probability that a given word would be generated by that model can be computed. Therefore, the decision procedure used in this project is to compute that probability for a given name against each of the language models, and to select as the source language that language whose model is most likely to generate the name.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LANGUAGE CLAS SIFICATION",
                "sec_num": "3.0"
            },
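            {
                "text": "A minimal sketch of this decision procedure, assuming each language model is supplied as an initial state distribution pi, a transition matrix A and per-state output distributions B over a shared alphabet (the function and parameter names are illustrative, not from the original system):\n\nimport numpy as np\n\ndef log_likelihood(name, pi, A, B, alphabet):\n    # Standard HMM forward algorithm: how likely this model is to generate the name.\n    obs = [alphabet.index(c) for c in name.lower()]\n    alpha = pi * B[:, obs[0]]\n    for o in obs[1:]:\n        alpha = (alpha @ A) * B[:, o]\n    return np.log(alpha.sum() + 1e-300)\n\ndef classify(name, models):\n    # models maps each of the four closed-world language labels to (pi, A, B, alphabet);\n    # select the language whose model is most likely to generate the name.\n    return max(models, key=lambda lang: log_likelihood(name, *models[lang]))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LANGUAGE CLASSIFICATION",
                "sec_num": "3.0"
            },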
            {
                "text": "The following example illustrates how HMMs can be used to capture important information about language data. Table 1 contains training data representing sample text strings in a language corpus. Three different HMMs of two, four and six states, were built from these data and are shown in Tables 2-4, respectively. (The symbol CR in the tables corresponds to the blank space between words and is used as a word delimiter.)",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 109,
                        "end": 116,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "EXAMPLE OF HMM MODELING TEXT",
                "sec_num": "3.1"
            },
            {
                "text": "These HMMs can also be represented graphically, as shown in Figures 1-3. The numbered circles correspond to states; the arrows represent state transitions with non-zero probability and are labeled with the transition probability. The boxes contain the probability distribution of the output symbols produced when the model is in the state to which the box is connected. The process of generating the output sequence of a model can then be seen as a random traversal of the graph according to the probability weights on the arrows, with an output symbol generated randomly each time a state is visited, according to the output distribution associated with that state.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EXAMPLE OF HMM MODELING TEXT",
                "sec_num": "3.1"
            },
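            {
                "text": "The random traversal just described can be sketched as follows; A and B are the transition and output probability tables, 'CR' the word delimiter, and all names are illustrative.\n\nimport random\n\ndef generate_word(pi, A, B, symbols, delimiter='CR'):\n    # Walk the state graph at random, emitting one output symbol per state visit,\n    # until the word delimiter is produced.\n    s = random.choices(range(len(pi)), weights=list(pi))[0]\n    out = []\n    while True:\n        sym = random.choices(symbols, weights=list(B[s]))[0]\n        if sym == delimiter:\n            return ''.join(out)\n        out.append(sym)\n        s = random.choices(range(len(A[s])), weights=list(A[s]))[0]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EXAMPLE OF HMM MODELING TEXT",
                "sec_num": "3.1"
            },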
            {
                "text": "For example, in the two-state model shown in Table 2 (and graphically in Figure 1 ), letter (nondelimiter) symbols can be produced only in state two, and the output probability distribution for this state is simply the relative frequency with which each letter appears in the training data. That is, in the training data in .75 .5",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 45,
                        "end": 52,
                        "text": "Table 2",
                        "ref_id": null
                    },
                    {
                        "start": 73,
                        "end": 81,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "EXAMPLE OF HMM MODELING TEXT",
                "sec_num": "3.1"
            },
            {
                "text": ".667",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EXAMPLE OF HMM MODELING TEXT",
                "sec_num": "3.1"
            },
            {
                "text": ".5 Figure 3 . Graphic Representation of Six State HMM for Sample Data five \"a\", four \"b\", three \"c\", etc., and the model assigns a probability of 5/15 = 0.333 to \"a\", 4/15 = 0.267 to \"o\", and so on. Similarly, the state transition probabilities for state two reflect the relative frequency with which letters follow letters and word delimiters follow letters. These parameters are derived strictly from an iterative automatic procedure and do not reflect human analysis of the data.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 3,
                        "end": 11,
                        "text": "Figure 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "EXAMPLE OF HMM MODELING TEXT",
                "sec_num": "3.1"
            },
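            {
                "text": "As a quick check of the relative-frequency arithmetic above (the 15-letter string stands in for the Table 1 training data, assuming the tail of the distribution is two \"d\" and one \"e\", consistent with the 15-symbol total):\n\nfrom collections import Counter\n\nletters = 'aaaaabbbbcccdde'  # 15 letter tokens: five a, four b, three c, two d, one e\nprobs = {c: round(n / len(letters), 3) for c, n in Counter(letters).items()}\nprint(probs)  # {'a': 0.333, 'b': 0.267, 'c': 0.2, 'd': 0.133, 'e': 0.067}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "EXAMPLE OF HMM MODELING TEXT",
                "sec_num": "3.1"
            },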
            {
                "text": "In the four state model shown in Table 3 (and Figure 2) , it is possible to model the training data with more detail, and the iterations converge to a model with the two most frequently occuring symbols, \"a\" and \"b\", assigned to unique states (states two and four, respectively) and the remaining letters aggregated in state three. State one contains the word delimiter and transitions from state one occur only to state two, reflecting the fact that \"a\" is always word-initial in the training data.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 33,
                        "end": 40,
                        "text": "Table 3",
                        "ref_id": null
                    },
                    {
                        "start": 46,
                        "end": 55,
                        "text": "Figure 2)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "EXAMPLE OF HMM MODELING TEXT",
                "sec_num": "3.1"
            },
            {
                "text": "In the six state model shown in Table 4 (and Figure 3) , the training data is modeled exactly. Each state corresponds to exactly one output symbol (a letter or word delimiter). For each state, transitions occur only to the state corresponding to the next allowable letter or to the word delimiter.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 32,
                        "end": 39,
                        "text": "Table 4",
                        "ref_id": null
                    },
                    {
                        "start": 45,
                        "end": 54,
                        "text": "Figure 3)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "EXAMPLE OF HMM MODELING TEXT",
                "sec_num": "3.1"
            },
            {
                "text": "The outputs generated by these three models are shown in Table 5 . The six state model can be used to model the training data exactly, and in general, the faithfulness with which the training data are represented increases with the number of states.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 57,
                        "end": 64,
                        "text": "Table 5",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "EXAMPLE OF HMM MODELING TEXT",
                "sec_num": "3.1"
            },
            {
                "text": "The simple example in the preceding section illustrates the connection between model parameters and training data. It is more difficult to interpret models derived from more complex data such as natural language text, but it is possible to provide intuitive interpretations to the states in such models. Table 6 shows an eight state HMM derived from Spanish surnames.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 304,
                        "end": 311,
                        "text": "Table 6",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "HMM MODEL OF SPANISH NAMES",
                "sec_num": "3.2"
            },
            {
                "text": "State transition probabilities are shown at the bottom of the table, and it can be seen that the transition probability from state eight to state one (word delimiter) is greater than .95. That is, state eight can be considered to represent a \"word final\" state. The top part of the table shows that the highest output probabilities for state eight are assigned to the letters \"a,o,s,z\", correctly reflecting the fact that these letters commonly occur word final in Spanish Garcia, Murillo, Fuentes, Diaz. This HMM also \"discovers\" linguistic categories, such as the class of non-word-final vowels represented by state seven with the highest output probabilities assigned to the vowels \"a,e,i,o,u\" .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 678,
                        "end": 696,
                        "text": "vowels \"a,e,i,o,u\"",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "HMM MODEL OF SPANISH NAMES",
                "sec_num": "3.2"
            },
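            {
                "text": "The way Table 6 is read above can be automated; this illustrative snippet (not from the paper) lists each state's highest-probability output symbols, with B and symbols as in the earlier sketches.\n\nimport numpy as np\n\ndef describe_states(B, symbols, k=4):\n    # For each state, print the k most probable output symbols; for state eight of\n    # the Spanish model this would list the common word-final letters z, a, o, s.\n    for s, row in enumerate(np.asarray(B), start=1):\n        top = np.argsort(row)[::-1][:k]\n        print('state', s, ':', ', '.join(f'{symbols[i]}={row[i]:.3f}' for i in top))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "HMM MODEL OF SPANISH NAMES",
                "sec_num": "3.2"
            },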
            {
                "text": "In order to use HMMs for language classification, it was first necessary to construct a model for each language category based on a representative sample. A maximum likelihood (ML) estimation technique was used because it leads to a relatively simple method for iteratively generating a sequence of successively better models for a given set of words. HMMs of four, six and eight states were generated for each of the language categories, and an eight state HMM was selected for the final configuration of the classifier. Higher dimensional models were not evaluated because the eight state model performed well enough for the application. With combined training and test data, language classification accuracy was 98% for Vietnamese, 96% for Farsi, 91% for Spanish, and 88% for Other. With training data separate from test data, language classification accuracy was 96% for Vietnamese, 90% for Farsi, 89% for Spanish, and 87% for Other. The language classification results are shown in Tables 7 and 8.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LANGUAGE CLASSIFICATION",
                "sec_num": "3.3"
            },
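            {
                "text": "A minimal sketch of how such per-language models might be trained, using the Baum-Welch maximum likelihood reestimation from the third-party hmmlearn package rather than the authors' own implementation; the alphabet, iteration count and API usage are assumptions of this sketch.\n\nimport numpy as np\nfrom hmmlearn.hmm import CategoricalHMM  # pip install hmmlearn\n\nALPHABET = 'abcdefghijklmnopqrstuvwxyz-'\n\ndef train_language_model(names, n_states=8):\n    # Concatenate the training names into one symbol-index sequence, with\n    # per-name lengths marking word boundaries.\n    seqs = [[ALPHABET.index(c) for c in n.lower()] for n in names]\n    X = np.concatenate(seqs).reshape(-1, 1)\n    lengths = [len(s) for s in seqs]\n    model = CategoricalHMM(n_components=n_states, n_iter=100)\n    model.fit(X, lengths)  # iterative ML reestimation (Baum-Welch)\n    return model\n\n# One model per language; model.score(X, lengths) returns the log-likelihood\n# the classifier compares across languages.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LANGUAGE CLASSIFICATION",
                "sec_num": "3.3"
            },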
            {
                "text": "For each of the three language groups, Vietnamese, Farsi and Spanish, a set of linguistic rules could be applied using a general rule interpreter. The rules were developed after studying naming conventions and common transcription variations and also after performing protocol analyses to see how native English speakers (mis)spelled names pronounced by native Vietnamese (and Farsi and Spanish) speakers and (mis)pronounced by other English speakers. Naming conventions included word order (e.g., surnames coming first, or parents' surnames both used); common transcription variations included Romanization issues (e.g., Farsi character that is written as either 'v' or 'w').",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LINGUISTIC RULE COMPONENT",
                "sec_num": "4.0"
            },
            {
                "text": "The general form of the rules is lhs --> rhs / leftContext rightContext where the left-hand-side (lhs) is a character string and the right-hand-side is a string with a possible weight, so that the rules could be associated with a plausibility factor.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LINGUISTIC RULE COMPONENT",
                "sec_num": "4.0"
            },
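            {
                "text": "A toy interpreter for rules of this form might look like the following; the v/w rule echoes the Farsi Romanization issue above, while the weight value and the sample name are invented for illustration.\n\nimport re\n\ndef apply_rule(name, lhs, rhs, left='', right='', weight=1.0):\n    # Apply one rule lhs --> rhs / left _ right, producing one weighted\n    # variant per site where lhs occurs in the required context.\n    pattern = re.escape(left) + '(' + re.escape(lhs) + ')' + re.escape(right)\n    return [(name[:m.start(1)] + rhs + name[m.end(1):], weight)\n            for m in re.finditer(pattern, name)]\n\ndef expand(name, rules):\n    # Expand a query name into itself plus all weighted rule-generated variants.\n    out = {name: 1.0}\n    for rule in rules:\n        for v, w in apply_rule(name, *rule):\n            out[v] = max(out.get(v, 0.0), w)\n    return out\n\n# expand('navab', [('v', 'w', '', '', 0.9)]) -> {'navab': 1.0, 'nawab': 0.9}",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "LINGUISTIC RULE COMPONENT",
                "sec_num": "4.0"
            },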
            {
                "text": "Rules may include a specific context; if a specific environment is not described, the rule applies in all cases. Table 9 shows sample rules and examples of output strings generated by applying the rules. The 'N/A' column gives examples of name strings for which a rule does not apply because the specified context is absent. An example with plausibility weights is also shown.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 113,
                        "end": 120,
                        "text": "Table 9",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "LINGUISTIC RULE COMPONENT",
                "sec_num": "4.0"
            },
            {
                "text": "Although the statistical model building is computationally intensive and time-consuming (several hours), the actual classification procedure is very efficient. The average cpu time to classify a query name was under 200 msec on a VAX-11/780. The rule component that generates spelling variants can process 100 query names in about 2-6 cpu seconds, the difference in time depending on average length of nal-ne.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "PERFORMANCE",
                "sec_num": "5.0"
            },
            {
                "text": "As for retrieval performance, in a test of 160 query names (including names known to be in the database and spelling variants not known to be in the database), there were 111 hits (69%) using NYSIIS procedures alone and 141 hits (88%) using the frontend language classifier and linguistic rules and sending the expanded query set to NYSIIS.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "PERFORMANCE",
                "sec_num": "5.0"
            },
            {
                "text": "In recent work, this technique has been extended to include modeling a database of Slavic surnames. Language classification accuracy based on a combined database of 13000 surnames representing Spanish, Farsi, Vietnamese, Slavic and 'other' names, with combined training data (1000 names from each language group to build each language model) and test data (remaining 8000 names), is 96.8% for Vietnamese, 87.7% for Farsi, 86.9% for Spanish, 86.5% for Slavic, and 82.9% for 'other'.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "PERFORMANCE",
                "sec_num": "5.0"
            },
            {
                "text": "an Utterance, |ournal of the Acoustical Society of America, 62 (3):708-713.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "PERFORMANCE",
                "sec_num": "5.0"
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Statistical Models for Automatic Language Identification",
                "authors": [
                    {
                        "first": "K",
                        "middle": [
                            "P"
                        ],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Thomas",
                        "middle": [
                            "J"
                        ],
                        "last": "Edwards",
                        "suffix": ""
                    }
                ],
                "year": 1980,
                "venue": "Proc. IEEE International Conference on Acoustics, Speech and Signal Processing",
                "volume": "",
                "issue": "",
                "pages": "884--887",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Li, K. P. and Edwards, Thomas J. 1980 Statistical Models for Automatic Language Identification, Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, Denver, Colorado, 884-887.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Accessing Individual Records from Personal Data Files Using Non-Unique Identifiers",
                "authors": [],
                "year": null,
                "venue": "National Bureau of Standards Special Publication",
                "volume": "500",
                "issue": "2",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Accessing Individual Records from Personal Data Files Using Non-Unique Identifiers. Computer Science and Technology, National Bureau of Standards Special Publication 500-2, Washington, D.C.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "New York State Identification and Intelligence System",
                "authors": [
                    {
                        "first": "Robert",
                        "middle": [
                            "L"
                        ],
                        "last": "Taft",
                        "suffix": ""
                    }
                ],
                "year": 1970,
                "venue": "Special Report",
                "volume": "",
                "issue": "1",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Taft, Robert L. 1970 Name Search Techniques. New York State Identification and Intelligence System, Special Report No. 1, Albany, New York.",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF0": {
                "type_str": "table",
                "text": "Four State HMM Based on Sample Data Final Hidden Markov Model Pa'ram'eters I Four Stat% State Ou.!put Model I",
                "num": null,
                "html": null,
                "content": "<table><tr><td>there are 15 letter symbols:</td></tr></table>"
            },
            "TABREF1": {
                "type_str": "table",
                "text": "Output from Two, Four and Six State HMM for Sample Data Outputs from Illdden Markov Models",
                "num": null,
                "html": null,
                "content": "<table><tr><td>Two States</td><td>Four States</td><td>Six States</td></tr><tr><td>aadcc</td><td>ab</td><td>abcde</td></tr><tr><td>be</td><td>ab</td><td>abe</td></tr><tr><td>abcacaa</td><td>abcc</td><td>abcd</td></tr><tr><td>dcace</td><td>abd</td><td>abcd~</td></tr><tr><td>aaedb</td><td>abd</td><td>a</td></tr><tr><td>c</td><td>ab</td><td>abcd\u00a2</td></tr><tr><td>caea</td><td>abe</td><td>abc</td></tr><tr><td>c</td><td>ab</td><td>abed</td></tr><tr><td>cbc</td><td>ab</td><td>ab</td></tr><tr><td>ec</td><td>ab</td><td>abe</td></tr><tr><td>b</td><td>abe</td><td>ab</td></tr><tr><td>cbbcbcaebd</td><td>abed</td><td>abe</td></tr><tr><td>a</td><td>a</td><td>a</td></tr><tr><td>ca</td><td>ab</td><td>abed</td></tr><tr><td>b</td><td>abe</td><td>abed</td></tr><tr><td>cb</td><td>abccdcc</td><td>abe</td></tr><tr><td>ode</td><td>abcc</td><td>ab</td></tr><tr><td>bccbabebd</td><td>ab</td><td>abe</td></tr><tr><td>bc</td><td>ab</td><td>abed</td></tr><tr><td>dd</td><td>ab</td><td>abed</td></tr><tr><td>dca</td><td>abe</td><td>abcd\u00a2</td></tr><tr><td>ad</td><td>abed</td><td>a</td></tr><tr><td>c</td><td>nb</td><td>abode</td></tr><tr><td>c</td><td>abe</td><td>abed</td></tr><tr><td>ba</td><td>ab</td><td>ab</td></tr><tr><td>baea</td><td>abe</td><td>ab</td></tr><tr><td>b</td><td>abe</td><td>ab</td></tr><tr><td>ba</td><td>a</td><td>abcde</td></tr><tr><td>cabbd</td><td>ab</td><td>a</td></tr><tr><td>b</td><td>ab</td><td>a</td></tr><tr><td>ac</td><td>abe</td><td>ab</td></tr></table>"
            },
            "TABREF2": {
                "type_str": "table",
                "text": "",
                "num": null,
                "html": null,
                "content": "<table><tr><td/><td/><td colspan=\"4\">Hidden Markov Model Parameters</td><td/><td/><td/></tr><tr><td/><td/><td colspan=\"5\">Eight State, State Output Model for Spanish</td><td/><td/></tr><tr><td/><td/><td/><td colspan=\"3\">Output Probabilities</td><td/><td/><td/></tr><tr><td>Symbol</td><td/><td/><td/><td colspan=\"2\">State</td><td/><td/><td/></tr><tr><td/><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td></tr><tr><td>CR</td><td/><td/><td/><td>0</td><td>0</td><td/><td/><td>0</td></tr><tr><td>-</td><td>0</td><td>0</td><td>0.00427</td><td>0</td><td>0</td><td/><td/><td>0</td></tr><tr><td>a</td><td/><td>0.0479</td><td>0.0133</td><td>0</td><td>0.0042</td><td>0.0753</td><td>0.324</td><td>0.219</td></tr><tr><td>b</td><td/><td>0.00208</td><td>0</td><td>0.0681</td><td>0.00158</td><td>0.0427</td><td>0</td><td/></tr><tr><td>C</td><td/><td>0.0193</td><td>0</td><td>0.127</td><td>0.00222</td><td>0.0864</td><td>0</td><td/></tr><tr><td>d</td><td/><td>0.0755</td><td>0.0207</td><td>0.0601</td><td>0.229</td><td>0.0408</td><td/><td/></tr><tr><td>e</td><td/><td>0.567</td><td>0.032</td><td>0.00169</td><td>0.00477</td><td>0.00368</td><td>0.196</td><td>0.0268</td></tr><tr><td>f</td><td/><td>0</td><td>0</td><td>0.00875</td><td>0</td><td>0.0612</td><td>0</td><td>0</td></tr><tr><td/><td>0</td><td>0.0207</td><td>0</td><td>0.174</td><td>0</td><td>0.052</td><td>0</td><td>0.00161</td></tr><tr><td>h</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0.0825</td><td>0.0109</td><td>0</td></tr><tr><td/><td>0</td><td>0.00432</td><td>0.0495</td><td>0</td><td>0.013</td><td>0.00193</td><td>0.164</td><td>0.00442</td></tr><tr><td/><td/><td>0.0104</td><td/><td>0.0233</td><td>0</td><td>0.00295</td><td/><td/></tr><tr><td/><td>0</td><td>0.00252</td><td>0</td><td>0</td><td>0</td><td>0.00123</td><td>0</td><td>0</td></tr><tr><td>1</td><td>0</td><td>0.0048</td><td>0.189</td><td>0.066</td><td>0.O626</td><td>0.0565</td><td>0.00559</td><td>0.0118</td></tr><tr><td>m</td><td>0</td><td>0.00484</td><td>0</td><td>0.118</td><td>0.00448</td><td>0.0917</td><td>0</td><td>0</td></tr><tr><td>n</td><td>0</td><td>0.0743</td><td>0.262</td><td>0.0697</td><td>0.0593</td><td>0</td><td>0</td><td>0.0252</td></tr><tr><td>o</td><td>0</td><td>0.00784</td><td>0.00968</td><td>0</td><td>0</td><td>0.0122</td><td>0.186</td><td>0.189</td></tr><tr><td>P</td><td>0</td><td>0.0121</td><td>0.00825</td><td>0.0132</td><td>0.0138</td><td>0.122</td><td>0</td><td>0</td></tr><tr><td>q</td><td>0</td><td>0</td><td>0</td><td>0.0149</td><td>0.0199</td><td>0.00551</td><td>0</td><td>0</td></tr><tr><td>r</td><td>0</td><td>0.0528</td><td>0.346</td><td>0.0794</td><td>0.273</td><td>0.141</td><td>0.0129</td><td>0.00279</td></tr><tr><td>s</td><td>0</td><td>0.0393</td><td>0.0442</td><td>0.00992</td><td>0.00899</td><td>0.0872</td><td/><td>0.123</td></tr><tr><td/><td/><td>0.0339</td><td/><td>0.0726</td><td>0.155</td><td>0.00288</td><td/><td>0.0131</td></tr><tr><td/><td/><td>0.00162</td><td>0.00476</td><td>0</td><td>0</td><td/><td>0.1</td><td>0.00671</td></tr><tr><td>v</td><td>0</td><td>0.015</td><td>0</td><td>0.0884</td><td>0</td><td>0.0177</td><td>0</td><td>0</td></tr><tr><td>w</td><td/><td>0</td><td>0</td><td>0.00103</td><td>0</td><td>0.00213</td><td>0</td><td>0</td></tr><tr><td>x</td><td/><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0.00183</td></tr><tr><td>y</td><td/><td>0.00198</td><td>0.013</td><td>0.0031</td><td>0.00465</td><td>0.00149</td><td>0</td><td>0.00534</td></tr><tr><td>z</td><td/><td>0.00175</t
d><td>0.00287</td><td>0</td><td>0.14</td><td>0.00727</td><td>0</td><td>0.368</td></tr><tr><td/><td/><td/><td colspan=\"3\">State Transition Probabilities</td><td/><td/><td/></tr><tr><td>From</td><td>1</td><td/><td/><td>4</td><td>To</td><td/><td>7</td><td>8</td></tr><tr><td>1</td><td/><td/><td/><td>0.339</td><td>0.00323</td><td>0.602</td><td>0.0548</td><td/></tr><tr><td>2</td><td>0.00968</td><td>0.075</td><td>0.00561</td><td/><td>0.0869</td><td>0.00212</td><td>0.00665</td><td>0.814</td></tr><tr><td>3</td><td>0.0615</td><td>0.269</td><td>0.0353</td><td>0.259</td><td>0.235</td><td>0.0097</td><td>0.0253</td><td>0.104</td></tr><tr><td>4</td><td>0</td><td>0.0101</td><td>0.0132</td><td>0</td><td>0.00503</td><td>0.0245</td><td>0.929</td><td>0.0182</td></tr><tr><td>5</td><td>0.0117</td><td>0.228</td><td>0.00477</td><td>0.00466</td><td>0.0537</td><td>0.00145</td><td>0.542</td><td>0.154</td></tr><tr><td>6</td><td>0</td><td>0</td><td>0.0587</td><td>0.0341</td><td>0</td><td>0.0564</td><td>0.85</td><td>0</td></tr><tr><td>7</td><td>0.0165</td><td>0.13</td><td>0.506</td><td>0.162</td><td>0.0627</td><td>0.00977</td><td>0.0207</td><td>0.0915</td></tr><tr><td>8</td><td>0.954</td><td>0</td><td>0.00169</td><td>0</td><td>0.00723</td><td>0.00216</td><td>0.00858</td><td>0.0256</td></tr></table>"
            }
        }
    }
}