{ "paper_id": "A88-1028", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:03:55.968218Z" }, "title": "COMPUTATIONAL TECHNIQUES FOR IMPROVED NAME SEARCH", "authors": [ { "first": "Beatrice", "middle": [ "T" ], "last": "Oshika", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Filip", "middle": [], "last": "Machi", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Janet", "middle": [], "last": "Tom", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper describes enhancements made to techniques currently used to search large databases of proper names. Improvements included use of a Hidden Markov Model (HMM) statistical classifier to identify the likely linguistic provenance of a surname, and application of language-specific rules to generate plausible spelling variations of names. These two components were incorporated into a prototype front-end system driving existing name search procedures. HMM models and sets of linguistic rules were constructed for Farsi, Spanish and Vietnamese surnames and tested on a database of over 11,000 entries. Preliminary evaluation indicates improved retrieval of 20-30% as measured by number of correct items retrieved.", "pdf_parse": { "paper_id": "A88-1028", "_pdf_hash": "", "abstract": [ { "text": "This paper describes enhancements made to techniques currently used to search large databases of proper names. Improvements included use of a Hidden Markov Model (HMM) statistical classifier to identify the likely linguistic provenance of a surname, and application of language-specific rules to generate plausible spelling variations of names. These two components were incorporated into a prototype front-end system driving existing name search procedures. HMM models and sets of linguistic rules were constructed for Farsi, Spanish and Vietnamese surnames and tested on a database of over 11,000 entries. 
Preliminary evaluation indicates improved retrieval of 20-30% as measured by number of correct items retrieved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper describes enhancements made to current name search techniques used to access large databases of proper names. The work focused on improving name search algorithms to yield better matching and retrieval performance on databases containing large numbers of non-European 'foreign' names. Because the linguistic mix of names in large computer-supported databases has changed due to recent immigration and other demographic factors, current name search procedures do not provide the accurate retrieval required by insurance companies, state motor vehicle bureaus, law enforcement agencies and other institutions. As the potential consequences of incorrect retrieval are so severe (e.g., loss of benefits, false arrest), it is necessary that name search techniques be improved to handle the linguistic variability reflected in current databases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1.0" }, { "text": "Our specific approach decomposed the name search problem into two main components: \u2022", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1.0" }, { "text": "Language classification techniques to identify the source language for a given query name, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1.0" }, { "text": "Name association techniques, once a source language for a name is known, to exploit language-specific rules to generate variants of a name due to spelling variation, bad transcriptions, nicknames, and other name conventions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1.0" }, { "text": "A statistical classification technique based on the 
use of Hidden Markov Models (HMM) was used as a language discriminator. The test database contained about 11,000 names, including about 2,000 each from three target languages, Vietnamese, Farsi and Spanish, and 5,000 termed 'other' to broadly represent general European names. The decision procedures assumed a closed-world situation in which a name must be assigned to one of the four classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1.0" }, { "text": "Language-specific rules in the form of context-sensitive, string rewrite rules were used to generate name variants. These were based on linguistic analysis of naming conventions, pronunciations and common misspellings for each target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1.0" }, { "text": "These two components were incorporated into a front-end system driving existing name search procedures. The front-end system was implemented in the C language and runs on a VAX-11/780 and Sun 3 workstations under Unix 4.2. Preliminary tests indicate improved retrieval (number of correct items retrieved) by as much as 20-30% over standard SOUNDEX and NYSIIS (Taft 1970) techniques.", "cite_spans": [ { "start": 359, "end": 370, "text": "(Taft 1970)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": "1.0" }, { "text": "In current name search procedures, a search request is reduced to a canonical form which is then matched against a database of names also reduced to their canonical equivalents. All names having the same canonical form as the query name will be retrieved. The intent is that similar names (e.g., Cole, Kohl, Koll) will have identical canonical forms and dissimilar names (e.g., Cole, Smith, Jones) will have different canonical forms. Retrieval should then be insensitive to simple transformations such as spelling variants. 
Techniques of this type have been reviewed by Moore et al. (1977).", "cite_spans": [ { "start": 571, "end": 590, "text": "Moore et al. (1977)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "CURRENT NAME SEARCH PROCEDURES", "sec_num": "2.0" }, { "text": "However, because of spelling variation in proper names, the canonical reduction algorithm may not always have the desired characteristics. Sometimes similar names are mapped to different canonical forms and dissimilar names mapped to the same forms. This is especially true when 'foreign' or non-European names are included in the database, because the canonical reduction techniques such as SOUNDEX and NYSIIS are very language-specific and based largely on Western European names. For example, one of the SOUNDEX reduction rules assumes that the characteristic shape of a name is embodied in its consonants and therefore the rule deletes most of the vowels. Although reasonable for English and certain other languages, this rule is less applicable to Chinese surnames which may be distinguished only by vowel (e.g., Li, Lee, Lu).", "cite_spans": [ { "start": 818, "end": 830, "text": "Li, Lee, Lu)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "CURRENT NAME SEARCH PROCEDURES", "sec_num": "2.0" }, { "text": "In large databases with diverse sources of names, other name conventions may also need to be handled, such as the use of both matronymic and patronymic in Spanish (e.g., Maria Hernandez Garcia) or the inverted order of Chinese names (e.g., Li Fang-Kuei, where Li is the surname).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CURRENT NAME SEARCH PROCEDURES", "sec_num": "2.0" }, { "text": "As mentioned in section 1.0, the approach taken to improve existing name search techniques was to first classify the query name as to language source and then use language-specific rewrite rules to generate plausible name variants. 
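To make the canonical-reduction step of section 2.0 concrete, here is a minimal SOUNDEX-style sketch (illustrative code written for this summary, not the implementation evaluated in the paper; it uses the classic SOUNDEX digit groups and omits the h/w adjacency rule and all NYSIIS refinements):

```python
# Simplified SOUNDEX-style canonical reduction (illustrative only).
CODES = {}
for group, digit in [('bfpv', '1'), ('cgjkqsxz', '2'), ('dt', '3'),
                     ('l', '4'), ('mn', '5'), ('r', '6')]:
    for ch in group:
        CODES[ch] = digit

def soundex(name):
    name = name.lower()
    first = name[0].upper()
    digits = []
    prev = CODES.get(name[0], '')
    for ch in name[1:]:
        d = CODES.get(ch, '')      # vowels (and h, w, y) carry no code
        if d and d != prev:        # collapse repeated digits
            digits.append(d)
        prev = d
    return (first + ''.join(digits) + '000')[:4]

print(soundex('Smith'), soundex('Smyth'))            # S530 S530
print(soundex('Li'), soundex('Lee'), soundex('Lu'))  # L000 L000 L000
```

Because plain SOUNDEX retains the initial letter, Cole and Kohl still receive different codes; NYSIIS adds initial-letter rewrites (for example K to C) to collapse such pairs. The vowel deletion shown above is exactly what makes Li, Lee and Lu indistinguishable.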
A statistical classifier based on Hidden Markov Models (HMM) was developed for several reasons. Similar models have been used successfully in language identification based on phonetic strings (House and Neuburg 1977, Li and Edwards 1980) and text strings (Ferguson 1980). Also, HMMs have a relatively simple structure that makes them tractable, both analytically and computationally, and effective procedures already exist for deriving HMMs from a purely statistical analysis of representative text.", "cite_spans": [ { "start": 424, "end": 434, "text": "(House and", "ref_id": null }, { "start": 435, "end": 455, "text": "Neuburg 1977, Li and", "ref_id": null }, { "start": 456, "end": 469, "text": "Edwards 1980)", "ref_id": "BIBREF0" }, { "start": 487, "end": 502, "text": "(Ferguson 1980)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "LANGUAGE CLASSIFICATION", "sec_num": "3.0" }, { "text": "HMMs are useful in language classification because they provide a means of assigning a probability distribution to words or names in a specific language. In particular, given an HMM, the probability that a given word would be generated by that model can be computed. Therefore, the decision procedure used in this project is to compute that probability for a given name against each of the language models, and to select as the source language that language whose model is most likely to generate the name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LANGUAGE CLASSIFICATION", "sec_num": "3.0" }, { "text": "The following example illustrates how HMMs can be used to capture important information about language data. Table 1 contains training data representing sample text strings in a language corpus. Three different HMMs of two, four and six states were built from these data and are shown in Tables 2-4, respectively. 
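The decision procedure just described, scoring a name against each language model and picking the most likely source, can be sketched as follows (a toy illustration with invented one-state models; the trained multi-state models discussed in this paper would simply be substituted):

```python
# Score a name under each language HMM with the forward algorithm and
# pick the language whose model most probably generated the name.
# Toy parameters only; not the trained models from the paper.
def forward_prob(name, init, trans, emit):
    # init[s]: P(start in s); trans[s][t]: P(s -> t); emit[s][ch]: P(ch | s)
    n = len(init)
    alpha = [init[s] * emit[s].get(name[0], 1e-6) for s in range(n)]
    for ch in name[1:]:
        alpha = [sum(alpha[s] * trans[s][t] for s in range(n))
                 * emit[t].get(ch, 1e-6) for t in range(n)]
    return sum(alpha)

def classify(name, models):
    # models maps language -> (init, trans, emit); argmax of P(name | model)
    return max(models, key=lambda lang: forward_prob(name, *models[lang]))

models = {
    'X': ([1.0], [[1.0]], [{'a': 0.9, 'b': 0.1}]),   # language favouring a
    'Y': ([1.0], [[1.0]], [{'a': 0.1, 'b': 0.9}]),   # language favouring b
}
print(classify('aab', models))  # X
```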
(The symbol CR in the tables corresponds to the blank space between words and is used as a word delimiter.)", "cite_spans": [], "ref_spans": [ { "start": 109, "end": 116, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "EXAMPLE OF HMM MODELING TEXT", "sec_num": "3.1" }, { "text": "These HMMs can also be represented graphically, as shown in Figures 1-3. The numbered circles correspond to states; the arrows represent state transitions with non-zero probability and are labeled with the transition probability. The boxes contain the probability distribution of the output symbols produced when the model is in the state to which the box is connected. The process of generating the output sequence of a model can then be seen as a random traversal of the graph according to the probability weights on the arrows, with an output symbol generated randomly each time a state is visited, according to the output distribution associated with that state.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXAMPLE OF HMM MODELING TEXT", "sec_num": "3.1" }, { "text": "For example, in the two-state model shown in Table 2 (and graphically in Figure 1), letter (nondelimiter) symbols can be produced only in state two, and the output probability distribution for this state is simply the relative frequency with which each letter appears in the training data. That is, in the training data there are 15 letter symbols: five \"a\", four \"b\", three \"c\", etc., and the model assigns a probability of 5/15 = 0.333 to \"a\", 4/15 = 0.267 to \"b\", and so on. Similarly, the state transition probabilities for state two reflect the relative frequency with which letters follow letters and word delimiters follow letters. These parameters are derived strictly from an iterative automatic procedure and do not reflect human analysis of the data.", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 52, "text": "Table 2", "ref_id": null }, { "start": 73, "end": 81, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "EXAMPLE OF HMM MODELING TEXT", "sec_num": "3.1" }, { "text": "Figure 3. Graphic Representation of Six State HMM for Sample Data", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 8, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "EXAMPLE OF HMM MODELING TEXT", "sec_num": "3.1" }, { "text": "In the four state model shown in Table 3 (and Figure 2), it is possible to model the training data with more detail, and the iterations converge to a model with the two most frequently occurring symbols, \"a\" and \"b\", assigned to unique states (states two and four, respectively) and the remaining letters aggregated in state three. State one contains the word delimiter and transitions from state one occur only to state two, reflecting the fact that \"a\" is always word-initial in the training data.", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 40, "text": "Table 3", "ref_id": null }, { "start": 46, "end": 55, "text": "Figure 2)", "ref_id": null } ], "eq_spans": [], "section": "EXAMPLE OF HMM MODELING TEXT", "sec_num": "3.1" }, { "text": "In the six state model shown in Table 4 (and Figure 3), the training data is modeled exactly. Each state corresponds to exactly one output symbol (a letter or word delimiter). 
For each state, transitions occur only to the state corresponding to the next allowable letter or to the word delimiter.", "cite_spans": [], "ref_spans": [ { "start": 32, "end": 39, "text": "Table 4", "ref_id": null }, { "start": 45, "end": 54, "text": "Figure 3)", "ref_id": null } ], "eq_spans": [], "section": "EXAMPLE OF HMM MODELING TEXT", "sec_num": "3.1" }, { "text": "The outputs generated by these three models are shown in Table 5. The six state model can be used to model the training data exactly, and in general, the faithfulness with which the training data are represented increases with the number of states.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 64, "text": "Table 5", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "EXAMPLE OF HMM MODELING TEXT", "sec_num": "3.1" }, { "text": "The simple example in the preceding section illustrates the connection between model parameters and training data. It is more difficult to interpret models derived from more complex data such as natural language text, but it is possible to provide intuitive interpretations to the states in such models. Table 6 shows an eight state HMM derived from Spanish surnames.", "cite_spans": [], "ref_spans": [ { "start": 304, "end": 311, "text": "Table 6", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "HMM MODEL OF SPANISH NAMES", "sec_num": "3.2" }, { "text": "State transition probabilities are shown at the bottom of the table, and it can be seen that the transition probability from state eight to state one (word delimiter) is greater than .95. That is, state eight can be considered to represent a \"word final\" state. The top part of the table shows that the highest output probabilities for state eight are assigned to the letters \"a,o,s,z\", correctly reflecting the fact that these letters commonly occur word final in Spanish (e.g., Garcia, Murillo, Fuentes, Diaz). 
This HMM also \"discovers\" linguistic categories, such as the class of non-word-final vowels represented by state seven with the highest output probabilities assigned to the vowels \"a,e,i,o,u\".", "cite_spans": [], "ref_spans": [ { "start": 678, "end": 696, "text": "vowels \"a,e,i,o,u\"", "ref_id": null } ], "eq_spans": [], "section": "HMM MODEL OF SPANISH NAMES", "sec_num": "3.2" }, { "text": "In order to use HMMs for language classification, it was first necessary to construct a model for each language category based on a representative sample. A maximum likelihood (ML) estimation technique was used because it leads to a relatively simple method for iteratively generating a sequence of successively better models for a given set of words. HMMs of four, six and eight states were generated for each of the language categories, and an eight state HMM was selected for the final configuration of the classifier. Higher dimensional models were not evaluated because the eight state model performed well enough for the application. With combined training and test data, language classification accuracy was 98% for Vietnamese, 96% for Farsi, 91% for Spanish, and 88% for Other. With training data separate from test data, language classification accuracy was 96% for Vietnamese, 90% for Farsi, 89% for Spanish, and 87% for Other. The language classification results are shown in Tables 7 and 8.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LANGUAGE CLASSIFICATION", "sec_num": "3.3" }, { "text": "For each of the three language groups, Vietnamese, Farsi and Spanish, a set of linguistic rules could be applied using a general rule interpreter. The rules were developed after studying naming conventions and common transcription variations and also after performing protocol analyses to see how native English speakers (mis)spelled names pronounced by native Vietnamese (and Farsi and Spanish) speakers and (mis)pronounced by other English speakers. 
Naming conventions included word order (e.g., surnames coming first, or parents' surnames both used); common transcription variations included Romanization issues (e.g., a Farsi character that is written as either 'v' or 'w').", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LINGUISTIC RULE COMPONENT", "sec_num": "4.0" }, { "text": "The general form of the rules is lhs --> rhs / leftContext rightContext where the left-hand side (lhs) is a character string and the right-hand side (rhs) is a string with a possible weight, so that the rules could be associated with a plausibility factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "LINGUISTIC RULE COMPONENT", "sec_num": "4.0" }, { "text": "Rules may include a specific context; if a specific environment is not described, the rule applies in all cases. Table 9 shows sample rules and examples of output strings generated by applying the rules. The 'N/A' column gives examples of name strings for which a rule does not apply because the specified context is absent. An example with plausibility weights is also shown.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 120, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "LINGUISTIC RULE COMPONENT", "sec_num": "4.0" }, { "text": "Although the statistical model building is computationally intensive and time-consuming (several hours), the actual classification procedure is very efficient. The average cpu time to classify a query name was under 200 msec on a VAX-11/780. 
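A minimal interpreter for rules of this lhs --> rhs / leftContext rightContext form might look like the following sketch (hypothetical code; the actual rule syntax, weighting scheme and rule sets used in this work are richer than this):

```python
# Apply one context-sensitive rewrite rule to a name and collect the
# resulting spelling variants with a plausibility weight. Simplified
# sketch; empty context strings mean the rule applies everywhere.
def apply_rule(name, lhs, rhs, left='', right='', weight=1.0):
    variants = set()
    i = name.find(lhs)
    while i != -1:
        if name.endswith(left, 0, i) and name.startswith(right, i + len(lhs)):
            variants.add((name[:i] + rhs + name[i + len(lhs):], weight))
        i = name.find(lhs, i + 1)
    return variants

# Farsi v/w Romanization alternation, applicable everywhere:
print(apply_rule('Navab', 'v', 'w'))           # {('Nawab', 1.0)}
# A z -> s alternation restricted to a left context of a:
print(apply_rule('Diaz', 'z', 's', left='a'))  # {('Dias', 1.0)}
```

In a front-end of the kind described here, each rule firing would add a weighted variant to the expanded query set passed on to the existing search procedures.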
The rule component that generates spelling variants can process 100 query names in about 2-6 cpu seconds, the difference in time depending on average length of name.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PERFORMANCE", "sec_num": "5.0" }, { "text": "As for retrieval performance, in a test of 160 query names (including names known to be in the database and spelling variants not known to be in the database), there were 111 hits (69%) using NYSIIS procedures alone and 141 hits (88%) using the front-end language classifier and linguistic rules and sending the expanded query set to NYSIIS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PERFORMANCE", "sec_num": "5.0" }, { "text": "In recent work, this technique has been extended to include modeling a database of Slavic surnames. Language classification accuracy based on a combined database of 13,000 surnames representing Spanish, Farsi, Vietnamese, Slavic and 'other' names, with combined training data (1000 names from each language group to build each language model) and test data (remaining 8000 names), is 96.8% for Vietnamese, 87.7% for Farsi, 86.9% for Spanish, 86.5% for Slavic, and 82.9% for 'other'.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PERFORMANCE", "sec_num": "5.0" }, { "text": "House, A. S. and Neuburg, E. P. 1977 Toward Automatic Identification of the Language of an Utterance, Journal of the Acoustical Society of America, 62(3):708-713.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "PERFORMANCE", "sec_num": "5.0" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Statistical Models for Automatic Language Identification", "authors": [ { "first": "K", "middle": [ "P" ], "last": "Li", "suffix": "" }, { "first": "Thomas", "middle": [ "J" ], "last": "Edwards", "suffix": "" } ], "year": 1980, "venue": "Proc. 
IEEE International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "884--887", "other_ids": {}, "num": null, "urls": [], "raw_text": "Li, K. P. and Edwards, Thomas J. 1980 Statistical Models for Automatic Language Identification, Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, Denver, Colorado, 884-887.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Accessing Individual Records from Personal Data Files Using Non-Unique Identifiers", "authors": [], "year": 1977, "venue": "National Bureau of Standards Special Publication", "volume": "500", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Accessing Individual Records from Personal Data Files Using Non-Unique Identifiers. Computer Science and Technology, National Bureau of Standards Special Publication 500-2, Washington, D.C.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Name Search Techniques", "authors": [ { "first": "Robert", "middle": [ "L" ], "last": "Taft", "suffix": "" } ], "year": 1970, "venue": "New York State Identification and Intelligence System, Special Report", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taft, Robert L. 1970 Name Search Techniques. New York State Identification and Intelligence System, Special Report No. 1, Albany, New York.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "text": "Four State HMM Based on Sample Data. Final Hidden Markov Model Parameters: Four State, State Output Model", "num": null, "html": null, "content": "
" }, "TABREF1": { "type_str": "table", "text": "Output from Two, Four and Six State HMM for Sample Data Outputs from Illdden Markov Models", "num": null, "html": null, "content": "
Two StatesFour StatesSix States
aadccababcde
beababe
abcacaaabccabcd
dcaceabdabcde
aaedbabda
cababcde
caeaabeabc
cababed
cbcabab
ecababe
babeab
cbbcbcaebdabedabe
aaa
caababed
babeabed
cbabccdccabe
odeabccab
bccbabebdababe
bcababed
ddababed
dcaabeabcde
adabeda
cnbabode
cabeabed
baabab
baeaabeab
babeab
baaabcde
cabbdaba
baba
acabeab
" }, "TABREF2": { "type_str": "table", "text": "", "num": null, "html": null, "content": "
Hidden Markov Model Parameters
Eight State, State Output Model for Spanish
Output Probabilities
SymbolState
12345678
CR000
-000.00427000
a0.04790.013300.00420.07530.3240.219
b0.0020800.06810.001580.04270
C0.019300.1270.002220.08640
d0.07550.02070.06010.2290.0408
e0.5670.0320.001690.004770.003680.1960.0268
f000.0087500.061200
00.020700.17400.05200.00161
h000000.08250.01090
00.004320.049500.0130.001930.1640.00442
0.01040.023300.00295
00.002520000.0012300
100.00480.1890.0660.06260.05650.005590.0118
m00.0048400.1180.004480.091700
n00.07430.2620.06970.0593000.0252
o00.007840.00968000.01220.1860.189
P00.01210.008250.01320.01380.12200
q0000.01490.01990.0055100
r00.05280.3460.07940.2730.1410.01290.00279
s00.03930.04420.009920.008990.08720.123
0.03390.07260.1550.002880.0131
0.001620.00476000.10.00671
v00.01500.088400.017700
w000.0010300.0021300
x0000000.00183
y0.001980.0130.00310.004650.0014900.00534
z0.001750.0028700.140.0072700.368
State Transition Probabilities
From / To 1 2 3 4 5 6 7 8
10.3390.003230.6020.0548
20.009680.0750.005610.08690.002120.006650.814
30.06150.2690.03530.2590.2350.00970.02530.104
4\"00.01010.013200.005030.02450.9290.0182
50.01170.2280.004770.004660.05370.001450.5420.154
6000.05870.03410-0.05640.850
70.01650.130.5060.1620.06270.009770.02070.0915
80.95400.0016900.007230.002160.008580.0256
" } } } }