{
"paper_id": "W89-0213",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:44:40.157468Z"
},
"title": "Parsing Continuous Speech by HMM-LR Method",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Kita",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ATR Interpreting Telephony Research Laboratories",
"location": {
"addrLine": "Seika-chou, Souraku-gun",
"postCode": "619-02",
"settlement": "Kyoto",
"country": "JAPAN"
}
},
"email": ""
},
{
"first": "Takeshi",
"middle": [],
"last": "Kawabata",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ATR Interpreting Telephony Research Laboratories",
"location": {
"addrLine": "Seika-chou, Souraku-gun",
"postCode": "619-02",
"settlement": "Kyoto",
"country": "JAPAN"
}
},
"email": ""
},
{
"first": "Hiroaki",
"middle": [],
"last": "Saito",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "ATR Interpreting Telephony Research Laboratories",
"location": {
"addrLine": "Seika-chou, Souraku-gun",
"postCode": "619-02",
"settlement": "Kyoto",
"country": "JAPAN"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a speech parsing method called HMM-LR. In HMM-LR, an LR parsing table is used to predict phones in speech input, and the system drives an HMM-based speech recognizer directly without any intervening structures such as a phone lattice. Very accurate, efficient speech parsing is achieved through the integrated processes of speech recognition and language analysis. The HMM-LR method is applied to large-vocabulary speaker-dependent Japanese phrase recognition. The recognition rate is 87.1% for the top candidates and 97.7% for the five best candidates.",
"pdf_parse": {
"paper_id": "W89-0213",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a speech parsing method called HMM-LR. In HMM-LR, an LR parsing table is used to predict phones in speech input, and the system drives an HMM-based speech recognizer directly without any intervening structures such as a phone lattice. Very accurate, efficient speech parsing is achieved through the integrated processes of speech recognition and language analysis. The HMM-LR method is applied to large-vocabulary speaker-dependent Japanese phrase recognition. The recognition rate is 87.1% for the top candidates and 97.7% for the five best candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper describes a speech parsing method called HMM-LR. This method uses an efficient parsing mechanism, a generalized LR parser, driving an HMM-based speech recognizer directly without any intervening structures such as a phone lattice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Generalized LR parsing [1] is a kind of LR parsing [2] , originally developed for programming languages and has been extended to handle arbitrary context-free grammars. An LR parser is guided by an LR parsing table automatically created from context-free grammar rules, and proceeds left-toright without backtracking. Compared with other parsing algorithms such as the CYK (Cocke-Younger-Kasami) algorithm [3] or Earley's algorithm [4] , a generalized LR parsing algorithm is the most efficient algorithm for natural language grammars. There have been some applications of generalized LR parsing to speech recognition. Tomita [5] proposes an efficient word lattice parsing algorithm. Saito [6] proposes a method of parsing phoneme sequences that include altered, missing and/or extra phonemes. However, these methods are inadequate because of the information loss due to signal-symbol conversion. The HMM-LR method does not use any intervening structures. The system drives an HMM-based speech recognizer directly for detecting/verifying phones predicted using an LR parsing table.",
"cite_spans": [
{
"start": 23,
"end": 26,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 51,
"end": 54,
"text": "[2]",
"ref_id": "BIBREF1"
},
{
"start": 406,
"end": 409,
"text": "[3]",
"ref_id": "BIBREF2"
},
{
"start": 432,
"end": 435,
"text": "[4]",
"ref_id": "BIBREF3"
},
{
"start": 626,
"end": 629,
"text": "[5]",
"ref_id": "BIBREF4"
},
{
"start": 690,
"end": 693,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "HMM (Hidden Markov Models) [7] has the ability to cope with the acoustical variation of speech by means of stochastic modeling, and it has been used widely for speech recognition. In HMM, any word models can be composed of phone models. Thus, it is easy to construct a large vocabulary speech recognition system. This paper is organized as follows. Section 2 describes the LR parsing mechanism. Section 3 describes HMM. Section 4 describes the HMM-LR method. Section 5 describes recognition experiments using HMM-LR. Finally, section 6 presents our conclusions. The forward algorithm computes alpha_i(t) = sum_j alpha_j(t-1) * a_ji * b_ji(y_t), where alpha_i(t) is the probability that the Markov process is in state i having generated the code sequence y_1, y_2, ..., y_t. The final probability for the phone is given by alpha_F(T), where F is a final state of the phone model and T is the length of the input code sequence.",
"cite_spans": [
{
"start": 27,
"end": 30,
"text": "[7]",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
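The forward (trellis) computation above can be written out as a short sketch. This is a minimal illustration, assuming list-of-lists model parameters `a` (transition probabilities) and `b` (output probabilities on transitions) and an integer code sequence `y`; none of these values come from the paper.

```python
# Sketch of the HMM forward (trellis) algorithm described above.
# alpha[i][t] = P(being in state i having generated codes y[0..t-1]).
# The parameters a, b and the code sequence y are illustrative
# placeholders, not the paper's trained models.

def forward(a, b, y, n_states, final_state):
    """a[i][j]: transition prob i -> j; b[i][j][k]: prob of emitting
    code k on the transition i -> j; y: observed code sequence."""
    T = len(y)
    alpha = [[0.0] * (T + 1) for _ in range(n_states)]
    alpha[0][0] = 1.0  # start in state 0 before any output
    for t in range(1, T + 1):
        for j in range(n_states):
            alpha[j][t] = sum(alpha[i][t - 1] * a[i][j] * b[i][j][y[t - 1]]
                              for i in range(n_states))
    # Final probability for the phone: alpha_F(T)
    return alpha[final_state][T]
```

With a toy two-state model that always moves to state 1 and emits either code with probability 0.5, the phone probability for a two-code input is 0.5 * 0.5 = 0.25.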
{
"text": "In standard LR parsing, the next parser action (shift, reduce, accept or error) is determined by consulting the LR parsing table with the current parser state and the next input symbol. This parsing mechanism is valid only for symbolic data and cannot be applied directly to continuous data such as speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Mechanism",
"sec_num": "4.1"
},
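The standard table-driven loop described above (shift, reduce, accept, error against ACTION/GOTO tables) can be sketched as follows. The dictionary encoding of the tables and the tiny test grammar are hypothetical choices for illustration, not the paper's Fig. 1/Fig. 2.

```python
# Minimal sketch of standard (symbolic) LR parsing driven by an
# ACTION/GOTO table.  Table encoding is an assumption:
#   action[(state, sym)] -> ('shift', s'), ('reduce', lhs, rhs_len),
#   or ('accept',); a missing entry means "error".

def lr_parse(tokens, action, goto):
    stack = [0]          # stack of LR states, initial state 0
    pos = 0
    while True:
        sym = tokens[pos] if pos < len(tokens) else '$'  # '$' = end marker
        act = action.get((stack[-1], sym))
        if act is None:                       # error: input rejected
            return False
        if act[0] == 'shift':
            stack.append(act[1])
            pos += 1
        elif act[0] == 'reduce':
            _, lhs, rhs_len = act
            del stack[len(stack) - rhs_len:]  # pop |rhs| states
            stack.append(goto[(stack[-1], lhs)])
        else:                                 # accept
            return True
```

For a one-rule grammar S -> a b, a hand-built table accepts "a b" and rejects the truncated input "a".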
{
"text": "In HMM-LR, the LR parsing table is used to predict the next phone in the speech. For the phone prediction, the grammar terminal symbols are phones instead of the grammatical category names generally used in natural language processing. Consequently, a lexicon for the specified task is embedded in the grammar. The following describes the basic mechanism of HMM-LR (see Fig.4 ). First, the parser picks up all phones which the initial state of the LR parsing table predicts, and invokes the HMM to verify the existence of these predicted phones. During this process, all possible parsing trees are constructed in parallel. The HMM phone verifier receives a probability array which includes end point candidates and their probabilities, and updates it using an HMM probability calculation process (the forward algorithm). This probability array is attached to each partial parsing tree. When the highest probability in the array falls below a threshold level, the partial parsing tree is pruned; partial parsing trees are also pruned by a beam-search technique. The parsing process proceeds in this way, and stops when the parser detects an accept action in the LR parsing table. In this case, if the best probability point reaches the end of the speech data, parsing ends successfully. A very accurate, efficient parsing method is achieved through the integrated process of speech recognition and language analysis.",
"cite_spans": [],
"ref_spans": [
{
"start": 370,
"end": 375,
"text": "Fig.4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Basic Mechanism",
"sec_num": "4.1"
},
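The verification step above might be sketched as follows. This is a simplified illustration: `phone_score` is a hypothetical stand-in for the HMM forward computation over a segment of speech codes, and the default pruning threshold is an arbitrary choice.

```python
# Sketch of one HMM-LR verification step: a phone predicted by the LR
# table is verified against the speech, updating the probability array
# Q of end-point candidates attached to a partial parsing tree.
# phone_score(segment) is an illustrative placeholder for the HMM
# forward probability of the predicted phone over that code segment.

def verify_phone(Q, codes, phone_score, threshold=1e-12):
    """Q[t] = probability that the parse so far ends at frame t.
    Returns the updated array after consuming one predicted phone,
    or None if the best end point falls below the pruning threshold."""
    T = len(codes)
    newQ = [0.0] * (T + 1)
    for end in range(1, T + 1):
        # Sum over possible start points of the phone.
        newQ[end] = sum(Q[start] * phone_score(codes[start:end])
                        for start in range(end))
    if max(newQ) < threshold:
        return None          # prune this partial parsing tree
    return newQ
```

Starting from Q = [1, 0, 0] (parse begins at frame 0) and a toy score of 0.5 per frame, the updated array carries one probability per candidate end point.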
{
"text": "Moreover, HMM units are phones, and any word models can be composed of phone models, so it is easy to construct a large vocabulary speech recognition system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Mechanism",
"sec_num": "4.1"
},
{
"text": "To describe an algorithm for the HMM-LR method, we first introduce a data structure named cell. A cell is a structure with information about one possible parsing. The following are kept in the cell:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "4.2"
},
{
"text": "\u2022 LR stack, with information for parsing control.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "4.2"
},
{
"text": "\u2022 Probability array, which includes end point candidates and their probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "4.2"
},
{
"text": "The algorithm is summarized below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "4.2"
},
{
"text": "1. Initialization. Create a new cell C. Push the LR initial state 0 on top of the LR stack of C. Initialize the probability array Q of C. Recognition results are kept in cells. Generally, many recognition candidates exist, and these candidates can be ranked using the value Q(T).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "4.2"
},
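The cell described above might be represented as follows. The paper specifies only the two components (LR stack and probability array), so the `accepted` flag and the concrete field layout are assumptions for this sketch.

```python
# Sketch of the `cell` data structure: one cell per surviving parse
# hypothesis.  The paper specifies the LR stack and the probability
# array; the accepted flag is an added convenience for this sketch.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Cell:
    lr_stack: List[int] = field(default_factory=lambda: [0])  # LR initial state 0
    Q: List[float] = field(default_factory=list)              # end-point probabilities
    accepted: bool = False

def init_cell(T):
    """Step 1 (initialization): Q(0) = 1 and Q(t) = 0 for t = 1..T."""
    c = Cell()
    c.Q = [1.0] + [0.0] * T
    return c
```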
{
"text": "Q(t) = 1 if t = 0; Q(t) = 0 otherwise. (Fig. 1: example grammar rules.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "4.2"
},
{
"text": "The set S constructed in step 2 above is quite large. It is possible to set an upper limit on the number of elements in S by beam-search technique. It is also possible to use local ambiguity packing [1] to represent cells efficiently.",
"cite_spans": [
{
"start": 199,
"end": 202,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "4.2"
},
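Capping the size of S with a beam, as suggested above, might look like the sketch below. Ranking candidates by the best value in each cell's probability array is an assumption; the paper does not specify the scoring key.

```python
# Sketch of limiting the ramification set S with a beam: keep only
# the best `beam_width` candidates.  Scoring each candidate by the
# maximum of its cell's probability array Q is an assumed choice.

def apply_beam(S, beam_width):
    """S: list of (cell, state, phone, action) tuples; each cell
    carries a probability array cell.Q."""
    ranked = sorted(S, key=lambda e: max(e[0].Q), reverse=True)
    return ranked[:beam_width]
```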
{
"text": "The HMM-LR method is applied to speaker-dependent Japanese phrase recognition. Duration control techniques and separate vector quantization are used to achieve accurate phone recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "Two duration control techniques are used: one is phone duration control for each HMM phone model and the other is state duration control for each HMM state [8] . Phone duration control is carried out by weighting HMM output probabilities with phone duration histograms obtained from training sample statistics. State duration control is realized by state duration penalties calculated by modified forward-backward probabilities of training samples. In separate vector quantization, spectral features, spectral dynamic features and energy are quantized separately. In the training stage, the output vector probabilities of these three codebooks are estimated simultaneously and independently, and in the recognition stage all the output probabilities are calculated as a product of the output vector probabilities in these codebooks.",
"cite_spans": [
{
"start": 156,
"end": 159,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
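The separate-VQ scoring just described (a product of per-codebook output probabilities) can be sketched as below. The dictionaries standing in for codebook probability tables are hypothetical placeholders.

```python
# Sketch of separate vector quantization scoring: spectrum, spectral
# dynamics, and energy are quantized by three codebooks, and the
# output probability is the product of the three per-codebook output
# probabilities.  The probability tables here are illustrative only.

def output_prob(codebook_probs, codes):
    """codebook_probs: one mapping (code -> prob) per codebook;
    codes: the triple of codes observed for one frame."""
    p = 1.0
    for probs, code in zip(codebook_probs, codes):
        p *= probs[code]
    return p
```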
{
"text": "The grammar used in the experiments describes a general Japanese syntax of phrases and is written in the form of context-free grammar. Lexical entries are also written in the form of context-free grammar. There are 1,461 grammar rules including 1,035 different words, and perplexity per phone is 5.87. Assuming that the average phone length per word is three, the word perplexity is more than 100. Table 1 shows the phrase recognition rates for three speakers. The average recognition rate is 87.1% for the top candidate and 97.7% for the five best candidates. Japanese is an agglutinative language, and there are many variations of affixes after an independent word. As a result, recognition errors are often caused by these affixes.",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 404,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "In this paper, we described a speech parsing method called HMM-LR, which uses a generalized LR parsing mechanism and an HMM-based speech recognizer. The experimental results show that the HMM-LR method is very effective for continuous speech recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "An HMM-LR continuous speech recognition system is used as part of the SL-TRANS (Spoken Language TRANSlation) system developed at ATR Interpreting Telephony Research Laboratories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
}
],
"back_matter": [
{
"text": "The authors would like to express their gratitude to Dr. Akira Kurematsu, president of ATR Interpreting Telephony Research Laboratories, for his encouragement and support, which made this research possible, and to Mr. Toshiyuki Hanazawa for the HMM program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Efficient Parsing for Natural Language: A Fast Algorithm for Practical Systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tomita",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomita, M.: Efficient Parsing for Natural Language: A Fast Algorithm for Practical Systems, Kluwer Academic Publishers (1986).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Compilers, Principles, Techniques, and Tools",
"authors": [
{
"first": "A",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sethi",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aho, A.V., Sethi, R. and Ullman, J.D.: Compilers, Principles, Techniques, and Tools, Addison-Wesley (1986).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The Theory of Parsing, Translation, and Compiling",
"authors": [
{
"first": "A",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aho, A.V. and Ullman, J.D.: The Theory of Parsing, Translation, and Compiling, Prentice-Hall, Englewood Cliffs (1972).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An Efficient Context-Free Parsing Algorithm",
"authors": [
{
"first": "J",
"middle": [],
"last": "Earley",
"suffix": ""
}
],
"year": 1970,
"venue": "Comm. ACM",
"volume": "13",
"issue": "2",
"pages": "94--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Earley, J.: An Efficient Context-Free Parsing Algorithm, Comm. ACM, Vol.13, No.2, pp.94-102 (1970).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An Efficient Word Lattice Parsing Algorithm for Continuous Speech Recognition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Tomita",
"suffix": ""
}
],
"year": 1986,
"venue": "Proc. IEEE Int. Conf. Acoust. Speech Signal Process. ICASSP-86",
"volume": "",
"issue": "",
"pages": "1569--1572",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomita, M.: An Efficient Word Lattice Parsing Algorithm for Continuous Speech Recognition, Proc. IEEE Int. Conf. Acoust. Speech Signal Process. ICASSP-86, pp.1569-1572 (1986).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Parsing Noisy Sentences",
"authors": [
{
"first": "H",
"middle": [],
"last": "Saito",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Tomita",
"suffix": ""
}
],
"year": 1988,
"venue": "Proc. 12th Int. Conf. Comput. Linguist. COLING-88",
"volume": "",
"issue": "",
"pages": "561--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saito, H. and Tomita, M.: Parsing Noisy Sentences, Proc. 12th Int. Conf. Comput. Linguist. COLING-88, pp.561-566 (1988).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An Introduction to the Application of the Theory of Probabilistic Functions of a Markov Process to Automatic Speech Recognition",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Levinson",
"suffix": ""
},
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
},
{
"first": "M",
"middle": [
"M"
],
"last": "Sondhi",
"suffix": ""
}
],
"year": 1983,
"venue": "Bell Syst. Tech. J",
"volume": "62",
"issue": "4",
"pages": "1035--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levinson, S.E., Rabiner, L.R. and Sondhi, M.M.: An Introduction to the Application of the Theory of Probabilistic Functions of a Markov Process to Automatic Speech Recognition, Bell Syst. Tech. J., Vol.62, No.4, pp.1035-1074 (1983).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Duration Control Methods for HMM Phoneme Recognition",
"authors": [
{
"first": "T",
"middle": [],
"last": "Hanazawa",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kawabata",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Shikano",
"suffix": ""
}
],
"year": 1988,
"venue": "The Second Joint Meeting of ASA and ASJ",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanazawa, T., Kawabata, T. and Shikano, K.: Duration Control Methods for HMM Phoneme Recognition, The Second Joint Meeting of ASA and ASJ (1988).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "An example grammar is shown in Fig. 1, and the LR parsing table, compiled automatically from the grammar, is shown in Fig. 2. The left part is the action table and the right part is the goto table. The entry \"acc\" stands for the action \"accept\", and blank spaces represent \"error\". The terminal symbol represents the end of the input. 3. HMM (Hidden Markov Models) HMM is effective in expressing speech statistically, so it has been used widely for speech recognition. Fig. 3 shows an example of a phone model. A model has a collection of states connected by transitions. Two sets of probabilities are attached to each transition. One is a transition probability a_ij, which provides the probability of taking the transition from state i to state j. The other is an output probability b_ijk, which provides the probability of emitting code k when taking a transition from state i to state j. The forward-backward algorithm [7] can be used to estimate the model's parameters given a collection of training data. After estimating the model's parameters, the forward algorithm (trellis algorithm) can be used to verify phones as follows.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "Fig. 4: Basic mechanism of HMM-LR. 2. Ramification of cells. Construct a set S = {(C, s, a, x) | C is a cell & C is not accepted & s is a state of C & ACTION[s,a] = x & x is not \"error\"}. For each element (C, s, a, x) in S, do the operations below. If the set S is empty, parsing is completed. 3. If x = \"shift s'\", verify the existence of phone a. In this case, update the probability array Q of the cell C by the forward computation. If max Q(t) (t = 1, ..., T) is below a threshold level set in advance, the cell C is abandoned. Else push s' on top of the LR stack of the cell C. 4. If x = \"reduce A -> beta\", do the same as in standard LR parsing. 5. If x = \"accept\" and Q(T) is larger than a threshold level, the cell C is accepted. If not, the cell C is abandoned. Return to 2.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>The LR parsing algorithm is summarized below.</td></tr><tr><td>1. Initialization. Set p to point to the first symbol of the input. Push the initial state 0 on top of the stack.</td></tr><tr><td>2. Consult ACTION[s,a] where s is the state on top of the stack and a is the symbol pointed to by p.</td></tr><tr><td>3. If ACTION[s,a] = \"shift s'\", push s' on top of the stack and advance p to the next input symbol.</td></tr><tr><td>4. If ACTION[s,a] = \"reduce A -> beta\", pop |beta| symbols off the stack and push GOTO[s',A] where s' is the state now on top of the stack.</td></tr></table>",
"text": "The LR parser is deterministically guided by an LR parsing table with two subtables (action table and goto table). The action table determines the next parser action ACTION[s,a] from the state s currently on top of the stack and the current input symbol a. There are four kinds of actions: shift, reduce, accept and error. Shift means shifting one word from the input buffer onto the stack, reduce means reducing constituents on the stack using a grammar rule, accept means the input is accepted by the grammar, and error means the input is not accepted by the grammar. The goto table determines the next parser state GOTO[s,A] from the state s and the grammar symbol A. On a \"reduce A -> beta\" action, the parser pops |beta| symbols off the stack and pushes GOTO[s',A], where s' is the state now on top of the stack.",
"html": null
},
"TABREF1": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Phrase recognition rates",
"html": null
}
}
}
}