{
"paper_id": "M98-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:16:10.851577Z"
},
"title": "Using Collocation Statistics in Information Extraction",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Manitoba Winnipeg",
"location": {
"postCode": "R3T 2N2",
"region": "Manitoba",
"country": "Canada"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "",
"pdf_parse": {
"paper_id": "M98-1006",
"_pdf_hash": "",
"abstract": [],
"body_text": [
{
"text": "Our main objective in participating MUC-7 is to investigate and experiment with the use of collocation statistics in information extraction. A collocation is a habitual word combination, such as weather a storm\", le a lawsuit\", and the falling yen\". Collocation statistics refers to the frequency counts of the collocational relations extracted from a parsed corpus. For example, out of 6577 instances of addition\" in a corpus, 5190 was used as the object of in\". Out of 3214 instances of hire\", 12 of them take alien\" as the object.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "We participated in two tasks: Named Entity and Coreference. In both tasks, the input text is processed in two passes. During the rst pass we use the parse trees of input texts, combined with collocation statistics obtained from a large corpus, to automatically acquire or enrich lexical entries which are then used in the second pass.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "We de ne a collocation to be a dependency triple that consists of three elds:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COLLOCATION DATABASE",
"sec_num": null
},
{
"text": "word, relation, relative where the word eld is a word in a sentence, the relative eld can either bethe modi ee or a modi er of word, and the relation eld speci es the type of the relationship between word and relative as well as their parts of speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COLLOCATION DATABASE",
"sec_num": null
},
{
"text": "For example, the dependency triples extracted from the sentence I have a brown dog\" are: The identi ers for the dependency types are explained in Table 1 . We used MINIPAR, a descendent of PRINCIPAR 2 , to parse a text corpus that is made up of 55-million-word Wall Street Journal and 45-million-word San Jose Mercury. Two steps were taken to reduce the numberof errors in the parsed corpus. Firstly, only sentences with no more than 25 words are fed into the parser. Secondly, only complete parses are included in the parsed corpus. The 100 million word text corpus is parsed in about 72 hours on a Pentium 200 with 80MB memory. There are about 22 million words in the parse trees. Figure 1 shows an example entry in the resulting collocation database. Each e n try contains of all the dependency triples that have the same word eld. The dependency triples in an entry are sorted rst in the order of the part of speech of their word elds, then the relation eld, and then the relative eld.",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 153,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 683,
"end": 691,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "COLLOCATION DATABASE",
"sec_num": null
},
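{
"text": "A minimal sketch of how such a collocation database could be represented, assuming the dependency triples are available as (word, part of speech, relation, relative) tuples; the function and variable names are illustrative and not the actual MINIPAR data structures:\n\nfrom collections import defaultdict\n\ndef build_entries(triples):\n    # Group dependency triples by their word field; sort each entry by the part of\n    # speech of the word, then the relation, then the relative, as described above.\n    entries = defaultdict(list)\n    for word, pos, relation, relative in triples:\n        entries[word].append((pos, relation, relative))\n    for word in entries:\n        entries[word].sort()\n    return entries\n\n# Dependency triples extracted from 'I have a brown dog' (as listed above).\ntriples = [\n    ('have', 'V', 'V:subj:N', 'I'), ('I', 'N', 'N:r-subj:V', 'have'),\n    ('have', 'V', 'V:comp1:N', 'dog'), ('dog', 'N', 'N:r-comp1:V', 'have'),\n    ('dog', 'N', 'N:jnab:A', 'brown'), ('brown', 'A', 'A:r-jnab:N', 'dog'),\n    ('dog', 'N', 'N:det:D', 'a'), ('a', 'D', 'D:r-det:N', 'dog'),\n]\nprint(build_entries(triples)['dog'])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COLLOCATION DATABASE",
"sec_num": null
},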
{
"text": "The symbols used in Figure 1 are explained as follows. Let X beamultiset. The symbolkXk stands for the number of elements in X and jXj stands for the number of distinct elements in X. F or example, a. kreview, V:comp1:N, acquisitionk is the number of times acquisition\" is used as the object of the verb review\". b. kreview, *, *k is the numberof dependency triples in which the word eld is review\" which can be a noun or a verb. c. kreview, V:jvab:A, *k is the number of times v review is pre-modi ed by an adverb. d. jreview, V:jvab:A, *j is the number of distinct adverbs that were used as a pre-modi er of v review . e. k*, *, *k is the total number of dependency triples, which i s t wice the number of dependency relationships in the parsed corpus. f. kreview, Nk is the number of times the word review\" is used as a noun. g. ",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 28,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "COLLOCATION DATABASE",
"sec_num": null
},
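{
"text": "The two counts can be computed directly from the stored triples: ||...|| is a multiset count and |...| a count of distinct elements. A minimal sketch, assuming the triples are (word, relation, relative) tuples and using '*' as a wildcard; the function names are illustrative:\n\ndef count(triples, word='*', relation='*', relative='*'):\n    # ||word, relation, relative||: number of matching triples.\n    return sum(1 for w, rel, r in triples\n               if word in ('*', w) and relation in ('*', rel) and relative in ('*', r))\n\ndef distinct(triples, word='*', relation='*', relative='*'):\n    # |word, relation, *|: number of distinct relatives among matching triples,\n    # as in usage (d) above.\n    return len({r for w, rel, r in triples\n                if word in ('*', w) and relation in ('*', rel) and relative in ('*', r)})\n\n# count(db, 'review', 'V:jvab:A')    -> times the verb 'review' is pre-modified by an adverb\n# distinct(db, 'review', 'V:jvab:A') -> number of distinct such adverbs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COLLOCATION DATABASE",
"sec_num": null
},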
{
"text": "Our named entity recognizer is a nite-state pattern matcher, which w as developed as part University of Manitoba MUC-6 e ort. The pattern matcher has access to both lexical items and surface strings in the input text. In MUC-7, we extended the earlier system in two w a ys:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY RECOGNITION",
"sec_num": null
},
{
"text": "We extracted recognition rules automatically from the collocation database to augment the manually coded pattern rules. We treated the collocational context of words in the input texts as features and used a Naive-Bayes classi er to categorized unknown proper names, which are then inserted into the systems lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY RECOGNITION",
"sec_num": null
},
{
"text": "A collocational context of a proper name is often a good indicator of its classi cation. For example, in the 22-million-word corpus, there are 33 instances where a proper noun is used as a prenominal modi er of managing director\". In 26 of the 33 instances, the proper name was classi ed as an organization. In the remaining 7 instances, the proper name was not classi ed. Therefore, if an unknown proper name is a prenominal modi er of managing director\", it is likely to refer to an organization. We extracted 3623 such contexts in which the frequency of one type of proper names is much greater as de ned by a rather arbitrary threshold than the frequencies of other types of proper names. If a proper name occurs in one of these contexts, we can then classify it accordingly. This use of the collocation database is equivalent to automatic generation of classi cation rules. In fact, some of the collocational contexts are equivalent to pattern-matching rules that were manually coded in the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY RECOGNITION",
"sec_num": null
},
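{
"text": "A sketch of how such contexts could be turned into classification rules; the dominance ratio and minimum count below are assumptions standing in for the 'rather arbitrary threshold' mentioned above, and the names are illustrative:\n\ndef reliable_contexts(freq_table, min_count=20, ratio=3.0):\n    # freq_table maps a collocational context to its proper-name class counts,\n    # e.g. {'managing director|N:nn:N': {'LOC': 0, 'ORG': 26, 'PER': 0}}.\n    rules = {}\n    for context, counts in freq_table.items():\n        best_class, best = max(counts.items(), key=lambda kv: kv[1])\n        rest = sum(counts.values()) - best\n        if best >= min_count and best >= ratio * max(rest, 1):\n            rules[context] = best_class\n    return rules\n\n# With the counts above, a proper name pre-modifying 'managing director'\n# yields the rule {'managing director|N:nn:N': 'ORG'}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY RECOGNITION",
"sec_num": null
},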
{
"text": "There are only a small number of collocational contexts in which the classi cation of a proper name can be reliably determined. In most cases, a clear decision cannot be reached based on a single collocational context. For example, among 1504 objects of convince\", 49 of them were classi ed as organizations, and 457 of them were classi ed as persons. This suggests that if a proper name is used as the object of convince\", it is likely that the name refers to a person. However, there is also signi cant probability that the name refers to an organization. Instead of making the decision based on this single piece of evidence, we collect from the input texts all the collocational contexts in which an unknown proper names occurred. We then classify the the proper name with a naive Bayes classi er, using the the set of collocation contexts as features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY RECOGNITION",
"sec_num": null
},
{
"text": "The naive Bayes classi er uses a table to store the frequencies of proper name classes in collocational contexts. Sample entries of the frequency table are shown in Table 2 . Each row in the table represents a collocation feature. The rst column is a collocation feature. Words with this feature have been observed to occur at position X in the second column. The third to fth columns contain the frequencies of di erent proper name classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 165,
"end": 172,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "NAMED ENTITY RECOGNITION",
"sec_num": null
},
{
"text": "Let C bea class of proper name C is one of LOC, ORG, or PER. Let F i beacollocation feature. Classi cation decision is made by nd the class C that maximizes Q k i=1 PF i jCP C, where F 1 ; F 2 ; : : : F k are the features of an unknown proper name. The probability PF i jC is estimated by m-estimates 5 , with m = 1 and p = 1 jC F j as the parameters, where C Fis the set of collocation features:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY RECOGNITION",
"sec_num": null
},
{
"text": "P m F i jC = k F i ; C k + 1 j C F j P f 2 C F k f;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY RECOGNITION",
"sec_num": null
},
{
"text": "Ck+ 1 where kF i ; C k denotes the frequency of words that belong to C in the context represented by f.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY RECOGNITION",
"sec_num": null
},
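{
"text": "A sketch of the classifier described by the formula above, with m = 1 and p = 1/|CF|. The paper does not spell out how P(C) is estimated, so the smoothed prior derived from the table's class totals is an assumption, as are the names:\n\nimport math\n\ndef classify(features, freq_table, classes=('LOC', 'ORG', 'PER')):\n    # freq_table: {feature: {class: ||feature, class||}} over the full feature set CF;\n    # features: the collocational contexts of the unknown proper name.\n    n_cf = len(freq_table)                                            # |CF|\n    totals = {c: sum(row.get(c, 0) for row in freq_table.values()) for c in classes}\n    grand = sum(totals.values())\n    best_class, best_score = None, float('-inf')\n    for c in classes:\n        score = math.log((totals[c] + 1.0) / (grand + len(classes)))  # smoothed prior P(C)\n        denom = totals[c] + 1.0                   # sum over f in CF of ||f, C||, plus 1\n        for f in features:\n            num = freq_table.get(f, {}).get(c, 0) + 1.0 / n_cf\n            score += math.log(num / denom)        # log P_m(F_i|C)\n        if score > best_score:\n            best_class, best_score = c, score\n    return best_class",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY RECOGNITION",
"sec_num": null
},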
{
"text": "Example: The walkthrough article contains several occurrences of the word Xichang\" which i s not found in our lexicon. The parser extracted the following set of collocation contexts from the formal testing corpus:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY RECOGNITION",
"sec_num": null
},
{
"text": "1. the Xichang base\", where Xichang is used as the prenominal modi er of base\" base|N:nn:N; 2. the Xichang site\", where Xichang is used as the prenominal modi er of site\" site|N:nn:N; 3. the site in Xichang\", from which t w o features are extracted: the object of in\" in|P:pcomp:N; indirect modi er of site\" via the preposition in\" site|N:pnp-in:N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NAMED ENTITY RECOGNITION",
"sec_num": null
},
{
"text": "The frequencies of the features are shown in Table 3 . These features allowed the naive Bayes classi er to correctly classify Xichang\" as a locale. Automatically acquiring lexical information on the y is an double edged sword. On the one hand, it allows classi cation of proper names that would otherwise beunclassi ed. On the other hand, since there is no human con rmation, the correctness of the automatically acquired lexical items cannot be guaranteed. When incorrect information is entered into the lexicon, a single error may propagate to many places. For example, during the development of our system, a combination of parser errors and the naive B a y es classi cation caused the word I\" to be added into the lexicon as a personal name. During the second pass, 143 spurious personal names were generated. Our NE evaluation results are shown in Table 4 . The pass1\" results are obtained by manually coded patterns in conjunction with the classi cation rules automatically extracted from the collocation database. With the naive Bayes classi cation, the recall is boosted by 6 percent while the precision is decreased by 2 with an overall increase of F-measure by 2.67. ",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 52,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 853,
"end": 860,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "NAMED ENTITY RECOGNITION",
"sec_num": null
},
{
"text": "Our coreference recognition subsystem used the same constraint-based model as our MUC-6 system. This model consists of an integrator and a set of independent modules, such a s s y n tactic patterns e.g., copula construction and appositive, string matching, binding theory, and centering heuristics. Each module proposes weighted assertions to the integrator. There are two t ypes of assertions. An equality assertion states that two noun phrases have the same referent. An inequality assertion states that two noun phrases must not have the same referent. The modules are allowed to freely contradict one another, or even themselves. The integrator use the weights associated with the assertions to resolve the con icts. A discourse model is constructed incrementally by the sequence of assertions that are sorted in descending order of their weights. When an assertion is consistent with the current model, the model is modi ed accordingly. Otherwise, the assertion is ignored and the model remains the same. One of the important factors to determine whether or not two noun phrases may refer to the same entity is their semantic compatibility. A personal pronoun must refer to a person. For example, the pronoun it\" may refer to an organization, an artifact, but not a person. A plane\" may refer to an aircraft. A disaster\" may refer to a crash. In MUC-6, we used the WordNet to determine the semantic compatibility and similarity b e t w een two noun phrases. However, without the ability to determine the intended sense of a word in the input text, we had to say that all senses are possible. 1 The problem with this approach is that the WordNet, like a n y other general purpose lexical resource, aims at providing broad-coverage. Consequently, it includes many usages of words that are very rare in our domain of interest. For example, one of the 8 potential senses of company\" in WordNet 1.5 is a visitor visitant\", which i s a h yponym of person\". This usage of the word practically never happens in newspaper articles. However, its existence prevents us to make assertions that personal pronouns like she\" cannot co-refer with company\".",
"cite_spans": [
{
"start": 1597,
"end": 1598,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},
{
"text": "In MUC-7, we developed a word sense disambiguation WSD module, which removes some of the implausible senses from the list of potential senses. It does not necessarily narrows down the possible senses of a word instance to a single one, however.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},
{
"text": "Given a polysemous word w in the input text, we take the following steps to narrow d o wn the possibilities for its intended meaning:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},
{
"text": "1. Retrieve collocational contexts of w from the parse trees of the input text. 2. For each collocational context of w, retrieve its set of collocates, i.e., the set of words that occurred in the same collocational context. Take the union of all the sets of collocates of w. 3. Take the intersection of the union and the set of similar words of w which are extracted automatically with the collocational database 4 . We call the words in the intersection selectors. 4 . Score the set of potential senses of w by computing the similarities between senses of w and senses of the selectors in the WordNet 3 . Remove the senses of w that received a score less than 75 of the highest score.",
"cite_spans": [
{
"start": 466,
"end": 467,
"text": "4",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},
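{
"text": "A sketch of steps 1-3, assuming the collocation database and the automatically generated thesaurus of [4] are available as plain dictionaries; the names are illustrative:\n\ndef selectors(w, contexts_of_w, collocation_db, similar_words):\n    # contexts_of_w: collocational contexts of w found in the input text (step 1).\n    # collocation_db: maps a context, e.g. 'jet|N:nn:N', to the set of words seen in it.\n    # similar_words: maps a word to a {similar word: similarity} dictionary.\n    collocates = set()\n    for context in contexts_of_w:\n        collocates |= collocation_db.get(context, set())              # step 2: union\n    sims = similar_words.get(w, {})\n    return {word: sims[word] for word in collocates if word in sims}  # step 3: intersection\n\n# For 'fighter' in 'fighter jets', contexts_of_w would contain 'jet|N:nn:N', and the result\n# would include words such as combat, reconnaissance, stealth and transport (see the example below).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},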
{
"text": "Example: consider the word ghter\" in the following context in the walkthrough article:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},
{
"text": "... in the multibillion-dollar deals for ghter jets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},
{
"text": "WordNet lists three senses of ghter\": combatant, battler, disrupter champion, hero, defender, protector ghter aircraft, attack aircraft",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},
{
"text": "The disambiguation of this word takes the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},
{
"text": "1. The parser recognized that ghter\" was used as the prenominal modi er of jet\". 2. Retrieve w ords from the collocation database that were also used as the prenominal modi er of jet\" shown in Table 5 . Freq is the frequency of the word in the context, LogL is the log likelihood ratio between the word and the context 1 . 3. Retrieve the similar words of ghter\" from an automatically generated thesaurus: jet .04 The number after a word is the similarity b e t w een the word and ghter\". The intersection of the similar word list and the above table consists of: combat 0.04; reconnaissance 0.05; stealth 0.05; transport 0.05; 4. Find a sense of ghter\" in WordNet that is most similar to senses of combat\", reconnaissance\", stealth\" or transport\". The ghter aircraft\" sense of ghter\" was selected.",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 200,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},
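{
"text": "A sketch of the sense-scoring step, using NLTK's WordNet interface; the Wu-Palmer measure here is only a stand-in for the similarity measure of [3], and the 75% cut-off follows step 4 of the procedure above (assumes nltk and its wordnet data are installed):\n\nfrom nltk.corpus import wordnet as wn\n\ndef plausible_senses(word, selectors, pos=wn.NOUN, keep_ratio=0.75):\n    # Score each sense of word by its best similarity to any sense of any selector,\n    # then keep the senses scoring at least keep_ratio of the highest score.\n    scored = []\n    for sense in wn.synsets(word, pos=pos):\n        best = max((sense.wup_similarity(s) or 0.0\n                    for sel in selectors for s in wn.synsets(sel, pos=pos)), default=0.0)\n        scored.append((sense, best))\n    top = max((score for _, score in scored), default=0.0)\n    return [sense for sense, score in scored if top and score >= keep_ratio * top]\n\n# plausible_senses('fighter', ['combat', 'reconnaissance', 'stealth', 'transport'])\n# should single out the 'fighter aircraft' sense, as in the example above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},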
{
"text": "We submitted two sets of results in MUC-7:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},
{
"text": "the nowsd\" result in which the senses of a word are chosen simply by c hoosing its rst two senses in the WordNet. the o cial result that employs the above w ord sense disambiguation algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},
{
"text": "The results are summarized in Table 6 . Although the di erence between the use of WSD and the baseline is quite small, it turns out to be statistically signi cant. In some of the 20 input texts that were scored in coreference evaluation, the WSD module did not make any di erence. However, whenever there was a di erence it was always an improvement. It is also worth noting that, with WSD, both the recall and precision are increased. ",
"cite_spans": [],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "COREFERENCE",
"sec_num": null
},
{
"text": "The use of collocational statistics greatly improved the performance of our named entity recognition system. Although collocation-based Word Sense Disambiguation lead only to a small improvement in coreference recognition, the di erence is nonetheless statistically signi cant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSIONS",
"sec_num": null
},
{
"text": "In hindsight, we probably should have just used the rst sense listed in the WordNet for each w ord.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is supported by NSERC Research Grant OGP121338 and a research contract awarded to Nalante Inc. by Communications Security Establishment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ACKNOWLEDGEMENTS",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Accurate methods for the statistics of surprise and coincidence",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "191",
"issue": "",
"pages": "61--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Dunning. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 191:61 74, March 1993.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Principle-based parsing without overgeneration",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of ACL 93",
"volume": "112",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. Principle-based parsing without overgeneration. In Proceedings of ACL 93, pages 112 120, Columbus, Ohio, 1993.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using syntactic dependency as local context to resolve word sense ambiguity",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ACL EACL-97",
"volume": "",
"issue": "",
"pages": "64--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. Using syntactic dependency as local context to resolve word sense ambiguity. In Proceedings of ACL EACL-97, pages 64 71, Madrid, Spain, July 1997.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic retrieval and clustering of similar words",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING-ACL '98",
"volume": "",
"issue": "",
"pages": "768--774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. Automatic retrieval and clustering of similar words. In Proceedings of COLING- ACL '98, pages 768 774, Montreal, Canada, August 1998.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Machine Learning",
"authors": [
{
"first": "Tom",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom M. Mitchell. Machine Learning. McGraw-Hill, 1997.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": ":subj:N I I N:r-subj:V have have V:comp1:N dog dog N:r-comp1:V have dog N:jnab:A brown brown A:r-jnab:N dog dog N:det:D a a D:r-det:N dog",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "An example entry in the Collocation Database i. kreview, *k is the total number of occurrences of the word review\" used as any category in the parsed corpus.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"num": null,
"text": "Dependency types",
"content": "<table><tr><td>Label N:det:D N:jnab:A N:nn:N V:comp1:N a v erb and its noun object Relationship between: a noun and its determiner a noun and its adjectival modi er a noun and its nominal modi er V:subj:N a v erb and its subject V:jvab:A a v erb and its adverbial modi er</td></tr></table>",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "Frequency of Collocation Features",
"content": "<table><tr><td>Collocation Feature control|N:r-comp1:V control|N:r-gen:N control|N:r-nn:N control|N:r-subj:V control|N:subj:N convene|N:r-comp1:V convene|N:r-subj:V convention|N:r-gen:N X's convention Context Pattern to control X X's control the X control X to control X is the control to convene X X to convene convention|N:r-nn:N the X convention</td><td>Frequency Counts LOC ORG PER 9 87 39 14 14 54 6 0 0 10 99 307 0 3 0 0 5 0 0 10 18 0 4 0 5 23 5</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "Frequencies of features of Xichang\"",
"content": "<table><tr><td>Collocation Feature base|N:nn:N site|N:nn:N in|P:pcomp:N site|N:pnp-in:N</td><td>Frequency Counts LOC ORG PER 77 19 0 26 16 34 35641 15630 0 7 0 0</td></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"text": "Evaluation results of the named entity task",
"content": "<table><tr><td>pass1 o cial</td><td>Precision Recall F-measure 89 79 83.70 87 85 86.37</td></tr></table>",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "Collocates of ghter\" as prenominal modi er of jet\"",
"content": "<table><tr><td>Word ghter ORG passenger Lear PROD Concorde Avianca stealth MiG series Aero ot Delta CANADIENS NUM-passenger BLACKHAWKS Egyptair trainer Advanced Tactical Fighter Qantas training Gulfstream PSA ground attack Alitalia PAL NUM Syrian</td><td>Freq LogL Word 80 449.56 NUM 187 59.56 air force 17 51.93 Airbus 6 37.79 Harrier 14 30.08 -bound 4 22.22 Mirage 3 15.93 widebody 4 10.43 turbofan 2 10.35 KAL 5 8.69 cargo 2 8.16 four-engine 3 7.53 steering 2 6.34 water 1 6.17 Dragonair 2 5.98 Skyhawk 1 5.65 transport 2 5.50 Coast guard 1 5.31 reconnaissance 1 5.05 Pan American 3 4.97 United Express 1 4.85 Swissair 1 4.69 ANA 1 4.54 NUM-seat 1 4.12 Lufthansa 1 3.89 KLM 1 3.76 whirlpool</td><td>Freq LogL 212 160.15 13 56.28 10 44.18 5 33.62 3 22.68 4 20.02 3 15.66 2 10.35 2 9.23 4 8.30 1 7.55 2 7.09 6 6.23 1 6.17 1 5.65 3 5.63 3 5.43 2 5.12 1 5.05 1 4.85 1 4.69 1 4.69 1 4.21 1 3.96 1 3.89 1 3.03</td></tr></table>",
"html": null
},
"TABREF7": {
"type_str": "table",
"num": null,
"text": "Coreference recognition results",
"content": "<table><tr><td>Precision Recall F-measure nowsd 62.7 57.5 60.0 o cial 64.2 58.2 61.1</td></tr></table>",
"html": null
}
}
}
}