{
"paper_id": "I13-1017",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:14:06.537802Z"
},
"title": "Romanization-based Approach to Morphological Analysis in Korean SMS Text Processing",
"authors": [
{
"first": "Youngsam",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Seoul National University",
"location": {
"addrLine": "Gwanak-1, Gwanak-ro, Gwanak-gu",
"settlement": "Seoul",
"country": "South Korea"
}
},
"email": "youngsamy@gmail.com"
},
{
"first": "Hyopil",
"middle": [],
"last": "Shin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Seoul National University",
"location": {
"addrLine": "/ Gwanak-1, Gwanak-ro, Gwanak-gu",
"settlement": "Seoul",
"country": "South Korea"
}
},
"email": "hpshin@snu.ac.kr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this research, we suggest an approach to retrieval-related tasks for Korean SMS text. Most of the previous approaches to such text used morphological analysis as the routine stage of the preprocessing workflow, functionally equivalent to POS tagging. However, such approaches suffer difficulties since Short Message Service language usually contains irregular orthography, atypically spelled words, unspaced segments, etc. Two experiments were conducted to measure how well these problems can be avoided with the transliteration of Korean to Roman letters. In summary, we will argue that such a Romanization-based retrieval method has several advantages since it provides an easier way to preprocess the data with a variety of linguistic rules.",
"pdf_parse": {
"paper_id": "I13-1017",
"_pdf_hash": "",
"abstract": [
{
"text": "In this research, we suggest an approach to retrieval-related tasks for Korean SMS text. Most of the previous approaches to such text used morphological analysis as the routine stage of the preprocessing workflow, functionally equivalent to POS tagging. However, such approaches suffer difficulties since Short Message Service language usually contains irregular orthography, atypically spelled words, unspaced segments, etc. Two experiments were conducted to measure how well these problems can be avoided with the transliteration of Korean to Roman letters. In summary, we will argue that such a Romanization-based retrieval method has several advantages since it provides an easier way to preprocess the data with a variety of linguistic rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this internet era, everyday people express opinions, comments, or sentiments; all of which can be accessed via the web. Particularly with the popularization of mobile computing devices, it has become easier than ever for people to share messages using social media services like Twitter or Facebook. However, such an environment brings new challenges for researchers who aim to analyze or interpret this linguistic data. One of the problems they encounter is that these written texts have a different form than those in published books or articles. They were often called as short message service language, txt-speak, chat-speak, etc. This new data source has received attentions from various fields and researchers working in the field of sentiment analysis and opinion-mining often find that dealing with such texts using traditional approaches is problematic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For agglutinative languages like Korean, since words are formed by combining lemmas and various affixes, morphological analysis is required to find the functional meaning of each component. Most previous studies used morphological analysis only to preprocess the text, but this approach exhibits several weaknesses when used on the data that is written in SMS-like languages. First of all, texts are often unspaced to save on typing time and sentence length (e.g., Twitter only allows 140 characters per tweet). Secondly, many words are not typed in the same way as their dictionary entries; the letters are changed or reduced to smaller units due to morpho-phonetic variation and abbreviation processes. This paper will propose a new approach to overcome these shortcomings for morphologically rich languages while making use of Korean case studies. This approach adopts Yale Romanization to transliterate Korean alphabets into Roman letters, which, due to the way it handles Korean characters, allows for a more intuitive and easier way of implementing the relevant rewriting rules and handling morph-phonetic changes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 2, the problems of morphological analysis will be described and the properties of Korean SMS language will be reviewed. This will be followed up in Section 3 by an introduction to the Romanization-based framework and the method of employing linguistic rules. Section 4 will detail two retrieval experiments which were prepared to show the effectiveness of this approach. The first experiment was designed to observe whether the Romanization method could handle unspaced texts. The second experiment explored the possibility of covering phonetic variations of the target words using a small set of linguistic rules.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transliteration methods have often been used for the task of keyword matching across different languages (Chen and Ku, 2002; Fujii and Ishikawa, 2001 ). In contrast, Han (2006) applied the transliteration method to perform part-of-speech tagging for Korean texts using Xerox Finite State Tool. Similarly, this paper proposes using the method not for Korean-English word equivalents but for Korean-to-varied Korean word detection.",
"cite_spans": [
{
"start": 105,
"end": 124,
"text": "(Chen and Ku, 2002;",
"ref_id": "BIBREF0"
},
{
"start": 125,
"end": 149,
"text": "Fujii and Ishikawa, 2001",
"ref_id": "BIBREF2"
},
{
"start": 166,
"end": 176,
"text": "Han (2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Research",
"sec_num": "2"
},
{
"text": "As the number of the users using social networking services increases rapidly, sentiment analysis or opinion mining capable of automatically extracting the sentiment orientation from online posts has been gaining attention from NLP researchers (Hu and Liu, 2004; Kim and Hovy, 2004; Wiebe, 2000; Pak and Paroubek, 2010) . As stated above, Korean is an agglutinative language and the chunks distinguished by space must be further separated into roots and affixes before they can be assigned a part-of-speech tag. This whole procedure is performed by morphological analysis and is critical to determining the meaning of a component. However, it is also known that such analysis can cause errors when not equipped with complete word entries to analyze the text. Such 'lack of lexicon' problems arise because after the morphological analysis categorizes all listed words in the sentence it classifies the remaining words as general nouns (Jang and Shin, 2010) . Consider the following.",
"cite_spans": [
{
"start": 244,
"end": 262,
"text": "(Hu and Liu, 2004;",
"ref_id": "BIBREF4"
},
{
"start": 263,
"end": 282,
"text": "Kim and Hovy, 2004;",
"ref_id": "BIBREF7"
},
{
"start": 283,
"end": 295,
"text": "Wiebe, 2000;",
"ref_id": "BIBREF14"
},
{
"start": 296,
"end": 319,
"text": "Pak and Paroubek, 2010)",
"ref_id": "BIBREF12"
},
{
"start": 934,
"end": 955,
"text": "(Jang and Shin, 2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problems of morphological analysis: lack of lexicon",
"sec_num": "2.1"
},
{
"text": "(1) \ub108\ubb34 \uc9c4\ubd80\ud55c \ub0b4\uc6a9 nemu cinpuha-n nayyong too stale-AD 1 content 'too stale contents' (2) \ub108\ubb34/a \uc9c4\ubd80/ncs \ud558/xpa \u3134/exm \ub0b4\uc6a9/nc nemu/a 2 cinpu/ncs ha/xpa n/exm nayyong/nc",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problems of morphological analysis: lack of lexicon",
"sec_num": "2.1"
},
{
"text": "1 Abbrebiates: AD(adnominal suffix), NM(nominative particle), IN(instrumental particle), SC(subordinative conjuctive suffix), CP(conjunctive particle), PST(past tense suffix), DC(declarative final suffix), RE(retrospective suffix), CN(conjectural suffix), PR(pronoun), PP(propositive suffix), AC(auxiliary conjunctive suffix), GE (genitive particle), LC(Locative particle)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problems of morphological analysis: lack of lexicon",
"sec_num": "2.1"
},
{
"text": "(3) \ub108/npp \ubb34\uc9c4/nc \ubd80/nc \ud55c/nc \ub0b4\uc6a9/nc ne/npp mucin/nc pu/nc han/nc nayyong/nc 'you Mujin(place name) wealth resentment contents' Sentence (3) is a misanalyzed version of sentence (1). The morphological analyzer's dictionary did not include the word entry ('cinbu') so the analyzer had to ignore the previous spacing and take the proper noun ('mucin') as a possible morpheme instead (Jang and Shin, 2010; p. 500) .",
"cite_spans": [
{
"start": 376,
"end": 397,
"text": "(Jang and Shin, 2010;",
"ref_id": "BIBREF5"
},
{
"start": 398,
"end": 405,
"text": "p. 500)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problems of morphological analysis: lack of lexicon",
"sec_num": "2.1"
},
{
"text": "As can be inferred from examples (1) ~ (3), typical morphological analysis consists of two stages: first, a sentence or clause is decomposed into relevant morphemes and then, second, the distinguished morphemes are assigned part-ofspeech tags which denote grammatical function. The reason why the morpheme separation stage precedes POS tagging is to avoid the sparse data problem caused by the multiplicity of morphological variants of the same stem (Han and Palmer, 2005) . However, the morpheme-based POS tagger in this process is vulnerable to irregular variations of word stems and, unfortunately, such variants are often found on the web. By the same reason it also produces erroneous results given unspaced texts since the complexity of the decomposing morphemes is very high.",
"cite_spans": [
{
"start": 450,
"end": 472,
"text": "(Han and Palmer, 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problems of morphological analysis: lack of lexicon",
"sec_num": "2.1"
},
{
"text": "This paper assumes that the morpheme analysis procedure is not feasible to process the SMS texts. In order to alleviate the pain, this research will focus on how one can extract the expected items from the linguistic data with which morpheme analysis does not work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problems of morphological analysis: lack of lexicon",
"sec_num": "2.1"
},
{
"text": "Socio-linguistic studies of the Korean SMS language have revealed that the irregular variations within the language are not arbitrarily irregular. The five distinguished properties have been summarized in Table 1 (Park, 2006; Lee, 2010; Kim, 2011) .",
"cite_spans": [
{
"start": 213,
"end": 225,
"text": "(Park, 2006;",
"ref_id": "BIBREF13"
},
{
"start": 226,
"end": 236,
"text": "Lee, 2010;",
"ref_id": "BIBREF9"
},
{
"start": 237,
"end": 247,
"text": "Kim, 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Properties of Korean SMS language",
"sec_num": "2.2"
},
{
"text": "Some of the properties in Table 1 can be found in English SMS texts as well, hinting that this set of the features may be due to common factors. 'Addition of sounds' is known as epenthesis phenomenon, existing in many languages including English; Crystal (2008) contended that many features of the texting language (logograms, initialisms, pictograms, abbreviations, nonstandard spellings) are not entirely new and have already been in writing systems for centuries. Ku nyeca-ka hakkyo-ey ka-ss-ta The woman(nyeca)-NM school(hakyo)-LC go-PST-DC 'The woman went to school' Linking sound or phonetic writing \uba4b\uc788\uc5b4 -> \uba38\uc2dc\uc368 mes-iss-e 'gorgeous' -> me-si-sse Reductions or shortenings Table 1 . Summarization of properties in Korean SMS text Ling and Baron (2007) reported that lexical shortening is the one of the most significant characteristics one can see in text messages. However, 'ignoring spacing' is the exception, since Korean suffixes can play as good predictors for the roles or the functions of the preceding stem. As such, removing spaces between phrases does not severely deteriorate the readers' understanding given the content. This study will focus on only three of the features presented in Table 1 : Unspacing, Linking, and lexical reduction. According to linguistic analysis (Park, 2006; Lee, 2010) , liaison and vowel reduction were very common among the phonetic variation of the words. Following that observation, this paper will incorporate a set of rules (presented in Park, 2006) in its experiment. Also, it will make use of the Romanization transliteration with the given phonological rules to cope with the lexical variations of the linguistic data.",
"cite_spans": [
{
"start": 247,
"end": 261,
"text": "Crystal (2008)",
"ref_id": "BIBREF1"
},
{
"start": 734,
"end": 755,
"text": "Ling and Baron (2007)",
"ref_id": "BIBREF11"
},
{
"start": 1288,
"end": 1300,
"text": "(Park, 2006;",
"ref_id": "BIBREF13"
},
{
"start": 1301,
"end": 1311,
"text": "Lee, 2010)",
"ref_id": "BIBREF9"
},
{
"start": 1487,
"end": 1498,
"text": "Park, 2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 1",
"ref_id": null
},
{
"start": 677,
"end": 684,
"text": "Table 1",
"ref_id": null
},
{
"start": 1202,
"end": 1209,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Properties of Korean SMS language",
"sec_num": "2.2"
},
{
"text": "\uba54\uc77c -> \uba5c meyil 'mail' -> meyl \uc11c\uc6b8 -> \uc124 sewul 'Seoul' -> sel Acronyms or abbreviation \uc560\ub2c8\uba54\uc774\uc158 -> \uc560\ub2c8 ay-ni-mey-i-syen 'animation' -> ay-ni \ube44\ubc00\ubc88\ud638 -> \ube44\ubc88 pi-mil-pen-ho 'password' -> pi-pen Addition of sounds \uc544\ube60 -> \uc555\ube60 a-ppa 'daddy' -> ap-ppa \uc5ec\ubcf4 -> \uc5ec\ubd09 ye-po 'honey' -> ye-pong",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Properties of Korean SMS language",
"sec_num": "2.2"
},
{
"text": "This section will provide the detailed contents of the lexical variation generation process. Basically, the generation process consists of the three main sub-modules: word-ending addition, vowelchange rules, and vowel omission. Each of these modules contains a set of linguistic rules. As a result, each target word in the list obtains its variants. These variants can then be used to check the input sentence for derived forms of the target word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Romanization-based morpheme retrieval process",
"sec_num": "3"
},
{
"text": "Yale Romanization is the transliteration systems developed at Yale University for Romanizing Mandarin, Cantonese, Korean, and Japanese. The Yale system of Korean 3 is generally used in linguistics and is adopted as the application of the transliteration process in this work. There are two other Romanization systems, Revised Romanization of Korean and McCune-Reischauer system, but since the emphasis of the systems is on how to transliterate entire Korean words to a string of elements of a pronounceable alphabet, only Yale Romanization has a one-to-one correspondence between Korean letters and English letters. Therefore, the other two systems are not considered in this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Yale Romanization",
"sec_num": "3.1"
},
{
"text": "The Korean alphabet, called Hangul, consists of blocks of multiple letters with each block representing a single syllable. For example, the first word of the Korean word, \ud55c\uae00 (hangul), can be decomposed into three letters ('\u314e'/'h', '\u314f'/'a', and '\u3134'/'n') though it is represented as a single character (or block) in Korean orthography. One advantage of using Yale Romanization is the ability to linearize the Korean syllables into a sequence of the phonemes and thus allowing the linking of alphabets with their sound properties. The examples in Table 1 show this phenomenon clearly. Although it seems '\uba4b\uc788\uc5b4'(mes-iss-e) and '\uba38\uc2dc\uc368'(me-si-sse) have quite different word forms, their romanized forms are identical; implicating that the latter is the phonetic writing version of the former. 4 Morphological analysis has difficulty when analyzing such phonetically written words since it makes distinctions based on Hangul syllables instead of the string of the letters. That is, 'mes-iss-e' and 'me-si-sse' are discriminated because the hyphens are taken as the boundary of the syllables even though this is not the case during pronunciation.",
"cite_spans": [
{
"start": 783,
"end": 784,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 544,
"end": 551,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Korean syllable",
"sec_num": "3.2"
},
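The syllable linearization described above can be sketched in code. The following is a minimal illustration (not the authors' implementation) of decomposing precomposed Hangul syllable blocks via the standard Unicode decomposition arithmetic; the jamo-to-Roman tables are my rendering of the Yale system's one-to-one letter correspondence. Note that Yale romanizes \ud55c\uae00 as 'hankul'.

```python
# Jamo-to-Yale tables in standard Unicode jamo order (assumed rendering).
LEADS = ["k", "kk", "n", "t", "tt", "l", "m", "p", "pp", "s", "ss", "",
         "c", "cc", "ch", "kh", "th", "ph", "h"]
VOWELS = ["a", "ay", "ya", "yay", "e", "ey", "ye", "yey", "o", "wa", "way",
          "oy", "yo", "wu", "we", "wey", "wi", "yu", "u", "uy", "i"]
TAILS = ["", "k", "kk", "ks", "n", "nc", "nh", "t", "l", "lk", "lm", "lp",
         "ls", "lth", "lph", "lh", "m", "p", "ps", "s", "ss", "ng", "c",
         "ch", "kh", "th", "ph", "h"]

def romanize(text: str) -> str:
    """Linearize Hangul syllable blocks into Yale-romanized letter strings."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:               # precomposed Hangul syllable range
            lead, rest = divmod(code, 588)  # 588 = 21 vowels * 28 tails
            vowel, tail = divmod(rest, 28)
            out.append(LEADS[lead] + VOWELS[vowel] + TAILS[tail])
        else:
            out.append(ch)                  # pass non-Hangul characters through
    return "".join(out)
```

With these tables, romanize('멋있어') and romanize('머시써') both yield 'mesisse', matching the identity the paragraph points out.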
{
"text": "In Korean grammar, verbs or adjectives do not come as independent morphemes, but always present along with an appropriate conjugation. This paper considers 17 word endings for the romanzied target words, following the standard grammar of Korean (~\ub2e4 '~ta', ~\uc740 '~un', ~\ub294 '~nun', ~\uace0 '~ko', ~\uae30 '~ki', ~\ub0d0 '~nya', ~\uc5c8\ub2e4 '~essta', ~\uc558\ub2e4 '~assta', ~\ub4e0\uc9c0 '~tunci', ~\ub358\uc9c0 '~tenci', ~\uc9c0 '~ci', ~\uac8c '~key', ~\uc74c '~um', ~\u3141 '~m', ~\uc2b5\ub2c8 '~supni', ~\uc74d\ub2c8 '~upni', ~\uad6c '~kwu').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conjugation of verbs and adjectives",
"sec_num": "3.3.1"
},
{
"text": "When the target lexical entry is given with its part-of-speech information, and if it belongs to the categories of noun or adjective, the 17 endings are added to the base word, generating 17 different word forms to be included in the lexicon paradigm set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conjugation of verbs and adjectives",
"sec_num": "3.3.1"
},
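The ending-addition step can be sketched as plain concatenation of the romanized stem with each of the 17 endings listed above. This is an illustrative sketch, not the authors' code: real Korean conjugation also involves stem-final allomorphy, which is omitted here, and `expand_paradigm` is a hypothetical helper name.

```python
# The 17 word endings from section 3.3.1, in romanized form.
ENDINGS = ["ta", "un", "nun", "ko", "ki", "nya", "essta", "assta", "tunci",
           "tenci", "ci", "key", "um", "m", "supni", "upni", "kwu"]

def expand_paradigm(stem: str) -> set:
    """Naively attach every ending to the romanized stem (assumed sketch)."""
    return {stem} | {stem + ending for ending in ENDINGS}
```

The resulting paradigm set (stem plus 17 variants) is what the retrieval step matches against.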
{
"text": "This paper accepted the five vowel variation rules from Park (2006) as follows:",
"cite_spans": [
{
"start": 56,
"end": 67,
"text": "Park (2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vowel contraction or change",
"sec_num": "3.3.2"
},
{
"text": "(4) 'o' + 'a' -> 'wa'. e.g., pho-hang ('Phohang') -> phwang 5 (5) 'wu' + 'e' -> 'ye'. e.g., swu-ep ('a class') -> syep (6) 'wu' + 'i' -> 'wi'. e.g., pwu-in ('wife') -> pwin (7) 'i' + 'a' -> 'ya'. e.g., ki-an ('draft') -> kyan (8) 'i' + 'e' -> 'ye'. e.g., ki-ek ('memory') -> kyek",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vowel contraction or change",
"sec_num": "3.3.2"
},
{
"text": "The rules in (4) ~ (8) are supplied to the 'vowel-change' function that takes the Romanized target word as input and returns its changed form as the output.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vowel contraction or change",
"sec_num": "3.3.2"
},
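Rules (5) ~ (8) can be sketched as plain substring rewrites over the romanized form, which is what Yale Romanization's linear letter strings make convenient. This is an illustrative sketch: rule (4) ('o' + 'a' -> 'wa', as in pho-hang -> phwang) additionally involves dropping the intervening 'h' and is omitted here, and the function name is an assumption.

```python
# Vowel-change rules (5)-(8) from section 3.3.2, as substring rewrites
# on the Yale-romanized string.
VOWEL_RULES = [
    ("wue", "ye"),  # (5) 'wu' + 'e' -> 'ye'
    ("wui", "wi"),  # (6) 'wu' + 'i' -> 'wi'
    ("ia", "ya"),   # (7) 'i' + 'a' -> 'ya'
    ("ie", "ye"),   # (8) 'i' + 'e' -> 'ye'
]

def vowel_change(word: str) -> str:
    """Apply each vowel-contraction rule, left to right."""
    for source, target in VOWEL_RULES:
        word = word.replace(source, target)
    return word
```

For example, vowel_change('swuep') yields 'syep' and vowel_change('kian') yields 'kyan', matching examples (5) and (7).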
{
"text": "The vowel reduction rules used in this paper aim to catch two types of shortening; the first type is concerned with the middle syllable of the whole word while the second works on the last syllable. As described in section 3.2, one Hangul syllable consists of several letters and, if the syllable is the target area of the reduction process, the contained vowel may be removed. Therefore, considering the first word of the Korean word, \ud55c\uae00 (hangul), Romanized as 'han', if one omits the vowel ('a') then the result would be 'hngul'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vowel reduction",
"sec_num": "3.3.3"
},
{
"text": "Previous studies showed that Korean SMS language has frequent vowel reductions (Park, 2006; Lee, 2010; Kim, 2011) with the middle and final syllables being the most common targets for reduction. The example sentence (9) presents the omission of the vowel in the middle syllable and (10) provides an example of reduction in the final syllable.",
"cite_spans": [
{
"start": 79,
"end": 91,
"text": "(Park, 2006;",
"ref_id": "BIBREF13"
},
{
"start": 92,
"end": 102,
"text": "Lee, 2010;",
"ref_id": "BIBREF9"
},
{
"start": 103,
"end": 113,
"text": "Kim, 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Vowel reduction",
"sec_num": "3.3.3"
},
{
"text": "(9) sa-mwu-sil ('office) -> sam-sil (10) key-im ('game') -> keym",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Vowel reduction",
"sec_num": "3.3.3"
},
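The two reduction patterns in (9) and (10) can be sketched as variant generation over pre-split syllables. The (onset, vowel, coda) input format and the function name are assumptions for illustration; as section 3.2 notes, Yale romanization makes this per-syllable split recoverable.

```python
# Sketch of vowel reduction (section 3.3.3): delete the vowel of one
# non-initial syllable at a time (middle or final), keeping onset and coda.
def reduce_vowels(syllables):
    """syllables: list of (onset, vowel, coda) romanized triples."""
    variants = set()
    for i in range(1, len(syllables)):  # first-syllable reduction is rare; skip it
        reduced = [
            onset + ("" if j == i else vowel) + coda
            for j, (onset, vowel, coda) in enumerate(syllables)
        ]
        variants.add("".join(reduced))
    return variants
```

For example, reducing sa-mwu-sil's middle syllable yields 'samsil', and reducing key-im's final syllable yields 'keym', as in (9) and (10).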
{
"text": "Sentiment analysis or opinion mining techniques that utilize retrieval tasks to obtain the training sets or corpus data have to extract subjective chunks or morphemes from the real-world data. In fact, if one chooses to use an annotated subjective word list for the study, one must still go through the process of confirming whether the items in the given list are in the raw input data. For that reason, an effective retrieval operation is required for research which needs to manage unorganized message texts. This section documents two experiments. The first is on the effectiveness of the proposed approach for unspaced tweet texts, while the second focuses on lexical variation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment",
"sec_num": "4"
},
{
"text": "A large tweet dataset was obtained from another study (Lee et al., 2011 (Ko and Shin, 2010) .",
"cite_spans": [
{
"start": 54,
"end": 71,
"text": "(Lee et al., 2011",
"ref_id": "BIBREF10"
},
{
"start": 72,
"end": 91,
"text": "(Ko and Shin, 2010)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "Since it is needed to construct the test dataset for the first experiment, 100 tweets were randomly selected from the tweet corpus and were manually annotated using the target sets found in the sentiment word list (as a result, 128 items were found in the 100 tweets).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "For the second experiment, because no annotated corpus of Korean SMS texts was available, 80 tweets from the corpus were manually collected, each containing at least one irregular word (92 types in total). The varied word in the tweet was marked as the target and its corresponding original entry was restored and recorded in the target lexicon list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "This experiment involved conducting a simple retrieval test for the selected 100 tweets using the sentiment word list as described above. To make a comparison with the proposed approach, the performance of the morphological analysis method also needed to be evaluated. As such, the data was tested using a Korean morphology analyzer. 6 For the experimental conditions, one factor (spacing) was manipulated, providing two types of test dataset for the different approaches. Since removing all the spaces from the sentences would have left the morphological analyzer inoperable, only the spaces around the target were deleted to create the unspaced condition. Table 2 shows the results of the retrieval experiment: how well each method found the target items and how many they picked incorrectly. The morpheme analysis-based approach barely chose any wrong targets, but it missed too many right 6 We used the Korean morpheme analyzer distributed from the 21st century Sejong Project (http://www.sejong.or.kr/dist_frame.php).",
"cite_spans": [
{
"start": 334,
"end": 335,
"text": "6",
"ref_id": null
},
{
"start": 893,
"end": 894,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 658,
"end": 665,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 1: Spaced vs. Unspaced",
"sec_num": "4.2"
},
{
"text": "answers (the precision was 27% higher than the precision of Romanization-based method, while marking 7% lower recall rate). Although the morpheme analysis-based approach showed higher performance on the spaced text (0.82 versus 0.73 on F-Measure), the method proved ineffective against unspaced texts (the recall, compared to the Romanization method, was severely decreased from 0.72 to 0.29).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: Spaced vs. Unspaced",
"sec_num": "4.2"
},
{
"text": "Following expectations, the Romanizationbased method was very robust against unspaced texts. This phenomenon is easily explained by considering that the method searched for the target strings without any regard for morpheme boundaries. In contrast, the morpheme analysisbased method took the incoming chunks and separated them into morphemes, but when text is unspaced the morpheme analyzer has to perform word-segmentation as well as morphemeanalysis. Thus one would anticipate an increase in errors when the input text is not properly spaced, because it would increase the complexity of the analysis process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: Spaced vs. Unspaced",
"sec_num": "4.2"
},
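The robustness argument can be illustrated with a toy matcher (hypothetical code, not the experimental system): because matching is plain substring search over the romanized stream, deleting spaces does not change what is found. The romanized strings below are illustrative.

```python
# Toy substring retrieval over romanized text: spacing is irrelevant
# because matching ignores morpheme and word boundaries entirely.
def find_targets(romanized_text: str, targets):
    unspaced = romanized_text.replace(" ", "")
    return {t for t in targets if t in unspaced}
```

Note the flip side: very short targets such as 'ak' also match inside 'akki', which is exactly the precision risk discussed in the text.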
{
"text": "However, unlike the predictions, the Romanization-based method recorded a lower precision than the morphological analysis-based approach. This result might be due to the set of short-length words in the target list. For example, words consisting of one or two letters such as 'ak' (both 'evil' or 'music' in English) may be erroneously identified in other words such as in 'ak-ki' ('musical instrument') since such short strings are likely to occur if only by chance. Thus, the Romanization-based method has a higher risk of errors if the system is supplied with such short terms. In the experiment above, the employed sentiment words were morphemes (not phrases or clauses), which is unfavorable for the Romanization approach. However, it is worthwhile to acknowledge that this is mitigated by employing the conjugation module, implying that welldefined rules can enhance performance. Table 3 . Results of retrieval tests for phonetically changed words",
"cite_spans": [],
"ref_spans": [
{
"start": 886,
"end": 893,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 1: Spaced vs. Unspaced",
"sec_num": "4.2"
},
{
"text": "Experiment 1 dealt with the cases where morpheme's grammatical category information was given, allowing the use of conjugation rule functions. Experiment 2 considers the situation in which specific words or expressions are given without POS tags and with phonetic variations of the targets which must be resolved before its original can be retrieved from the tweet data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Covering phonetic changes in the lexicon",
"sec_num": "4.3"
},
{
"text": "A retrieval experiment was conducted given the test data as described in section 4.1. Unlike Experiment 1, this experiment utilized the submodules of the lexical shortening (as stated in section 3.3). The result is displayed in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 2: Covering phonetic changes in the lexicon",
"sec_num": "4.3"
},
{
"text": "The numbers in bold of Table 3 refer to the highest values for the column (tied values are treated as the same). The conjugation function is not carried out here because of a lack of grammatical category information, thus only three kinds of functions were manipulated as above. While vowel-change rules only care about the replacement of vowels, vowel-reduction rules cope with the circumstances in which the vowels in the word are omitted, resulting in a shortened form. H-weak rule is the only component that relates to any consonant change phenomena in this system; removing the phoneme 'h' between word syllables under specific conditions (e.g., The Korean word, 'coh-a' meaning 'good' is reduced to 'co-a'). The notation [+/-] indicates whether the mentioned function was employed in the construction of the target paradigm set.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 30,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 2: Covering phonetic changes in the lexicon",
"sec_num": "4.3"
},
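The H-weak rule described above can be sketched as a single intervocalic rewrite on the romanized string. The exact conditioning environment in the authors' system is not spelled out, so the vowel-context regex here is an assumption for illustration.

```python
import re

def h_weak(word: str) -> str:
    """Drop 'h' when it stands between vowel letters (assumed condition)."""
    return re.sub(r"(?<=[aeiouwy])h(?=[aeiouwy])", "", word)
```

For example, h_weak('coha') yields 'coa', matching the 'coh-a' -> 'co-a' example, while a syllable-initial 'h' as in 'hankul' is left untouched.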
{
"text": "As can be seen in Table 3 , the full model (including all the three sub-modules) outperforms the other models, proving the research assumption that implementation of linguistic rules would cover a subset of the lexical variations in the SMS language. With capturing the case alone, even the weakest model (with neither vowelreduction/change nor H-weak functions) showed better results than those of morphological analy-sis. This is because it could find typeequivalence between tokens such as 'cwuk-um' (\uc8fd\uc74c, 'death') and 'cwu-kum' (\uc8fc\uae08, 'death'), obtaining the higher F-score (0.38 vs. 0.13).",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 2: Covering phonetic changes in the lexicon",
"sec_num": "4.3"
},
{
"text": "Obviously, the strongest module affecting the results is the vowel-reduction function. Remember that this function has two omission rules for the middle and the last syllables of the target items.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Covering phonetic changes in the lexicon",
"sec_num": "4.3"
},
{
"text": "The model (with vowel-reduction off and the other two functions on) clearly reveals the effect of this sub-module by exhibiting a rapid drop in F-score from 0.65 for the full-model to 0.40 for the current model. This effect is due to the high frequency of the vowel-reduction variations. Table 4 summarizes the types of variation in the test data, providing an explanation for the results in Table 3 . The proportion of phoneme reduction instances can be seen to be about a third of the total occurrences (36 out of 104, or approximately 35 percent), and it accounts for the steep decrease in F-score when the vowel-reduction function is not adopted. It is also worth noting that vowel-reduction in the first-syllable is quite rare; consistent with the linguistic analysis of empirical research (Park, 2006; p. 466) . The creation of vowel-reduced forms clearly had a large effect, lowering the accuracy from 0.96 to 0.80. This is because the shortened targets can also be found as sub-string of bigger words. However, this shortcoming does not weaken the efficiency of the whole approach. The morphological analysis-based retrieval method found only a few items in the data, which was expected considering that this analysis is dependent on a syllable-based word lexicon.",
"cite_spans": [
{
"start": 795,
"end": 807,
"text": "(Park, 2006;",
"ref_id": "BIBREF13"
},
{
"start": 808,
"end": 815,
"text": "p. 466)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 288,
"end": 295,
"text": "Table 4",
"ref_id": "TABREF2"
},
{
"start": 392,
"end": 399,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 2: Covering phonetic changes in the lexicon",
"sec_num": "4.3"
},
{
"text": "In short, though a small set of the linguistic rules were employed, and even using them is still far from achieving complete coverage, the results of the experiment implicate that such a rulebased system can capture at least part of the vast, complicated range of linguistic variations. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: Covering phonetic changes in the lexicon",
"sec_num": "4.3"
},
{
"text": "This paper confirmed that employing language-specific rules to handle SMS text can enhance the results of the retrieval process. Although morphological analysis rarely produces erroneous results on formally written texts such as newspaper articles, it performed much worse on the SMS data in our experiments, which motivated us to pursue an additional approach. Sentiment analysis and opinion mining generally involve searching for items defined as subjectively meaningful, but typical morphological analysis cannot handle the irregular variations found in web texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "5"
},
{
"text": "The reason morphological analysis fails on such data is clear: the analyzer's built-in stemmer and normalization process are not designed to cope with this kind of text. In this paper, however, we argued that dismissing such text as too ill-formed to be processed is premature. Instead, we proposed and implemented a set of generative rules to handle such texts in our experiments. Although those rules could be imported into a future morphological analyzer to give it broader coverage, suffice it to say that internet text is not as tractable for currently available analyzers as newspaper articles are.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "5"
},
{
"text": "In such cases, the proposed method offers an alternative way to preprocess Korean SMS texts, and similar approaches could be developed for other morphologically rich languages such as Japanese or Turkish. Text normalization is a very complicated task for languages of this type, and a well-organized module would be needed to handle SMS texts in any morpheme-level retrieval process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "5"
},
{
"text": "A Romanization transliteration scheme is used in this study because it naturally represents the phonetic properties of Korean syllables while providing a more intuitive way to apply a set of defined rules to the sequence. Since phonemic variation is quite common in SMS texts, as mentioned, this approach seems useful and practical given the results of the experiments. Although the test dataset is small, the sample set contained cases well known in the previous literature, and their linguistic patterns were consistent with earlier reports (Park, 2006; Lee, 2010; Kim, 2011) . However, to make the approach practical enough for field engineers, a large-scale corpus would be required to find the optimal set of transformation rules; this is left for future study owing to the lack of such annotated data at the time of writing.",
"cite_spans": [
{
"start": 580,
"end": 592,
"text": "(Park, 2006;",
"ref_id": "BIBREF13"
},
{
"start": 593,
"end": 603,
"text": "Lee, 2010;",
"ref_id": "BIBREF9"
},
{
"start": 604,
"end": 614,
"text": "Kim, 2011)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Conclusion",
"sec_num": "5"
},
{
"text": "POS tags: a(adverb), ncs(stative common noun), xpa(adjectivederived suffix), exm(adnominal suffix), nc(common noun), npp(personal pronoun)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://search.cpan.org/dist/Encode-Korean/lib/Encode/Korean/Yale.pm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It is worth noting that it becomes easier to apply rewriting rules to romanized Hangul text because of its linearity.5 Note that the 'H-weak' rule is applied here; it works by omitting any 'h' between sonorants. This rule helps to capture the typical linking-sound phenomenon in Korean.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Lee, W., Cha, M. and Yang, H. for their kind approval to use the Tweet corpus and the three anonymous reviewers for their helpful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An NLP & IR approach to topic detection. In Topic detection and tracking",
"authors": [
{
"first": "H.-H",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "L.-W",
"middle": [],
"last": "Ku",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "243--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, H.-H., and Ku, L.-W. (2002). An NLP & IR approach to topic detection Topic detection and tracking (pp. 243-264): Kluwer Academic Pub- lishers.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Txtng: The Gr8 Db8",
"authors": [
{
"first": "D",
"middle": [],
"last": "Crystal",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Crystal, D. (2008). Txtng: The Gr8 Db8, Oxford Uni- versity Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Japanese/English cross-language information retrieval: Exploration of query translation and transliteration",
"authors": [
{
"first": "A",
"middle": [],
"last": "Fujii",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ishikawa",
"suffix": ""
}
],
"year": 2001,
"venue": "Computers and the Humanities",
"volume": "35",
"issue": "4",
"pages": "389--420",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fujii, A., and Ishikawa, T. (2001). Japanese/English cross-language information retrieval: Exploration of query translation and transliteration. Computers and the Humanities, 35(4), 389-420.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Klex: A finite-state transducer lexicon of Korean",
"authors": [
{
"first": "N",
"middle": [
"R"
],
"last": "Han",
"suffix": ""
}
],
"year": 2006,
"venue": "Finite-State Methods and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "67--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han, N. R. (2006). Klex: A finite-state transducer lex- icon of Korean. In Finite-State Methods and Natu- ral Language Processing (pp. 67-77). Springer Berlin Heidelberg.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Paper presented at the Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hu, M., and Liu, B. (2004). Mining and summarizing customer reviews. Paper presented at the Proceed- ings of the tenth ACM SIGKDD international con- ference on Knowledge discovery and data mining, Seattle, WA, USA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Language-specific sentiment analysis in morphologically rich languages",
"authors": [
{
"first": "H",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Shin",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jang, H., and Shin, H. (2010). Language-specific sen- timent analysis in morphologically rich languages. Paper presented at the Proceedings of the 23rd In- ternational Conference on Computational Linguis- tics: Posters, Beijing, China.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Phonological and Morphological Characters of Junmal in Korean Net Lingo. Linguistics",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "61",
"issue": "",
"pages": "115--129",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, S. (2011). Phonological and Morphological Characters of Junmal in Korean Net Lingo. Lin- guistics. 61, 115-129. In Korean.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Determining the sentiment of opinions",
"authors": [
{
"first": "S.-M",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2004,
"venue": "of the 20th international conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, S.-M., and Hovy, E. (2004). Determining the sentiment of opinions. Paper presented at the Pro- ceedings of the 20th international conference on Computational Linguistics, Geneva, Switzerland.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Grading System of Movie Review through the Use of An Appraisal Dictionary and Computation of Semantic Segments",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Shin",
"suffix": ""
}
],
"year": 2010,
"venue": "Korean Journal of Cognitive Science",
"volume": "21",
"issue": "4",
"pages": "669--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ko, M., and Shin, H. (2010). Grading System of Mov- ie Review through the Use of An Appraisal Dic- tionary and Computation of Semantic Segments. Korean Journal of Cognitive Science. 21(4), 669- 696. In Korean.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Study of Phonological Features and Orthography in Computer Mediated Language",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2010,
"venue": "Linguistic Research",
"volume": "27",
"issue": "1",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, J. (2010). A Study of Phonological Features and Orthography in Computer Mediated Language. Linguistic Research, 27(1), 1-18. In Korean.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Network Properties of Social Media Influentials : Focusing on the Korean Twitter Community",
"authors": [
{
"first": "W",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cha",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "48",
"issue": "",
"pages": "44--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, W., Cha, M., Yang, H. (2011). Network Proper- ties of Social Media Influentials : Focusing on the Korean Twitter Community. Journal of Communi- cation Research. 48(2), 44-79. In Korean.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Text Messaging and IM: Linguistic Comparison of American College Data",
"authors": [
{
"first": "R",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "N",
"middle": [
"S"
],
"last": "Baron",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Language and Social Psychology",
"volume": "26",
"issue": "3",
"pages": "291--298",
"other_ids": {
"DOI": [
"10.1177/0261927X06303480"
]
},
"num": null,
"urls": [],
"raw_text": "Ling, R., and Baron, N.S. (2007). Text Messaging and IM: Linguistic Comparasion of American College Data. Journal of Language and Social Psychology, 26(3), 291-298, doi:10.1177/0261927X06303480",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Twitter as a Corpus for Sentiment Analysis and Opinion Mining",
"authors": [
{
"first": "A",
"middle": [],
"last": "Pak",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Paroubek",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pak, A., and Paroubek, P. (2010). Twitter as a Corpus for Sentiment Analysis and Opinion Mining. Paper presented at the Proceedings of the Seventh con- ference on International Language Resources and Evaluation (LREC'10), Valletta, Malta. http://www.lrec- conf.org/proceedings/lrec2010/pdf/385_Paper.pdf",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Phonological Study of PC Communication Language Noun. Korean Education",
"authors": [
{
"first": "C",
"middle": [],
"last": "Park",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "119",
"issue": "",
"pages": "457--486",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Park, C. (2006). A Phonological Study of PC Com- munication Language Noun. Korean Education, 119, 457-486. In Korean.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning subjective adjectives from corpora",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 17th National Conference on Artificial Intelligence (AAAI-2000), Austin",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wiebe, J. M. (2000, July 30-August 3). Learning sub- jective adjectives from corpora. Paper presented at the In Proceedings of the 17th National Confer- ence on Artificial Intelligence (AAAI-2000), Aus- tin, TX.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "spaced: '\uadf8\ub140\uac00 \ud559\uad50\uc5d0 \uac14\ub2e4')",
"type_str": "figure"
},
"TABREF2": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Types and counts of instances in test dataset of Exp. 2"
}
}
}
}