{
"paper_id": "I13-1041",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:13:31.209971Z"
},
"title": "How Noisy Social Media Text, How Diffrnt Social Media Sources?",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": "",
"affiliation": {
"laboratory": "NICTA Victoria Research Laboratory",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Melbourne",
"location": {}
},
"email": "paulcook@unimelb.edu.au"
},
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": "",
"affiliation": {
"laboratory": "NICTA Victoria Research Laboratory",
"institution": "",
"location": {}
},
"email": "mhlui@unimelb.edu.au"
},
{
"first": "Andrew",
"middle": [],
"last": "Mackinlay",
"suffix": "",
"affiliation": {
"laboratory": "NICTA Victoria Research Laboratory",
"institution": "",
"location": {}
},
"email": "andrew.mackinlay@nicta.com.au"
},
{
"first": "Li",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "NICTA Victoria Research Laboratory",
"institution": "",
"location": {}
},
"email": "li.wang.d@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "While various claims have been made about text in social media text being noisy, there has never been a systematic study to investigate just how linguistically noisy or otherwise it is over a range of social media sources. We explore this question empirically over popular social media text types, in the form of YouTube comments, Twitter posts, web user forum posts, blog posts and Wikipedia, which we compare to a reference corpus of edited English text. We first extract out various descriptive statistics from each data type (including the distribution of languages, average sentence length and proportion of out-ofvocabulary words), and then investigate the proportion of grammatical sentences in each, based on a linguistically-motivated parser. We also investigate the relative similarity between different data types.",
"pdf_parse": {
"paper_id": "I13-1041",
"_pdf_hash": "",
"abstract": [
{
"text": "While various claims have been made about text in social media text being noisy, there has never been a systematic study to investigate just how linguistically noisy or otherwise it is over a range of social media sources. We explore this question empirically over popular social media text types, in the form of YouTube comments, Twitter posts, web user forum posts, blog posts and Wikipedia, which we compare to a reference corpus of edited English text. We first extract out various descriptive statistics from each data type (including the distribution of languages, average sentence length and proportion of out-ofvocabulary words), and then investigate the proportion of grammatical sentences in each, based on a linguistically-motivated parser. We also investigate the relative similarity between different data types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Various claims have been made about social media text being \"noisy\" (Java, 2007; Becker et al., 2009; Yin et al., 2012; Preotiuc-Pietro et al., 2012; Eisenstein, 2013, inter alia) . However, there has been little effort to quantify the extent to which social media text is more noisy than conventional, edited text types. Moreover, social media comes in many flavours -including microblogs, blogs, and user-generated comments -and research has tended to focus on a specific data source, such as Twitter or blogs. A natural question to ask is how different the textual content of the myriad of social media types are from one another. This is an important first step towards building a generalpurpose suite of social media text processing tools.",
"cite_spans": [
{
"start": 68,
"end": 80,
"text": "(Java, 2007;",
"ref_id": "BIBREF20"
},
{
"start": 81,
"end": 101,
"text": "Becker et al., 2009;",
"ref_id": "BIBREF2"
},
{
"start": 102,
"end": 119,
"text": "Yin et al., 2012;",
"ref_id": "BIBREF36"
},
{
"start": 120,
"end": 149,
"text": "Preotiuc-Pietro et al., 2012;",
"ref_id": "BIBREF26"
},
{
"start": 150,
"end": 179,
"text": "Eisenstein, 2013, inter alia)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most research to date on social media text has used very shallow text processing (such as keyword-based time-series analysis), with natural language processing (NLP) tools such as partof-speech taggers and parsers tending to be disfavoured because of the perceived intractability of applying them to social media text. However, there has been little analysis quantifying just how hard it is to apply NLP to social media text, or how intractable the data is for NLP tools.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper addresses the two issues above. We build corpora from a variety of popular social media sources, including microblogs, usergenerated comments, user forums, blogs, and collaboratively-authored content. We then compare these corpora to more conventional texts through a variety of statistical and linguistic analyses to quantitatively assess the relative extent to which they are \"noisy\", and quantify similarities between them. Our findings indicate that there are certainly differences between social media sites, but that if we focus our attention on English text, there are striking similarities, and that even sources such as Twitter may be more \"NLPtractable\" than they are often portrayed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Natural language processing (NLP) has been applied to a wide range of applications on social media, especially Twitter. Numerous studies have attempted to go beyond simple keyword and burstiness models to identify real-world events from Twitter (Benson et al., 2011; Ritter et al., 2012; Petrovic et al., 2012) . Recent efforts have considered identifying user location based on the textual content of tweets (Wing and Baldridge, 2011; Roller et al., 2012; Han et al., 2012b) and user metadata (Han et al., 2013) . Related work has examined models of the relationships between words and locations for the purpose of identifying and studying regional linguistic variation (Eisenstein et al., 2010; Eisenstein et al., 2012) .",
"cite_spans": [
{
"start": 245,
"end": 266,
"text": "(Benson et al., 2011;",
"ref_id": "BIBREF3"
},
{
"start": 267,
"end": 287,
"text": "Ritter et al., 2012;",
"ref_id": "BIBREF31"
},
{
"start": 288,
"end": 310,
"text": "Petrovic et al., 2012)",
"ref_id": "BIBREF25"
},
{
"start": 409,
"end": 435,
"text": "(Wing and Baldridge, 2011;",
"ref_id": "BIBREF35"
},
{
"start": 436,
"end": 456,
"text": "Roller et al., 2012;",
"ref_id": "BIBREF32"
},
{
"start": 457,
"end": 475,
"text": "Han et al., 2012b)",
"ref_id": "BIBREF16"
},
{
"start": 494,
"end": 512,
"text": "(Han et al., 2013)",
"ref_id": "BIBREF17"
},
{
"start": 671,
"end": 696,
"text": "(Eisenstein et al., 2010;",
"ref_id": "BIBREF7"
},
{
"start": 697,
"end": 721,
"text": "Eisenstein et al., 2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Given the abundance of non-standard language on social media, including lexical variants (e.g. supa for super) and acronyms (e.g. smh for shaking my head), as well as genre-specific phenomena such as the usage of hashtags and mentions on Twitter, standard NLP tools cannot be immediately applied. Efforts to address this problem have taken two main approaches: modifying social media data to more closely resemble standard text, and building social media-specific tools. Lexical normalisation is the task of converting non-standard forms such as tlkin and touchdooown to their standard forms (talking and touchdown, respectively), in the hopes of making text more tractable to NLP (Eisenstein, 2013) . Approaches to normalisation have exploited various sources of information including the context in which a given instance of a lexical variant occurs (Gouws et al., 2011; Han and Baldwin, 2011) , although the best results to date have been achieved by automatically discovering lexical variant-standard form pairs from a large Twitter corpus (Han et al., 2012a) . This latter approach is particularly appealing because it allows for very fast normalisation, suitable for processing large volumes of text.",
"cite_spans": [
{
"start": 681,
"end": 699,
"text": "(Eisenstein, 2013)",
"ref_id": "BIBREF9"
},
{
"start": 852,
"end": 872,
"text": "(Gouws et al., 2011;",
"ref_id": "BIBREF13"
},
{
"start": 873,
"end": 895,
"text": "Han and Baldwin, 2011)",
"ref_id": "BIBREF14"
},
{
"start": 1044,
"end": 1063,
"text": "(Han et al., 2012a)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
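The dictionary-lookup normalisation described above reduces to a constant-time substitution per token, which is what makes it fast enough for large volumes of text. A minimal sketch in Python, with a toy stand-in for the Han et al. (2012a) dictionary (the entries below are illustrative, not the actual resource):

```python
# Toy stand-in for a lexical normalisation dictionary: each entry maps
# a non-standard lexical variant to its standard form.
NORM_DICT = {
    "tlkin": "talking",
    "touchdooown": "touchdown",
    "smh": "shaking my head",
}

def normalise(tokens):
    """Replace known lexical variants with their standard forms."""
    return [NORM_DICT.get(tok.lower(), tok) for tok in tokens]

print(normalise("tlkin about a touchdooown smh".split()))
# -> ['talking', 'about', 'a', 'touchdown', 'shaking my head']
```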
{
"text": "Conversely, Owoputi et al. (2013) and Ritter et al. (2011) developed part-of-speech (POS) taggers for Twitter that are better able to handle properties of this text type such as the higher outof-vocabulary rate compared to conventional text. Ritter et al. further developed a Twitter shallow parser and named-entity recogniser. Foster et al. (2011) evaluated standard parsers on social media data, and found them to perform particularly poorly on Twitter, but showed that their performance can be improved through a retraining strategy.",
"cite_spans": [
{
"start": 12,
"end": 33,
"text": "Owoputi et al. (2013)",
"ref_id": "BIBREF24"
},
{
"start": 38,
"end": 58,
"text": "Ritter et al. (2011)",
"ref_id": "BIBREF30"
},
{
"start": 328,
"end": 348,
"text": "Foster et al. (2011)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Another natural question to ask is how similar the characteristics of social media text are to those of other domains. More specifically, we may be interested in a numerical measurement of how closely the language used in one corpus matches that of another. Kilgarriff (2001) proposed a method for calculating both inter-corpus similarity and intra-corpus homogeneity, and language modelling has also been used as the basis for calculating how well one corpus models another. We discuss both of these options below.",
"cite_spans": [
{
"start": 258,
"end": 275,
"text": "Kilgarriff (2001)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "In order to evaluate the characteristics of text in different social media sources, we assembled the following datasets from across the spectrum of popular social media sites, varying in terms of document length, the number of authors/editors per document, and the level of text editing:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "TWITTER-1/2: micro-blog posts from Twitter, crawled using the Streaming API over two discrete time periods (TWITTER-1 = 22 September 2011 and TWITTER-2 = 22 February 2012) to investigate the temporal-specificity of the data -documents up to 140 characters in length, single author per document, and no facility for post-editing COMMENTS: comments from YouTube, based on the dataset of O'Callaghan et al. 2012, but expanded to include all comments on videos in the original dataset 1 -documents up to 500 characters in length, single author per document, and no facility for post-editing FORUMS: a random selection of posts from the top-1000 valid vBulletin-based forums in the Big Boards forum ranking 2 -documents of variable length (with a site-configurable restriction on maximum post length), single author per document, and optional facility for post-editing (depending on the site configuration) BLOGS: blog posts from tier one of the ICWSM-2011 Spinn3r dataset (Burton et al., 2011 ) -generally no restriction on length, single author per document, and facility for post-editing WIKIPEDIA: text from the body of documents in a dump of English Wikipedia -no restriction on document length, usually multiple authors/editors per document, and facility for post-editing As a reference corpus of English from a nonsocial media source, we also include documents from the British National Corpus (Burnard, 2000) : BNC: all documents from the written portion of the British National Corpus (BNC) -documents of up to 45K words from a variety of sources, mostly by a single author, with editing.",
"cite_spans": [
{
"start": 968,
"end": 988,
"text": "(Burton et al., 2011",
"ref_id": "BIBREF5"
},
{
"start": 1396,
"end": 1411,
"text": "(Burnard, 2000)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "We present the number of documents and average document size for each dataset in Table 1 ",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 88,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3"
},
{
"text": "We first pre-process each dataset using the following standardised methodology. 3 In the case that the corpus comes with tokenisation and POS information, we strip this and perform automatic preprocessing to ensure consistency in the quality and composition of the tokens/tags. We first apply langid.py (Lui and Baldwin, 2012) -an off-the-shelf language identifier -to each document to detect its majority language. We then extract all documents identified as English for further processing.",
"cite_spans": [
{
"start": 80,
"end": 81,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Pre-processing",
"sec_num": "4"
},
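As a concrete illustration of the filtering step just described, a minimal sketch using the real langid.py API (langid.classify returns a (language, confidence) pair); the document list is invented for illustration:

```python
import langid  # pip install langid

def filter_english(documents):
    """Keep only documents whose majority language is identified as English."""
    english = []
    for doc in documents:
        lang, score = langid.classify(doc)  # e.g. ('en', -54.4)
        if lang == "en":
            english.append(doc)
    return english

docs = ["love this #awesome view out of my window", "das ist ein deutscher Satz"]
print(filter_english(docs))  # -> only the first document survives
```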
{
"text": "We next perform sentence tokenisation. In line with the findings of Read et al. (2012a) based on experimentation with a selection of sentence tokenisers over user-generated content, we sentencetokenise with tokenizer. 4 Finally, we tokenise and POS tag the datasets using TweetNLP 0.3 (Owoputi et al., 2013) .",
"cite_spans": [
{
"start": 218,
"end": 219,
"text": "4",
"ref_id": null
},
{
"start": 285,
"end": 307,
"text": "(Owoputi et al., 2013)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Pre-processing",
"sec_num": "4"
},
{
"text": "One particularly important property of TweetNLP is that it identifies content such as mentions, URLs, and emoticons that aren't typically syntactic elements of a sentence. More-over, it is able to distinguish between usages of hashtags which are elements of a sentence, and those which are not, as in the case of Examples (1) and (2) below, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Pre-processing",
"sec_num": "4"
},
{
"text": "(1) love this #awesome view out of my window",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Pre-processing",
"sec_num": "4"
},
{
"text": "(2) Swinging with the besties! #awesome",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Pre-processing",
"sec_num": "4"
},
{
"text": "We POS tag each sentence in each corpus using TweetNLP, and remove all tokens identified as non-linguistic. 5 In our examples above, e.g., we remove the token #awesome from (2) but not (1).",
"cite_spans": [
{
"start": 108,
"end": 109,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Pre-processing",
"sec_num": "4"
},
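A sketch of this token-removal step under stated assumptions: the input is a list of (token, tag) pairs as produced by a TweetNLP-style tagger (the example tags are illustrative), and the tags listed in footnote 5 are treated as non-linguistic:

```python
# TweetNLP tags treated as non-linguistic (see footnote 5):
# '#' = non-syntactic hashtag, '@' = mention, '~' = discourse marker,
# 'U' = URL, 'E' = emoticon
NON_LINGUISTIC_TAGS = {"#", "@", "~", "U", "E"}

def strip_non_linguistic(tagged_sentence):
    """Drop tokens whose POS tag marks them as non-syntactic material."""
    return [(tok, tag) for tok, tag in tagged_sentence
            if tag not in NON_LINGUISTIC_TAGS]

# Example (2): the trailing hashtag is not part of the sentence, so it is removed.
tagged = [("Swinging", "V"), ("with", "P"), ("the", "D"),
          ("besties", "N"), ("!", ","), ("#awesome", "#")]
print(strip_non_linguistic(tagged))
```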
{
"text": "To normalise for corpus size, we extract a random sample of sentences totalling 5M tokens from each dataset, and further partition this sample into 5 equal-sized sub-corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Pre-processing",
"sec_num": "4"
},
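A sketch of this size-normalisation step, assuming sentences are already token lists; partitioning by sentence count approximates the equal-sized (1M-token) sub-corpora:

```python
import random

def sample_and_partition(sentences, total_tokens=5_000_000, n_parts=5):
    """Randomly sample sentences up to a token budget, then split the
    sample into n_parts (approximately) equal-sized sub-corpora."""
    random.shuffle(sentences)
    sample, count = [], 0
    for sent in sentences:
        if count >= total_tokens:
            break
        sample.append(sent)
        count += len(sent)
    part_size = len(sample) // n_parts
    return [sample[i * part_size:(i + 1) * part_size] for i in range(n_parts)]
```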
{
"text": "In this section, we analyse the characteristics of the language used in the respective data sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "First, we analyse the breakdown of languages found in each data source based on the predictions of langid.py, as detailed in Table 2 . Note that these results are based on the full datasets without language filtering. Also note that WIKIPEDIA and the BNC are intended to be monolingual English collections, and that FORUMS has a strong bias towards English due to the crawling methodology. For the remainder of the datasets, we expect the results to be representative of the language bias of the respective data sources.",
"cite_spans": [],
"ref_spans": [
{
"start": 125,
"end": 132,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Language Mix",
"sec_num": "5.1"
},
{
"text": "All data sources are dominated by English documents, although in the case of TWITTER-1/2, less than half of the documents are in English (en), with Japanese being the second most popular language, and strong representation from languages such as Portuguese (pt), Spanish (es), Indonesian (id), Dutch (nl) and Malay (ms). These results are largely consistent with earlier studies on the language distribution in Twitter (Semiocast, 2010; Hong et al., 2011) .",
"cite_spans": [
{
"start": 419,
"end": 436,
"text": "(Semiocast, 2010;",
"ref_id": "BIBREF33"
},
{
"start": 437,
"end": 455,
"text": "Hong et al., 2011)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Mix",
"sec_num": "5.1"
},
{
"text": "That the BNC is predicted to be 100% English is a validation of the accuracy of langid.py. WIKIPEDIA is more interesting, with tiny numbers (around 0.2% in total) of documents which are predicted to have a majority language of Latin (la), German (de), etc. Manual analysis of these As such, the language tags are actually overwhelmingly correct, 6 in the sense that the predominant language is indeed that indicated. The implications of these results for text processing of social media are profound. While English clearly dominates the data, there are significant amounts of non-English text in all our social media sources, with Twitter being the most extreme case: the majority of documents are not English. Additionally for TWITTER-1/2 and COM-MENTS, instances of all 97 languages modelled by langid.py were found in the dataset. At the very least, this underlines the importance of language identification as a means of determining the source language in cases where language-specific NLP tools are to be used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Mix",
"sec_num": "5.1"
},
{
"text": "Next, we analyse the lexical composition of the English documents. Hereafter, we focus exclusively on the 5M token subsample of each dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Analysis",
"sec_num": "5.2"
},
{
"text": "In Table 3 we present simple statistics on the average word length (in characters) and average sentence length (in words) for each dataset. We also analyse the relative occurrence of out-ofvocabulary (OOV) words, based on the GNU aspell dictionary v0.60.6.1 with case folding. We strip all \"online-specific\" markup (hashtags, user mentions and URLs), on the basis of the output of the POS tagger (i.e. any hashtags etc. that are not part of the syntactic structure of the text are removed). 7 To filter out common mis- 6 With the notable exception of Latin, where many of the documents contain lists of names from a variety of European language backgrounds, but little that is identifiable as Latin. Table 3 : Average word and sentence length, and proportion of OOV words (optionally with lexical normalisation) in each dataset spellings/social media usages such as ur for your, we optionally include a pre-step of \"lexical normalisation\" based on the dictionary of Han et al. (2012a) which gives the standard form for a given OOV, based on combined information from slang dictionaries and automatically-learnt correspondences (\"+norm\"). There is remarkably little difference in word length between datasets, but sentence length in TWITTER-1/2 and COMMENTS is around half that of the more formal WIKIPEDIA/BNC and also BLOGS, with FORUMS splitting the difference. The average word length for all of TWITTER-1/2, COMMENTS and FORUMS is remarkably similar. In terms of OOV words, FORUMS and COMMENTS are comparable to WIKIPEDIA and the BNC (where OOV words are dominated by proper nouns), and actually lower than BLOGS. TWITTER-1/2 has the highest OOV rate of all our datasets, although when we include lexical normalisation, it is only 2-4 percentage points higher than the other social media sources. The impact of lexical normalisation is most noticeable for TWITTER-1/2 and COMMENTS, indicating that informal text and \"ad hoc\" spellings are more prevalent in them than the other data sources.",
"cite_spans": [
{
"start": 491,
"end": 492,
"text": "7",
"ref_id": null
},
{
"start": 519,
"end": 520,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": null
},
{
"start": 700,
"end": 707,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lexical Analysis",
"sec_num": "5.2"
},
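A sketch of the OOV computation with the optional "+norm" pre-step, assuming the dictionary has been dumped to a plain word list (e.g. via aspell dump master) and that markup tokens have already been stripped:

```python
def oov_rate(tokens, dictionary_words, norm_dict=None):
    """Proportion of tokens not found in the dictionary, with case folding.
    If norm_dict is given, apply lexical normalisation first ('+norm')."""
    vocab = {w.lower() for w in dictionary_words}
    if norm_dict is not None:
        tokens = [norm_dict.get(t.lower(), t) for t in tokens]
    oov = sum(1 for t in tokens if t.lower() not in vocab)
    return oov / len(tokens) if tokens else 0.0
```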
{
"text": "about one third; it also reduced the OOV rate in COMMENTS by around 10%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Analysis",
"sec_num": "5.2"
},
{
"text": "These results are broadly in agreement with the findings of Rello and Baeza-Yates (2012), who used the relative frequency of a set of common misspellings to estimate the lexical quality of social media, and arrived at the conclusion that social media text is on average \"cleaner\" than many other web sites, and becoming progressively cleaner over time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Analysis",
"sec_num": "5.2"
},
{
"text": "A natural next question to ask is how grammatical the text in each of our datasets is. We measure this using the English Resource Grammar (ERG: Flickinger et al. (2000) ), a broad-coverage HPSG-based grammar. One aspect of the ERG which makes it highly suited to testing grammaticality is that, unlike most NLP parsers, it is \"generative\", i.e. it explicitly models grammaticality, and is developed relative to both positive and negative test items to ensure it does not \"overgenerate\". We can therefore use it as a proxy for grammaticality judgements. Further to this, the ERG makes active use of 'root conditions' to indicate how much the grammar had to relax particular assumptions to produce a derivation for the sentence. These conditions vary on the dimensions of: (1) strict versus informal (corresponding to whether the sentence uses standard punctuation and capitalisation, or not); and (2) full sentences vs. fragments (e.g. isolated noun phrases). All of our experiments are based on the '1111' version of the grammar, and the CHEAP parsing engine (Callmeier, 2002) .",
"cite_spans": [
{
"start": 144,
"end": 168,
"text": "Flickinger et al. (2000)",
"ref_id": "BIBREF10"
},
{
"start": 1059,
"end": 1076,
"text": "(Callmeier, 2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammaticality",
"sec_num": "5.3"
},
{
"text": "In order to maximise the lexical coverage of the ERG, we used POS-conditioned generic lexical types (Adolphs et al., 2008) , whereby a generic lexical entry is created for each OOV word on the basis of the output of a POS tagger. To accommodate the TweetNLP POS tags, we manually created a new set of mappings to generic lexical entries. 8 We additionally re-tokenised the output of TweetNLP to split apart contractions (e.g. won't and possessive clitics (e.g. Kim's), in line with the Penn Treebank tokenisation strategy.",
"cite_spans": [
{
"start": 100,
"end": 122,
"text": "(Adolphs et al., 2008)",
"ref_id": "BIBREF0"
},
{
"start": 338,
"end": 339,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grammaticality",
"sec_num": "5.3"
},
{
"text": "In Table 4 we show the results of parsing 4000 randomly selected English sentences from each corpus using the ERG with the parsing setup we have described. 9 The highest parse coverage was observed for 8 The original POS mappings are based on the Penn POS tagset and have been tested and fine-tuned extensively; our POS mapping for the TweetNLP POS tags is much more immature, and has potentially contributed to a slight loss in Table 4 : Percentage of sentences (from a random sample of 4000) which can be parsed using the ERG, broken down by the root condition of the top-ranked parse for the parseable sentences the BNC (with only 23.2% not able to be parsed), closely followed by WIKIPEDIA. At the other end of the scale are the TWITTER-1 and TWITTER-2 variants, which are most likely to contain ungrammatical sentences, with up to 15% more sentences unable to be parsed, although this is only marginally higher than FORUMS and BLOGS, all of which contain more ungrammatical text than COMMENTS. Between these extremes are some mild surprises -BLOGS and FORUMS, which contain data produced in a more enduring and editable format than TWITTER-1/2, are, according to our metric, only marginally more grammatical. In addition, the non-editable and relatively transient COMMENTS sentences are substantially more likely to be grammatical than either FO-RUMS or BLOGS. A large part of this effect however is probably due to the sentence length differences between the corpora. As shown in Table 3 , the average length for COMMENTS is only 10.5 words, on par with TWITTER-1/2 (but according to this evidence, more carefully constructed). However, in the longer sentences of FORUMS and BLOGS, there is more scope for the authors to introduce anomalies into the text, increasing the chances of the sentence being unparseable.",
"cite_spans": [
{
"start": 156,
"end": 157,
"text": "9",
"ref_id": null
},
{
"start": 202,
"end": 203,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": null
},
{
"start": 429,
"end": 436,
"text": "Table 4",
"ref_id": null
},
{
"start": 1486,
"end": 1493,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grammaticality",
"sec_num": "5.3"
},
{
"text": "Examining the root conditions related to formality and fragment analyses also gives us imparser accuracy relative to the \"canonical\" ERG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grammaticality",
"sec_num": "5.3"
},
{
"text": "9 Note that the reported results differ significantly from the coverage numbers reported by Read et al. (2012b) for WIKIPEDIA in particular, through a combination of a generic sentence and word tokenisation strategy, a potentially loweraccuracy/coarser-grained POS tagger, and a less mature POS mapping. The impact of these factors should be constant across datasets, however, meaning that the relative numbers should be truly indicative of the relative grammaticality of their text content. Table 5 : A breakdown of the causes of parser error in the unparseable sentences for each dataset portant insights into the corpora. WIKIPEDIA has by far the highest percentage of sentences with a strict, non-fragment analysis, much higher (10.3%) than the BNC even. In the less-edited corpora, of those sentences which are able to be parsed, a much smaller percentage are strict or full analyses, with the strict fragment analyses being most prevalent in TWITTER-1/2 and informal full analyses dominating in COMMENTS and FO-",
"cite_spans": [
{
"start": 92,
"end": 111,
"text": "Read et al. (2012b)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 492,
"end": 499,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Grammaticality",
"sec_num": "5.3"
},
{
"text": "The spread of grammaticality numbers is perhaps not as large as we might have expected. There are a few reasons for this. One important point is that the POS-tagging using a very coarsegrained tag set has inevitably led to very general lexical entries for handling unknown words (so we are not even sure of the person, number and tense associated with a verb). This means that it is possible that some of the sentences have been spuriously identified as grammatical, since the very general types for unknown words give the grammar great flexibility in fitting a parse tree to the sentence, even where it may not be appropriate. Secondly it is possible that this POS-tagging has led to an explosion in the number of candidate parse trees, which can paradoxically lead to a small decrease in coverage over longer sentences of WIKIPEDIA and the BNC due to the risk of exceeding the parser timeout or memory limit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RUMS.",
"sec_num": null
},
{
"text": "In line with Baldwin et al. (2005) , it is possible to shed further light on the quality of the grammaticality judgements, and also stylistic differences between the different corpora by manually analysing the unparseable sentences according to the cause for parse failure, as being due to: (1) a syntactic fragment (not explicitly handled by the ERG; e.g. noun and verb phrase fragments such as coming home ..., or standalone expletives such as wow!); (2) a preprocessor error (e.g. in sentence tokenisation or POS tagging); (3) parser resource limitations (usually caused by the grammar running out of edges in the chart, or timing out); (4) ungrammatical strings; (5) extragrammatical strings (where non-linguistic phenomena associated with the written presentation, such as bullets or HTML markup, interface unpredictably with the grammar); and (6) lexical and constructional gaps in the grammar. A breakdown of parse failure over a randomly-selected subset of 100 unparseable sentences from each of the datasets, carried out by the first author, is presented in Table 5 .",
"cite_spans": [
{
"start": 13,
"end": 34,
"text": "Baldwin et al. (2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 1067,
"end": 1075,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "RUMS.",
"sec_num": null
},
{
"text": "It is clear that the proportion of ungrammatical sentences is an underestimate, especially in the case of WIKIPEDIA and the BNC, where more than half of the \"failures\" are attributable to lexical or constructional gaps in the grammar. 10 For TWITTER-1/2, COMMENTS and FO-RUMS, however, the proportion of grammar gaps and genuinely ungrammatical inputs, respectively, is roughly equivalent, suggesting that our original findings for these datasets are an underestimate of the actual proportion of ungrammaticality, but that the relative proportions are accurate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RUMS.",
"sec_num": null
},
{
"text": "An additional observation that can be made from Table 5 is that preprocessing is a common cause of parser failure, primarily in sentence tokenisation (with multiple sentences tokenised into one), and to a lesser extent in POS tagging, and also occasional errors in language identification (only observed in the TWITTER-1/2 data).",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "RUMS.",
"sec_num": null
},
{
"text": "Reflecting back over the combined results for grammaticality, we can conclude that there is less syntactic \"noise\" in social media text than we may have thought, and that while there is no doubt that WIKIPEDIA and the BNC contain less ungrammatical text than the other datasets, the relative occurrence of syntactically \"noisy\" text in TWITTER-1/2, COMMENTS, FORUMS and BLOGS is relatively constant. There is partial concordance between these findings and those of Hu et al. (2013) , who examined textual properties of Twitter messages relative to blog, email, chat and SMS data, and also a newspaper. They found that Twitter messages were more formal than chat and SMS messages, and more similar to email and blog text in composition, in making prevalent use of standard constructions and lexical items.",
"cite_spans": [
{
"start": 465,
"end": 481,
"text": "Hu et al. (2013)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "RUMS.",
"sec_num": null
},
{
"text": "So far we have examined the datasets individually. Next, we investigate how intrinsically similar in style and content the different datasets are. One possible approach to this is via calculation of \"corpus similarity\" between datasets and homogeneity within a given dataset. In one of the very few studies of measuring corpus similarity and homogeneity, Kilgarriff (2001) introduced a method based on \u03c7 2 , whereby we measure the similarity of two corpora as the \u03c7 2 statistic over the 500 most frequent words in the union of the corpora. One limitation of Kilgarriff's method is that it is only applicable to corpora of equal size. We therefore use the five 1M token sub-corpora of each corpus in these experiments. We measure the similarity of two corpora as the average pairwise \u03c7 2 similarity between their sub-corpora. We measure the homogeneity (or self-similarity) of a corpus as the average pairwise similarity between sub-corpora of that corpus.",
"cite_spans": [
{
"start": 355,
"end": 372,
"text": "Kilgarriff (2001)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Similarity",
"sec_num": "5.4"
},
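A sketch of the χ² statistic as described above, for two equal-sized corpora represented as token lists; lower values indicate greater similarity, and homogeneity is obtained by applying the same function to pairs of sub-corpora from a single corpus:

```python
from collections import Counter

def chi2_similarity(corpus_a, corpus_b, n_words=500):
    """Kilgarriff (2001)-style chi-squared statistic over the most
    frequent words in the union of two equal-sized corpora."""
    counts_a, counts_b = Counter(corpus_a), Counter(corpus_b)
    union = counts_a + counts_b
    top_words = [w for w, _ in union.most_common(n_words)]
    chi2 = 0.0
    for w in top_words:
        o_a, o_b = counts_a[w], counts_b[w]
        expected = (o_a + o_b) / 2.0  # corpora are the same size
        chi2 += (o_a - expected) ** 2 / expected
        chi2 += (o_b - expected) ** 2 / expected
    return chi2
```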
{
"text": "The homogeneity scores in Table 7 indicate that social media text exhibits greater lexical variation (as captured by the \u03c7 2 measure), and hence is less homogenous, than conventional text types (i.e. the BNC). TWITTER-1 and TWITTER-2 are the most homogenous of the social media corpora, and only fractionally less homogeneous than the BNC. BLOGS are much more diverse than the other corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Corpus Similarity",
"sec_num": "5.4"
},
{
"text": "Turning to corpus similarity (Table 6 ), there appears to be a roughly linear partial ordering in the relative similarity between the corpora: TWITTER-1/2 \u2261 COMMENTS < FORUMS < BLOGS < BNC < WIKIPEDIA (as in, TWITTER-1/2 is more similar to FORUMS than it is to BLOGS, but more similar to BLOGS than the BNC, etc.). This can be observed most clearly based on the similarities of each other corpus with TWITTER-1/2 and WIKIPEDIA, but the similarities for all corpus pairs are consistent with this ordering. TWITTER-1 and TWITTER-2 are unsurprisingly the most similar corpora, with very little difference between the two crawls, suggesting that despite the real-time nature of Twitter, it is reasonably homogenous across time. We further see relatively high similarity between TWITTER-1/2 and COMMENTS, COMMENTS and FORUMS, and FO-RUMS and BLOGS.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "(Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Corpus Similarity",
"sec_num": "5.4"
},
{
"text": "Language modelling provides an alternative to estimating corpus similarity, based on the perplexity of a dataset relative to language models (LMs) trained over other partitions from the same dataset, and also partitions from other datasets. We construct open-vocabulary trigram LMs with Good-Turing smoothing using SRILM (Stolcke, 2002) .",
"cite_spans": [
{
"start": 321,
"end": 336,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modelling",
"sec_num": "5.5"
},
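A sketch of this setup using SRILM's command-line tools from Python (ngram-count and ngram are the real SRILM programs; Good-Turing discounting is SRILM's default, and -unk gives an open-vocabulary model; file names are placeholders):

```python
import subprocess

def train_lm(train_file, lm_file):
    """Train an open-vocabulary trigram LM with SRILM."""
    subprocess.run(["ngram-count", "-order", "3", "-unk",
                    "-text", train_file, "-lm", lm_file], check=True)

def perplexity(lm_file, test_file):
    """Report the perplexity of held-out text under a trained LM."""
    out = subprocess.run(["ngram", "-order", "3", "-unk",
                          "-lm", lm_file, "-ppl", test_file],
                         capture_output=True, text=True, check=True)
    return out.stdout  # SRILM prints the ppl statistics to stdout

train_lm("blogs.sub1-4.txt", "blogs.lm")         # train on 4 sub-corpora
print(perplexity("blogs.lm", "blogs.sub5.txt"))  # evaluate on the held-out one
```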
{
"text": "For each corpus, we build 5 LMs, each trained on 4 of the available 1M word sub-corpora. We then use each model to compute the perplexity of the held-out sub-corpus from the same dataset, as well as all sub-corpora for each other dataset. The results are presented in Figure 1 in the form of a box plot over the 5 LMs for a given training corpus (although the variance between LMs is usually so slight that the \"box\" appears as a single point).",
"cite_spans": [],
"ref_spans": [
{
"start": 268,
"end": 276,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Modelling",
"sec_num": "5.5"
},
{
"text": "For each corpus, the lowest perplexity is obtained on the held-out data from the same corpus. Overall, these results agree with those for \u03c7 2 similarity, namely that there is a continuous spectrum, with TWITTER-1/2 and WIKIPEDIA as the two extremes and COMMENTS, FORUMS, BLOGS and the BNC between them, in that order. Along this spectrum, COMMENTS, FORUMS and BLOGS form a cluster, as do the BNC and WIKIPEDIA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Modelling",
"sec_num": "5.5"
},
{
"text": "Combining these results with those for \u03c7 2 similarity, it would appear that FORUMS is the \"median\" dataset, which is most similar to each of the other datasets. The implication of this finding is that if a statistical model (e.g. for POS dis- TWITTER-1 Figure 1 : Trigram language model perplexity of test data conditioned on a given training corpus ambiguation or parse selection) were to be trained on a single data type and applied to the other data types, FORUMS should be the data of choice, as with the possible exception of WIKIPEDIA, it models the other corpora remarkably well. It also provides evidence for why methods based on edited text collections such as the BNC or newswire text perform badly on Twitter data.",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 261,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Language Modelling",
"sec_num": "5.5"
},
{
"text": "In this paper we built corpora from a range of social media sources -microblogs, usergenerated comments, user forums, blogs, and collaboratively-authored content -and compared them to each other and a reference corpus of more-conventional, edited documents. We applied a variety of linguistic and statistical analyses, specifically: language distribution, lexical analysis, grammaticality, and two measures of corpus similarity. This is the first such systematic analysis and cross-comparison of social media text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We analysed the widely-acknowledged \"noisiness\" of social media texts from a number of perspectives, and showed that NLP techniques -including language identification, lexical normalisation, and part-of-speech tagging -can be applied to reduce this noise. Crucially, this suggests that although social media is indeed noisy, it appears to be possible to use NLP to \"cleanse\" it. Moreover, once rendered less noisy, (further) NLP on social media text might be more tractable than it is conventionally believed to be.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "In terms of grammaticality, our results confirmed that social media text is less grammatical than edited text, but also suggested that the disparity is relatively small.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Both of our more-general corpus similarity analyses revealed that the social media text types analysed appear to lie on a continuum of similarity ranging from microblogs to collaborativelyauthored content. This finding has potential implications on the selection of training data for statistical NLP systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We post-processed the retrieved comments to remove all occurrences of the unicode U+FEFF codepoint (which is used either as a byte order marker at the start of messages or a zerowidth no-break space when used elsewhere in a document), as it skewed the results of the language identification.2 http://rankings.big-boards.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Acknowledging that superior domain-specific approaches exist, e.g. for Wikipedia sentence tokenisation using markup(Flickinger et al., 2010).4 http://www.cis.uni-muenchen.de/ wastl/misc/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Specifically, we remove any token tagged as #, @,\u02dc, U, or E.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Or, indeed, shortcomings in our POS mapping for unknown words, although again, the relative impact of this should be constant across datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "NICTA is funded by the Australian government as represented by Department of Broadband, Communication and Digital Economy, and the Australian Research Council through the ICT centre of Excellence programme.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Some fine points of hybrid natural language parsing",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Adolphs",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Callmeier",
"suffix": ""
},
{
"first": "Berthold",
"middle": [],
"last": "Crysmann",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Kiefer",
"suffix": ""
}
],
"year": 2008,
"venue": "European Language Resources Association (ELRA), editor, Proc. of the 6th International Conference on Language Resources and Evaluation (LREC 2008)",
"volume": "",
"issue": "",
"pages": "1380--1387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Adolphs, Stephan Oepen, Ulrich Callmeier, Berthold Crysmann, Dan Flickinger, and Bernd Kiefer. 2008. Some fine points of hybrid natural language parsing. In European Language Resources Association (ELRA), editor, Proc. of the 6th International Conference on Language Resources and Evaluation (LREC 2008), pages 1380-1387, Marrakech, Morocco.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Beauty and the beast: What running a broad-coverage precision grammar over the BNC taught us about the grammar -and the corpus",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Ara",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2005,
"venue": "Linguistic Evidence: Empirical, Theoretical, and Computational Perspectives",
"volume": "",
"issue": "",
"pages": "49--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Baldwin, Emily M. Bender, Dan Flickinger, Ara Kim, and Stephan Oepen. 2005. Beauty and the beast: What running a broad-coverage preci- sion grammar over the BNC taught us about the grammar -and the corpus. In Stephan Kepser and Marga Reis, editors, Linguistic Evidence: Empiri- cal, Theoretical, and Computational Perspectives, pages 49-69. Mouton de Gruyter, Berlin, Germany.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Event identification in social media",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Becker",
"suffix": ""
},
{
"first": "Mor",
"middle": [],
"last": "Naaman",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Gravano",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th International Workshop on the Web and Databases",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Becker, Mor Naaman, and Luis Gravano. 2009. Event identification in social media. In Proceedings of the 12th International Workshop on the Web and Databases (WebDB 2009), Providence, USA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Event discovery in social media feeds",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Benson",
"suffix": ""
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "389--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Benson, Aria Haghighi, and Regina Barzilay. 2011. Event discovery in social media feeds. In Proc. of the 49th Annual Meeting of the Associa- tion for Computational Linguistics: Human Language Technologies, pages 389-398, Portland, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "User Reference Guide for the British National Corpus",
"authors": [
{
"first": "Lou",
"middle": [],
"last": "Burnard",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lou Burnard. 2000. User Reference Guide for the British National Corpus. Technical report, Oxford University Computing Services.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The ICWSM 2011 Spinn3r dataset",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Burton",
"suffix": ""
},
{
"first": "Niels",
"middle": [],
"last": "Kasch",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Soboroff",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 5th International Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Burton, Niels Kasch, and Ian Soboroff. 2011. The ICWSM 2011 Spinn3r dataset. In Proceedings of the 5th International Conference on Weblogs and Social Media (ICWSM 2011), Barcelona, Spain.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "PET -a platform for experimentation with efficient HPSG processing techniques",
"authors": [
{
"first": "Ulrich",
"middle": [],
"last": "Callmeier",
"suffix": ""
}
],
"year": 2002,
"venue": "Collaborative Language Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulrich Callmeier. 2002. PET -a platform for experimentation with efficient HPSG processing techniques. In Stephan Oepen, Dan Flickinger, Jun'ichi Tsujii, and Hans Uszkoreit, editors, Collaborative Language Engineering. CSLI Publications, Stanford, USA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A latent variable model for geographic lexical variation",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1277--1287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein, Brendan O'Connor, Noah A. Smith, and Eric P. Xing. 2010. A latent variable model for geographic lexical variation. In Proc. of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1277-1287, Cambridge, MA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mapping the geographical diffusion of new words",
"authors": [
{
"first": "Jacod",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacod Eisenstein, Brendan O'Connor, Noad A. Smith, and Eric P. Xing. 2012. Mapping the geographical diffusion of new words. Arxiv preprint arXiv, 1210.5268.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "What to do about bad language on the internet",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2013)",
"volume": "",
"issue": "",
"pages": "359--369",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Eisenstein. 2013. What to do about bad language on the internet. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2013), pages 359-369, Atlanta, USA.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On building a more efficient grammar by exploiting types",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Natural Language Engineering (Special Issue on Efficient Processing with HPSG)",
"volume": "6",
"issue": "1",
"pages": "15--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Flickinger, Stephan Oepen, Hans Uszkoreit, and Jun'ichi Tsujii. 2000. On building a more efficient grammar by exploiting types. Journal of Natural Language Engineering (Special Issue on Efficient Processing with HPSG), 6(1):15-28.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "WikiWoods: Syntacto-semantic annotation for English wikipedia",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Gisle",
"middle": [],
"last": "Ytrest\u00f8ol",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Flickinger, Stephan Oepen, and Gisle Ytrest\u00d8ol. 2010. WikiWoods: Syntacto-semantic annotation for English wikipedia. In Proceedings of the 7th International Conference on Language Resources and Evaluation (LREC 2010), Valletta, Malta.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "From news to comment: Resources and benchmarks for parsing the language of web 2.0",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Ozlem",
"middle": [],
"last": "Cetinoglu",
"suffix": ""
},
{
"first": "Joachim",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"Le"
],
"last": "Roux",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Deirdre",
"middle": [],
"last": "Hogan",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of the 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "893--901",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer Foster, Ozlem Cetinoglu, Joachim Wagner, Joseph Le Roux, Joakim Nivre, Deirdre Hogan, and Josef van Genabith. 2011. From news to com- ment: Resources and benchmarks for parsing the language of web 2.0. In Proc. of the 5th International Joint Conference on Natural Language Pro- cessing, pages 893-901, Chiang Mai, Thailand.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unsupervised mining of lexical variants from noisy text",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Metzler",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the First workshop on Unsupervised Learning in NLP",
"volume": "",
"issue": "",
"pages": "82--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gouws, Dirk Hovy, and Donald Metzler. 2011. Unsupervised mining of lexical variants from noisy text. In Proceedings of the First workshop on Unsupervised Learning in NLP, pages 82-90, Edinburgh, UK.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Lexical normalisation of short text messages: Makn sens a #twitter",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL HLT 2011)",
"volume": "",
"issue": "",
"pages": "368--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han and Timothy Baldwin. 2011. Lexical normalisation of short text mes- sages: Makn sens a #twitter. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Tech- nologies (ACL HLT 2011), pages 368-378, Portland, USA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatically constructing a normalisation dictionary for microblogs",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "421--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han, Paul Cook, and Timothy Baldwin. 2012a. Automatically constructing a normalisation dictionary for microblogs. In Proceedings of the Joint Con- ference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning 2012 (EMNLP-CoNLL 2012), pages 421-432, Jeju, Korea.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Geolocation prediction in social media data by finding location indicative words",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012)",
"volume": "",
"issue": "",
"pages": "1045--1062",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han, Paul Cook, and Timothy Baldwin. 2012b. Geolocation prediction in social media data by finding location indicative words. In Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012), pages 1045-1062, Mumbai, India.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A stacking-based approach to twitter user geolocation prediction",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013): System Demonstrations",
"volume": "",
"issue": "",
"pages": "7--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han, Paul Cook, and Timothy Baldwin. 2013. A stacking-based approach to twitter user geolocation prediction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013): Sys- tem Demonstrations, pages 7-12, Sofia, Bulgaria.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Language matters in Twitter: A large scale study",
"authors": [
{
"first": "Lichan",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Gregorio",
"middle": [],
"last": "Convertino",
"suffix": ""
},
{
"first": "Ed",
"middle": [
"H"
],
"last": "Chi",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 5th International Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lichan Hong, Gregorio Convertino, and Ed H. Chi. 2011. Language matters in Twitter: A large scale study. In Proceedings of the 5th International Con- ference on Weblogs and Social Media (ICWSM 2011), Barcelona, Spain.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Dude, srsly?: The surprisingly formal nature of Twitters language",
"authors": [
{
"first": "Yuheng",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Kartik",
"middle": [],
"last": "Talamadupula",
"suffix": ""
},
{
"first": "Subbarao",
"middle": [],
"last": "Kambhampati",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th International Conference on Weblogs and Social Media (ICWSM 2013)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuheng Hu, Kartik Talamadupula, and Subbarao Kambhampati. 2013. Dude, srsly?: The surprisingly formal nature of Twitters language. In Proceedings of the 7th International Conference on Weblogs and Social Media (ICWSM 2013), Boston, USA.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A framework for modeling influence, opinions and structure in social media",
"authors": [
{
"first": "Akshay",
"middle": [],
"last": "Java",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 22nd Annual Conference on Artificial Intelligence (AAAI-07)",
"volume": "",
"issue": "",
"pages": "1933--1934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akshay Java. 2007. A framework for modeling influence, opinions and struc- ture in social media. In Proceedings of the 22nd Annual Conference on Artificial Intelligence (AAAI-07), pages 1933-1934.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Comparing corpora",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
}
],
"year": 2001,
"venue": "International Journal of Corpus Linguistics",
"volume": "6",
"issue": "1",
"pages": "97--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Kilgarriff. 2001. Comparing corpora. International Journal of Corpus Linguistics, 6(1):97-133.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "langid.py: An off-the-shelf language identification tool",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL 2012) Demo Session",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Proceedings of the 50th Annual Meeting of the As- sociation for Computational Linguistics (ACL 2012) Demo Session, pages 25-30, Jeju, Republic of Korea.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Network analysis of recurring YouTube spam campaigns",
"authors": [
{
"first": "Derek",
"middle": [],
"last": "O'Callaghan",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Harrigan",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Carthy",
"suffix": ""
},
{
"first": "P\u00e1draig",
"middle": [],
"last": "Cunningham",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 6th International Conference on Weblogs and Social Media (ICWSM 2012)",
"volume": "",
"issue": "",
"pages": "531--534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Derek O'Callaghan, Martin Harrigan, Joe Carthy, and P\u00e1draig Cunningham. 2012. Network analysis of recurring YouTube spam campaigns. In Pro- ceedings of the 6th International Conference on Weblogs and Social Media (ICWSM 2012), pages 531-534, Dublin, Ireland.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Improved part-of-speech tagging for online conversational text with word clusters",
"authors": [
{
"first": "Olutobi",
"middle": [],
"last": "Owoputi",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "Oconnor",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2013)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olutobi Owoputi, Brendan OConnor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technologies (NAACL HLT 2013), Atlanta, USA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Using paraphrases for improving first story detection in news and Twitter",
"authors": [
{
"first": "Sasa",
"middle": [],
"last": "Petrovic",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lavrenko",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "338--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sasa Petrovic, Miles Osborne, and Victor Lavrenko. 2012. Using paraphrases for improving first story detection in news and Twitter. In Proc. of the 2012 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, pages 338-346, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Trendminer: An architecture for real time analysis of social media text",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Preotiuc-Pietro",
"suffix": ""
},
{
"first": "Sina",
"middle": [],
"last": "Samangooei",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Gibbins",
"suffix": ""
},
{
"first": "Mahesan",
"middle": [],
"last": "Niranjan",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the ICWSM 2013 Workshop on Real-Time Analysis and Mining of Social Streams",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Preotiuc-Pietro, Sina Samangooei, Trevor Cohn, Nicholas Gibbins, and Mahesan Niranjan. 2012. Trendminer: An architecture for real time anal- ysis of social media text. In Proceedings of the ICWSM 2013 Workshop on Real-Time Analysis and Mining of Social Streams, Dublin, Ireland.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Sentence boundary detection: A long solved problem?",
"authors": [
{
"first": "Jonathon",
"middle": [],
"last": "Read",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Dridan",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Lars",
"middle": [
"J\u00f8orgen"
],
"last": "Solberg",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING 2012: Posters",
"volume": "",
"issue": "",
"pages": "985--994",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathon Read, Rebecca Dridan, Stephan Oepen, and Lars J\u00d8orgen Solberg. 2012a. Sentence boundary detection: A long solved problem? In Proceed- ings of COLING 2012: Posters, pages 985-994, Mumbai, India.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The WeSearch corpus, treebank, and treecache -a comprehensive sample of user-generated content",
"authors": [
{
"first": "Jonathon",
"middle": [],
"last": "Read",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Dridan",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)",
"volume": "",
"issue": "",
"pages": "1829--1835",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathon Read, Dan Flickinger, Rebecca Dridan, Stephan Oepen, and Lilja \u00d8vrelid. 2012b. The WeSearch corpus, treebank, and treecache -a com- prehensive sample of user-generated content. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012), pages 1829-1835, Istanbul, Turkey.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Social media is NOT that bad! the lexical quality of social media",
"authors": [
{
"first": "Luz",
"middle": [],
"last": "Rello",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Baeza-Yates",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 6th International Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luz Rello and Ricardo Baeza-Yates. 2012. Social media is NOT that bad! the lexical quality of social media. In Proceedings of the 6th International Conference on Weblogs and Social Media (ICWSM 2012), Dublin, Ireland.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Named entity recognition in tweets: An experimental study",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1524--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recog- nition in tweets: An experimental study. In Proceedings of the 2011 Con- ference on Empirical Methods in Natural Language Processing (EMNLP 2011), pages 1524-1534, Edinburgh, UK.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Open domain event extraction from Twitter",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "1104--1112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Mausam, Oren Etzioni, and Sam Clark. 2012. Open domain event extraction from Twitter. In Proceedings of the 18th ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Mining, pages 1104- 1112, Beijing, China.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Supervised text-based geolocation using language models on an adaptive grid",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Speriosu",
"suffix": ""
},
{
"first": "Sarat",
"middle": [],
"last": "Rallapalli",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Wing",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "1500--1510",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Roller, Michael Speriosu, Sarat Rallapalli, Benjamin Wing, and Jason Baldridge. 2012. Supervised text-based geolocation using language mod- els on an adaptive grid. In Proceedings of the Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning 2012 (EMNLP-CoNLL 2012), pages 1500-1510, Jeju Island, Korea.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Half of messages on twitter are not in English -Japanese is the second most used language",
"authors": [
{
"first": "",
"middle": [],
"last": "Semiocast",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Semiocast. 2010. Half of messages on twitter are not in English -Japanese is the second most used language. Technical report, Semiocast.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "SRILM -an extensible language modeling toolkit",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the International Conference on Spoken Language Processing",
"volume": "2",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke. 2002. SRILM -an extensible language modeling toolkit. In Proc. of the International Conference on Spoken Language Processing, volume 2, pages 901-904, Denver, USA.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Simple supervised document geolocation with geodesic grids",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Wing",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Baldridge",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "955--964",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Wing and Jason Baldridge. 2011. Simple supervised document ge- olocation with geodesic grids. In Proc. of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technolo- gies, pages 955-964, Portland, USA.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Using social media to enhance emergency situation awareness",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Lampert",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Cameron",
"suffix": ""
},
{
"first": "Bella",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Power",
"suffix": ""
}
],
"year": 2012,
"venue": "IEEE Intelligent Systems",
"volume": "27",
"issue": "6",
"pages": "52--59",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jie Yin, Andrew Lampert, Mark Cameron, Bella Robinson, and Robert Power. 2012. Using social media to enhance emergency situation awareness. IEEE Intelligent Systems, 27(6):52-59.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>Corpus</td><td>Documents</td><td>Average words per document</td></tr><tr><td colspan=\"2\">TWITTER-1 1 000 000</td><td>11.8 \u00b1 8.3</td></tr><tr><td colspan=\"2\">TWITTER-2 1 000 000</td><td>11.6 \u00b1 8.1</td></tr><tr><td>COMMENTS</td><td>874 772</td><td>15.8 \u00b1 18.6</td></tr><tr><td>FORUMS</td><td>1 000 000</td><td>23.2 \u00b1 29.3</td></tr><tr><td>BLOGS</td><td>1 000 000</td><td>147.7 \u00b1 339.3</td></tr><tr><td>WIKIPEDIA</td><td>200 000</td><td>281.2 \u00b1 363.8</td></tr><tr><td>BNC</td><td>3141</td><td>31 609.0 \u00b1 30 424.3</td></tr></table>",
"html": null,
"type_str": "table",
"text": ".",
"num": null
},
"TABREF1": {
"content": "<table><tr><td>: Number of documents and average doc-</td></tr><tr><td>ument size (mean\u00b1standard deviation, in words)</td></tr><tr><td>for each dataset</td></tr><tr><td>TWITTER-1/2 and COMMENTS, predictably, con-</td></tr><tr><td>tain the shortest documents, with 12-16 words per</td></tr><tr><td>document on average. Forum posts are around</td></tr><tr><td>twice the length on average (but the spread of</td></tr><tr><td>document lengths is considerably greater). Blog</td></tr><tr><td>posts, on average, contain around ten times the</td></tr><tr><td>number of words of a forum post, with a greater</td></tr><tr><td>spread again of document lengths and longer</td></tr><tr><td>sentences. Amongst our social media sources,</td></tr><tr><td>Wikipedia documents are by far the longest, but</td></tr><tr><td>considerably shorter than BNC documents.</td></tr></table>",
"html": null,
"type_str": "table",
"text": "",
"num": null
},
"TABREF3": {
"content": "<table><tr><td>: Top-10 languages (by ISO-639-1 identifier) in each dataset</td></tr><tr><td>documents reveals that most are made up of lists</td></tr><tr><td>of different types: names of people from a vari-</td></tr><tr><td>ety of ethnic backgrounds, foreign place names, or</td></tr><tr><td>titles of artworks/military honours in various lan-</td></tr><tr><td>guages.</td></tr></table>",
"html": null,
"type_str": "table",
"text": "",
"num": null
},
"TABREF8": {
"content": "<table><tr><td>: Corpus homogeneity using \u03c7 2 (smaller</td></tr><tr><td>values indicate greater self-similarity)</td></tr></table>",
"html": null,
"type_str": "table",
"text": "",
"num": null
},
"TABREF10": {
"content": "<table><tr><td/><td>BNC WIKIPEDIA BLOGS FORUMS COMMENTS TWITTER-2 TWITTER-1</td><td/></tr><tr><td/><td/><td colspan=\"2\">TWITTER-2</td></tr><tr><td/><td>BNC WIKIPEDIA BLOGS FORUMS COMMENTS TWITTER-2 TWITTER-1</td><td/></tr><tr><td/><td/><td colspan=\"2\">COMMENTS</td></tr><tr><td>Training Domain</td><td>BNC WIKIPEDIA BLOGS FORUMS COMMENTS TWITTER-2 TWITTER-1 BNC WIKIPEDIA BLOGS FORUMS COMMENTS TWITTER-2 TWITTER-1</td><td colspan=\"2\">BLOGS FORUMS</td></tr><tr><td/><td>BNC WIKIPEDIA BLOGS FORUMS COMMENTS TWITTER-2 TWITTER-1</td><td/></tr><tr><td/><td/><td colspan=\"2\">WIKIPEDIA</td></tr><tr><td/><td>BNC WIKIPEDIA BLOGS FORUMS COMMENTS TWITTER-2 TWITTER-1</td><td/></tr><tr><td/><td/><td/><td>BNC</td></tr><tr><td/><td>BNC WIKIPEDIA BLOGS FORUMS COMMENTS TWITTER-2 TWITTER-1</td><td/></tr><tr><td/><td/><td>500</td><td>1000</td><td>1500</td></tr><tr><td/><td/><td colspan=\"2\">Perplexity</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Pairwise corpus similarity (\u00d710 3 ) using \u03c7 2",
"num": null
}
}
}
}