|
{ |
|
"paper_id": "I13-1040", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:14:08.382084Z" |
|
}, |
|
"title": "Behind the Times: Detecting Epoch Changes using Large Corpora", |
|
"authors": [ |
|
{ |
|
"first": "Octavian", |
|
"middle": [], |
|
"last": "Popescu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "FBK-irst", |
|
"location": { |
|
"settlement": "Trento", |
|
"country": "Italy" |
|
} |
|
}, |
|
"email": "popescu@fbk.eu" |
|
}, |
|
{ |
|
"first": "Carlo", |
|
"middle": [], |
|
"last": "Strapparava", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "FBK-irst", |
|
"location": { |
|
"settlement": "Trento", |
|
"country": "Italy" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Using large corpora of chronologically ordered language, it is possible to explore diachronic phenomena, identifying previously unknown correlations between language usage and time periods, or epochs. We focused on a statistical approach to epoch delimitation and introduced the task of epoch characterization. We investigated the significant changes in the distribution of terms in the Google N-gram corpus and their relationships with emotion words. The results show that the method is reliable and the task is feasible.", |
|
"pdf_parse": { |
|
"paper_id": "I13-1040", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Using large corpora of chronologically ordered language, it is possible to explore diachronic phenomena, identifying previously unknown correlations between language usage and time periods, or epochs. We focused on a statistical approach to epoch delimitation and introduced the task of epoch characterization. We investigated the significant changes in the distribution of terms in the Google N-gram corpus and their relationships with emotion words. The results show that the method is reliable and the task is feasible.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Traditionally, scholars of history define epochs according to their deep knowledge and understanding of facts over a long stretch of time. Intuitively, in order to define a new epoch, both a big social impact of a series of events and new issues, which arouse the social interest, must be observed. However, it is hard to define what makes a feature \"distinctive\" or an event a \"great change\". It is even harder to evaluate and measure the impact of a series of changes in society in an objective way. Since the advent of regular newspapers and the industry of mass media, written information has represented a mirror of the interests of society. A social event is relevant only if people pay attention to it and comment on it. A major change in society is reflected in the frequencies with which a set of topics is mentioned in mass media, some of them becoming mentioned more often than previously, while some others are no more of interest. Furthermore, specific epochs typically develop a particular form of wording or rhetorical style.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper we describe a computational approach to epoch delimitation on the basis of word distribution over certain periods of time. A big quantity of data, chronologically ordered, allows accurate statistical statements regarding the covariance between the frequencies of two or more terms over a certain period of time. By discovering significant statistical changes in word usage behavior, it is possible to define epoch boundaries. We show that it is possible to distinguish a series of limited periods of time, spanning at most three years, within which non-random changes affect the joint distribution of terms. Between two such short periods (i.e. the boundaries) no statistical significant changes are observed for decades, and thus we can refer to it as an epoch. The distributions of the considered terms before and after boundaries are distinctly different.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We also introduce the task of epoch characterization. Certain words carry with them an emotional charge, like joy, fear, disgust etc. Within a given epoch, we can analyze the distribution of emotion words and their co-occurrences with the set of terms considered indicative for epoch definition. The pattern of these co-occurrences constitutes a blueprint of emotional tendencies with respect to some particular topics in the society within a certain period. Given an arbitrary sample of data from a given, but unknown period of time, the task consists in correlating the emotional pattern of the data with the one of an epoch from which the data comes. The experiments reported here show that this task is feasible and sensible results are obtained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The corpus used in the current experiments is the Google 5-grams made of all tuples of consecutive 5 words, coming from English books printed roughly from 1614 to 2009.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For the purpose of the present paper, we compiled a lexicon of political and social terms. The lexicon contains 761 words, such as: capitalism, civil disobedience, demagogue, democracy, dictator, chickenhawk, education, government, peace, war etc. The these terms come from the lists compiled for the political and sociological domain publicly available 1 . The frequency of these terms and their covariance is analyzed over the years and non-random changes are found according to the methodology presented in Section 3. The methodology itself is purely statistical and it does not depend in any way on what the list contains. We could have equally chosen terms from art or sport domain, obtaining epoch boundaries specific to each domain.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The emotion words used in epoch characterization come primarily from the NRC Word-Emotion Association Lexicon (Mohammad and Turney, 2010) to which the list of emotion words extracted from WordNet-Affect (Strapparava and Valitutti, 2004) , distributed in the Semeval 2007 Affective Text task (Strapparava and Mihalcea, 2007) , has been added. The lexicon is made up of English words to which eight possible tags are attached: anger, anticipation, disgust, fear, joy, sadness, surprise and trust. All in all there are 14,000 words for which at least one affective tag is given.", |
|
"cite_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 137, |
|
"text": "(Mohammad and Turney, 2010)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 203, |
|
"end": 236, |
|
"text": "(Strapparava and Valitutti, 2004)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 323, |
|
"text": "(Strapparava and Mihalcea, 2007)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The paper is organized as follows. In Section 2 we review the relevant literature. Section 3 presents the statistical apparatus employed in epoch determination and epoch characterization. In Section 4 we present the experiments and the results we have obtained. In the last section we highlight the contribution of this paper and make an overview of further immediate work.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In (Michel et al., 2011) , besides a complete introduction to the Google Books corpus, a limited diachronic study of words meaning and form is also carried out. The authors introduce the term 'culturomic' and show that quantitative analyses may lead to interesting results. They show that it is possible to determine censorship and suppression by comparing the frequencies of proper names in bilingual Google books corpora. However, the authors did not proceed to a systematic studies of epochs.", |
|
"cite_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 24, |
|
"text": "(Michel et al., 2011)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Regarding semantic change, the task of sense disambiguation over the years is introduced in (Mihalcea and Nastase, 2012) . In their paper, the authors refer to definite periods of time as epochs but they considered them prior defined.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 120, |
|
"text": "(Mihalcea and Nastase, 2012)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In (Wang and Mccallum, 2006) an analysis of topics over time is carried out. The paper fo-cuses on rather fixed topics, which are expressed by frozen compounds, such as \"mexican war\", \"CVS operation\", and determines how these topics evolve during the years. However, because the scope of their paper is not global, the corpus used comes from 19 months of personal emails. It is hard to see how this method could generalize. A similar approach is described in (Wang et al., 2008) . The authors use LDA to facilitate the search into large corpora by automatically organizing them.", |
|
"cite_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 28, |
|
"text": "(Wang and Mccallum, 2006)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 459, |
|
"end": 478, |
|
"text": "(Wang et al., 2008)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In (Yu et al., 2010) , the statistics tests and the google N-gram corpus are used for (semi) automatic creation and validation of a sense pool. The frequencies extracted from Google N-gram corpus are filtered with an appropriate statistical test and further verified by human experts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 20, |
|
"text": "(Yu et al., 2010)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The richness and complexity of cultural information contained in the Google N-gram corpus is analyzed in (Joula, 2012) . By considering the degree of interdependence as a measure for complexity, the author used the 2-gram corpus to analyze the complexity of American culture. However, there is no the epoch distinction and statistical support.", |
|
"cite_spans": [ |
|
{ |
|
"start": 105, |
|
"end": 118, |
|
"text": "(Joula, 2012)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Regarding Sentiment analysis, text categorization according to affective relevance, opinion exploration for market analysis, etc. are just some examples of application this NLP area (Pang and Lee, 2008) . While positive/negative valence checking is an active field of sentiment analysis, a fine-grained emotion checking is nowadays an emerging research topic. For example, SemEval task on Affective Text (Strapparava and Mihalcea, 2007) focussed on the recognition of six emotions emotions in a corpus of news headlines.", |
|
"cite_spans": [ |
|
{ |
|
"start": 182, |
|
"end": 202, |
|
"text": "(Pang and Lee, 2008)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 404, |
|
"end": 436, |
|
"text": "(Strapparava and Mihalcea, 2007)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this section we present the statistical tests we used to analyze the data. We do not assume any prior distribution of the frequencies in the corpus and we employ both non parametric and parametric tests. In this section we present the statistical tests we used to analyze the data. We do not assume any prior distribution of the frequencies in the corpus and we employ both non parametric and parametric tests. The Google N-gram corpus is made up of a number of text files which contain N-grams, where N goes form 1 to 5, and which are obtained from English books published over the years. In Table 1 we present a snippet from the 5-gram corpus: n-grams year # occ. # pages # books democracy at work 1996 1 1 1 democracy at work 1997 5 5 5 democracy at work 1998 2 2 2 Table 1 : 5-Gram Google files", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 596, |
|
"end": 603, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 698, |
|
"end": 792, |
|
"text": "work 1996 1 1 1 democracy at work 1997 5 5 5 democracy at work 1998 2 2 2 Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Normalization. Due to the exponential growth of the published data, it is better to normalize the number of occurrences for a meaningful comparison. We considered all the content nouns, including proper names, and we computed for each term of interest the percentage of occurrences of that term with respect to the sum of frequencies of all content nouns (considering lemmata). In this paper, when we refer to frequency of a term we mean the normalized figure, unless explicitly stated otherwise. The percentage is in fact very informative on what the public opinion is concerned about in certain periods and substantial differences may be observed within a short period of time. For example, democracy was 25 times less a probable topic at the begin of twenty-first century than 50 years before. In such cases, one can clearly talk about a change of interest in society, see Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 876, |
|
"end": 884, |
|
"text": "Figure 1", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Statistical tests", |
|
"sec_num": "3.1" |
|
}, |
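
{

"text": "A minimal sketch of the normalization step described above (the variable content_noun_counts and the function name are hypothetical; the variable is assumed to map each year to the raw counts of all content-noun lemmata):\n\ndef normalized_frequency(term, year, content_noun_counts):\n    # content_noun_counts[year]: content-noun lemma -> raw occurrence count\n    counts = content_noun_counts[year]\n    total = sum(counts.values())\n    if total == 0:\n        return 0.0\n    # percentage of occurrences of the term among all content nouns\n    return 100.0 * counts.get(term, 0) / total",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Statistical tests",

"sec_num": "3.1"

},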
|
{ |
|
"text": "Welch's test. Welch test is a variant of t-student test to check whether two different samples come from the same population or not (Sawilowsky, 2001 ). The Welch test fits our purposes because it does not assume that the sample have equal variance, thus it can be applied where the other similar tests, such as classical t-student or F-test, do not. The initial conditions for Welch test does not include (1) the equality of the sample sizes and (2) either the homogeneity of population, thus the data may not come from a population having a distribution with a unique variance. In fact for this reason we prefer to use non parametric test in the present paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 132, |
|
"end": 149, |
|
"text": "(Sawilowsky, 2001", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistical tests", |
|
"sec_num": "3.1" |
|
}, |
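
{

"text": "As a hedged illustration of how such a comparison can be computed, the sketch below uses scipy.stats.ttest_ind with equal_var=False, which implements Welch's variant; the frequency arrays are invented for illustration and do not reproduce the corpus values:\n\nfrom scipy import stats\n\n# normalized yearly frequencies of a term in two contiguous periods\nfreqs_first_period = [0.07, 0.08, 0.09, 0.08, 0.07, 0.09]\nfreqs_second_period = [0.18, 0.17, 0.19, 0.20, 0.18, 0.19]\n\n# equal_var=False selects Welch's test, which does not assume equal variances\nt_stat, p_value = stats.ttest_ind(freqs_first_period, freqs_second_period, equal_var=False)\n\n# null hypothesis: the two samples come from a population with the same mean\nreject_at_0_1 = p_value < 0.1",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Statistical tests",

"sec_num": "3.1"

},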
|
{ |
|
"text": "In practice, we apply the Welch's test to sample size representing contiguous periods of time . To exemplify, let us consider here the term \"war\" and two different periods 1800-1900 and 1900-2000. Each period is split in two sub-periods, 1800-1850 vs. 1850-1900, and 1900-1950 vs. 1950-2000 The null hypothesis, that the two sample come from a population with the same mean cannot be rejected at \u03b1 = 0.1 in the first case. The same null hypothesis is rejected with a very high confidence, \u03b1 = 0.01 in the second case.", |
|
"cite_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 290, |
|
"text": "1850-1900, and 1900-1950 vs. 1950-2000", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistical tests", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Run Test. Run test is a non parametric test, which determines whether the a sequence of numbers is likely to be the result of a random process or there might be an inner pattern in data (Gibbons and Chakraborti, 1992; Lindgren, 1993) . For example let us suppose that we have a Bernoulli process with \"+\" and \"-\" possible outcomes and probabilities 1/4, 3/4 respectively. A sequences like ++++----+++++---is very unlikely to be a random generated sequences of this process. The run test is designed to detect such cases. A set of real values, as the frequencies of a term over a period of time are, is converted into a run sequences by considering the median of the sequence and obtaining a new sequences by marking with a \"+\" if the value is bigger than the median and with a \"-\" if not.", |
|
"cite_spans": [ |
|
{ |
|
"start": 186, |
|
"end": 217, |
|
"text": "(Gibbons and Chakraborti, 1992;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 218, |
|
"end": 233, |
|
"text": "Lindgren, 1993)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistical tests", |
|
"sec_num": "3.1" |
|
}, |
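
{

"text": "A minimal sketch of the run test as described above (the function name is ours; the normal approximation of the Wald-Wolfowitz runs statistic is assumed):\n\nimport statistics\nfrom scipy import stats\n\ndef runs_test(values):\n    # convert the yearly frequencies into a +/- sequence around the median\n    med = statistics.median(values)\n    signs = ['+' if v > med else '-' for v in values]\n    n_plus, n_minus = signs.count('+'), signs.count('-')\n    n = n_plus + n_minus\n    # number of runs: maximal blocks of identical symbols\n    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)\n    # normal approximation of the number of runs under the randomness hypothesis\n    mu = 2.0 * n_plus * n_minus / n + 1.0\n    var = 2.0 * n_plus * n_minus * (2.0 * n_plus * n_minus - n) / (n * n * (n - 1))\n    z = (runs - mu) / var ** 0.5\n    p_value = 2.0 * (1.0 - stats.norm.cdf(abs(z)))\n    return runs, p_value  # a small p-value rejects the randomness hypothesis",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Statistical tests",

"sec_num": "3.1"

},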
|
{ |
|
"text": "In practice we apply the run statistics on frequencies of a set of given terms. For example for the term government, considering two periods we obtain the results in Table 3 , where p 1 =1800-1850, p 2 =1850-1900, and p 3 =1900-1950. The null hypothesis, which is that the run sequence is randomly generated can be rejected at a significance level \u03b1 = 0.1 for the third sample, namely from 1900-1950.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 166, |
|
"end": 173, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Statistical tests", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "Least Squares. The least squares method is used to find the line with the smallest sum of square of the difference between the data and the line points , (Bj\u00f6rck, 1996) . In practice we try to determine the longest period of time in which the data could be fit to a line, imposing that the sum of squares is bond by a small value. For example least squares method applied to the term government from 1968 to 2008 produce the optimal line plotted in Figure 2 . The line has the equation: y = 3.807 \u2212 0.001x. The sum of residuals is less than 0.002 (ss = 0.0014), which means that the average variance around the line points is 0.00036. This represents a remarkable fit of data to a line. Ratio. It is usual to find in the distribution of frequencies increasing or decreasing sequences. For a definite period of time where a particular direction of growth is observed, we take into account also the rate of growth defined as the ratio between the difference of a three consecutive values:", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 168, |
|
"text": "(Bj\u00f6rck, 1996)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 449, |
|
"end": 457, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Statistical tests", |
|
"sec_num": "3.1" |
|
}, |
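
{

"text": "A minimal sketch of the line-fitting step, assuming numpy and a hypothetical bound max_ss on the sum of squared residuals:\n\nimport numpy as np\n\ndef best_linear_fit(years, freqs, max_ss=0.002):\n    # ordinary least squares fit of y = a*x + b\n    a, b = np.polyfit(years, freqs, 1)\n    residuals = np.asarray(freqs) - (a * np.asarray(years) + b)\n    ss = float(np.sum(residuals ** 2))\n    # the period is accepted as linear only if the sum of squared\n    # residuals stays below the chosen bound\n    return (a, b, ss) if ss <= max_ss else None",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Statistical tests",

"sec_num": "3.1"

},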
|
{ |
|
"text": "(x i \u2212x i\u22121 ) (x i+1 \u2212x i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistical tests", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": ". In practice we use the growth ratio for (C1) characterizing a whole period of time and, (C2) for detecting similarities among distributions for different/same terms over the same/different periods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistical tests", |
|
"sec_num": "3.1" |
|
}, |
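
{

"text": "A minimal sketch of the growth ratio computation defined above (the function names are ours; the median over a period is the figure reported in Table 4):\n\nimport statistics\n\ndef growth_ratios(freqs):\n    # (x_i - x_{i-1}) / (x_{i+1} - x_i) for every triple of consecutive values\n    ratios = []\n    for i in range(1, len(freqs) - 1):\n        denom = freqs[i + 1] - freqs[i]\n        if denom != 0:\n            ratios.append((freqs[i] - freqs[i - 1]) / denom)\n    return ratios\n\ndef median_growth_ratio(freqs):\n    return statistics.median(growth_ratios(freqs))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "C1",

"sec_num": null

},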
|
{ |
|
"text": "The same growth rate may characterize a whole period of time. A change in the growth rate may signal the beginning of a new epoch.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "C1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In Table 4 , we report the median growth rate for the term democracy over two periods. Considering the difference of frequencies of two terms and using the run test we can observe if the growth rate remain the same or changed. In Table 5 we present two runs from two different period of times for the ratio of the differences between the terms education and democracy. We observe that we have the same growth ratio pattern in different periods. The results above show that from the point of view of the relationship between the frequencies of government and welfare we can clearly distinguish four different patterns. There is a strong statistical evidence that the frequencies two terms were correlated in the period 1800-1850 and independent between 1900-1950. Before concluding this section we also plot the frequencies of the emotion terms and two examples of emotion blue-print for years 1921 and 1945. The counts were normalized taking into account the emotion words (see Figures 4 and 3) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 718, |
|
"end": 762, |
|
"text": "1800-1850 and independent between 1900-1950.", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 3, |
|
"end": 10, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 230, |
|
"end": 237, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 978, |
|
"end": 994, |
|
"text": "Figures 4 and 3)", |
|
"ref_id": "FIGREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "C1", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In Section 3.1 we presented the statistical procedures we use for epoch determination. Each of these tests is able individually to find non-random changes in the distribution of the frequencies of terms over the years and to find the beginning and the end of the time periods where the same statistically relevant pattern -linear, same growth rate, dependency -is observed. However, noticing a change in the distribution is not enough for declaring the begin or the end of an epoch. The fact that many of the terms considered are affected by a change in their distribution more or less concomitantly must be observed in order to decide on the epoch boundaries. For now, we preferred a conservative view therefore in the experiments we carried we impose that significantly more than 50% of the terms change their distribution and that the period in which this is happening is at most three years. The algorithm for epoch determination using the tests introduced above is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Epoch: Decision Procedure", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Require: Google N-grams with time info Ensure: Epoch 1: Apply W elch s and Run test for non-random changes 2: Choose start year and end year spanning several decades 3: if number of terms positive to line 1 tests in the time interval +/-3 years around start year and end year \u2264 50% then 4:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm Epoch Detection", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "goto line 2 5: end if 6: Apply Least Square, Ratio, Spearman and Kendall 7: if number of terms positive to line 6 tests \u2264 50% then 8:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm Epoch Detection", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "goto line2 9: end if 10: epoch \u2190 [start year, end year] At step 6, the order in which the tests are applied is exactly as specified. If Least Square is positive then also the others are positive as well. An so on: if Ratio holds also the last two tests hold. Condition 7 is satisfied if at least Kendall is positive.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Algorithm Epoch Detection", |
|
"sec_num": null |
|
}, |
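
{

"text": "A hedged sketch of the decision loop above; the per-term predicates (boundary_tests for Welch's and run tests, pattern_tests for least squares, ratio, Spearman and Kendall) are assumed to be implemented elsewhere and to return True when a statistically relevant result is found for a term:\n\ndef detect_epoch(terms, start_year, end_year, boundary_tests, pattern_tests, window=3):\n    def boundary_share(year):\n        # fraction of terms positive to at least one boundary test within +/- window years\n        years = range(year - window, year + window + 1)\n        hits = [t for t in terms if any(test(t, y) for test in boundary_tests for y in years)]\n        return len(hits) / len(terms)\n\n    # lines 1-5: most terms must show a non-random change around both candidate boundaries\n    if boundary_share(start_year) <= 0.5 or boundary_share(end_year) <= 0.5:\n        return None  # corresponds to goto line 2: try another pair of candidate years\n    # lines 6-9: most terms must also follow a statistically relevant pattern inside the interval\n    pattern_hits = [t for t in terms if any(test(t, start_year, end_year) for test in pattern_tests)]\n    if len(pattern_hits) / len(terms) <= 0.5:\n        return None\n    return (start_year, end_year)  # line 10: epoch <- [start_year, end_year]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Algorithm Epoch Detection",

"sec_num": null

},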
|
{ |
|
"text": "We considered a list of 761 political terms and we applied the decision procedure presented in Section 3.2. The output of the decision procedure is a set of years around which statistically significant changes in the distribution of frequencies for the majority of the terms considered occur. The epoch identified for the chosen list of terms and the decision procedure detailed in Section 3 identified the following 6 epochs epochs between 1800 and 2009, see Table 7 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 460, |
|
"end": 467, |
|
"text": "Table 7", |
|
"ref_id": "TABREF7" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "epoch 1 1800- 1860 epoch 4 1950-1975 epoch 2 1860-1900 epoch 5 1975-1999 epoch 3 1900-1950 epoch 6 1999-2009 Table 8 lists a few examples of terms affected by a statistical change at epoch boundaries. In Table 9 we present the number of terms which changed their distribution for each boundary, on the second column the absolute value and on the third column the percentage relative to the total number of terms considered, 761. We can see that the number of terms which are positive to statistical tests varies substantially. However, it is not by chance that the changes occur. There is a tolerance of a couple of years around the boundaries. For example, if a term's distribution changes +/-3 years around 1975, then this change is considered for epoch boundary delimitation. Especially in the last 60 years, it seems that the changes occur more frequently and they are more clearly delimited. During these times, the changes between two different trends occur within a couple of year in the great majority of cases. The dynamic of change is different in the nineteenth century, when it is more likely to observe a buffer zone for several years. In the buffer zone, the distribution around a mean value is quasi normal.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 108, |
|
"text": "1860 epoch 4 1950-1975 epoch 2 1860-1900 epoch 5 1975-1999 epoch 3 1900-1950 epoch 6 1999-2009", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 109, |
|
"end": 116, |
|
"text": "Table 8", |
|
"ref_id": "TABREF8" |
|
}, |
|
{ |
|
"start": 204, |
|
"end": 212, |
|
"text": "Table 9", |
|
"ref_id": "TABREF10" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In fact, by running Spearman and Kendall tests we discovered interesting dependencies between the distribution of certain terms and the time line. We computed the differences between the frequencies of pair of terms. For example, for the pair socialism and capitalism the results of the statistical tests show a strong correlation within each epoch, see Table 10 and Figure 5 . 1800 1810 1820 1830 1840 1850 1860 1870 1880 1890 1900 1910 1920 1930 1940 1950 1960 1970 1980 1990 2000 2010 Frequency Year Socialism Capitalism -1860 1860-1900 1900-1950 1950-1975 1975-1999 1999-2009 Table 12 : The average of emotion frequencies over the epochs Figure 6 : 10-fold validation", |
|
"cite_spans": [ |
|
{ |
|
"start": 443, |
|
"end": 497, |
|
"text": "1910 1920 1930 1940 1950 1960 1970 1980 1990 2000 2010", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 534, |
|
"end": 589, |
|
"text": "-1860 1860-1900 1900-1950 1950-1975 1975-1999 1999-2009", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 354, |
|
"end": 362, |
|
"text": "Table 10", |
|
"ref_id": "TABREF12" |
|
}, |
|
{ |
|
"start": 367, |
|
"end": 375, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 378, |
|
"end": 442, |
|
"text": "1800 1810 1820 1830 1840 1850 1860 1870 1880 1890 1900", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 590, |
|
"end": 598, |
|
"text": "Table 12", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 652, |
|
"end": 660, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
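
{

"text": "A minimal sketch of the correlation check between the time line and the difference of frequencies of two terms, using scipy.stats.spearmanr and scipy.stats.kendalltau; the numbers are invented for illustration and do not reproduce the corpus values:\n\nfrom scipy import stats\n\nyears = list(range(1900, 1910))\nsocialism = [0.0010, 0.0012, 0.0013, 0.0015, 0.0016, 0.0018, 0.0019, 0.0021, 0.0022, 0.0024]\ncapitalism = [0.0008, 0.0009, 0.0009, 0.0010, 0.0011, 0.0011, 0.0012, 0.0013, 0.0013, 0.0014]\n\n# difference of normalized frequencies, tested against the time line\ndiff = [s - c for s, c in zip(socialism, capitalism)]\nrho, p_spearman = stats.spearmanr(years, diff)\ntau, p_kendall = stats.kendalltau(years, diff)\n# small p-values indicate a non-random evolution of the difference over the period",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "4"

},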
|
{ |
|
"text": "To each epoch an emotional blueprint can be attached. An emotional blueprint is obtained by taking into consideration the emotion denoting terms. There are 7 emotion words; anger, anticipation, disgust, fear, joy, sadness, surprise, trust and two opinion words, negative and positive. The corpus we consider in this section is the part of Google 5-grams in which each 5-gram contains at least an emotion word. In Table 11 we present their distribution in Google-gram corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 413, |
|
"end": 421, |
|
"text": "Table 11", |
|
"ref_id": "TABREF13" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The epoch characterization task consists in using the epochs as categories and assigning an unseen sample covering a continuos, but unknown, period of time to one of the categories. For the experiments in this paper, we used the average values of each emotion term computed over the epochs as epoch blue print, thus each epoch is characterized by an unique value for each emotion term, see Table 12. For evaluation we used a k-fold cross validation approach. The k-partitions were obtained by choosing randomly for each occurrence in google corpus its partition, so in average each partition had an equal number of terms. The training was carried on k \u2212 1 partitions and tested on a single partition, thus there are k independent evaluation experiments. The training k \u2212 1 partitions were joined into an unique corpus which was split into epochs and for each epoch we computed the average for each emotion term. The test partition, the k-partition was split in ten contiguous subpartitions. For each test sub-partition, the average of the emotion terms was computed and compared against the averages from training corpus to find the most similar ones, resulting in 10k experiments (see Figure 6 ).", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 390, |
|
"end": 399, |
|
"text": "Table 12.", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1186, |
|
"end": 1194, |
|
"text": "Figure 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The procedure of finding the most similar epoch can be implemented in different ways. We discuss here two approaches. The first method computes the average over the training corpus for each emotion term and, separately, the average for the test corpus and sums up the squares of the differences experiment first run second run third run fourth run fifth run all occurrences squares sum 46% 51% 46% 48% 50% all occurrences best guess 60% 56% 60% 59% 60% co-occurrences squares sum 53% 58% 59% 57% 59% co-occurrences best guess 65% 69% 67% 66% 66% Table 13 : 5-partition cross-validation results for each particular epoch. The category assigned is the one with the least sum of squares. The second method compares the averages computed over the training for each epoch and chooses a representative for each epoch, let us call it best guess. The test sample compares only the averages against the best guess for each epoch and it is assigned to the epoch which has the closest best guess.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 390, |
|
"end": 577, |
|
"text": "51% 46% 48% 50% all occurrences best guess 60% 56% 60% 59% 60% co-occurrences squares sum 53% 58% 59% 57% 59% co-occurrences best guess 65% 69% 67% 66% 66% Table 13", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
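
{

"text": "A hedged sketch of the first assignment method (squares sum); the dictionaries and function name are ours, with test_avgs mapping each emotion word to its average frequency in the unseen sample and epoch_avgs mapping each epoch to the corresponding training averages:\n\ndef assign_epoch_squares_sum(test_avgs, epoch_avgs):\n    # sum of squared differences between the sample blueprint and an epoch blueprint\n    def distance(train_avgs):\n        return sum((test_avgs[e] - train_avgs[e]) ** 2 for e in test_avgs)\n    # the category assigned is the epoch with the least sum of squares\n    return min(epoch_avgs, key=lambda epoch: distance(epoch_avgs[epoch]))\n\nThe best guess variant differs only in that each epoch is first reduced to a single representative vector on the training side, and the test averages are compared against those representatives.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Experiments",

"sec_num": "4"

},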
|
{ |
|
"text": "To measure the accuracy, we simply count how many times there was only one epoch chosen and that it was indeed the correct one. The figures reported in Table 13 represent the accuracy, as all the sub-partitions were checked and consequently the recall was 1. The last two experiments we carried out on considered political terms. Instead of considering all occurrences of the emotion terms inside a particular epoch, we considered only the co-occurrences of the emotion words with a set of political terms. For this purpose we chose a set of 20 from the list of 761 of political terms considered: capitalism, community, common good, democracy, education, free market, government, heresy hunting, individual rights, justice, middle class, money, nepotism, politics, public interest, savings, socialism, social system, technology, and war. The averages for each corpus, training and test respectively, were computed only for these terms and the two approaches above, squares sum and best guess were applied.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 152, |
|
"end": 160, |
|
"text": "Table 13", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "In order to understand weather the results above are informative, we run a simple baseline over the same data. The baseline decision was to consider for each subpartion a random epoch. The accuracy of the baseline is around 15%.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The possibility to analyze automatically the changes over the time in the usage of certain terms is an open window into sociological studies carried from a language perspective with computational methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Further Research", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "During the experiments, some interesting research directions have been revealed. Firstly, although we made no attempt here to make the con-nection between certain changes and real historical events, it seemed that this was indeed possible. Sharply distinctive changes are observed for certain terms around global war dates. Secondly, while we used the ratio as a parameter which may signal a change, we carried no analyses on the typology of rates themselves. Such analyses may bring to light patterns into the dynamic of interests within a society. Thirdly, the methodology we presented can be easily used for prediction. Such studies could predict future changes. A striking example is represented by the covariance between socialism and capitalism, which seemed to indicate the collapse of political regimes in East Europe several years before it actually happened, see Figure 5 . We plan to investigate further the distribution of terms over the time going in the directions above.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 873, |
|
"end": 881, |
|
"text": "Figure 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion and Further Research", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "E.g. www.democracy.org.au/glossary.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Numerical Methods for Least Squares Problems", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Bj\u00f6rck", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Bj\u00f6rck. 1996. Numerical Methods for Least Squares Problems. SIAM: Society for Industrial and Applied Mathematics.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Nonparametric Statistical Inference", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Gibbons", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Chakraborti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. D. Gibbons and S. Chakraborti. 1992. Nonparamet- ric Statistical Inference. CRC Press.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Using the google ngram corpus to measure cultural complexity", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Joula", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of Digital Humanities", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "P. Joula. 2012. Using the google ngram corpus to mea- sure cultural complexity. In Proceedings of Digital Humanities, University of Hamburg, July.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Statistical Theory", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Lindgren", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1993, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Lindgren. 1993. Statistical Theory. Chapman and Hall/CRC.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Quantitive analysis of culture using millions of digitized books", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Aiden", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Veres", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Gray", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"P" |
|
], |
|
"last": "Pickett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Hoiberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Clancy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Norvig", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Orwant", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Nowak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Pinker", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Aiden", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Science", |
|
"volume": "331", |
|
"issue": "6014", |
|
"pages": "176--182", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J. B. Michel, Y.K. Shen, A.P. Aiden, A. Veres , M. K. Gray, J.P. Pickett, D. Hoiberg, D. Clancy, P. Norvig, J. Orwant, M. A. Nowak S. Pinker, and E.L. Aiden. 2011. Quantitive analysis of culture using millions of digitized books. Science, 331(6014):176-182, January.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Word epoch disambiguation: Finding how words change over time", |
|
"authors": [ |
|
{ |
|
"first": "Rada", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vivi", |
|
"middle": [], |
|
"last": "Nastase", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "259--263", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rada Mihalcea and Vivi Nastase. 2012. Word epoch disambiguation: Finding how words change over time. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Vol- ume 2: Short Papers), pages 259-263, Jeju Island, Korea, July. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Emotions evoked by common words and phrases: Using mechanical turk to create an emotion lexicon", |
|
"authors": [ |
|
{ |
|
"first": "Saif", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Turney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "26--34", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif Mohammad and Peter Turney. 2010. Emotions evoked by common words and phrases: Using me- chanical turk to create an emotion lexicon. In Pro- ceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Genera- tion of Emotion in Text, pages 26-34, Los Angeles, CA, June. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval", |
|
"authors": [ |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "1--135", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "B. Pang and L. Lee. 2008. Opinion mining and senti- ment analysis. Foundations and Trends in Informa- tion Retrieval, 2(1-2):1-135.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Fermat, schubert, einstein, and behrens-fisher: The probable difference between two means when \u03c3 2 1 = \u03c3 2 2", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Sawilowsky", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Journal of Modern Applied Statistical Methods", |
|
"volume": "1", |
|
"issue": "2", |
|
"pages": "461--472", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "S. Sawilowsky. 2001. Fermat, schubert, einstein, and behrens-fisher: The probable difference between two means when \u03c3 2 1 = \u03c3 2 2 . Journal of Modern Ap- plied Statistical Methods, 1(2):461-472.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "SemEval-2007 task 14: Affective Text", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Strapparava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Mihalcea", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of SemEval-2007", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Strapparava and R. Mihalcea. 2007. SemEval-2007 task 14: Affective Text. In Proceedings of SemEval- 2007, Prague, Czech Republic, June.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Wordnet-affect: an affective extension of wordnet", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Strapparava", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Valitutti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Strapparava and A. Valitutti. 2004. Wordnet-affect: an affective extension of wordnet. In Proceedings of the 4th International Conference on Language Re- sources and Evaluation, Lisbon.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Topics over time: A non markov continuos-time model of topical trends", |
|
"authors": [ |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of KDD-06", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "X. Wang and A. Mccallum. 2006. Topics over time: A non markov continuos-time model of topical trends. In Proceedings of KDD-06, Philadelphia, Pennsyl- vania, August.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Continous time dynamic topic models", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Heckerman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Wang, D. Blei, and D. Heckerman. 2008. Continous time dynamic topic models. In Proceedings of the International Conference on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Annotation and verification of sense pools in ontonotes. Information Processing and Management", |
|
"authors": [ |
|
{ |
|
"first": "Liang-Chih", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chung-Hsien", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ru-Yng", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chao-Hong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eduard", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Hovy", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "", |
|
"volume": "46", |
|
"issue": "", |
|
"pages": "436--447", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Liang-Chih Yu, Chung-Hsien Wu, Ru-Yng Chang, Chao-Hong Liu, and Eduard H. Hovy. 2010. Annotation and verification of sense pools in ontonotes. Information Processing and Manage- ment, 46(4):436-447, July.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"text": "run run-test p1 +++-++-+---++-+--++-+-++-++-+++-++-+-+-.29839 p2 -++++-+++-+++-+++--+-+-+-+-++-+-+-+-+-.32603 p3 ------+-+++++--++-+-+++++++++++++++ .00001", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"text": "democracy, government, education, welfare, war and terrorism percentage", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"text": "Least squares applied to the frequencies for term government", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"FIGREF5": { |
|
"uris": null, |
|
"text": "Emotion percentages in 1921 and 1945", |
|
"type_str": "figure", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"text": "respectively. We test whether the samples 1800-1850 vs. 1850-1900 have the same mean, and we also test whether the samples1900-1950 vs. 1950- 2000 have the same mean. InTable 2we present the results obtained.", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Sample</td><td>t</td><td>Outcome</td></tr><tr><td>1800-1850 vs. 1850-1900</td><td/><td/></tr><tr><td>\u00b51 = .078 vs. \u00b52 =.081</td><td>0.23</td><td>No Rejection at \u03b1 = 0.1</td></tr><tr><td>1900-1950 vs. 1950-2000</td><td/><td/></tr><tr><td>\u00b51 = .184 vs. \u00b52 =.098</td><td>-5.163</td><td>Rejection at \u03b1 = 0.01</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"text": "Welch's test for term war.", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF2": { |
|
"text": "Run test for term government", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"text": "Growth rates patterns of the difference between education and democracy", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>Spearman and Kendall Test. Spearman and</td></tr><tr><td>Kendall tests are two non parametrical tests for</td></tr><tr><td>measuring the statistical dependencies between</td></tr><tr><td>two variables. In practice the time line is always</td></tr><tr><td>one of the variable and a positive answer from one</td></tr><tr><td>of this tests shows a non-random evolution of the</td></tr><tr><td>frequencies within a that period of time. Usually</td></tr><tr><td>we consider the difference between the frequen-</td></tr><tr><td>cies of two terms and apply the Spearman and</td></tr><tr><td>Kendall test against the timeline. In Table 6 we</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF6": { |
|
"text": "Spearman and Kendall test for time vs. difference between government and welfare", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td/><td>0.25</td><td>Anger</td></tr><tr><td/><td/><td>Anticipation</td></tr><tr><td/><td/><td>Fear</td></tr><tr><td/><td/><td>Joy</td></tr><tr><td/><td>0.2</td><td>Negative Positive</td></tr><tr><td/><td/><td>Sadness</td></tr><tr><td/><td/><td>Surprise</td></tr><tr><td/><td/><td>Trust</td></tr><tr><td>Percentage</td><td>0.1 0.15</td></tr><tr><td/><td>0.05</td></tr><tr><td/><td>0</td></tr><tr><td/><td>1921</td><td>1945</td></tr><tr><td/><td>Year</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"text": "Epochs between 1800-2009", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>term</td><td colspan=\"2\">change year positive test</td></tr><tr><td>two party system</td><td>1975</td><td>run, ratio</td></tr><tr><td>two party system</td><td>1999</td><td>Welch's, ratio</td></tr><tr><td>patriotism</td><td>1975</td><td>Welch's, ratio</td></tr><tr><td>patriotism</td><td>1999</td><td>Welch's, squares</td></tr><tr><td>too big to fail</td><td>1975</td><td>ratio</td></tr><tr><td>too big to fail</td><td>1999</td><td>squares</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF8": { |
|
"text": "Statistical significant changes", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF10": { |
|
"text": "", |
|
"num": null, |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table" |
|
}, |
|
"TABREF12": { |
|
"text": "socialism vs. capitalism through the epochs", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td colspan=\"2\">anger anticipation</td><td>disgust</td><td>fear</td></tr><tr><td>3914</td><td>9390</td><td>2448</td><td>6519</td></tr><tr><td>joy</td><td>sadness</td><td>surprise</td><td>trust</td></tr><tr><td>6053</td><td>9892</td><td>3173</td><td>12082</td></tr></table>", |
|
"type_str": "table" |
|
}, |
|
"TABREF13": { |
|
"text": "", |
|
"num": null, |
|
"html": null, |
|
"content": "<table><tr><td>: Emotion words in Google 5-grams</td></tr><tr><td>(\u00d710 6 )</td></tr></table>", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |