{ "paper_id": "W12-0406", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T04:12:37.559820Z" }, "title": "On the Use of Homogenous Sets of Subjects in Deceptive Language Analysis", "authors": [ { "first": "Tommaso", "middle": [], "last": "Fornaciari", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Trento", "location": {} }, "email": "tommaso.fornaciari@unitn.it" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "", "affiliation": { "laboratory": "Language and Computation Group University", "institution": "University of Trento", "location": {} }, "email": "massimo.poesio@unitn.it" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent studies on deceptive language suggest that machine learning algorithms can be employed with good results for classification of texts as truthful or untruthful. However, the models presented so far do not attempt to take advantage of the differences between subjects. In this paper, models have been trained in order to classify statements issued in Court as false or not-false, not only taking into consideration the whole corpus, but also by identifying more homogenous subsets of producers of deceptive language. The results suggest that the models are effective in recognizing false statements, and their performance can be improved if subsets of homogeneous data are provided.", "pdf_parse": { "paper_id": "W12-0406", "_pdf_hash": "", "abstract": [ { "text": "Recent studies on deceptive language suggest that machine learning algorithms can be employed with good results for classification of texts as truthful or untruthful. However, the models presented so far do not attempt to take advantage of the differences between subjects. 
In this paper, models have been trained in order to classify statements issued in Court as false or not-false, not only taking into consideration the whole corpus, but also by identifying more homogenous subsets of producers of deceptive language. The results suggest that the models are effective in recognizing false statements, and their performance can be improved if subsets of homogeneous data are provided.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Detecting deceptive communication is a challenging task, but one that could have a number of useful applications. A wide variety of approaches to the discovery of deceptive statements have been attempted, ranging from using physiological sensors such as lie detectors to using neuroscience methods (Davatzikos et al., 2005; Ganis et al., 2003) . More recently, a number of techniques have been developed for recognizing deception on the basis of the communicative behavior of subjects. Given the difficulty of the task, many such methods rely on both verbal and non-verbal behavior, to increase accuracy. So for instance De Paulo et al. (2003) considered more than 150 cues, verbal and non-verbal, directly observed through experimental subjects. But finding clues indicating deception through manual inspection is not easy. De Paulo et al. asserted that \"behaviors that are indicative of deception can be indicative of other states and processes as well\".", "cite_spans": [ { "start": 298, "end": 323, "text": "(Davatzikos et al., 2005;", "ref_id": "BIBREF4" }, { "start": 324, "end": 343, "text": "Ganis et al., 2003)", "ref_id": "BIBREF12" }, { "start": 621, "end": 643, "text": "De Paulo et al. (2003)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The same point is made in more recent literature: thus Frank et al. 
(2008) write "We find that there is no clue or clue pattern that is specific to deception, although there are clues specific to emotion and cognition", and they wish for "real-world databases, identifying base rates for malfeasant behavior in security settings, optimizing training, and identifying preexisting excellence within security organizations". Jensen et al. (2010) exploited cues coming from audio, video and textual data.", "cite_spans": [ { "start": 55, "end": 74, "text": "Frank et al. (2008)", "ref_id": "BIBREF11" }, { "start": 422, "end": 442, "text": "Jensen et al. (2010)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One solution is to let statistical and machine learning methods discover the clues. Work such as Fornaciari and Poesio (2011a,b) ; Newman et al. (2003) ; Strapparava and Mihalcea (2009) suggests that these techniques can perform reasonably well at the task of discovering deception even just from linguistic data, provided that corpora containing examples of deceptive and truthful texts are available. The availability of such corpora is not a trivial problem, and indeed, the creation of such a realistic corpus is one of the problems in which we invested substantial effort in our own previous work, as discussed in Section 3.", "cite_spans": [ { "start": 97, "end": 128, "text": "Fornaciari and Poesio (2011a,b)", "ref_id": null }, { "start": 131, "end": 151, "text": "Newman et al. (2003)", "ref_id": "BIBREF17" }, { "start": 154, "end": 185, "text": "Strapparava and Mihalcea (2009)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the work discussed in this paper, we tackle an issue which to our knowledge has not been addressed before, due to the limitations of the datasets previously available: whether individual differences between experimental subjects affect deception detection. 
In previous work, lexical (Fornaciari and Poesio, 2011a) and surface (Fornaciari and Poesio, 2011b) features were employed to classify deceptive statements issued in Italian Courts. In this study, we report the results of experiments in which our methods were trained either over the whole corpus or over smaller subsets consisting of the utterances produced by more homogeneous subsets of subjects. These subsets were identified either automatically, by clustering subjects according to their language profile, or by using meta-information about the subjects included in the corpus, such as their gender.", "cite_spans": [ { "start": 297, "end": 327, "text": "(Fornaciari and Poesio, 2011a)", "ref_id": "BIBREF8" }, { "start": 340, "end": 370, "text": "(Fornaciari and Poesio, 2011b)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The structure of the paper is as follows. In Section 2 some background knowledge is introduced. In Section 3 the data set is described. In Section 4 we discuss our machine learning and experimental methods. Finally, the results are presented in Section 5 and discussed in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "From a methodological point of view, investigating deceptive language raises some tricky issues, first of all the strategy chosen to collect data. 
The literature can be divided into two main families of studies:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background 2.1 Deceptive language analysis", "sec_num": "2" }, { "text": "\u2022 Field studies;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background 2.1 Deceptive language analysis", "sec_num": "2" }, { "text": "\u2022 Laboratory studies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background 2.1 Deceptive language analysis", "sec_num": "2" }, { "text": "Field studies are usually the most interesting for forensic applications, but verifying the sincerity of the statements in such studies is often complicated (Vrij, 2005) . Laboratory studies, instead, are characterized by the artificiality of participants' psychological conditions: therefore their findings may not generalize to deception encountered in real life.", "cite_spans": [ { "start": 147, "end": 159, "text": "(Vrij, 2005)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Background 2.1 Deceptive language analysis", "sec_num": "2" }, { "text": "Due to the practical difficulties of collecting and annotating suitable data, papers employing real-life linguistic data whose truthfulness is known for certain are less common in the literature, and Zhou et al. (2008) complain about the lack of a "data set for evaluating deception detection models". Only recently have some studies tried to fill this gap, concerning both English (Bachenko et al., 2008; Fitzpatrick and Bachenko, 2009) and Italian (Fornaciari and Poesio, 2011a,b) . It is the studies on Italian that are based on the data which constituted the first nucleus of the corpus analysed here.", "cite_spans": [ { "start": 210, "end": 228, "text": "Zhou et al. 
(2008)", "ref_id": "BIBREF26" }, { "start": 389, "end": 412, "text": "(Bachenko et al., 2008;", "ref_id": "BIBREF2" }, { "start": 413, "end": 444, "text": "Fitzpatrick and Bachenko, 2009)", "ref_id": "BIBREF7" }, { "start": 466, "end": 498, "text": "(Fornaciari and Poesio, 2011a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Background 2.1 Deceptive language analysis", "sec_num": "2" }, { "text": "Our own work, like that of other authors who have recently employed machine learning to detect deception in text, uses techniques very similar to those of stylometry. Stylometry is a discipline which studies texts on the basis of their stylistic features, usually in order to attribute them to an author -giving rise to the branch of author attribution -or to get information about the author himself -this is the field of author profiling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylometry", "sec_num": "2.2" }, { "text": "Stylometric analyses, which rely mainly on machine learning algorithms, have proved effective in several forensic tasks: not only the classical fields of author profiling (Coulthard, 2004; Koppel et al., 2006; Peersman et al., 2011; Solan and Tiersma, 2004) and author attribution (Luyckx and Daelemans, 2008; Mosteller and Wallace, 1964) , but also emotion detection (Vaassen and Daelemans, 2011) and plagiarism analysis (Stein et al., 2007) . 
Therefore, from a methodological point of view, Deceptive Language Analysis is a particular application of stylometry, exactly like other branches of Forensic Linguistics.", "cite_spans": [ { "start": 177, "end": 194, "text": "(Coulthard, 2004;", "ref_id": "BIBREF3" }, { "start": 195, "end": 215, "text": "Koppel et al., 2006;", "ref_id": "BIBREF14" }, { "start": 216, "end": 238, "text": "Peersman et al., 2011;", "ref_id": "BIBREF18" }, { "start": 239, "end": 263, "text": "Solan and Tiersma, 2004)", "ref_id": "BIBREF21" }, { "start": 287, "end": 315, "text": "(Luyckx and Daelemans, 2008;", "ref_id": "BIBREF15" }, { "start": 316, "end": 344, "text": "Mosteller and Wallace, 1964)", "ref_id": "BIBREF16" }, { "start": 374, "end": 403, "text": "(Vaassen and Daelemans, 2011)", "ref_id": "BIBREF24" }, { "start": 428, "end": 448, "text": "(Stein et al., 2007)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Stylometry", "sec_num": "2.2" }, { "text": "3 Data set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Stylometry", "sec_num": "2.2" }, { "text": "In order to study deceptive language, we created the DECOUR -DEception in COURt -corpus, described in more detail in Fornaciari and Poesio (2012) . DECOUR is a corpus consisting of the transcripts of 35 hearings held in four Italian Courts: Bologna, Bolzano, Prato and Trento. These transcripts report verbatim the statements issued by a total of 31 different subjects -four of whom were heard twice. All the hearings come from criminal proceedings for calumny and false testimony (artt. 
368 and 372 of the Italian Criminal Code).", "cite_spans": [ { "start": 109, "end": 137, "text": "Fornaciari and Poesio (2012)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "False testimonies in Court", "sec_num": "3.1" }, { "text": "In particular, the hearings of DECOUR come mainly from two situations:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False testimonies in Court", "sec_num": "3.1" }, { "text": "\u2022 the defendant in a criminal proceeding tries to use calumny against someone;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False testimonies in Court", "sec_num": "3.1" }, { "text": "\u2022 a witness in a criminal proceeding lies for some reason.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False testimonies in Court", "sec_num": "3.1" }, { "text": "In both cases, a new criminal proceeding arises, in which the subjects may or may not issue new statements, and whose body of evidence includes the transcript of the hearing held in the previous proceeding. The crucial point is that DECOUR only includes text from individuals who in the end have been found guilty. Hence the proceeding ends with a judgment of the Court which summarizes the facts, pointing out precisely the lies told by the speaker in order to establish his punishment. Thanks to the transcripts of the hearing and to the final judgment of the Court, it is possible to annotate the statements of the speakers on the basis of their truthfulness or untruthfulness, as follows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "False testimonies in Court", "sec_num": "3.1" }, { "text": "The hearings are dialogs, in which the judge, the public prosecutor and the lawyer pose questions to the witness/defendant who in turn has to give them answers. These answers are the object of investigation of this study. 
Each answer is considered a turn, delimited by the end of the previous and the beginning of the following intervention of another individual. Each turn consists of one or more utterances, delimited by punctuation marks: periods, ellipses, question marks and exclamation marks. Utterances are the analysis unit of DECOUR and have been annotated as false, true or uncertain. In order to verify the agreement in the judgments about truthfulness or untruthfulness of the utterances, three annotators separately annotated about 600 utterances. The agreement study concerning the three classes of utterances, described in detail in (Fornaciari and Poesio, 2012) , showed that the agreement value was k=.57. Instead, if the problem is reduced to a binary task -that is, if true and uncertain utterances are collapsed into a single category of not-false utterances, opposed to the category of false ones -the agreement value is k=.64.", "cite_spans": [ { "start": 850, "end": 879, "text": "(Fornaciari and Poesio, 2012)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Annotation and agreement", "sec_num": "3.2" }, { "text": "The whole corpus has been tokenized and sensitive data have been anonymized, in accordance with our prior agreement with the Courts. Then DECOUR has been lemmatized and POS-tagged using a version of TreeTagger 1 (Schmid, 1994) trained for Italian.", "cite_spans": [ { "start": 213, "end": 227, "text": "(Schmid, 1994)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Corpus statistics", "sec_num": "3.3" }, { "text": "DECOUR is made up of 3015 utterances, which come from 2094 turns. 945 utterances have been annotated as false, 1202 as true and 868 as uncertain. 
The size of DECOUR is 41819 tokens, including punctuation blocks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus statistics", "sec_num": "3.3" }, { "text": "In this Section we first summarize our classification methods from previous work, then discuss the three experiments we carried out.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "Each utterance is described by a feature vector. As in our previous studies (Fornaciari and Poesio, 2011a,b) three kinds of features were used.", "cite_spans": [ { "start": 76, "end": 108, "text": "(Fornaciari and Poesio, 2011a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Classification methods", "sec_num": "4.1" }, { "text": "First of all, the feature vectors include very basic linguistic information such as the length of utterances (with and without punctuation) and the number of words longer than six letters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification methods", "sec_num": "4.1" }, { "text": "The second type of information is lexical features. These features have been collected making use of LIWC -Linguistic Inquiry and Word Count, a linguistic tool developed by Pennebaker et al. (2001) and widely employed in deception detection (Newman et al., 2003; Strapparava and Mihalcea, 2009) . LIWC is based on a dictionary in which each term is associated with an appropriate set of syntactic, semantic and/or psychological categories. When a text is analysed with LIWC, the tokens of the text are compared with the LIWC dictionary. Every time a word present in the dictionary is found, the count of the corresponding categories is incremented. The output is a profile of the text based on the incidence rates of the different categories in the text itself. LIWC also includes dictionaries for several languages, amongst which Italian (Agosti and Rellini, 2007) . 
Therefore it has been possible to apply LIWC to Italian deceptive texts, and the approximately 80 linguistic dimensions which constitute the Italian LIWC dictionary have been included as features of the vectors.", "cite_spans": [ { "start": 173, "end": 197, "text": "Pennebaker et al. (2001)", "ref_id": "BIBREF19" }, { "start": 241, "end": 262, "text": "(Newman et al., 2003;", "ref_id": "BIBREF17" }, { "start": 263, "end": 294, "text": "Strapparava and Mihalcea, 2009)", "ref_id": "BIBREF23" }, { "start": 853, "end": 879, "text": "(Agosti and Rellini, 2007)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Classification methods", "sec_num": "4.1" }, { "text": "Lastly, frequencies of lemma and part-of-speech n-grams were used. Five kinds of n-grams of lemmas and parts of speech were taken into consideration: from unigrams to pentagrams. These frequency lists come from the part of DECOUR employed as the training set. More precisely, they come from the utterances of the training set annotated as true or false, while the uncertain utterances have not been considered. In order to emphasize the collection of features effective in classifying true and false statements, frequency lists of n-grams have been built considering true and false utterances separately. This means that, in the training set, homologous frequency lists of n-grams -unigrams, bigrams and so on -have been collected from the subset of true utterances and from the subset of false ones. From these lists, the most frequent n-grams have been collected, in decreasing amounts according to the length of the n-grams. Table 1 shows in detail the number of the most frequent lemma and part-of-speech n-grams collected for each n-gram length. Then the pairs of frequency lists were merged into one. This procedure implies that the number of surface features is not determined a priori. 
In fact the 195 features indicated in Table 1 , which are collected from true and false utterances, are unified in a list where each feature appears only once. Therefore, in the theoretical case of perfect identity between the features of true and false utterances, a final list with the same 195 features would be obtained. In the opposite case, if the n-grams from true and false utterances were completely different, a list of 195 + 195 = 390 n-grams would result. The aim of this procedure is to get a list of n-grams which is as representative as possible of the features of true and false utterances. Obviously, the smaller the overlap between the features of the two subsets, the greater the difference in the appearance of true and false utterances, and the greater the hope of reaching a good performance in the classification task.", "cite_spans": [], "ref_spans": [ { "start": 920, "end": 927, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 1222, "end": 1229, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Classification methods", "sec_num": "4.1" }, { "text": "We used the Support Vector Machine implementation in R (Dimitriadou et al., 2011) . As specified above, the classes of the utterances are false vs. not-false, where the category of not-false utterances results from the union of the true and uncertain ones.", "cite_spans": [ { "start": 55, "end": 81, "text": "(Dimitriadou et al., 2011)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Classification methods", "sec_num": "4.1" }, { "text": "With the aim of training models able to classify the utterances of DECOUR as false or not-false, the corpus has been divided as follows: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus division", "sec_num": "4.2" }, { "text": "Three experiments were carried out. In the first experiment, the entire corpus was used to train and test our algorithms. 
In the second and third experiments, sub-corpora were identified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4.3" }, { "text": "In the first experiment, the classification task was carried out simply employing the training set and the test set as described above, in order to provide a reference point for the following experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 1: whole test set", "sec_num": "4.3.1" }, { "text": "In the second experiment, a more homogeneous subset of DECOUR was obtained by automatically identifying and removing outliers. This was done in an unsupervised way by building vector descriptions of the hearings and clustering them. The features of these vectors were the same n-grams described above, collected from the whole corpus. This data set has been transformed into a matrix of between-hearing distances and a Multi-Dimensional Scaling -MDS -function has been applied to this matrix (Baayen, 2008) . Figure 1 shows the plot of the MDS function. Each entity corresponds to a hearing, and is represented by a letter indicating the sex of the speaker. Glancing at Figure 1 , it is possible to notice that, in general, almost all the hearings are quite close -that is, similar -to each other. Only three hearings seem to be clearly more peripheral than all the others, particularly the three most to the left in Figure 1 . These hearings have been considered outliers and excluded from the experiment. They are two hearings from Trento and one from Prato. 
In practice, this means that the training set, coming from the hearings of Bologna and Bolzano, remained the same as in the previous experiment, while two hearings were removed from the test set, which was constituted only by the hearings of Trento.", "cite_spans": [ { "start": 482, "end": 496, "text": "(Baayen, 2008)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 499, "end": 507, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 665, "end": 673, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 912, "end": 920, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Experiment 2: no outliers", "sec_num": "4.3.2" }, { "text": "Unlike the previous one, the third experiment does not rely on a subset of data identified automatically. Instead, the subset comes from personal information concerning the subjects involved in the hearings. In fact, their sex, place of birth and age at the moment of the hearing are known. In this paper, places of birth and age have not been taken into consideration, since grouping them into reliable categories raises issues that do not have a straightforward solution, and the size of the corpus subsets which would be obtained must also be taken into account.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 3: only male speakers", "sec_num": "4.3.3" }, { "text": "Therefore this experiment has been carried out taking into consideration only the sex of the subjects, and in particular it concerned only the hearings involving men. This meant reducing the training set considerably: seven hearings involving women were present and were therefore removed. 
From the test set, instead, just three hearings were removed, one involving a woman and two involving a transsexual.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 3: only male speakers", "sec_num": "4.3.3" }, { "text": "The chance levels for the various test sets have been calculated through Monte Carlo simulations, one specific to each experiment. In each simulation, 100000 sequences of random predictions were produced, each with the same number of utterances and the same rate of false utterances as the test set employed in that experiment. Each random output was then compared to the real sequence of false and not-false utterances of the test set, in order to count the number of correct predictions. The rate of correct answers reached by less than 0.01% of the random predictions was accepted as the chance threshold for each experiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.4" }, { "text": "As a baseline, a simple majority baseline was computed: to classify each utterance as belonging to the most numerous class in the test set (not-false).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.4" }, { "text": "The test set of the first experiment, carried out on the whole test set, was made up of 426 utterances, of which 190 were false, that is 44.60%. While the majority baseline is 55.40% accuracy, a Monte Carlo simulation applied to the test set showed that the chance level was 59.60% correct predictions. The results are shown in Table 2. The overall accuracy -almost 66% -is clearly above the chance level, being more than six points greater. In the second experiment, the test set without outliers was made up of 333 utterances; 141 were false, that is 42.34% of the test set. 
The majority baseline was then at 57.66%, while the chance threshold determined with a Monte Carlo simulation had an accuracy rate of 61.26%. Table 3 shows the results of the analyses. Taking the outliers out of the test set allows the best performance of the three experiments to be reached. In fact the accuracy is more than 69%, which is more than eight points above the highest chance level of 61.26%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In the third experimental condition, where only male speakers were considered, the training set was made up of 13 hearings and the test set of 6 hearings. The utterances in the test set were 307, of which 117 were false, that is 38.11% of the test set. In this last case, the majority baseline is at 61.89% accuracy, while according to a Monte Carlo simulation the chance level was 63.19%. The overall accuracy reached in this experiment, shown in Table 4 , was more than 68%: higher than in the first experiment, but in this case the lower proportion of false utterances in the test set led to higher chance thresholds. Therefore the difference between performance and the chance level of 63.19% is now the smallest of all the experiments: just five and a half points. From the point of view of the detection of false utterances, although with internal differences, all the experiments show the same overall pattern. In particular, the weak point in performance is always the recall of false utterances, which remains more or less at 30%. The good news, instead, comes from the precision in recognizing them, which is close to 80%. 
Regarding not-false utterances, the recall is always good, never lower than 93%, while the precision is close to 65%.", "cite_spans": [], "ref_spans": [ { "start": 451, "end": 458, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "The goal of this paper was to verify whether restricting the analysis to more homogeneous subsets could improve the accuracy of our models. The results are mixed. On the one hand, taking the outliers out of the corpus results in a remarkable improvement in classification accuracy, relative to the performance of the models tested on the whole test set. On the other hand, in other cases -most clearly, when considering only speakers of the male gender -we find no difference; our hypothesis is that any potential advantage derived from the increased homogeneity is offset by the reduction in training material (seven hearings are removed in this case). So the conclusion may be that increasing homogeneity is effective provided that the remaining set is still sufficiently large.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Regarding the models' capacity to detect false rather than true utterances, the difference between the respective recalls is noteworthy. In fact, while the recall of not-false utterances is very high, that of false ones is poor. In other words, the results indicate that a number of false utterances are effectively so similar to the not-false ones that the models are not able to detect them. 
One challenge for future studies is surely to find a way to detect some currently neglected aspect of deceptive language, which could be employed to widen the set of false utterances that can be recognized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "On the other hand, in the two more reliable experiments the precision in detecting false utterances was about 80%. This could suggest that a subset of false utterances exists whose features are in some way peculiar and different from those of not-false ones. The data seem to show that this subset could be roughly one third of all the false utterances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "However, this study was not aimed at estimating the likely performance of the models in a hypothetical practical application. The experimental conditions taken into consideration, in fact, are considerably different from those that would be present in a real-life analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The main reason for this difference is that in a real case classifying every utterance of a hearing would not be required. A lot of statements are irrelevant or perfectly well known to be true. Furthermore it would not make sense to classify all the utterances which have no propositional value, such as questions or meta-communicative acts. From the perspective of deception detection in a real-life scenario, classifying this last kind of utterance is useless. Only a subset of the propositional statements should be classified. In a previous study, carried out on a selection of utterances with propositional value from a part of DECOUR, machine learning models reached an accuracy of 75% in the classification task (Fornaciari and Poesio, 2011b) . 
In that study, the precision and recall for false utterances were also quite similar to those of this study, the former being about 90% and the latter about 50%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "From a theoretical point of view, the present study suggests that it is possible to be relatively confident in the effectiveness of the models in the analysis of any kind of utterance. This means that deceptive language is at least in part different from truthful language, and stylometric analyses can detect it. If this is true, the rate of precision with which false statements are correctly classified should clearly exceed the chance level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Also in this case, the Monte Carlo simulation is taken as the reference point. Out of the 100000 random trials carried out to determine the baseline for the first experiment, less than 0.01% had a precision greater than 57.90% in classifying false utterances, compared with a model precision of 80.82%. Regarding the second experiment, the threshold for precision on false utterances was 58.15%, against a model precision of 80.95%. In the third experiment, the baseline for precision was 55.55% and the models' performance was 74.42%. In every experiment the gap is about twenty percentage points. The same cannot be said about the recall of false utterances: the baselines of the Monte Carlo simulations in the three experiments were about 51-54%, while the best model performance (in the second experiment) did not exceed 36%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "The precision reached in recognizing false statements shows that the models were reliable in detecting deceptive language. 
On the other hand, a considerable number of false utterances were not identified. The challenge for the future is to understand to what extent it will be possible to improve recall in detecting false utterances without losing, and hopefully while improving, the corresponding precision. At that point, albeit in specific contexts, a computational linguistic approach could actually be employed to detect deception in real-life scenarios.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "6" }, { "text": "Creating DECOUR was a very complex task, and it would not have been possible without the kind collaboration of many people. Many thanks to Dr. Francesco Scutellari, President of the Court of Bologna, to Dr. Heinrich Zanon, President of the Court of Bolzano, to Dr. Francesco Antonio Genovese, President of the Court of Prato and to Dr. Sabino Giarrusso, President of the Court of Trento.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "7" }, { "text": "http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/DecisionTreeTagger.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The Italian LIWC Dictionary", "authors": [ { "first": "A", "middle": [], "last": "Agosti", "suffix": "" }, { "first": "A", "middle": [], "last": "Rellini", "suffix": "" } ], "year": 2007, "venue": "LIWC.net", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Agosti, A. and Rellini, A. (2007). The Italian LIWC Dictionary.
Technical report, LIWC.net, Austin, TX.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Analyzing linguistic data: a practical introduction to statistics using R", "authors": [ { "first": "R", "middle": [], "last": "Baayen", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baayen, R. (2008). Analyzing linguistic data: a practical introduction to statistics using R. Cambridge University Press.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Verification and implementation of language-based deception indicators in civil and criminal narratives", "authors": [ { "first": "J", "middle": [], "last": "Bachenko", "suffix": "" }, { "first": "E", "middle": [], "last": "Fitzpatrick", "suffix": "" }, { "first": "M", "middle": [], "last": "Schonwetter", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "41--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bachenko, J., Fitzpatrick, E., and Schonwetter, M. (2008). Verification and implementation of language-based deception indicators in civil and criminal narratives. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING '08, pages 41-48, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Author identification, idiolect, and linguistic uniqueness", "authors": [ { "first": "M", "middle": [], "last": "Coulthard", "suffix": "" } ], "year": 2004, "venue": "Applied Linguistics", "volume": "25", "issue": "4", "pages": "431--447", "other_ids": {}, "num": null, "urls": [], "raw_text": "Coulthard, M. (2004). Author identification, idiolect, and linguistic uniqueness.
Applied Linguistics, 25(4):431-447.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Classifying spatial patterns of brain activity with machine learning methods: Application to lie detection", "authors": [ { "first": "C", "middle": [], "last": "Davatzikos", "suffix": "" }, { "first": "K", "middle": [], "last": "Ruparel", "suffix": "" }, { "first": "Y", "middle": [], "last": "Fan", "suffix": "" }, { "first": "D", "middle": [], "last": "Shen", "suffix": "" }, { "first": "M", "middle": [], "last": "Acharyya", "suffix": "" }, { "first": "J", "middle": [], "last": "Loughead", "suffix": "" }, { "first": "R", "middle": [], "last": "Gur", "suffix": "" }, { "first": "D", "middle": [], "last": "Langleben", "suffix": "" } ], "year": 2005, "venue": "NeuroImage", "volume": "28", "issue": "3", "pages": "663--668", "other_ids": {}, "num": null, "urls": [], "raw_text": "Davatzikos, C., Ruparel, K., Fan, Y., Shen, D., Acharyya, M., Loughead, J., Gur, R., and Langleben, D. (2005). Classifying spatial patterns of brain activity with machine learning methods: Application to lie detection. NeuroImage, 28(3):663-668.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Cues to deception", "authors": [ { "first": "B", "middle": [ "M" ], "last": "De Paulo", "suffix": "" }, { "first": "J", "middle": [ "J" ], "last": "Lindsay", "suffix": "" }, { "first": "B", "middle": [ "E" ], "last": "Malone", "suffix": "" }, { "first": "L", "middle": [], "last": "Muhlenbruck", "suffix": "" }, { "first": "K", "middle": [], "last": "Charlton", "suffix": "" }, { "first": "Cooper", "middle": [], "last": "", "suffix": "" }, { "first": "H", "middle": [], "last": "", "suffix": "" } ], "year": 2003, "venue": "Psychological Bulletin", "volume": "129", "issue": "1", "pages": "74--118", "other_ids": {}, "num": null, "urls": [], "raw_text": "De Paulo, B. M., Lindsay, J. J., Malone, B. E., Muhlenbruck, L., Charlton, K., and Cooper, H. (2003). Cues to deception.
Psychological Bulletin, 129(1):74-118.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Building a forensic corpus to test language-based indicators of deception", "authors": [ { "first": "E", "middle": [], "last": "Fitzpatrick", "suffix": "" }, { "first": "J", "middle": [], "last": "Bachenko", "suffix": "" } ], "year": 2009, "venue": "Language and Computers", "volume": "71", "issue": "1", "pages": "183--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fitzpatrick, E. and Bachenko, J. (2009). Building a forensic corpus to test language-based indicators of deception. Language and Computers, 71(1):183-196.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Lexical vs. surface features in deceptive language analysis", "authors": [ { "first": "T", "middle": [], "last": "Fornaciari", "suffix": "" }, { "first": "M", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the ICAIL 2011 Workshop Applying Human Language Technology to the Law", "volume": "", "issue": "", "pages": "2--8", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fornaciari, T. and Poesio, M. (2011a). Lexical vs. surface features in deceptive language analysis. In Proceedings of the ICAIL 2011 Workshop Applying Human Language Technology to the Law, AHLTL 2011, pages 2-8, Pittsburgh, USA.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Sincere and deceptive statements in Italian criminal proceedings", "authors": [ { "first": "T", "middle": [], "last": "Fornaciari", "suffix": "" }, { "first": "M", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the International Association of Forensic Linguists Tenth Biennial Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fornaciari, T. and Poesio, M. (2011b). Sincere and deceptive statements in Italian criminal proceedings.
In Proceedings of the International Association of Forensic Linguists Tenth Biennial Conference, IAFL 2011, Cardiff, Wales, UK.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "DECOUR: a corpus of deceptive statements in Italian courts", "authors": [ { "first": "T", "middle": [], "last": "Fornaciari", "suffix": "" }, { "first": "M", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the eighth International Conference on Language Resources and Evaluation, LREC 2012", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fornaciari, T. and Poesio, M. (2012). DECOUR: a corpus of deceptive statements in Italian courts. In Proceedings of the eighth International Conference on Language Resources and Evaluation, LREC 2012. In press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Human behavior and deception detection", "authors": [ { "first": "M", "middle": [ "G" ], "last": "Frank", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Menasco", "suffix": "" }, { "first": "M", "middle": [], "last": "Sullivan", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frank, M. G., Menasco, M. A., and O'Sullivan, M. (2008). Human behavior and deception detection. In Voeller, J. G., editor, Wiley Handbook of Science and Technology for Homeland Security.
John Wiley & Sons, Inc.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Neural correlates of different types of deception: An fMRI investigation", "authors": [ { "first": "G", "middle": [], "last": "Ganis", "suffix": "" }, { "first": "S", "middle": [], "last": "Kosslyn", "suffix": "" }, { "first": "S", "middle": [], "last": "Stose", "suffix": "" }, { "first": "W", "middle": [], "last": "Thompson", "suffix": "" }, { "first": "Yurgelun-Todd", "middle": [], "last": "", "suffix": "" }, { "first": "D", "middle": [], "last": "", "suffix": "" } ], "year": 2003, "venue": "Cerebral Cortex", "volume": "13", "issue": "8", "pages": "830--836", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ganis, G., Kosslyn, S., Stose, S., Thompson, W., and Yurgelun-Todd, D. (2003). Neural correlates of different types of deception: An fMRI investigation. Cerebral Cortex, 13(8):830-836.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatic, Multimodal Evaluation of Human Interaction", "authors": [ { "first": "M", "middle": [ "L" ], "last": "Jensen", "suffix": "" }, { "first": "T", "middle": [ "O" ], "last": "Meservy", "suffix": "" }, { "first": "J", "middle": [ "K" ], "last": "Burgoon", "suffix": "" }, { "first": "J", "middle": [ "F" ], "last": "Nunamaker", "suffix": "" } ], "year": 2010, "venue": "Group Decision and Negotiation", "volume": "19", "issue": "4", "pages": "367--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jensen, M. L., Meservy, T. O., Burgoon, J. K., and Nunamaker, J. F. (2010). Automatic, Multimodal Evaluation of Human Interaction.
Group Decision and Negotiation, 19(4):367-389.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Effects of age and gender on blogging", "authors": [ { "first": "M", "middle": [], "last": "Koppel", "suffix": "" }, { "first": "J", "middle": [], "last": "Schler", "suffix": "" }, { "first": "S", "middle": [], "last": "Argamon", "suffix": "" }, { "first": "J", "middle": [], "last": "Pennebaker", "suffix": "" } ], "year": 2006, "venue": "AAAI 2006 Spring Symposium on Computational Approaches to Analysing Weblogs", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koppel, M., Schler, J., Argamon, S., and Pennebaker, J. (2006). Effects of age and gender on blogging. In AAAI 2006 Spring Symposium on Computational Approaches to Analysing Weblogs.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Authorship attribution and verification with many authors and limited data", "authors": [ { "first": "K", "middle": [], "last": "Luyckx", "suffix": "" }, { "first": "W", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", "volume": "1", "issue": "", "pages": "513--520", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luyckx, K. and Daelemans, W. (2008). Authorship attribution and verification with many authors and limited data. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING '08, pages 513-520, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Inference and Disputed Authorship: The Federalist", "authors": [ { "first": "F", "middle": [], "last": "Mosteller", "suffix": "" }, { "first": "D", "middle": [], "last": "Wallace", "suffix": "" } ], "year": 1964, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mosteller, F.
and Wallace, D. (1964). Inference and Disputed Authorship: The Federalist. Addison-Wesley.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Lying Words: Predicting Deception From Linguistic Styles", "authors": [ { "first": "M", "middle": [ "L" ], "last": "Newman", "suffix": "" }, { "first": "J", "middle": [ "W" ], "last": "Pennebaker", "suffix": "" }, { "first": "D", "middle": [ "S" ], "last": "Berry", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Richards", "suffix": "" } ], "year": 2003, "venue": "Personality and Social Psychology Bulletin", "volume": "29", "issue": "5", "pages": "665--675", "other_ids": {}, "num": null, "urls": [], "raw_text": "Newman, M. L., Pennebaker, J. W., Berry, D. S., and Richards, J. M. (2003). Lying Words: Predicting Deception From Linguistic Styles. Personality and Social Psychology Bulletin, 29(5):665-675.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Age and gender prediction on netlog data", "authors": [ { "first": "C", "middle": [], "last": "Peersman", "suffix": "" }, { "first": "W", "middle": [], "last": "Daelemans", "suffix": "" }, { "first": "L", "middle": [], "last": "Van Vaerenbergh", "suffix": "" } ], "year": 2011, "venue": "Presented at the 21st Meeting of Computational Linguistics in the Netherlands (CLIN21)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peersman, C., Daelemans, W., and Van Vaerenbergh, L. (2011). Age and gender prediction on Netlog data. Presented at the 21st Meeting of Computational Linguistics in the Netherlands (CLIN21), Ghent, Belgium.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Linguistic Inquiry and Word Count (LIWC): LIWC2001. 
Lawrence Erlbaum Associates", "authors": [ { "first": "J", "middle": [ "W" ], "last": "Pennebaker", "suffix": "" }, { "first": "M", "middle": [ "E" ], "last": "Francis", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Booth", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pennebaker, J. W., Francis, M. E., and Booth, R. J. (2001). Linguistic Inquiry and Word Count (LIWC): LIWC2001. Lawrence Erlbaum Associates, Mahwah.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Probabilistic part-of-speech tagging using decision trees", "authors": [ { "first": "H", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 1994, "venue": "Proceedings of International Conference on New Methods in Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schmid, H. (1994). Probabilistic part-of-speech tagging using decision trees. In Proceedings of International Conference on New Methods in Language Processing.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Author identification in American courts", "authors": [ { "first": "L", "middle": [ "M" ], "last": "Solan", "suffix": "" }, { "first": "P", "middle": [ "M" ], "last": "Tiersma", "suffix": "" } ], "year": 2004, "venue": "Applied Linguistics", "volume": "25", "issue": "4", "pages": "448--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Solan, L. M. and Tiersma, P. M. (2004). Author identification in American courts.
Applied Linguistics, 25(4):448-465.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Plagiarism analysis, authorship identification, and near-duplicate detection PAN'07", "authors": [ { "first": "B", "middle": [], "last": "Stein", "suffix": "" }, { "first": "M", "middle": [], "last": "Koppel", "suffix": "" }, { "first": "E", "middle": [], "last": "Stamatatos", "suffix": "" } ], "year": 2007, "venue": "SIGIR Forum", "volume": "41", "issue": "", "pages": "68--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stein, B., Koppel, M., and Stamatatos, E. (2007). Plagiarism analysis, authorship identification, and near-duplicate detection PAN'07. SIGIR Forum, 41:68-71.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The Lie Detector: Explorations in the Automatic Recognition of Deceptive Language", "authors": [ { "first": "C", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2009, "venue": "Proceeding ACLShort '09 -Proceedings of the ACL-IJCNLP 2009 Conference Short Papers", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Strapparava, C. and Mihalcea, R. (2009). The Lie Detector: Explorations in the Automatic Recognition of Deceptive Language. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, ACLShort '09.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Automatic emotion classification for interpersonal communication", "authors": [ { "first": "F", "middle": [], "last": "Vaassen", "suffix": "" }, { "first": "W", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2011, "venue": "2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vaassen, F. and Daelemans, W. (2011). Automatic emotion classification for interpersonal communication.
In 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis (WASSA 2011).", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Criteria-based content analysis -A Qualitative Review of the First 37 Studies", "authors": [ { "first": "A", "middle": [], "last": "Vrij", "suffix": "" } ], "year": 2005, "venue": "Psychology, Public Policy, and Law", "volume": "11", "issue": "1", "pages": "3--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vrij, A. (2005). Criteria-based content analysis - A Qualitative Review of the First 37 Studies. Psychology, Public Policy, and Law, 11(1):3-41.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A Statistical Language Modeling Approach to Online Deception Detection", "authors": [ { "first": "L", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Y", "middle": [], "last": "Shi", "suffix": "" }, { "first": "D", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2008, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "20", "issue": "8", "pages": "1077--1081", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou, L., Shi, Y., and Zhang, D. (2008). A Statistical Language Modeling Approach to Online Deception Detection. IEEE Transactions on Knowledge and Data Engineering, 20(8):1077-1081.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "uris": null, "text": "Multi-Dimensional Scaling of DECOUR. Each entity corresponds to a hearing; the letters represent the sex of the speakers. The features were computed from the whole corpus (not from the test set only); their values were the mean values of the frequencies of the utterances belonging to the hearing." }, "TABREF0": { "html": null, "num": null, "type_str": "table", "text": "The most frequent n-grams collected", "content": "
N-grams       Lemmas   POS   Total
Unigrams      50       15    65
Bigrams       40       12    52
Trigrams      30       9     39
Tetragrams    20       6     26
Pentagrams    10       3     13
Total         150      45    195
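As a rough illustration of how such a feature inventory could be assembled (the helper `top_ngrams`, the toy lemma list, and the `cutoffs` dictionary are illustrative assumptions, not the study's code; only the per-order counts 50/40/30/20/10 come from the table above):

```python
from collections import Counter

def top_ngrams(tokens, n, k):
    """Return the k most frequent n-grams (as tuples) of a token sequence."""
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return [g for g, _ in grams.most_common(k)]

# Toy lemma stream; in the study the streams would be the lemmatized
# and POS-tagged utterances of the corpus.
lemmas = "the cat sat on the mat and the cat sat".split()
cutoffs = {1: 50, 2: 40, 3: 30, 4: 20, 5: 10}  # per-order cut-offs from the table
features = {n: top_ngrams(lemmas, n, k) for n, k in cutoffs.items()}
```

The same procedure, run over the lemma stream and the POS stream separately, would yield the two columns of the table.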
" }, "TABREF1": { "html": null, "num": null, "type_str": "table", "text": "Training set The 20 hearings coming from the Courts of Bologna and Bolzano have been employed as training set. In terms of analysis units, this means 2279 utterances, that is 75.59% of DECOUR. The features of the vectors come from this set of data.", "content": "
Test set The 9 hearings of the Court of Trento have been employed as the test set, in order to evaluate the effectiveness of the trained models. This test set was made up of 426 utterances, which are 14.13% of DECOUR.

Development set The 6 hearings of the Court of Prato have been employed as the development set during the phase of choosing and calibrating the vector features; therefore, this set of utterances is not directly involved in the results of the following experiments. The development set consisted of 310 utterances, that is, 10.28% of DECOUR.
In the various experimental conditions, some subsets of DECOUR have been taken into consideration. Hence, different hearings have been removed from the test and/or training set in order to carry out the different experiments. Since the test sets vary across the experiments, a different chance level has been determined for each of them, in order to evaluate the effectiveness of the models' performance.
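Such a chance level could be estimated along these lines (a minimal sketch; the function name `chance_level`, the trial count, and the quantile are illustrative assumptions, not the study's exact randomization procedure):

```python
import random

def chance_level(gold_labels, n_trials=10_000, quantile=0.9999, seed=0):
    """Estimate a chance-level accuracy by scoring random permutations
    of the gold labels against themselves and taking a high quantile
    of the resulting accuracy distribution."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    gold = list(gold_labels)
    scores = []
    for _ in range(n_trials):
        guess = gold[:]
        rng.shuffle(guess)  # a random classifier respecting the label distribution
        scores.append(sum(g == t for g, t in zip(guess, gold)) / len(gold))
    scores.sort()
    return scores[int(quantile * (len(scores) - 1))]
```

With 100,000 trials and the 99.99th percentile, this corresponds to the "fewer than 0.01% of random trials exceeded the threshold" reading used in the Discussion.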
" }, "TABREF2": { "html": null, "num": null, "type_str": "table", "text": "Whole training and test set", "content": "
                          Correctly classified   Incorrectly classified   Precision   Recall    F-measure
False utterances          59                     131                      80.82%      31.05%    44.86%
True utterances           222                    14                       62.89%      94.07%    75.38%
Total                     281                    145
Total percent             65.96%                 34.04%
Monte Carlo simulation    59.60%
Majority baseline         55.40%
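The scores in this table can be recomputed from the raw counts (59 false utterances correctly identified, 131 missed, and 14 true utterances misclassified as false); the helper below is an illustrative sketch, not code from the study:

```python
def prf(tp, fn, fp):
    """Precision, recall and F-measure from binary confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

# Counts for false utterances in the first experiment (whole training and test set).
p, r, f = prf(tp=59, fn=131, fp=14)
```

The resulting values, about 80.82% precision, 31.05% recall, and 44.9% F-measure, match the first row of the table up to rounding.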
" }, "TABREF3": { "html": null, "num": null, "type_str": "table", "text": "", "content": "
Test set without outliers

                          Correctly classified   Incorrectly classified   Precision   Recall    F-measure
False utterances          51                     90                       80.95%      36.17%    50.00%
True utterances           180                    12                       66.67%      93.75%    77.92%
Total                     231                    102
Total percent             69.37%                 30.63%
Monte Carlo simulation    61.26%
Majority baseline         57.66%
" }, "TABREF4": { "html": null, "num": null, "type_str": "table", "text": "Training and test set with only male speakers", "content": "
                          Correctly classified   Incorrectly classified   Precision   Recall    F-measure
False utterances          32                     85                       74.42%      27.35%    40.00%
True utterances           179                    11                       67.80%      94.21%    78.85%
Total                     211                    96
Total percent             68.73%                 31.27%
Monte Carlo simulation    63.19%
Majority baseline         61.89%
" } } } }