Dataset columns (string columns show min-max character lengths):

  gem_id               string (37-41)
  paper_id             string (3-4)
  paper_title          string (19-183)
  paper_abstract       string (168-1.38k)
  paper_content        sequence
  paper_headers        sequence
  slide_id             string (37-41)
  slide_title          string (2-85)
  slide_content_text   string (11-2.55k)
  target               string (11-2.55k)
  references           list
GEM-SciDuet-train-60#paper-1117#slide-11
1117
A Multi-sentiment-resource Enhanced Attention Network for Sentiment Classification
Deep learning approaches for sentiment classification do not fully exploit sentiment linguistic knowledge. In this paper, we propose a Multi-sentiment-resource Enhanced Attention Network (MEAN) to alleviate the problem by integrating three kinds of sentiment linguistic knowledge (i.e., sentiment lexicon, negation words, and intensity words) into the deep neural network via attention mechanisms. By using various types of sentiment resources, MEAN utilizes sentiment-relevant information from different representation subspaces, which makes it more effective at capturing the overall semantics of the sentiment, negation, and intensity words for sentiment prediction. The experimental results demonstrate that MEAN has robust superiority over strong competitors.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93 ], "paper_content_text": [ "Introduction Sentiment classification is an important task of natural language processing (NLP), aiming to classify the sentiment polarity of a given text as positive, negative, or more fine-grained classes.", "It has obtained considerable attention due to its broad applications in natural language processing (Hao et al., 2012; .", "Most existing studies set up sentiment classifiers using supervised machine learning approaches, such as support vector machine (SVM) (Pang et al., 2002) , convolutional neural network (CNN) (Kim, 2014; Bonggun et al., 2017) , long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997; Qian et al., 2017) , Tree-LSTM (Tai et al., 2015) , and attention-based methods (Zhou et al., 2016; Yang et al., 2016; Lin et al., 2017; Du et al., 2017) .", "Despite the remarkable progress made by the previous work, we argue that sentiment analysis still remains a challenge.", "Sentiment resources including sentiment lexicon, negation words, intensity words play a crucial role in traditional sentiment classification approaches (Maks and Vossen, 2012; Duyu et al., 2014) .", "Despite its usefulness, to date, the sentiment linguistic knowledge has been underutilized in most recent deep neural network models (e.g., CNNs and LSTMs).", "In this work, we propose a Multi-sentimentresource Enhanced Attention Network (MEAN) for sentence-level sentiment classification to integrate many kinds of sentiment linguistic knowledge into deep neural networks via multi-path attention mechanism.", "Specifically, we first design a coupled word embedding module to model the word representation from character-level and word-level semantics.", "This can help to capture the morphological information such as prefixes and suffixes of words.", "Then, we propose a multisentiment-resource attention module to learn more comprehensive and meaningful sentiment-specific sentence representation by using the three types of sentiment resource words as attention sources attending to the context words respectively.", "In this way, we can attend to different sentimentrelevant information from different representation subspaces implied by different types of sentiment sources and capture the overall semantics of the sentiment, negation and intensity words for sentiment prediction.", "The main contributions of this paper are summarized as follows.", "First, we design a coupled word embedding obtained from character-level embedding and word-level embedding to capture both the character-level morphological information and word-level semantics.", "Second, we propose a multi-sentiment-resource attention module to learn more comprehensive sentiment-specific sentence representation from multiply subspaces implied by three kinds of sentiment resources including sentiment lexicon, intensity words, negation words.", "Finally, the experimental results show that MEAN consistently outperforms competitive methods.", "Model Our proposed MEAN model consists of three key components: coupled word embedding module, multi-sentiment-resource attention module, sentence classifier module.", "In the rest of this section, we will 
elaborate these three parts in details.", "Coupled Word Embedding To exploit the sentiment-related morphological information implied by some prefixes and suffixes of words (such as \"Non-\", \"In-\", \"Im-\"), we design a coupled word embedding learned from character-level embedding and word-level embedding.", "We first design a character-level convolution neural network (Char-CNN) to obtain characterlevel embedding (Zhang et al., 2015) .", "Different from (Zhang et al., 2015) , the designed Char-CNN is a fully convolutional network without max-pooling layer to capture better semantic information in character chunk.", "Specifically, we first input one-hot-encoding character sequences to a 1 × 1 convolution layer to enhance the semantic nonlinear representation ability of our model (Long et al., 2015) , and the output is then fed into a multi-gram (i.e.", "different window sizes) convolution layer to capture different local character chunk information.", "For word-level embedding, we use pre-trained word vectors, GloVe (Pennington et al., 2014) , to map each word to a lowdimensional vector space.", "Finally, each word is represented as a concatenation of the characterlevel embedding and word-level embedding.", "This is performed on the context words and the three types of sentiment resource words 1 , resulting in four final coupled word embedding matrices: the W c = [w c 1 , ..., w c t ] ∈ R d×t for context words, the W s = [w s 1 , ..., w s m ] ∈ R d×m for sentiment words, the W i = [w i 1 , ..., w i k ] ∈ R d×k for intensity word- s, the W n = [w n 1 , ..., w n p ] ∈ R d×p for negation words.", "Here, t, m, k, p are the length of the corresponding items respectively, and d is the embedding dimension.", "Each W is normalized to better calculate the following word correlation.", "1 To be precise, sentiment resource words include sentiment words, negation words and intensity words.", "Multi-sentiment-resource Attention Module After obtaining the coupled word embedding, we propose a multi-sentiment-resource attention mechanism to help select the crucial sentimentresource-relevant context words to build the sentiment-specific sentence representation.", "Concretely, we use the three kinds of sentiment resource words as attention sources to attend to the context words respectively, which is beneficial to capture different sentiment-relevant context words corresponding to different types of sentiment sources.", "For example, using sentiment words as attention source attending to the context words helps form the sentiment-word-enhanced sentence representation.", "Then, we combine the three kinds of sentiment-resource-enhanced sentence representations to learn the final sentiment-specific sentence representation.", "We design three types of attention mechanisms: sentiment attention, intensity attention, negation attention to model the three kinds of sentiment resources, respectively.", "In the following, we will elaborate the three types of attention mechanisms in details.", "First, inspired by (Xiong et al.)", ", we expect to establish the word-level relationship between the context words and different kinds of sentiment resource words.", "To be specific, we define the dot products among the context words and the three kinds of sentiment resource words as correlation matrices.", "Mathematically, the detailed formulation is described as follows.", "M s = (W c ) T · W s ∈ R t×m (1) M i = (W c ) T · W i ∈ R t×k (2) M n = (W c ) T · W n ∈ R t×p (3) where M s , M i , M n are the correlation matrices to 
measure the relationship among the context words and the three kinds of sentiment resource words, representing the relevance between the context words and the sentiment resource word.", "After obtaining the correlation matrices, we can compute the sentiment-resource-relevant context word representations X c s , X c i , X c n by the dot products among the context words and different types of corresponding correlation matrices.", "Meanwhile, we can also obtain the context-wordrelevant sentiment word representation matrix X s by the dot product between the correlation matrix M s and the sentiment words W s , the context-word-relevant intensity word representation matrix X i by the dot product between the intensity words W i and the correlation matrix M i , the context-word-relevant negation word representation matrix X n by the dot product between the negation words W n and the correlation matrix M n .", "The detailed formulas are presented as follows: X c s = W c M s , X s = W s (M s ) T (4) X c i = W c M i , X i = W i (M i ) T (5) X c n = W c M n , X n = W n (M n ) T (6) The final enhanced context word representation matrix is computed as: X c = X c s + X c i + X c n .", "(7) Next, we employ four independent GRU networks (Chung et al., 2015) to encode hidden states of the context words and the three types of sentiment resource words, respectively.", "Formally, given the word embedding X c , X s , X i , X n , the hidden state matrices H c , H s , H i , H n can be obtained as follows: H c = GRU (X c ) (8) H s = GRU (X s ) (9) H i = GRU (X i ) (10) H n = GRU (X n ) (11) After obtaining the hidden state matrices, the sentiment-word-enhanced sentence representation o 1 can be computed as: o 1 = t i=1 α i h c i , q s = m i=1 h s i /m (12) β([h c i ; q s ]) = u T s tanh(W s [h c i ; q s ]) (13) α i = exp(β([h c i ; q s ])) t i=1 exp(β([h c i ; q s ])) (14) where q s denotes the mean-pooling operation towards H s , β is the attention function that calculates the importance of the i-th word h c i in the context and α i indicates the importance of the ith word in the context, u s and W s are learnable parameters.", "Similarly, with the hidden states H i and H n for the intensity words and the negation words as attention sources, we can obtain the intensityword-enhanced sentence representation o 2 and the negation-word-enhanced sentence representation o 3 .", "The final comprehensive sentiment-specific sentence representationõ is the composition of the above three sentiment-resource-specific sentence representations o 1 , o 2 , o 3 : o = [o 1 , o 2 , o 3 ] (15) Sentence Classifier After obtaining the final sentence representationõ, we feed it to a softmax layer to predict the sentiment label distribution of a sentence: y = exp(W o Tõ +b o ) C i=1 exp(W o Tõ +b o ) (16) whereŷ is the predicted sentiment distribution of the sentence, C is the number of sentiment labels, W o andb o are parameters to be learned.", "For model training, our goal is to minimize the cross entropy between the ground truth and predicted results for all sentences.", "Meanwhile, in order to avoid overfitting, we use dropout strategy to randomly omit parts of the parameters on each training case.", "Inspired by , we also design a penalization term to ensure the diversity of semantics from different sentiment-resourcespecific sentence representations, which reduces information redundancy from different sentiment resources attention.", "Specifically, the final loss function is presented as follows: L(ŷ, y) = − N i=1 C j=1 y j i 
log(ŷ j i ) + λ( θ∈Θ θ 2 ) (17) + µ||ÕÕ T − ψI|| 2 F O =[o 1 ; o 2 ; o 3 ] (18) where y j i is the target sentiment distribution of the sentence,ŷ j i is the prediction probabilities, θ denotes each parameter to be regularized, Θ is parameter set, λ is the coefficient for L 2 regularization, µ is a hyper-parameter to balance the three terms, ψ is the weight parameter, I denotes the the identity matrix and ||.|| F denotes the Frobenius norm of a matrix.", "Here, the first two terms of the loss function are cross-entropy function of the predicted and true distributions and L 2 regularization respectively, and the final term is a penalization term to encourage the diversity of sentiment sources.", "Experiments Datasets and Sentiment Resources Movie Review (MR) 2 and Stanford Sentiment Treebank (SST) 3 are used to evaluate our model.", "MR dataset has 5,331 positive samples and 5,331 negative samples.", "We adopt the same data split as in (Qian et al., 2017) .", "SST consists of 8,545 training samples, 1,101 validation samples, 2210 test samples.", "Each sample is marked as very negative, negative, neutral, positive, or very positive.", "Sentiment lexicon combines the sentiment words from both (Qian et al., 2017) and (Hu and Liu, 2004) , resulting in 10,899 sentiment words in total.", "We collect negation and intensity words manually as the number of these words is limited.", "Baselines In order to comprehensively evaluate the performance of our model, we list several baselines for sentence-level sentiment classification.", "RNTN: Recursive Tensor Neural Network (Socher et al., 2013 ) is used to model correlations between different dimensions of child nodes vectors.", "LSTM/Bi-LSTM: Cho et al.", "(2014) employs Long Short-Term Memory and the bidirectional variant to capture sequential information.", "Tree-LSTM: Memory cells was introduced by Tree-Structured Long Short-Term Memory (Tai et al., 2015) and gates into tree-structured neural network, which is beneficial to capture semantic relatedness by parsing syntax trees.", "CNN: Convolutional Neural Networks (Kim, 2014 ) is applied to generate task-specific sentence representation.", "NCSL: Teng et al.", "(2016) designs a Neural Context-Sensitive Lexicon (NSCL) to obtain prior sentiment scores of words in the sentence.", "LR-Bi-LSTM: Qian et al.", "(2017) imposes linguistic roles into neural networks by applying linguistic regularization on intermediate outputs with KL divergence.", "Self-attention: Lin et al.", "(2017) proposes a selfattention mechanism to learn structured sentence embedding.", "ID-LSTM: (Tianyang et al., 2018) uses reinforcement learning to learn structured sentence representation for sentiment classification.", "Implementation Details In our experiments, the dimensions of characterlevel embedding and word embedding (GloVe) are both set to 300.", "Kernel sizes of multi-gram convolution for Char-CNN are set to 2, 3, respectively.", "All the weight matrices are initialized as random orthogonal matrices, and we set all the bias vectors as zero vectors.", "We optimize the proposed model with RMSprop algorithm, using mini-batch training.", "The size of mini-batch is 60.", "The dropout rate is 0.5, and the coefficient λ of L 2 normalization is set to 10 −5 .", "µ is set to 10 −4 .", "ψ is set to 0.9.", "When there are not sentiment resource words in the sentences, all the context words are treated as sentiment resource words to implement the multi-path self-attention strategy.", "Experiment Results In our experiments, to be 
consistent with the recent baseline methods, we adopt classification accuracy as evaluation metric.", "We summarize the experimental results in Table 1 .", "Our model has robust superiority over competitors and sets stateof-the-art on MR and SST datasets.", "First, our model brings a substantial improvement over the methods that do not leverage sentiment linguistic knowledge (e.g., RNTN, LSTM, BiLSTM, C-NN and ID-LSTM) on both datasets.", "This verifies the effectiveness of leveraging sentiment linguistic resource with the deep learning algorithms.", "Second, our model also consistently outperforms LR-Bi-LSTM which integrates linguistic roles of sentiment, negation and intensity words into neural networks via the linguistic regularization.", "For example, our model achieves 2.4% improvements over the MR dataset and 0.8% improvements over the SST dataset compared to LR-Bi-LSTM.", "This is because that MEAN designs attention mechanisms to leverage sentiment resources efficiently, which utilizes the interactive information between context words and sentiment resource words.", "In order to analyze the effectiveness of each component of MEAN, we also report the ablation test in terms of discarding character-level embedding (denoted as MEAN w/o CharCNN) and sentiment words/negation words/intensity words (denoted as MEAN w/o sentiment words/negation words/intensity words).", "All the tested factors con-tribute greatly to the improvement of the MEAN.", "In particular, the accuracy decreases sharply when discarding the sentiment words.", "This is within our expectation since sentiment words are vital when classifying the polarity of the sentences.", "(Qian et al., 2017) , and the results marked with * denote the results are obtained by our implementation.", "Conclusion In this paper, we propose a novel Multi-sentimentresource Enhanced Attention Network (MEAN) to enhance the performance of sentence-level sentiment analysis, which integrates the sentiment linguistic knowledge into the deep neural network." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3.1", "3.2", "3.3", "3.4", "4" ], "paper_header_content": [ "Introduction", "Model", "Coupled Word Embedding", "Multi-sentiment-resource Attention Module", "Sentence Classifier", "Datasets and Sentiment Resources", "Baselines", "Implementation Details", "Experiment Results", "Conclusion" ] }
GEM-SciDuet-train-60#paper-1117#slide-11
Summary and Future work
Integrating sentiment resources into neural networks is effective in improving the performance of sentence-level sentiment classification. How to design more effective information-fusion methods (such as regularization, attention, etc.) is still challenging. In future work, we can consider employing position embeddings to automatically detect various sentiment resource words.
Integrating sentiment resources into neural networks is effective in improving the performance of sentence-level sentiment classification. How to design more effective information-fusion methods (such as regularization, attention, etc.) is still challenging. In future work, we can consider employing position embeddings to automatically detect various sentiment resource words.
[]
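The MEAN record above spells out its attention in Equations 12-14 of the paper_content field: a query vector q_s is mean-pooled from the sentiment-word hidden states, scored against each context hidden state through a tanh layer, and softmax-normalized into attention weights. The following NumPy sketch illustrates just that scoring step; the function name, the toy shapes, and the parameter names `H_c`, `H_s`, `W_s`, `u_s` are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def sentiment_attention(H_c, H_s, W_s, u_s):
    """Sketch of MEAN's Eq. 12-14: attend over context hidden states H_c
    (t x d) using the mean-pooled sentiment-word states H_s (m x d)."""
    q_s = H_s.mean(axis=0)                                  # Eq. 12: mean pooling over sentiment words
    scores = np.array([
        u_s @ np.tanh(W_s @ np.concatenate([h_i, q_s]))     # Eq. 13: beta([h_i; q_s])
        for h_i in H_c
    ])
    alpha = np.exp(scores - scores.max())                   # Eq. 14: stabilized softmax
    alpha /= alpha.sum()
    return alpha @ H_c                                      # Eq. 12: o1 = sum_i alpha_i * h_i

# toy shapes: t=5 context words, m=3 sentiment words, hidden size d=4
rng = np.random.default_rng(0)
t, m, d = 5, 3, 4
o1 = sentiment_attention(rng.normal(size=(t, d)), rng.normal(size=(m, d)),
                         rng.normal(size=(d, 2 * d)), rng.normal(size=d))
print(o1.shape)  # (4,)
```

Per the paper, the same scoring is repeated with the intensity-word and negation-word hidden states as attention sources to obtain o2 and o3, which are then concatenated.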
GEM-SciDuet-train-61#paper-1120#slide-0
1120
Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs with high accuracy and that the molecular information can enhance text-based DDI extraction by 2.39 percentage points in F-score on the DDIExtraction 2013 shared task data set.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction When drugs are concomitantly administered to a patient, the effects of the drugs may be enhanced or weakened, which may also cause side effects.", "These kinds of interactions are called Drug-Drug Interactions (DDIs).", "Several drug databases have been maintained to summarize drug and DDI information such as DrugBank (Law et al., 2014) , Therapeutic Target database , and PharmGKB (Thorn et al., 2013) .", "Automatic DDI extraction from texts is expected to support the maintenance of databases with high coverage and quick update to help medical experts.", "Deep neural network-based methods have recently drawn a considerable attention (Liu et al., 2016; Sahu and Anand, 2017; Zheng et al., 2017; Lim et al., 2018) since they show state-of-the-art performance without manual feature engineering.", "In parallel to the progress in DDI extraction from texts, Graph Convolutional Networks (GCNs) have been proposed and applied to estimate physical and chemical properties of molec-ular graphs such as solubility and toxicity (Duvenaud et al., 2015; Gilmer et al., 2017) .", "In this study, we propose a novel method to utilize both textual and molecular information for DDI extraction from texts.", "We illustrate the overview of the proposed model in Figure 1 .", "We obtain the representations of drug pairs in molecular graph structures using GCNs and concatenate the representations with the representations of the textual mention pairs obtained by convolutional neural networks (CNNs).", "We trained the molecule-based model using interacting pairs mentioned in the DrugBank database and then trained the entire model using the labeled pairs in the text data set of the DDIExtraction 2013 shared task (SemEval-2013 Task 9) (Segura .", "In the experiment, we show GCNs can predict DDIs from molecular graphs in a high accuracy.", "We also show molecular information can enhance the performance of DDI extraction from texts in 2.39 percent points in F-score.", "The contribution of this paper is three-fold: • We propose a novel neural method to extract DDIs from texts with the related molecular structure information.", "• We apply GCNs to pairwise drug molecules for the first time and show GCNs can predict DDIs between drug molecular structures in a high accuracy.", "• We show the molecular information is useful in extracting DDIs from texts.", "Methods Text-based DDI Extraction Our model for extracting DDIs from texts is based on the CNN model by Zeng et al.", "(2014) .", "When an input sentence S = (w 1 , w 2 , · · · , w N ) is given, We prepare word embedding w w i of w i and word Figure 1 : Overview of the proposed model position embeddings w p i,1 and w p i,2 that correspond to the relative positions from the first and second target entities, respectively.", "We concatenate these embeddings as in Equation (1) , and we use the resulting vector as the input to the subsequent convolution layer: w i = [w w i ; w p i,1 ; w p i,2 ], (1) where [; ] denotes the concatenation.", "We calculate the 
expression for each filter j with the window size k l .", "z i,l = [w i−(k l −1)/2 , · · · , w i−(k l +1)/2 ], (2) m i,j,l = relu(W conv j z i,l + b conv ), (3) m j,l = max i m i,j,l , (4) where L is the number of windows, W conv j and b conv are the weight and bias of CNN, and max indicates max pooling (Boureau et al., 2010) .", "We convert the output of the convolution layer into a fixed-size vector that represents a textual pair as follows: m l = [m 1,l , · · · , m J,l ], (5) h t = [m 1 ; .", ".", ".", "; m L ], (6) where J is the number of filters.", "We get a predictionŷ t by the following fully connected neural networks: h (1) t = relu(W (1) t h t + b (1) t ), (7) y t = softmax(W (2) t h (1) t + b (2) t ), (8) where W (1) t and W (2) t are weights and b (1) t and b (2) t are bias terms.", "Molecular Structure-based DDI Classification We represent drug pairs in molecular graph structures using two GCN methods: CNNs for fingerprints (NFP) (Duvenaud et al., 2015) and Gated Graph Neural Networks (GGNN) .", "They both convert a drug molecule graph G into a fixed size vector h g by aggregating the representation h T v of an atom node v in G. We represent atoms as nodes and bonds as edges in the graph.", "NFP first obtains the representation h t v by the following equations (Duvenaud et al., 2015) .", "m t+1 v = h t v + w∈N (v) h t w , (9) h t+1 v = σ(H deg(v) t m t+1 v ), (10) where h t v is the representation of v in the t-th step, N (v) is the neighbors of v, and H deg(v) t is a weight parameter.", "h 0 v is initialized by the atom features of v. deg(v) is the degree of a node v and σ is a sigmoid function.", "NFP then acquires the representation of the graph structure h g = v,t softmax(W t h t v ), (11) where W t is a weight matrix.", "GGNN first obtains the representation h t v by using Gated Recurrent Unit (GRU)-based recurrent neural networks as follows: m t+1 v = w∈N (v) A evw h t w (12) h t+1 v = GRU([h t v ; m t+1 v ]), (13) where A evw is a weight for the bond type of each edge e vw .", "GGNN then acquires the representation of the graph structure.", "h g = v σ(i([h T v ; h 0 v ])) (j(h T v )), (14) where i and j are linear layers and is the element-wise product.", "We obtain the representation of a molecular pair by concatenating the molecular graph representations of drugs g 1 and g 2 , i.e., h m = [h g 1 ; h g 2 ].", "We get a predictionŷ m as follows: h (1) m = relu(W (1) m h m + b (1) m ), (15) y m = softmax(W (2) m h (1) m + b (2) m ), (16) where W (1) m and W (2) m are weights and b (1) m and b (2) m are bias terms.", "DDI Extraction from Texts Using Molecular Structures We realize the simultaneous use of textual and molecular information by concatenating a textbased and molecule-based vectors: h all = [h t ; h m ].", "We normalize molecule-based vectors.", "We then use h all instead of h t in Equation 7 .", "In training, we first train the molecular-based DDI classification model.", "The molecular-based classification is performed by minimizing the loss function L m = − y m logŷ m .", "We then fix the parameters for GCNs and train text-based DDI extraction model by minimizing the loss function L t = − y t logŷ t .", "Experimental Settings In this section, we explain the textual and molecular data and task settings and training settings.", "Text Corpus and Task Setting We followed the task setting of Task 9.2 in the DDIExtraction 2013 shared task for the evaluation.", "This data set is composed of documents annotated with drug mentions and their four types of interactions: 
Mechanism, Effect, Advice and Int.", "For the data statistics, please refer to the supplementary materials.", "The task is a multi-class classification task, i.e., to classify a given pair of drugs into the four interaction types or no interaction.", "We evaluated the performance with micro-averaged precision (P), Figure 2 : Associating DrugBank entries with texts and molecular graph structures recall (R), and F-score (F) on all the interaction types.", "We used the official evaluation script provided by the task organizers.", "As preprocessing, we split sentences into words using the GENIA tagger (Tsuruoka et al., 2005) .", "We replaced the drug mentions of the target pair with DRUG1 and DRUG2 according to their order of appearance.", "We also replaced other drug mentions with DRUGOTHER.", "We did not employ negative instance filtering unlike other existing methods, e.g., Liu et al.", "(2016) , since our focus is to evaluate the effect of the molecular information on texts.", "We linked mentions in texts to DrugBank entries by string matching.", "We lowercased the mentions and the names in the entries and chose the entries with the most overlaps.", "As a result, 92.15% and 93.09% of drug mentions in train and test data set matched the DrugBank entries.", "Data and Task for Molecular Structures We extracted 255,229 interacting (positive) pairs from DrugBank.", "We note that, unlike text-based interactions, DrugBank only contains the information of interacting pairs; there are no detailed labels and no information for non-interacting (negative) pairs.", "We thus generated the same number of pseudo negative pairs by randomly pairing drugs and removing those in positive pairs.", "To avoid overestimation of the performance, we also deleted drug pairs mentioned in the test set of the text corpus.", "We split positive and negative pairs into 4:1 for training and test data, and we evaluated the classification accuracy using only the molecular information.", "To obtain the graph of a drug molecule, we took (Weininger, 1988) string encoding of the molecule from DrugBank and then converted it into the graph using RDKit (Landrum, 2016) as illustrated in Figure 2 .", "For the atom features, we used randomly embedded vectors for each atoms (i.e., C, O, N, ...).", "We also used 4 bond types: single, double, triple, or aromatic.", "Training Settings We employed mini-batch training using the Adam optimizer (Kingma and Ba, 2015) .", "We used L2 regularization to avoid over-fitting.", "We tuned the bias term b (2) t for negative examples in the final softmax layer.", "For the hyper-parameters, please refer to the supplementary materials.", "We employed pre-trained word embeddings trained by using the word2vec tool (Mikolov et al., 2013) on the 2014 MEDLINE/PubMed baseline distribution.", "The vocabulary size was 215,840.", "The embedding of the drugs, i.e., DRUG1 and DRUG2 were initialized with the pre-trained embedding of the word drug.", "The embeddings of training words that did not appear in the pretrained embeddings were initialized with the average of all pre-trained word embeddings.", "Words that appeared only once in the training data were replaced with an UNK word during training, and the embedding of words in the test data set that did not appear in both training and pre-trained embeddings were set to the embedding of the UNK word.", "Word position embeddings are initialized with random values drawn from a uniform distribution.", "We set the molecule-based vectors of unmatched entities to zero 
vectors.", "Table 1 shows the performance of DDI extraction models.", "We show the performance without negative instance filtering or ensemble for the fair comparison.", "We observe the increase of recall and F-score by using molecular information, Both GCNs improvements were statistically significant (p < 0.05 for NFP and p < 0.005 for GGNN) with randomized shuffled test.", "Table 2 shows F-scores on individual DDI types.", "The molecular information improves Fscores especially on type Mechanism and Effect.", "Results We also evaluated the accuracy of binary classification on DrugBank pairs by using only the molecular information in Table 3 .", "The performance is high, although the accuracy is evaluated on automatically generated negative instances.", "Finally, we applied the molecular-based DDI classification model trained on DrugBank to the DDIExtraction 2013 task data set.", "Since the Drug-Bank has no detailed labels, we mapped all four types of interactions to positive interactions and evaluated the classification performance.", "The results in Table 4 show that GCNs produce higher recall than precision and the overall performance is low considering the high performance on Drug-Bank pairs.", "This might be because the interactions of drugs are not always mentioned in texts even if the drugs can interact with each other and because hedged DDI mentions are annotated as DDIs in the text data set.", "We also trained the DDI extraction model only with molecular information by replacing h all with h m , but the F-scores were quite low (< 5%).", "These results show that we cannot predict textual relations only with molecular information.", "Related Work Various feature-based methods have been proposed during and after the DDIExtraction-2013 shared task .", "Kim et al.", "(2015) proposed a two-phase SVM-based approach that employed a linear SVM with rich features that consist of word, word pair, dependency graph, parse tree, and noun phrase-based constrained coordination features.", "Zheng et al.", "(2016) proposed a context vector graph kernel to exploit various types of contexts.", "Raihani and Laachfoubi (2017) also employed a two-phase SVM-based approach using non-linear kernels and they proposed five groups of features: word, drug, pair of drug, main verb and negative sentence features.", "Our model does not use any features or kernels.", "Various neural DDI extraction models have been recently proposed using CNNs and Recurrent Neural Networks (RNNs).", "Liu et al.", "(2016) built a CNN-based model based on word and position embeddings.", "Zheng et al.", "(2017) proposed a Bidirectional Long Short-Term Memory RNN (Bi-LSTM)-based model with an input attention mechanism, which obtained target drug-specific word representations before the Bi-LSTM.", "Lim et al.", "(2018) proposed Recursive neural networkbased model with a subtree containment feature and an ensemble method.", "This model showed the state-of-the-art performance on the DDIExtraction 2013 shared task data set if systems do not use negative instance filtering.", "These approaches did not consider molecular information, and they can also be enhanced by the molecular information.", "Vilar et al.", "(2017) focused on detecting DDIs from different sources such as pharmacovigilance sources, scientific biomedical literature and social media.", "They did not use deep neural networks and they did not consider molecular information.", "Learning representations of graphs are widely studied in several tasks such as knowledge base 
completion, drug discovery, and material science Gilmer et al., 2017) .", "Several graph convolutional neural networks have been proposed such as NFP (Duvenaud et al., 2015) , GGNN , and Molecular Graph Convolutions (Kearnes et al., 2016) , but they have not been applied to DDI extraction.", "Conclusions We proposed a novel neural method for DDI extraction using both textual and molecular informa-tion.", "The results show that DDIs can be predicted with high accuracy from molecular structure information and that the molecular information can improve DDI extraction from texts by 2.39 percept points in F-score on the data set of the DDIExtraction 2013 shared task.", "As future work, we would like to seek the way to model the textual and molecular representations jointly with alleviating the differences in labels.", "We will also investigate the use of other information in DrugBank." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Text-based DDI Extraction", "Molecular Structure-based DDI Classification", "DDI Extraction from Texts Using Molecular Structures", "Experimental Settings", "Text Corpus and Task Setting", "Data and Task for Molecular Structures", "Training Settings", "Results", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-61#paper-1120#slide-0
Introduction
Our target problem is the extraction of drug-drug interactions (DDIs) from biomedical texts, e.g., "Grepafloxacin inhibits the metabolism of Theophylline." We investigate the use of external drug database (DrugBank) information in extracting DDIs from texts. We especially focus on molecular structure information.
Our target problem is the extraction of drug-drug interactions (DDIs) from biomedical texts, e.g., "Grepafloxacin inhibits the metabolism of Theophylline." We investigate the use of external drug database (DrugBank) information in extracting DDIs from texts. We especially focus on molecular structure information.
[]
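This record's paper_content field defines the NFP graph convolution in Equations 9-11: each atom adds its neighbors' vectors to its own and passes the sum through a degree-specific weight matrix and a sigmoid. Here is a hedged NumPy sketch of one such layer; the adjacency-list input format and the names `H`, `neighbors`, `H_deg` are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nfp_layer(H, neighbors, H_deg):
    """One NFP step (Eq. 9-10): H is (n_atoms x d); neighbors[v] lists the
    neighbor indices of atom v; H_deg[k] is the weight matrix for degree k."""
    out = np.zeros_like(H)
    for v, nbrs in enumerate(neighbors):
        m_v = H[v] + sum(H[w] for w in nbrs)        # Eq. 9: self + neighbor sum
        out[v] = sigmoid(H_deg[len(nbrs)] @ m_v)    # Eq. 10: degree-specific weight, sigmoid
    return out

# toy molecule: a 3-atom chain, d=4 atom features
rng = np.random.default_rng(1)
d = 4
H0 = rng.normal(size=(3, d))
neighbors = [[1], [0, 2], [1]]
H_deg = {1: rng.normal(size=(d, d)), 2: rng.normal(size=(d, d))}
H1 = nfp_layer(H0, neighbors, H_deg)
print(H1.shape)  # (3, 4)
```

In the paper, a softmax-weighted sum of the per-step atom vectors (Eq. 11) then pools these node states into a single graph vector h_g.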
GEM-SciDuet-train-61#paper-1120#slide-1
GEM-SciDuet-train-61#paper-1120#slide-1
Method Overview
We obtain the representations of textual drug pairs using convolutional neural networks (CNNs) and of molecular drug pairs using graph convolutional networks (GCNs). We concatenate the text-based and molecule-based vectors (running example: "Grepafloxacin inhibits the metabolism of Theophylline").
We obtain the representations of textual drug pairs using convolutional neural networks (CNNs) and of molecular drug pairs using graph convolutional networks (GCNs). We concatenate the text-based and molecule-based vectors (running example: "Grepafloxacin inhibits the metabolism of Theophylline").
[]
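The "Method Overview" slide above corresponds to the fusion step in the paper_content field, where the text-based vector h_t and the normalized molecule-based vector h_m are concatenated into h_all and classified (Equations 7-8). A minimal sketch follows, with random toy dimensions; none of the parameter names come from the authors' code.

```python
import numpy as np

def fuse_and_classify(h_t, h_m, W1, b1, W2, b2):
    """Fuse text and molecular-pair vectors and classify (the paper's
    h_all = [h_t; h_m] fed into Eq. 7-8). Parameter names are illustrative."""
    h_m = h_m / (np.linalg.norm(h_m) + 1e-8)  # the paper normalizes molecule-based vectors
    h_all = np.concatenate([h_t, h_m])
    h1 = np.maximum(0.0, W1 @ h_all + b1)     # ReLU hidden layer (Eq. 7)
    logits = W2 @ h1 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()                        # softmax over 5 labels: 4 DDI types + no interaction

rng = np.random.default_rng(2)
dt, dm, dh, C = 6, 4, 8, 5
probs = fuse_and_classify(rng.normal(size=dt), rng.normal(size=dm),
                          rng.normal(size=(dh, dt + dm)), rng.normal(size=dh),
                          rng.normal(size=(C, dh)), rng.normal(size=C))
print(probs.sum())  # ~1.0
```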
GEM-SciDuet-train-61#paper-1120#slide-2
1120
Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs in high accuracy and the molecular information can enhance text-based DDI extraction by 2.39 percent points in the F-score on the DDIExtraction 2013 shared task data set.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction When drugs are concomitantly administered to a patient, the effects of the drugs may be enhanced or weakened, which may also cause side effects.", "These kinds of interactions are called Drug-Drug Interactions (DDIs).", "Several drug databases have been maintained to summarize drug and DDI information such as DrugBank (Law et al., 2014) , Therapeutic Target database , and PharmGKB (Thorn et al., 2013) .", "Automatic DDI extraction from texts is expected to support the maintenance of databases with high coverage and quick update to help medical experts.", "Deep neural network-based methods have recently drawn a considerable attention (Liu et al., 2016; Sahu and Anand, 2017; Zheng et al., 2017; Lim et al., 2018) since they show state-of-the-art performance without manual feature engineering.", "In parallel to the progress in DDI extraction from texts, Graph Convolutional Networks (GCNs) have been proposed and applied to estimate physical and chemical properties of molec-ular graphs such as solubility and toxicity (Duvenaud et al., 2015; Gilmer et al., 2017) .", "In this study, we propose a novel method to utilize both textual and molecular information for DDI extraction from texts.", "We illustrate the overview of the proposed model in Figure 1 .", "We obtain the representations of drug pairs in molecular graph structures using GCNs and concatenate the representations with the representations of the textual mention pairs obtained by convolutional neural networks (CNNs).", "We trained the molecule-based model using interacting pairs mentioned in the DrugBank database and then trained the entire model using the labeled pairs in the text data set of the DDIExtraction 2013 shared task (SemEval-2013 Task 9) (Segura .", "In the experiment, we show GCNs can predict DDIs from molecular graphs in a high accuracy.", "We also show molecular information can enhance the performance of DDI extraction from texts in 2.39 percent points in F-score.", "The contribution of this paper is three-fold: • We propose a novel neural method to extract DDIs from texts with the related molecular structure information.", "• We apply GCNs to pairwise drug molecules for the first time and show GCNs can predict DDIs between drug molecular structures in a high accuracy.", "• We show the molecular information is useful in extracting DDIs from texts.", "Methods Text-based DDI Extraction Our model for extracting DDIs from texts is based on the CNN model by Zeng et al.", "(2014) .", "When an input sentence S = (w 1 , w 2 , · · · , w N ) is given, We prepare word embedding w w i of w i and word Figure 1 : Overview of the proposed model position embeddings w p i,1 and w p i,2 that correspond to the relative positions from the first and second target entities, respectively.", "We concatenate these embeddings as in Equation (1) , and we use the resulting vector as the input to the subsequent convolution layer: w i = [w w i ; w p i,1 ; w p i,2 ], (1) where [; ] denotes the concatenation.", "We calculate the 
expression for each filter j with the window size k l .", "z i,l = [w i−(k l −1)/2 , · · · , w i−(k l +1)/2 ], (2) m i,j,l = relu(W conv j z i,l + b conv ), (3) m j,l = max i m i,j,l , (4) where L is the number of windows, W conv j and b conv are the weight and bias of CNN, and max indicates max pooling (Boureau et al., 2010) .", "We convert the output of the convolution layer into a fixed-size vector that represents a textual pair as follows: m l = [m 1,l , · · · , m J,l ], (5) h t = [m 1 ; .", ".", ".", "; m L ], (6) where J is the number of filters.", "We get a predictionŷ t by the following fully connected neural networks: h (1) t = relu(W (1) t h t + b (1) t ), (7) y t = softmax(W (2) t h (1) t + b (2) t ), (8) where W (1) t and W (2) t are weights and b (1) t and b (2) t are bias terms.", "Molecular Structure-based DDI Classification We represent drug pairs in molecular graph structures using two GCN methods: CNNs for fingerprints (NFP) (Duvenaud et al., 2015) and Gated Graph Neural Networks (GGNN) .", "They both convert a drug molecule graph G into a fixed size vector h g by aggregating the representation h T v of an atom node v in G. We represent atoms as nodes and bonds as edges in the graph.", "NFP first obtains the representation h t v by the following equations (Duvenaud et al., 2015) .", "m t+1 v = h t v + w∈N (v) h t w , (9) h t+1 v = σ(H deg(v) t m t+1 v ), (10) where h t v is the representation of v in the t-th step, N (v) is the neighbors of v, and H deg(v) t is a weight parameter.", "h 0 v is initialized by the atom features of v. deg(v) is the degree of a node v and σ is a sigmoid function.", "NFP then acquires the representation of the graph structure h g = v,t softmax(W t h t v ), (11) where W t is a weight matrix.", "GGNN first obtains the representation h t v by using Gated Recurrent Unit (GRU)-based recurrent neural networks as follows: m t+1 v = w∈N (v) A evw h t w (12) h t+1 v = GRU([h t v ; m t+1 v ]), (13) where A evw is a weight for the bond type of each edge e vw .", "GGNN then acquires the representation of the graph structure.", "h g = v σ(i([h T v ; h 0 v ])) (j(h T v )), (14) where i and j are linear layers and is the element-wise product.", "We obtain the representation of a molecular pair by concatenating the molecular graph representations of drugs g 1 and g 2 , i.e., h m = [h g 1 ; h g 2 ].", "We get a predictionŷ m as follows: h (1) m = relu(W (1) m h m + b (1) m ), (15) y m = softmax(W (2) m h (1) m + b (2) m ), (16) where W (1) m and W (2) m are weights and b (1) m and b (2) m are bias terms.", "DDI Extraction from Texts Using Molecular Structures We realize the simultaneous use of textual and molecular information by concatenating a textbased and molecule-based vectors: h all = [h t ; h m ].", "We normalize molecule-based vectors.", "We then use h all instead of h t in Equation 7 .", "In training, we first train the molecular-based DDI classification model.", "The molecular-based classification is performed by minimizing the loss function L m = − y m logŷ m .", "We then fix the parameters for GCNs and train text-based DDI extraction model by minimizing the loss function L t = − y t logŷ t .", "Experimental Settings In this section, we explain the textual and molecular data and task settings and training settings.", "Text Corpus and Task Setting We followed the task setting of Task 9.2 in the DDIExtraction 2013 shared task for the evaluation.", "This data set is composed of documents annotated with drug mentions and their four types of interactions: 
Mechanism, Effect, Advice, and Int.", "For the data statistics, please refer to the supplementary materials.", "The task is a multi-class classification task, i.e., to classify a given pair of drugs into one of the four interaction types or no interaction.", "We evaluated the performance with micro-averaged precision (P), recall (R), and F-score (F) on all the interaction types.", "(Figure 2: Associating DrugBank entries with texts and molecular graph structures.)", "We used the official evaluation script provided by the task organizers.", "As preprocessing, we split sentences into words using the GENIA tagger (Tsuruoka et al., 2005).", "We replaced the drug mentions of the target pair with DRUG1 and DRUG2 according to their order of appearance.", "We also replaced all other drug mentions with DRUGOTHER.", "We did not employ negative instance filtering, unlike other existing methods, e.g., Liu et al. (2016), since our focus is to evaluate the effect of the molecular information on texts.", "We linked mentions in texts to DrugBank entries by string matching.", "We lowercased the mentions and the names in the entries and chose the entries with the most overlaps.", "As a result, 92.15% and 93.09% of the drug mentions in the training and test data sets, respectively, matched DrugBank entries.", "Data and Task for Molecular Structures We extracted 255,229 interacting (positive) pairs from DrugBank.", "We note that, unlike text-based interactions, DrugBank only contains the information of interacting pairs; there are no detailed labels and no information on non-interacting (negative) pairs.", "We thus generated the same number of pseudo-negative pairs by randomly pairing drugs and removing those that appear in the positive pairs.", "To avoid overestimating the performance, we also deleted drug pairs mentioned in the test set of the text corpus.", "We split the positive and negative pairs 4:1 into training and test data, and we evaluated the classification accuracy using only the molecular information.", "To obtain the graph of a drug molecule, we took the SMILES (Weininger, 1988) string encoding of the molecule from DrugBank and then converted it into the graph using RDKit (Landrum, 2016), as illustrated in Figure 2.", "For the atom features, we used randomly embedded vectors for each atom type (i.e., C, O, N, ...).", "We also used 4 bond types: single, double, triple, or aromatic.", "Training Settings We employed mini-batch training using the Adam optimizer (Kingma and Ba, 2015).", "We used L2 regularization to avoid over-fitting.", "We tuned the bias term $b^{(2)}_t$ for negative examples in the final softmax layer.", "For the hyper-parameters, please refer to the supplementary materials.", "We employed pre-trained word embeddings trained by using the word2vec tool (Mikolov et al., 2013) on the 2014 MEDLINE/PubMed baseline distribution.", "The vocabulary size was 215,840.", "The embeddings of the drugs, i.e., DRUG1 and DRUG2, were initialized with the pre-trained embedding of the word drug.", "The embeddings of training words that did not appear in the pre-trained embeddings were initialized with the average of all pre-trained word embeddings.", "Words that appeared only once in the training data were replaced with an UNK word during training, and the embeddings of words in the test data set that appeared in neither the training data nor the pre-trained embeddings were set to the embedding of the UNK word.", "Word position embeddings were initialized with random values drawn from a uniform distribution.", "We set the molecule-based vectors of unmatched entities to zero 
vectors.", "Results Table 1 shows the performance of the DDI extraction models.", "We show the performance without negative instance filtering or ensembling for a fair comparison.", "We observe an increase in recall and F-score when using molecular information; the improvements of both GCNs were statistically significant (p < 0.05 for NFP and p < 0.005 for GGNN) under a randomized shuffling test.", "Table 2 shows the F-scores on individual DDI types.", "The molecular information improves the F-scores especially on the types Mechanism and Effect.", "We also evaluated the accuracy of binary classification on DrugBank pairs using only the molecular information, as shown in Table 3.", "The performance is high, although the accuracy is evaluated on automatically generated negative instances.", "Finally, we applied the molecule-based DDI classification model trained on DrugBank to the DDIExtraction 2013 task data set.", "Since DrugBank has no detailed labels, we mapped all four types of interactions to positive interactions and evaluated the classification performance.", "The results in Table 4 show that the GCNs produce higher recall than precision and that the overall performance is low considering the high performance on DrugBank pairs.", "This might be because the interactions of drugs are not always mentioned in texts even if the drugs can interact with each other, and because hedged DDI mentions are annotated as DDIs in the text data set.", "We also trained the DDI extraction model with only molecular information by replacing $h_{\mathrm{all}}$ with $h_m$, but the F-scores were quite low (< 5%).", "These results show that we cannot predict textual relations with only molecular information.", "Related Work Various feature-based methods have been proposed during and after the DDIExtraction 2013 shared task.", "Kim et al. (2015) proposed a two-phase SVM-based approach that employed a linear SVM with rich features consisting of word, word pair, dependency graph, parse tree, and noun phrase-based constrained coordination features.", "Zheng et al. (2016) proposed a context vector graph kernel to exploit various types of contexts.", "Raihani and Laachfoubi (2017) also employed a two-phase SVM-based approach using non-linear kernels, and they proposed five groups of features: word, drug, drug pair, main verb, and negative sentence features.", "Our model does not use any such features or kernels.", "Various neural DDI extraction models have recently been proposed using CNNs and Recurrent Neural Networks (RNNs).", "Liu et al. (2016) built a CNN-based model based on word and position embeddings.", "Zheng et al. (2017) proposed a Bidirectional Long Short-Term Memory RNN (Bi-LSTM)-based model with an input attention mechanism, which obtained target drug-specific word representations before the Bi-LSTM.", "Lim et al. (2018) proposed a recursive neural network-based model with a subtree containment feature and an ensemble method.", "This model showed the state-of-the-art performance on the DDIExtraction 2013 shared task data set among systems that do not use negative instance filtering.", "These approaches did not consider molecular information, and they too could be enhanced by it.", "Vilar et al. (2017) focused on detecting DDIs from different sources such as pharmacovigilance sources, scientific biomedical literature, and social media.", "They did not use deep neural networks, and they did not consider molecular information.", "Learning representations of graphs is widely studied in several tasks such as knowledge base 
completion, drug discovery, and material science (Gilmer et al., 2017).", "Several graph convolutional neural networks have been proposed, such as NFP (Duvenaud et al., 2015), GGNN (Li et al., 2016), and Molecular Graph Convolutions (Kearnes et al., 2016), but they have not been applied to DDI extraction.", "Conclusions We proposed a novel neural method for DDI extraction using both textual and molecular information.", "The results show that DDIs can be predicted with high accuracy from molecular structure information and that the molecular information can improve DDI extraction from texts by 2.39 percent points in F-score on the data set of the DDIExtraction 2013 shared task.", "As future work, we would like to seek ways to model the textual and molecular representations jointly while alleviating the differences in labels.", "We will also investigate the use of other information in DrugBank." ] }
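The pseudo-negative sampling described in the experimental settings above (random drug pairing, with known positive pairs and text-corpus test pairs excluded, sampled to match the number of positives) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code; the function and variable names are hypothetical.

```python
import random

def make_pseudo_negatives(all_drugs, positive_pairs, test_pairs, seed=0):
    """Randomly pair drugs, drop anything that is a known positive or that
    appears in the text-corpus test set, and keep sampling until there are
    as many pseudo-negatives as positives (assumes enough candidate pairs)."""
    rng = random.Random(seed)
    positives = {frozenset(p) for p in positive_pairs}
    forbidden = positives | {frozenset(p) for p in test_pairs}
    negatives = set()
    while len(negatives) < len(positives):
        pair = frozenset(rng.sample(all_drugs, 2))
        if pair not in forbidden:
            negatives.add(pair)
    return [tuple(p) for p in negatives]
```

Representing each pair as a frozenset makes the sampling order-insensitive, which matches the undirected nature of the DrugBank interaction pairs described above.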
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Text-based DDI Extraction", "Molecular Structure-based DDI Classification", "DDI Extraction from Texts Using Molecular Structures", "Experimental Settings", "Text Corpus and Task Setting", "Data and Task for Molecular Structures", "Training Settings", "Results", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-61#paper-1120#slide-2
Method
DDI extraction from texts using molecular structures Molecular structure-based DDI representation word + position embeddings Grepafloxacin inhibits Textual vector Text Corpus the metabolism of Input sentence word vector
DDI extraction from texts using molecular structures Molecular structure-based DDI representation word + position embeddings Grepafloxacin inhibits Textual vector Text Corpus the metabolism of Input sentence word vector
[]
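The fused classifier outlined above (concatenate the textual vector $h_t$ with the normalized molecular pair vector $h_m$, then classify with the two fully connected layers of Equations (7)-(8), freezing the pre-trained GCN) might look roughly like the following PyTorch sketch. The class and argument names, the hidden size, and the use of logits with a softmax folded into the loss are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CombinedDDIClassifier(nn.Module):
    """Sketch of the fusion step: h_all = [h_t; h_m], fed through two
    fully connected layers (five classes: four DDI types + no interaction)."""

    def __init__(self, text_dim: int, mol_dim: int, hidden: int, n_classes: int = 5):
        super().__init__()
        self.fc1 = nn.Linear(text_dim + 2 * mol_dim, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)

    def forward(self, h_t, h_g1, h_g2):
        h_m = torch.cat([h_g1, h_g2], dim=-1)        # molecular pair vector
        h_m = nn.functional.normalize(h_m, dim=-1)   # normalized molecule-based vector
        h_all = torch.cat([h_t, h_m], dim=-1)
        return self.fc2(torch.relu(self.fc1(h_all))) # logits; softmax in the loss

def freeze(module: nn.Module):
    """Two-stage training sketch: after pre-training the molecular model on
    DrugBank, freeze the GCN parameters before training the text side."""
    for p in module.parameters():
        p.requires_grad_(False)
```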
GEM-SciDuet-train-61#paper-1120#slide-3
1120
Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs with high accuracy and that the molecular information can enhance text-based DDI extraction by 2.39 percent points in the F-score on the DDIExtraction 2013 shared task data set.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction When drugs are concomitantly administered to a patient, the effects of the drugs may be enhanced or weakened, which may also cause side effects.", "These kinds of interactions are called Drug-Drug Interactions (DDIs).", "Several drug databases have been maintained to summarize drug and DDI information such as DrugBank (Law et al., 2014) , Therapeutic Target database , and PharmGKB (Thorn et al., 2013) .", "Automatic DDI extraction from texts is expected to support the maintenance of databases with high coverage and quick update to help medical experts.", "Deep neural network-based methods have recently drawn a considerable attention (Liu et al., 2016; Sahu and Anand, 2017; Zheng et al., 2017; Lim et al., 2018) since they show state-of-the-art performance without manual feature engineering.", "In parallel to the progress in DDI extraction from texts, Graph Convolutional Networks (GCNs) have been proposed and applied to estimate physical and chemical properties of molec-ular graphs such as solubility and toxicity (Duvenaud et al., 2015; Gilmer et al., 2017) .", "In this study, we propose a novel method to utilize both textual and molecular information for DDI extraction from texts.", "We illustrate the overview of the proposed model in Figure 1 .", "We obtain the representations of drug pairs in molecular graph structures using GCNs and concatenate the representations with the representations of the textual mention pairs obtained by convolutional neural networks (CNNs).", "We trained the molecule-based model using interacting pairs mentioned in the DrugBank database and then trained the entire model using the labeled pairs in the text data set of the DDIExtraction 2013 shared task (SemEval-2013 Task 9) (Segura .", "In the experiment, we show GCNs can predict DDIs from molecular graphs in a high accuracy.", "We also show molecular information can enhance the performance of DDI extraction from texts in 2.39 percent points in F-score.", "The contribution of this paper is three-fold: • We propose a novel neural method to extract DDIs from texts with the related molecular structure information.", "• We apply GCNs to pairwise drug molecules for the first time and show GCNs can predict DDIs between drug molecular structures in a high accuracy.", "• We show the molecular information is useful in extracting DDIs from texts.", "Methods Text-based DDI Extraction Our model for extracting DDIs from texts is based on the CNN model by Zeng et al.", "(2014) .", "When an input sentence S = (w 1 , w 2 , · · · , w N ) is given, We prepare word embedding w w i of w i and word Figure 1 : Overview of the proposed model position embeddings w p i,1 and w p i,2 that correspond to the relative positions from the first and second target entities, respectively.", "We concatenate these embeddings as in Equation (1) , and we use the resulting vector as the input to the subsequent convolution layer: w i = [w w i ; w p i,1 ; w p i,2 ], (1) where [; ] denotes the concatenation.", "We calculate the 
expression for each filter j with the window size k l .", "z i,l = [w i−(k l −1)/2 , · · · , w i−(k l +1)/2 ], (2) m i,j,l = relu(W conv j z i,l + b conv ), (3) m j,l = max i m i,j,l , (4) where L is the number of windows, W conv j and b conv are the weight and bias of CNN, and max indicates max pooling (Boureau et al., 2010) .", "We convert the output of the convolution layer into a fixed-size vector that represents a textual pair as follows: m l = [m 1,l , · · · , m J,l ], (5) h t = [m 1 ; .", ".", ".", "; m L ], (6) where J is the number of filters.", "We get a predictionŷ t by the following fully connected neural networks: h (1) t = relu(W (1) t h t + b (1) t ), (7) y t = softmax(W (2) t h (1) t + b (2) t ), (8) where W (1) t and W (2) t are weights and b (1) t and b (2) t are bias terms.", "Molecular Structure-based DDI Classification We represent drug pairs in molecular graph structures using two GCN methods: CNNs for fingerprints (NFP) (Duvenaud et al., 2015) and Gated Graph Neural Networks (GGNN) .", "They both convert a drug molecule graph G into a fixed size vector h g by aggregating the representation h T v of an atom node v in G. We represent atoms as nodes and bonds as edges in the graph.", "NFP first obtains the representation h t v by the following equations (Duvenaud et al., 2015) .", "m t+1 v = h t v + w∈N (v) h t w , (9) h t+1 v = σ(H deg(v) t m t+1 v ), (10) where h t v is the representation of v in the t-th step, N (v) is the neighbors of v, and H deg(v) t is a weight parameter.", "h 0 v is initialized by the atom features of v. deg(v) is the degree of a node v and σ is a sigmoid function.", "NFP then acquires the representation of the graph structure h g = v,t softmax(W t h t v ), (11) where W t is a weight matrix.", "GGNN first obtains the representation h t v by using Gated Recurrent Unit (GRU)-based recurrent neural networks as follows: m t+1 v = w∈N (v) A evw h t w (12) h t+1 v = GRU([h t v ; m t+1 v ]), (13) where A evw is a weight for the bond type of each edge e vw .", "GGNN then acquires the representation of the graph structure.", "h g = v σ(i([h T v ; h 0 v ])) (j(h T v )), (14) where i and j are linear layers and is the element-wise product.", "We obtain the representation of a molecular pair by concatenating the molecular graph representations of drugs g 1 and g 2 , i.e., h m = [h g 1 ; h g 2 ].", "We get a predictionŷ m as follows: h (1) m = relu(W (1) m h m + b (1) m ), (15) y m = softmax(W (2) m h (1) m + b (2) m ), (16) where W (1) m and W (2) m are weights and b (1) m and b (2) m are bias terms.", "DDI Extraction from Texts Using Molecular Structures We realize the simultaneous use of textual and molecular information by concatenating a textbased and molecule-based vectors: h all = [h t ; h m ].", "We normalize molecule-based vectors.", "We then use h all instead of h t in Equation 7 .", "In training, we first train the molecular-based DDI classification model.", "The molecular-based classification is performed by minimizing the loss function L m = − y m logŷ m .", "We then fix the parameters for GCNs and train text-based DDI extraction model by minimizing the loss function L t = − y t logŷ t .", "Experimental Settings In this section, we explain the textual and molecular data and task settings and training settings.", "Text Corpus and Task Setting We followed the task setting of Task 9.2 in the DDIExtraction 2013 shared task for the evaluation.", "This data set is composed of documents annotated with drug mentions and their four types of interactions: 
Mechanism, Effect, Advice and Int.", "For the data statistics, please refer to the supplementary materials.", "The task is a multi-class classification task, i.e., to classify a given pair of drugs into the four interaction types or no interaction.", "We evaluated the performance with micro-averaged precision (P), Figure 2 : Associating DrugBank entries with texts and molecular graph structures recall (R), and F-score (F) on all the interaction types.", "We used the official evaluation script provided by the task organizers.", "As preprocessing, we split sentences into words using the GENIA tagger (Tsuruoka et al., 2005) .", "We replaced the drug mentions of the target pair with DRUG1 and DRUG2 according to their order of appearance.", "We also replaced other drug mentions with DRUGOTHER.", "We did not employ negative instance filtering unlike other existing methods, e.g., Liu et al.", "(2016) , since our focus is to evaluate the effect of the molecular information on texts.", "We linked mentions in texts to DrugBank entries by string matching.", "We lowercased the mentions and the names in the entries and chose the entries with the most overlaps.", "As a result, 92.15% and 93.09% of drug mentions in train and test data set matched the DrugBank entries.", "Data and Task for Molecular Structures We extracted 255,229 interacting (positive) pairs from DrugBank.", "We note that, unlike text-based interactions, DrugBank only contains the information of interacting pairs; there are no detailed labels and no information for non-interacting (negative) pairs.", "We thus generated the same number of pseudo negative pairs by randomly pairing drugs and removing those in positive pairs.", "To avoid overestimation of the performance, we also deleted drug pairs mentioned in the test set of the text corpus.", "We split positive and negative pairs into 4:1 for training and test data, and we evaluated the classification accuracy using only the molecular information.", "To obtain the graph of a drug molecule, we took (Weininger, 1988) string encoding of the molecule from DrugBank and then converted it into the graph using RDKit (Landrum, 2016) as illustrated in Figure 2 .", "For the atom features, we used randomly embedded vectors for each atoms (i.e., C, O, N, ...).", "We also used 4 bond types: single, double, triple, or aromatic.", "Training Settings We employed mini-batch training using the Adam optimizer (Kingma and Ba, 2015) .", "We used L2 regularization to avoid over-fitting.", "We tuned the bias term b (2) t for negative examples in the final softmax layer.", "For the hyper-parameters, please refer to the supplementary materials.", "We employed pre-trained word embeddings trained by using the word2vec tool (Mikolov et al., 2013) on the 2014 MEDLINE/PubMed baseline distribution.", "The vocabulary size was 215,840.", "The embedding of the drugs, i.e., DRUG1 and DRUG2 were initialized with the pre-trained embedding of the word drug.", "The embeddings of training words that did not appear in the pretrained embeddings were initialized with the average of all pre-trained word embeddings.", "Words that appeared only once in the training data were replaced with an UNK word during training, and the embedding of words in the test data set that did not appear in both training and pre-trained embeddings were set to the embedding of the UNK word.", "Word position embeddings are initialized with random values drawn from a uniform distribution.", "We set the molecule-based vectors of unmatched entities to zero 
vectors.", "Table 1 shows the performance of DDI extraction models.", "We show the performance without negative instance filtering or ensemble for the fair comparison.", "We observe the increase of recall and F-score by using molecular information, Both GCNs improvements were statistically significant (p < 0.05 for NFP and p < 0.005 for GGNN) with randomized shuffled test.", "Table 2 shows F-scores on individual DDI types.", "The molecular information improves Fscores especially on type Mechanism and Effect.", "Results We also evaluated the accuracy of binary classification on DrugBank pairs by using only the molecular information in Table 3 .", "The performance is high, although the accuracy is evaluated on automatically generated negative instances.", "Finally, we applied the molecular-based DDI classification model trained on DrugBank to the DDIExtraction 2013 task data set.", "Since the Drug-Bank has no detailed labels, we mapped all four types of interactions to positive interactions and evaluated the classification performance.", "The results in Table 4 show that GCNs produce higher recall than precision and the overall performance is low considering the high performance on Drug-Bank pairs.", "This might be because the interactions of drugs are not always mentioned in texts even if the drugs can interact with each other and because hedged DDI mentions are annotated as DDIs in the text data set.", "We also trained the DDI extraction model only with molecular information by replacing h all with h m , but the F-scores were quite low (< 5%).", "These results show that we cannot predict textual relations only with molecular information.", "Related Work Various feature-based methods have been proposed during and after the DDIExtraction-2013 shared task .", "Kim et al.", "(2015) proposed a two-phase SVM-based approach that employed a linear SVM with rich features that consist of word, word pair, dependency graph, parse tree, and noun phrase-based constrained coordination features.", "Zheng et al.", "(2016) proposed a context vector graph kernel to exploit various types of contexts.", "Raihani and Laachfoubi (2017) also employed a two-phase SVM-based approach using non-linear kernels and they proposed five groups of features: word, drug, pair of drug, main verb and negative sentence features.", "Our model does not use any features or kernels.", "Various neural DDI extraction models have been recently proposed using CNNs and Recurrent Neural Networks (RNNs).", "Liu et al.", "(2016) built a CNN-based model based on word and position embeddings.", "Zheng et al.", "(2017) proposed a Bidirectional Long Short-Term Memory RNN (Bi-LSTM)-based model with an input attention mechanism, which obtained target drug-specific word representations before the Bi-LSTM.", "Lim et al.", "(2018) proposed Recursive neural networkbased model with a subtree containment feature and an ensemble method.", "This model showed the state-of-the-art performance on the DDIExtraction 2013 shared task data set if systems do not use negative instance filtering.", "These approaches did not consider molecular information, and they can also be enhanced by the molecular information.", "Vilar et al.", "(2017) focused on detecting DDIs from different sources such as pharmacovigilance sources, scientific biomedical literature and social media.", "They did not use deep neural networks and they did not consider molecular information.", "Learning representations of graphs are widely studied in several tasks such as knowledge base 
completion, drug discovery, and material science Gilmer et al., 2017) .", "Several graph convolutional neural networks have been proposed such as NFP (Duvenaud et al., 2015) , GGNN , and Molecular Graph Convolutions (Kearnes et al., 2016) , but they have not been applied to DDI extraction.", "Conclusions We proposed a novel neural method for DDI extraction using both textual and molecular informa-tion.", "The results show that DDIs can be predicted with high accuracy from molecular structure information and that the molecular information can improve DDI extraction from texts by 2.39 percept points in F-score on the data set of the DDIExtraction 2013 shared task.", "As future work, we would like to seek the way to model the textual and molecular representations jointly with alleviating the differences in labels.", "We will also investigate the use of other information in DrugBank." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Text-based DDI Extraction", "Molecular Structure-based DDI Classification", "DDI Extraction from Texts Using Molecular Structures", "Experimental Settings", "Text Corpus and Task Setting", "Data and Task for Molecular Structures", "Training Settings", "Results", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-61#paper-1120#slide-3
Method Text based DDI Representation
word + position embeddings Grepafloxacin inhibits Textual vector Text Corpus the metabolism of Our model for representing textual DDIs is based on the CNN model by We use word and position embeddings as the input to the convolution layer We convert the output of the convolution layer into a fixed-size textual vector
word + position embeddings Grepafloxacin inhibits Textual vector Text Corpus the metabolism of Our model for representing textual DDIs is based on the CNN model by We use word and position embeddings as the input to the convolution layer We convert the output of the convolution layer into a fixed-size textual vector
[]
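The slide record above summarizes the text-based representation (word plus position embeddings, convolution, max pooling; Equations (1)-(6)). A minimal PyTorch sketch follows; the embedding dimensions, filter count, and window sizes are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class TextPairEncoder(nn.Module):
    """Sketch of the CNN text encoder: concatenated word and two position
    embeddings, one convolution per window size, max pooling over positions."""

    def __init__(self, vocab, n_pos, w_dim=200, p_dim=20, n_filters=100,
                 windows=(3, 5, 7)):
        super().__init__()
        self.word = nn.Embedding(vocab, w_dim)
        self.pos1 = nn.Embedding(n_pos, p_dim)   # offsets from DRUG1
        self.pos2 = nn.Embedding(n_pos, p_dim)   # offsets from DRUG2
        in_dim = w_dim + 2 * p_dim
        self.convs = nn.ModuleList(
            nn.Conv1d(in_dim, n_filters, k, padding=k // 2) for k in windows)

    def forward(self, words, d1_offsets, d2_offsets):
        # words, d*_offsets: (batch, seq_len) index tensors
        x = torch.cat([self.word(words), self.pos1(d1_offsets),
                       self.pos2(d2_offsets)], dim=-1)       # Eq. (1)
        x = x.transpose(1, 2)                                # (batch, in_dim, seq)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return torch.cat(pooled, dim=-1)                     # h_t, Eq. (6)
```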
GEM-SciDuet-train-61#paper-1120#slide-4
GEM-SciDuet-train-61#paper-1120#slide-4
Method Molecular Structure based DDI Representation
We represent drug pairs in molecular graph structures using We pre-train GCNs using interacting (positive) pairs mentioned in the DrugBank and not mentioned (pseudo negative) pairs in the DrugBank Theophylline interact not mentioned Graph Convolutional Network (GCN) [Li et al. 2016] We use GCNs to convert a drug molecule graph into a fixed size vector by aggregating node vectors graph structure molecular vector Node : neighbors of GRU : gated Recurrent Unit : element-wise product : concatenation : learned weight
We represent drug pairs in molecular graph structures using We pre-train GCNs using interacting (positive) pairs mentioned in the DrugBank and not mentioned (pseudo negative) pairs in the DrugBank Theophylline interact not mentioned Graph Convolutional Network (GCN) [Li et al. 2016] We use GCNs to convert a drug molecule graph into a fixed size vector by aggregating node vectors graph structure molecular vector Node : neighbors of GRU : gated Recurrent Unit : element-wise product : concatenation : learned weight
[]
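The molecular side described above starts from a SMILES string taken from DrugBank and converts it into a graph with RDKit, using atoms as nodes and the four bond types as typed edges. A minimal sketch using real RDKit calls is below; the output format (symbol list plus per-bond-type adjacency tensor) is an assumption chosen to match the GCN sketch earlier, and the aspirin SMILES is only a demo input.

```python
import numpy as np
from rdkit import Chem

BOND_TYPES = {Chem.BondType.SINGLE: 0, Chem.BondType.DOUBLE: 1,
              Chem.BondType.TRIPLE: 2, Chem.BondType.AROMATIC: 3}

def smiles_to_graph(smiles):
    """Convert a SMILES string into (atom symbols, per-bond-type adjacency).
    Atom symbols would then be mapped to the randomly initialized atom
    embeddings mentioned in the training settings."""
    mol = Chem.MolFromSmiles(smiles)
    n = mol.GetNumAtoms()
    atoms = [a.GetSymbol() for a in mol.GetAtoms()]      # e.g., ['C', 'O', ...]
    adj = np.zeros((len(BOND_TYPES), n, n), dtype=np.float32)
    for b in mol.GetBonds():
        t = BOND_TYPES[b.GetBondType()]
        i, j = b.GetBeginAtomIdx(), b.GetEndAtomIdx()
        adj[t, i, j] = adj[t, j, i] = 1.0                # undirected bonds
    return atoms, adj

atoms, adj = smiles_to_graph('CC(=O)Oc1ccccc1C(=O)O')    # aspirin, as a demo
```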
GEM-SciDuet-train-61#paper-1120#slide-5
1120
Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs in high accuracy and the molecular information can enhance text-based DDI extraction by 2.39 percent points in the F-score on the DDIExtraction 2013 shared task data set.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction When drugs are concomitantly administered to a patient, the effects of the drugs may be enhanced or weakened, which may also cause side effects.", "These kinds of interactions are called Drug-Drug Interactions (DDIs).", "Several drug databases have been maintained to summarize drug and DDI information such as DrugBank (Law et al., 2014) , Therapeutic Target database , and PharmGKB (Thorn et al., 2013) .", "Automatic DDI extraction from texts is expected to support the maintenance of databases with high coverage and quick update to help medical experts.", "Deep neural network-based methods have recently drawn a considerable attention (Liu et al., 2016; Sahu and Anand, 2017; Zheng et al., 2017; Lim et al., 2018) since they show state-of-the-art performance without manual feature engineering.", "In parallel to the progress in DDI extraction from texts, Graph Convolutional Networks (GCNs) have been proposed and applied to estimate physical and chemical properties of molec-ular graphs such as solubility and toxicity (Duvenaud et al., 2015; Gilmer et al., 2017) .", "In this study, we propose a novel method to utilize both textual and molecular information for DDI extraction from texts.", "We illustrate the overview of the proposed model in Figure 1 .", "We obtain the representations of drug pairs in molecular graph structures using GCNs and concatenate the representations with the representations of the textual mention pairs obtained by convolutional neural networks (CNNs).", "We trained the molecule-based model using interacting pairs mentioned in the DrugBank database and then trained the entire model using the labeled pairs in the text data set of the DDIExtraction 2013 shared task (SemEval-2013 Task 9) (Segura .", "In the experiment, we show GCNs can predict DDIs from molecular graphs in a high accuracy.", "We also show molecular information can enhance the performance of DDI extraction from texts in 2.39 percent points in F-score.", "The contribution of this paper is three-fold: • We propose a novel neural method to extract DDIs from texts with the related molecular structure information.", "• We apply GCNs to pairwise drug molecules for the first time and show GCNs can predict DDIs between drug molecular structures in a high accuracy.", "• We show the molecular information is useful in extracting DDIs from texts.", "Methods Text-based DDI Extraction Our model for extracting DDIs from texts is based on the CNN model by Zeng et al.", "(2014) .", "When an input sentence S = (w 1 , w 2 , · · · , w N ) is given, We prepare word embedding w w i of w i and word Figure 1 : Overview of the proposed model position embeddings w p i,1 and w p i,2 that correspond to the relative positions from the first and second target entities, respectively.", "We concatenate these embeddings as in Equation (1) , and we use the resulting vector as the input to the subsequent convolution layer: w i = [w w i ; w p i,1 ; w p i,2 ], (1) where [; ] denotes the concatenation.", "We calculate the 
expression for each filter j with the window size k l .", "z i,l = [w i−(k l −1)/2 , · · · , w i−(k l +1)/2 ], (2) m i,j,l = relu(W conv j z i,l + b conv ), (3) m j,l = max i m i,j,l , (4) where L is the number of windows, W conv j and b conv are the weight and bias of CNN, and max indicates max pooling (Boureau et al., 2010) .", "We convert the output of the convolution layer into a fixed-size vector that represents a textual pair as follows: m l = [m 1,l , · · · , m J,l ], (5) h t = [m 1 ; .", ".", ".", "; m L ], (6) where J is the number of filters.", "We get a predictionŷ t by the following fully connected neural networks: h (1) t = relu(W (1) t h t + b (1) t ), (7) y t = softmax(W (2) t h (1) t + b (2) t ), (8) where W (1) t and W (2) t are weights and b (1) t and b (2) t are bias terms.", "Molecular Structure-based DDI Classification We represent drug pairs in molecular graph structures using two GCN methods: CNNs for fingerprints (NFP) (Duvenaud et al., 2015) and Gated Graph Neural Networks (GGNN) .", "They both convert a drug molecule graph G into a fixed size vector h g by aggregating the representation h T v of an atom node v in G. We represent atoms as nodes and bonds as edges in the graph.", "NFP first obtains the representation h t v by the following equations (Duvenaud et al., 2015) .", "m t+1 v = h t v + w∈N (v) h t w , (9) h t+1 v = σ(H deg(v) t m t+1 v ), (10) where h t v is the representation of v in the t-th step, N (v) is the neighbors of v, and H deg(v) t is a weight parameter.", "h 0 v is initialized by the atom features of v. deg(v) is the degree of a node v and σ is a sigmoid function.", "NFP then acquires the representation of the graph structure h g = v,t softmax(W t h t v ), (11) where W t is a weight matrix.", "GGNN first obtains the representation h t v by using Gated Recurrent Unit (GRU)-based recurrent neural networks as follows: m t+1 v = w∈N (v) A evw h t w (12) h t+1 v = GRU([h t v ; m t+1 v ]), (13) where A evw is a weight for the bond type of each edge e vw .", "GGNN then acquires the representation of the graph structure.", "h g = v σ(i([h T v ; h 0 v ])) (j(h T v )), (14) where i and j are linear layers and is the element-wise product.", "We obtain the representation of a molecular pair by concatenating the molecular graph representations of drugs g 1 and g 2 , i.e., h m = [h g 1 ; h g 2 ].", "We get a predictionŷ m as follows: h (1) m = relu(W (1) m h m + b (1) m ), (15) y m = softmax(W (2) m h (1) m + b (2) m ), (16) where W (1) m and W (2) m are weights and b (1) m and b (2) m are bias terms.", "DDI Extraction from Texts Using Molecular Structures We realize the simultaneous use of textual and molecular information by concatenating a textbased and molecule-based vectors: h all = [h t ; h m ].", "We normalize molecule-based vectors.", "We then use h all instead of h t in Equation 7 .", "In training, we first train the molecular-based DDI classification model.", "The molecular-based classification is performed by minimizing the loss function L m = − y m logŷ m .", "We then fix the parameters for GCNs and train text-based DDI extraction model by minimizing the loss function L t = − y t logŷ t .", "Experimental Settings In this section, we explain the textual and molecular data and task settings and training settings.", "Text Corpus and Task Setting We followed the task setting of Task 9.2 in the DDIExtraction 2013 shared task for the evaluation.", "This data set is composed of documents annotated with drug mentions and their four types of interactions: 
Mechanism, Effect, Advice and Int.", "For the data statistics, please refer to the supplementary materials.", "The task is a multi-class classification task, i.e., to classify a given pair of drugs into the four interaction types or no interaction.", "We evaluated the performance with micro-averaged precision (P), Figure 2 : Associating DrugBank entries with texts and molecular graph structures recall (R), and F-score (F) on all the interaction types.", "We used the official evaluation script provided by the task organizers.", "As preprocessing, we split sentences into words using the GENIA tagger (Tsuruoka et al., 2005) .", "We replaced the drug mentions of the target pair with DRUG1 and DRUG2 according to their order of appearance.", "We also replaced other drug mentions with DRUGOTHER.", "We did not employ negative instance filtering unlike other existing methods, e.g., Liu et al.", "(2016) , since our focus is to evaluate the effect of the molecular information on texts.", "We linked mentions in texts to DrugBank entries by string matching.", "We lowercased the mentions and the names in the entries and chose the entries with the most overlaps.", "As a result, 92.15% and 93.09% of drug mentions in train and test data set matched the DrugBank entries.", "Data and Task for Molecular Structures We extracted 255,229 interacting (positive) pairs from DrugBank.", "We note that, unlike text-based interactions, DrugBank only contains the information of interacting pairs; there are no detailed labels and no information for non-interacting (negative) pairs.", "We thus generated the same number of pseudo negative pairs by randomly pairing drugs and removing those in positive pairs.", "To avoid overestimation of the performance, we also deleted drug pairs mentioned in the test set of the text corpus.", "We split positive and negative pairs into 4:1 for training and test data, and we evaluated the classification accuracy using only the molecular information.", "To obtain the graph of a drug molecule, we took (Weininger, 1988) string encoding of the molecule from DrugBank and then converted it into the graph using RDKit (Landrum, 2016) as illustrated in Figure 2 .", "For the atom features, we used randomly embedded vectors for each atoms (i.e., C, O, N, ...).", "We also used 4 bond types: single, double, triple, or aromatic.", "Training Settings We employed mini-batch training using the Adam optimizer (Kingma and Ba, 2015) .", "We used L2 regularization to avoid over-fitting.", "We tuned the bias term b (2) t for negative examples in the final softmax layer.", "For the hyper-parameters, please refer to the supplementary materials.", "We employed pre-trained word embeddings trained by using the word2vec tool (Mikolov et al., 2013) on the 2014 MEDLINE/PubMed baseline distribution.", "The vocabulary size was 215,840.", "The embedding of the drugs, i.e., DRUG1 and DRUG2 were initialized with the pre-trained embedding of the word drug.", "The embeddings of training words that did not appear in the pretrained embeddings were initialized with the average of all pre-trained word embeddings.", "Words that appeared only once in the training data were replaced with an UNK word during training, and the embedding of words in the test data set that did not appear in both training and pre-trained embeddings were set to the embedding of the UNK word.", "Word position embeddings are initialized with random values drawn from a uniform distribution.", "We set the molecule-based vectors of unmatched entities to zero 
vectors.", "Table 1 shows the performance of DDI extraction models.", "We show the performance without negative instance filtering or ensemble for the fair comparison.", "We observe the increase of recall and F-score by using molecular information, Both GCNs improvements were statistically significant (p < 0.05 for NFP and p < 0.005 for GGNN) with randomized shuffled test.", "Table 2 shows F-scores on individual DDI types.", "The molecular information improves Fscores especially on type Mechanism and Effect.", "Results We also evaluated the accuracy of binary classification on DrugBank pairs by using only the molecular information in Table 3 .", "The performance is high, although the accuracy is evaluated on automatically generated negative instances.", "Finally, we applied the molecular-based DDI classification model trained on DrugBank to the DDIExtraction 2013 task data set.", "Since the Drug-Bank has no detailed labels, we mapped all four types of interactions to positive interactions and evaluated the classification performance.", "The results in Table 4 show that GCNs produce higher recall than precision and the overall performance is low considering the high performance on Drug-Bank pairs.", "This might be because the interactions of drugs are not always mentioned in texts even if the drugs can interact with each other and because hedged DDI mentions are annotated as DDIs in the text data set.", "We also trained the DDI extraction model only with molecular information by replacing h all with h m , but the F-scores were quite low (< 5%).", "These results show that we cannot predict textual relations only with molecular information.", "Related Work Various feature-based methods have been proposed during and after the DDIExtraction-2013 shared task .", "Kim et al.", "(2015) proposed a two-phase SVM-based approach that employed a linear SVM with rich features that consist of word, word pair, dependency graph, parse tree, and noun phrase-based constrained coordination features.", "Zheng et al.", "(2016) proposed a context vector graph kernel to exploit various types of contexts.", "Raihani and Laachfoubi (2017) also employed a two-phase SVM-based approach using non-linear kernels and they proposed five groups of features: word, drug, pair of drug, main verb and negative sentence features.", "Our model does not use any features or kernels.", "Various neural DDI extraction models have been recently proposed using CNNs and Recurrent Neural Networks (RNNs).", "Liu et al.", "(2016) built a CNN-based model based on word and position embeddings.", "Zheng et al.", "(2017) proposed a Bidirectional Long Short-Term Memory RNN (Bi-LSTM)-based model with an input attention mechanism, which obtained target drug-specific word representations before the Bi-LSTM.", "Lim et al.", "(2018) proposed Recursive neural networkbased model with a subtree containment feature and an ensemble method.", "This model showed the state-of-the-art performance on the DDIExtraction 2013 shared task data set if systems do not use negative instance filtering.", "These approaches did not consider molecular information, and they can also be enhanced by the molecular information.", "Vilar et al.", "(2017) focused on detecting DDIs from different sources such as pharmacovigilance sources, scientific biomedical literature and social media.", "They did not use deep neural networks and they did not consider molecular information.", "Learning representations of graphs are widely studied in several tasks such as knowledge base 
"Several graph convolutional neural networks have been proposed, such as NFP (Duvenaud et al., 2015), GGNN, and Molecular Graph Convolutions (Kearnes et al., 2016), but they have not been applied to DDI extraction.", "Conclusions We proposed a novel neural method for DDI extraction using both textual and molecular information.", "The results show that DDIs can be predicted with high accuracy from molecular structure information and that the molecular information can improve DDI extraction from texts by 2.39 percentage points in F-score on the data set of the DDIExtraction 2013 shared task.", "As future work, we would like to seek a way to model the textual and molecular representations jointly while alleviating the differences in their labels.", "We will also investigate the use of other information in DrugBank." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Text-based DDI Extraction", "Molecular Structure-based DDI Classification", "DDI Extraction from Texts Using Molecular Structures", "Experimental Settings", "Text Corpus and Task Setting", "Data and Task for Molecular Structures", "Training Settings", "Results", "Related Work", "Conclusions" ] }
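The experimental settings in the paper content above describe a concrete preprocessing step: the target drug pair is replaced with DRUG1 and DRUG2 in order of appearance, and every other drug mention becomes DRUGOTHER. A minimal Python sketch follows; the function name and the assumption that each drug mention spans a single token are ours, not the authors'.

def anonymize(tokens, pair_positions, other_positions):
    """tokens: word tokens from the GENIA tagger.
    pair_positions: token indices of the two target drug mentions.
    other_positions: token indices of all other drug mentions."""
    out = list(tokens)
    first, second = sorted(pair_positions)   # order of appearance
    out[first] = "DRUG1"
    out[second] = "DRUG2"
    for i in other_positions:
        out[i] = "DRUGOTHER"
    return out

# Example: one candidate pair in a sentence with three drug mentions.
tokens = "Grepafloxacin inhibits the metabolism of theophylline and caffeine".split()
print(anonymize(tokens, [0, 5], [7]))
# ['DRUG1', 'inhibits', 'the', 'metabolism', 'of', 'DRUG2', 'and', 'DRUGOTHER']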
GEM-SciDuet-train-61#paper-1120#slide-5
Method DDI Extraction from Texts Using Molecular Structures
Link mentions in the text corpus to drug database entries by relaxed string matching; obtain molecular vectors via GCNs with fixed parameters; predict DDIs from concatenated textual and molecular vectors. (Diagram labels: word + position embeddings; "Grepafloxacin inhibits the metabolism of ..."; Grepafloxacin; Theophylline; GCN; concat; DrugBank.)
Link mentions in the text corpus to drug database entries by relaxed string matching; obtain molecular vectors via GCNs with fixed parameters; predict DDIs from concatenated textual and molecular vectors. (Diagram labels: word + position embeddings; "Grepafloxacin inhibits the metabolism of ..."; Grepafloxacin; Theophylline; GCN; concat; DrugBank.)
[]
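The record above states that mentions are linked to DrugBank entries by lowercased string matching, choosing the entry with the most overlap, but the overlap measure is not specified. The sketch below uses longest-common-substring length as one plausible measure; the name `link_mention` and the arbitrary tie-breaking are our assumptions.

def link_mention(mention, drugbank_names):
    """Pick the DrugBank entry whose lowercased name shares the longest
    common substring with the lowercased mention (ties broken arbitrarily)."""
    m = mention.lower()
    def overlap(name):
        n = name.lower()
        best = 0
        for i in range(len(m)):
            for j in range(i + best + 1, len(m) + 1):
                if m[i:j] in n:
                    best = j - i
        return best
    return max(drugbank_names, key=overlap, default=None)

print(link_mention("grepafloxacin HCl", ["Grepafloxacin", "Theophylline"]))
# Grepafloxacin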
GEM-SciDuet-train-61#paper-1120#slide-6
1120
Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs with high accuracy and that the molecular information can enhance text-based DDI extraction by 2.39 percentage points in F-score on the DDIExtraction 2013 shared task data set.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction When drugs are concomitantly administered to a patient, the effects of the drugs may be enhanced or weakened, which may also cause side effects.", "These kinds of interactions are called Drug-Drug Interactions (DDIs).", "Several drug databases have been maintained to summarize drug and DDI information, such as DrugBank (Law et al., 2014), the Therapeutic Target Database, and PharmGKB (Thorn et al., 2013).", "Automatic DDI extraction from texts is expected to support the maintenance of databases with high coverage and quick updates to help medical experts.", "Deep neural network-based methods have recently drawn considerable attention (Liu et al., 2016; Sahu and Anand, 2017; Zheng et al., 2017; Lim et al., 2018) since they show state-of-the-art performance without manual feature engineering.", "In parallel to the progress in DDI extraction from texts, Graph Convolutional Networks (GCNs) have been proposed and applied to estimate physical and chemical properties of molecular graphs such as solubility and toxicity (Duvenaud et al., 2015; Gilmer et al., 2017).", "In this study, we propose a novel method to utilize both textual and molecular information for DDI extraction from texts.", "We illustrate the overview of the proposed model in Figure 1.", "(Figure 1: Overview of the proposed model.)", "We obtain the representations of drug pairs in molecular graph structures using GCNs and concatenate them with the representations of the textual mention pairs obtained by convolutional neural networks (CNNs).", "We trained the molecule-based model using interacting pairs mentioned in the DrugBank database and then trained the entire model using the labeled pairs in the text data set of the DDIExtraction 2013 shared task (SemEval-2013 Task 9) (Segura-Bedmar et al., 2013).", "In the experiments, we show that GCNs can predict DDIs from molecular graphs with high accuracy.", "We also show that molecular information can enhance the performance of DDI extraction from texts by 2.39 percentage points in F-score.", "The contribution of this paper is three-fold: • We propose a novel neural method to extract DDIs from texts with the related molecular structure information.", "• We apply GCNs to pairwise drug molecules for the first time and show GCNs can predict DDIs between drug molecular structures with high accuracy.", "• We show the molecular information is useful in extracting DDIs from texts.", "Methods Text-based DDI Extraction Our model for extracting DDIs from texts is based on the CNN model by Zeng et al. (2014).", "When an input sentence $S = (w_1, w_2, \cdots, w_N)$ is given, we prepare the word embedding $w^w_i$ of $w_i$ and word position embeddings $w^p_{i,1}$ and $w^p_{i,2}$ that correspond to the relative positions from the first and second target entities, respectively.", "We concatenate these embeddings as in Equation (1), and we use the resulting vector as the input to the subsequent convolution layer: $w_i = [w^w_i; w^p_{i,1}; w^p_{i,2}]$ (1), where $[;]$ denotes concatenation.", "We calculate the expression for each filter $j$ with the window size $k_l$.",
"$z_{i,l} = [w_{i-(k_l-1)/2}; \cdots; w_{i+(k_l-1)/2}]$ (2), $m_{i,j,l} = \mathrm{relu}(W^{\mathrm{conv}}_j z_{i,l} + b^{\mathrm{conv}})$ (3), $m_{j,l} = \max_i m_{i,j,l}$ (4), where $L$ is the number of window sizes, $W^{\mathrm{conv}}_j$ and $b^{\mathrm{conv}}$ are the weight and bias of the CNN, and $\max$ indicates max pooling (Boureau et al., 2010).", "We convert the output of the convolution layer into a fixed-size vector that represents a textual pair as follows: $m_l = [m_{1,l}, \cdots, m_{J,l}]$ (5), $h_t = [m_1; \ldots; m_L]$ (6), where $J$ is the number of filters.", "We get a prediction $\hat{y}_t$ by the following fully connected neural networks: $h^{(1)}_t = \mathrm{relu}(W^{(1)}_t h_t + b^{(1)}_t)$ (7), $\hat{y}_t = \mathrm{softmax}(W^{(2)}_t h^{(1)}_t + b^{(2)}_t)$ (8), where $W^{(1)}_t$ and $W^{(2)}_t$ are weights and $b^{(1)}_t$ and $b^{(2)}_t$ are bias terms.", "Molecular Structure-based DDI Classification We represent drug pairs in molecular graph structures using two GCN methods: CNNs for fingerprints (NFP) (Duvenaud et al., 2015) and Gated Graph Neural Networks (GGNN).", "They both convert a drug molecule graph $G$ into a fixed-size vector $h_g$ by aggregating the representation $h^T_v$ of each atom node $v$ in $G$. We represent atoms as nodes and bonds as edges in the graph.", "NFP first obtains the representation $h^t_v$ by the following equations (Duvenaud et al., 2015): $m^{t+1}_v = h^t_v + \sum_{w \in N(v)} h^t_w$ (9), $h^{t+1}_v = \sigma(H^{\deg(v)}_t m^{t+1}_v)$ (10), where $h^t_v$ is the representation of $v$ in the $t$-th step, $N(v)$ is the set of neighbors of $v$, and $H^{\deg(v)}_t$ is a weight parameter.", "$h^0_v$ is initialized by the atom features of $v$; $\deg(v)$ is the degree of node $v$, and $\sigma$ is the sigmoid function.", "NFP then acquires the representation of the graph structure as $h_g = \sum_{v,t} \mathrm{softmax}(W_t h^t_v)$ (11), where $W_t$ is a weight matrix.", "GGNN first obtains the representation $h^t_v$ by using Gated Recurrent Unit (GRU)-based recurrent neural networks as follows: $m^{t+1}_v = \sum_{w \in N(v)} A_{e_{vw}} h^t_w$ (12), $h^{t+1}_v = \mathrm{GRU}([h^t_v; m^{t+1}_v])$ (13), where $A_{e_{vw}}$ is a weight for the bond type of each edge $e_{vw}$.", "GGNN then acquires the representation of the graph structure as $h_g = \sum_v \sigma(i([h^T_v; h^0_v])) \odot j(h^T_v)$ (14), where $i$ and $j$ are linear layers and $\odot$ is the element-wise product.", "We obtain the representation of a molecular pair by concatenating the molecular graph representations of drugs $g_1$ and $g_2$, i.e., $h_m = [h_{g_1}; h_{g_2}]$.", "We get a prediction $\hat{y}_m$ as follows: $h^{(1)}_m = \mathrm{relu}(W^{(1)}_m h_m + b^{(1)}_m)$ (15), $\hat{y}_m = \mathrm{softmax}(W^{(2)}_m h^{(1)}_m + b^{(2)}_m)$ (16), where $W^{(1)}_m$ and $W^{(2)}_m$ are weights and $b^{(1)}_m$ and $b^{(2)}_m$ are bias terms.", "DDI Extraction from Texts Using Molecular Structures We realize the simultaneous use of textual and molecular information by concatenating the text-based and molecule-based vectors: $h_{all} = [h_t; h_m]$.", "We normalize the molecule-based vectors.", "We then use $h_{all}$ instead of $h_t$ in Equation (7).", "In training, we first train the molecule-based DDI classification model.", "The molecule-based classification is performed by minimizing the loss function $L_m = -\sum y_m \log \hat{y}_m$.", "We then fix the parameters of the GCNs and train the text-based DDI extraction model by minimizing the loss function $L_t = -\sum y_t \log \hat{y}_t$.", "Experimental Settings In this section, we explain the textual and molecular data, the task settings, and the training settings.", "Text Corpus and Task Setting We followed the task setting of Task 9.2 in the DDIExtraction 2013 shared task for the evaluation.", "This data set is composed of documents annotated with drug mentions and their four types of interactions:
Mechanism, Effect, Advice and Int.", "For the data statistics, please refer to the supplementary materials.", "The task is a multi-class classification task, i.e., to classify a given pair of drugs into one of the four interaction types or no interaction.", "We evaluated the performance with micro-averaged precision (P), recall (R), and F-score (F) on all the interaction types.", "(Figure 2: Associating DrugBank entries with texts and molecular graph structures.)", "We used the official evaluation script provided by the task organizers.", "As preprocessing, we split sentences into words using the GENIA tagger (Tsuruoka et al., 2005).", "We replaced the drug mentions of the target pair with DRUG1 and DRUG2 according to their order of appearance.", "We also replaced other drug mentions with DRUGOTHER.", "We did not employ negative instance filtering, unlike other existing methods, e.g., Liu et al. (2016), since our focus is to evaluate the effect of the molecular information on texts.", "We linked mentions in texts to DrugBank entries by string matching.", "We lowercased the mentions and the names in the entries and chose the entries with the most overlaps.", "As a result, 92.15% and 93.09% of the drug mentions in the train and test data sets, respectively, matched DrugBank entries.", "Data and Task for Molecular Structures We extracted 255,229 interacting (positive) pairs from DrugBank.", "We note that, unlike text-based interactions, DrugBank only contains the information of interacting pairs; there are no detailed labels and no information for non-interacting (negative) pairs.", "We thus generated the same number of pseudo-negative pairs by randomly pairing drugs and removing those that appear in positive pairs.", "To avoid overestimating the performance, we also deleted drug pairs mentioned in the test set of the text corpus.", "We split the positive and negative pairs 4:1 into training and test data, and we evaluated the classification accuracy using only the molecular information.", "To obtain the graph of a drug molecule, we took the SMILES (Weininger, 1988) string encoding of the molecule from DrugBank and then converted it into the graph using RDKit (Landrum, 2016), as illustrated in Figure 2.", "For the atom features, we used randomly embedded vectors for each atom type (i.e., C, O, N, ...).", "We also used 4 bond types: single, double, triple, or aromatic.", "Training Settings We employed mini-batch training using the Adam optimizer (Kingma and Ba, 2015).", "We used L2 regularization to avoid over-fitting.", "We tuned the bias term $b^{(2)}_t$ for negative examples in the final softmax layer.", "For the hyper-parameters, please refer to the supplementary materials.", "We employed pre-trained word embeddings trained using the word2vec tool (Mikolov et al., 2013) on the 2014 MEDLINE/PubMed baseline distribution.", "The vocabulary size was 215,840.", "The embeddings of the drug placeholders, i.e., DRUG1 and DRUG2, were initialized with the pre-trained embedding of the word drug.", "The embeddings of training words that did not appear in the pre-trained embeddings were initialized with the average of all pre-trained word embeddings.", "Words that appeared only once in the training data were replaced with an UNK word during training, and the embeddings of words in the test data set that appeared in neither the training data nor the pre-trained embeddings were set to the embedding of the UNK word.", "Word position embeddings were initialized with random values drawn from a uniform distribution.", "We set the molecule-based vectors of unmatched entities to zero vectors.",
"Results Table 1 shows the performance of the DDI extraction models.", "We show the performance without negative instance filtering or ensembling for a fair comparison.", "We observe an increase in recall and F-score by using molecular information; the improvements with both GCNs were statistically significant (p < 0.05 for NFP and p < 0.005 for GGNN) under a randomized shuffling test.", "Table 2 shows F-scores on individual DDI types.", "The molecular information improves F-scores, especially on the types Mechanism and Effect.", "We also evaluated the accuracy of binary classification on DrugBank pairs by using only the molecular information in Table 3.", "The performance is high, although the accuracy is evaluated on automatically generated negative instances.", "Finally, we applied the molecule-based DDI classification model trained on DrugBank to the DDIExtraction 2013 task data set.", "Since DrugBank has no detailed labels, we mapped all four types of interactions to positive interactions and evaluated the classification performance.", "The results in Table 4 show that GCNs produce higher recall than precision and the overall performance is low considering the high performance on DrugBank pairs.", "This might be because the interactions of drugs are not always mentioned in texts even if the drugs can interact with each other, and because hedged DDI mentions are annotated as DDIs in the text data set.", "We also trained the DDI extraction model only with molecular information by replacing $h_{all}$ with $h_m$, but the F-scores were quite low (< 5%).", "These results show that we cannot predict textual relations only with molecular information.", "Related Work Various feature-based methods have been proposed during and after the DDIExtraction 2013 shared task.", "Kim et al. (2015) proposed a two-phase SVM-based approach that employed a linear SVM with rich features that consist of word, word pair, dependency graph, parse tree, and noun phrase-based constrained coordination features.", "Zheng et al. (2016) proposed a context vector graph kernel to exploit various types of contexts.", "Raihani and Laachfoubi (2017) also employed a two-phase SVM-based approach using non-linear kernels, and they proposed five groups of features: word, drug, drug pair, main verb, and negative sentence features.", "Our model does not use any features or kernels.", "Various neural DDI extraction models have recently been proposed using CNNs and Recurrent Neural Networks (RNNs).", "Liu et al. (2016) built a CNN-based model based on word and position embeddings.", "Zheng et al. (2017) proposed a Bidirectional Long Short-Term Memory RNN (Bi-LSTM)-based model with an input attention mechanism, which obtained target drug-specific word representations before the Bi-LSTM.", "Lim et al. (2018) proposed a recursive neural network-based model with a subtree containment feature and an ensemble method.", "This model showed the state-of-the-art performance on the DDIExtraction 2013 shared task data set if systems do not use negative instance filtering.", "These approaches did not consider molecular information, and they can also be enhanced by the molecular information.", "Vilar et al. (2017) focused on detecting DDIs from different sources such as pharmacovigilance sources, scientific biomedical literature, and social media.", "They did not use deep neural networks and did not consider molecular information.", "Learning representations of graphs is widely studied in several tasks such as knowledge base completion, drug discovery, and material science (Gilmer et al., 2017).",
"Several graph convolutional neural networks have been proposed, such as NFP (Duvenaud et al., 2015), GGNN, and Molecular Graph Convolutions (Kearnes et al., 2016), but they have not been applied to DDI extraction.", "Conclusions We proposed a novel neural method for DDI extraction using both textual and molecular information.", "The results show that DDIs can be predicted with high accuracy from molecular structure information and that the molecular information can improve DDI extraction from texts by 2.39 percentage points in F-score on the data set of the DDIExtraction 2013 shared task.", "As future work, we would like to seek a way to model the textual and molecular representations jointly while alleviating the differences in their labels.", "We will also investigate the use of other information in DrugBank." ] }
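The paper content above walks through the CNN text encoder (Eqs. (2)-(8)): per-filter convolutions over word+position embeddings, max pooling per filter, concatenation over window sizes, and two fully connected layers. A NumPy sketch of Eqs. (2)-(6) follows; the zero-padding strategy, the row layout, and the toy dimensions are our assumptions, since the paper does not state them.

import numpy as np

def cnn_pair_vector(W_emb, filters):
    """W_emb: (N, d) rows w_i (word + position embeddings, already concatenated).
    filters: list of (W_conv, b_conv, k) per window size; W_conv: (J, k*d)."""
    N, d = W_emb.shape
    h_t = []
    for W_conv, b_conv, k in filters:
        pad = np.zeros(((k - 1) // 2, d))
        X = np.vstack([pad, W_emb, pad])             # zero-pad so every i has a window
        feats = []
        for i in range(N):
            z = X[i:i + k].reshape(-1)               # Eq. (2): window centered on w_i
            feats.append(np.maximum(W_conv @ z + b_conv, 0.0))  # Eq. (3): relu
        m_l = np.max(np.stack(feats), axis=0)        # Eqs. (4)-(5): max over positions
        h_t.append(m_l)
    return np.concatenate(h_t)                       # Eq. (6): concat over window sizes

rng = np.random.default_rng(0)
emb = rng.normal(size=(9, 6))                        # toy sentence, d = 6
filters = [(rng.normal(size=(4, k * 6)), np.zeros(4), k) for k in (3, 5)]
print(cnn_pair_vector(emb, filters).shape)           # (8,) = J * L with J = 4, L = 2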
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Text-based DDI Extraction", "Molecular Structure-based DDI Classification", "DDI Extraction from Texts Using Molecular Structures", "Experimental Settings", "Text Corpus and Task Setting", "Data and Task for Molecular Structures", "Training Settings", "Results", "Related Work", "Conclusions" ] }
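Equations (9)-(11) above define the NFP update: each atom adds its neighbors' vectors, passes the sum through a degree-specific weight matrix and a sigmoid, and every step contributes a softmax readout to the graph vector. Here is a NumPy sketch of one plausible reading; the row-vector convention, the dict-of-degrees layout for H, and the toy dimensions are our assumptions, not the authors' exact implementation.

import numpy as np

def nfp_readout(h0, adj, H, W, T=2):
    """h0: (num_atoms, d) initial atom features; adj: {0,1} adjacency matrix;
    H[t][deg]: per-step, per-degree (d, d) weights; W[t]: (d, fp) readout matrices."""
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    h = h0
    deg = adj.sum(axis=1).astype(int)            # node degrees
    h_g = softmax(h @ W[0]).sum(axis=0)          # t = 0 contribution to Eq. (11)
    for t in range(T):
        m = h + adj @ h                          # Eq. (9): self + neighbor sum
        h = sigmoid(np.stack([m[v] @ H[t][deg[v]] for v in range(len(m))]))  # Eq. (10)
        h_g += softmax(h @ W[t + 1]).sum(axis=0)  # Eq. (11): summed over v and t
    return h_g

rng = np.random.default_rng(0)
n, d, fp, T = 5, 8, 16, 2
adj = np.zeros((n, n)); adj[0, 1] = adj[1, 0] = adj[1, 2] = adj[2, 1] = 1
H = [{k: rng.normal(size=(d, d)) for k in range(n)} for _ in range(T)]
W = [rng.normal(size=(d, fp)) for _ in range(T + 1)]
print(nfp_readout(rng.normal(size=(n, d)), adj, H, W, T).shape)  # (16,)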
GEM-SciDuet-train-61#paper-1120#slide-6
Task Settings
The data set is composed of documents annotated with drug mentions and their 4 types of interactions (Mechanism, Effect, Advice and Interaction) or no interaction. (Table: Statistics of the DDI SemEval-2013 shared task.)
The data set is composed of documents annotated with drug mentions and their 4 types of interactions (Mechanism, Effect, Advice and Interaction) or no interaction. (Table: Statistics of the DDI SemEval-2013 shared task.)
[]
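The task setting above evaluates micro-averaged precision, recall, and F-score over the four interaction types, with "no interaction" outside the evaluation. The official script is authoritative; the sketch below only mirrors the metric definition, and the label names are illustrative.

def micro_prf(gold, pred, negative="none"):
    """Micro-averaged precision/recall/F-score over all interaction types,
    excluding the `negative` (no interaction) class from the evaluation."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p != negative)
    pred_pos = sum(1 for p in pred if p != negative)
    gold_pos = sum(1 for g in gold if g != negative)
    p = tp / pred_pos if pred_pos else 0.0
    r = tp / gold_pos if gold_pos else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = ["mechanism", "none", "effect", "advice", "none"]
pred = ["mechanism", "effect", "effect", "none", "none"]
print(micro_prf(gold, pred))  # p = r = f ≈ 0.667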
GEM-SciDuet-train-61#paper-1120#slide-7
1120
Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs with high accuracy and that the molecular information can enhance text-based DDI extraction by 2.39 percentage points in F-score on the DDIExtraction 2013 shared task data set.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction When drugs are concomitantly administered to a patient, the effects of the drugs may be enhanced or weakened, which may also cause side effects.", "These kinds of interactions are called Drug-Drug Interactions (DDIs).", "Several drug databases have been maintained to summarize drug and DDI information, such as DrugBank (Law et al., 2014), the Therapeutic Target Database, and PharmGKB (Thorn et al., 2013).", "Automatic DDI extraction from texts is expected to support the maintenance of databases with high coverage and quick updates to help medical experts.", "Deep neural network-based methods have recently drawn considerable attention (Liu et al., 2016; Sahu and Anand, 2017; Zheng et al., 2017; Lim et al., 2018) since they show state-of-the-art performance without manual feature engineering.", "In parallel to the progress in DDI extraction from texts, Graph Convolutional Networks (GCNs) have been proposed and applied to estimate physical and chemical properties of molecular graphs such as solubility and toxicity (Duvenaud et al., 2015; Gilmer et al., 2017).", "In this study, we propose a novel method to utilize both textual and molecular information for DDI extraction from texts.", "We illustrate the overview of the proposed model in Figure 1.", "(Figure 1: Overview of the proposed model.)", "We obtain the representations of drug pairs in molecular graph structures using GCNs and concatenate them with the representations of the textual mention pairs obtained by convolutional neural networks (CNNs).", "We trained the molecule-based model using interacting pairs mentioned in the DrugBank database and then trained the entire model using the labeled pairs in the text data set of the DDIExtraction 2013 shared task (SemEval-2013 Task 9) (Segura-Bedmar et al., 2013).", "In the experiments, we show that GCNs can predict DDIs from molecular graphs with high accuracy.", "We also show that molecular information can enhance the performance of DDI extraction from texts by 2.39 percentage points in F-score.", "The contribution of this paper is three-fold: • We propose a novel neural method to extract DDIs from texts with the related molecular structure information.", "• We apply GCNs to pairwise drug molecules for the first time and show GCNs can predict DDIs between drug molecular structures with high accuracy.", "• We show the molecular information is useful in extracting DDIs from texts.", "Methods Text-based DDI Extraction Our model for extracting DDIs from texts is based on the CNN model by Zeng et al. (2014).", "When an input sentence $S = (w_1, w_2, \cdots, w_N)$ is given, we prepare the word embedding $w^w_i$ of $w_i$ and word position embeddings $w^p_{i,1}$ and $w^p_{i,2}$ that correspond to the relative positions from the first and second target entities, respectively.", "We concatenate these embeddings as in Equation (1), and we use the resulting vector as the input to the subsequent convolution layer: $w_i = [w^w_i; w^p_{i,1}; w^p_{i,2}]$ (1), where $[;]$ denotes concatenation.", "We calculate the expression for each filter $j$ with the window size $k_l$.",
"$z_{i,l} = [w_{i-(k_l-1)/2}; \cdots; w_{i+(k_l-1)/2}]$ (2), $m_{i,j,l} = \mathrm{relu}(W^{\mathrm{conv}}_j z_{i,l} + b^{\mathrm{conv}})$ (3), $m_{j,l} = \max_i m_{i,j,l}$ (4), where $L$ is the number of window sizes, $W^{\mathrm{conv}}_j$ and $b^{\mathrm{conv}}$ are the weight and bias of the CNN, and $\max$ indicates max pooling (Boureau et al., 2010).", "We convert the output of the convolution layer into a fixed-size vector that represents a textual pair as follows: $m_l = [m_{1,l}, \cdots, m_{J,l}]$ (5), $h_t = [m_1; \ldots; m_L]$ (6), where $J$ is the number of filters.", "We get a prediction $\hat{y}_t$ by the following fully connected neural networks: $h^{(1)}_t = \mathrm{relu}(W^{(1)}_t h_t + b^{(1)}_t)$ (7), $\hat{y}_t = \mathrm{softmax}(W^{(2)}_t h^{(1)}_t + b^{(2)}_t)$ (8), where $W^{(1)}_t$ and $W^{(2)}_t$ are weights and $b^{(1)}_t$ and $b^{(2)}_t$ are bias terms.", "Molecular Structure-based DDI Classification We represent drug pairs in molecular graph structures using two GCN methods: CNNs for fingerprints (NFP) (Duvenaud et al., 2015) and Gated Graph Neural Networks (GGNN).", "They both convert a drug molecule graph $G$ into a fixed-size vector $h_g$ by aggregating the representation $h^T_v$ of each atom node $v$ in $G$. We represent atoms as nodes and bonds as edges in the graph.", "NFP first obtains the representation $h^t_v$ by the following equations (Duvenaud et al., 2015): $m^{t+1}_v = h^t_v + \sum_{w \in N(v)} h^t_w$ (9), $h^{t+1}_v = \sigma(H^{\deg(v)}_t m^{t+1}_v)$ (10), where $h^t_v$ is the representation of $v$ in the $t$-th step, $N(v)$ is the set of neighbors of $v$, and $H^{\deg(v)}_t$ is a weight parameter.", "$h^0_v$ is initialized by the atom features of $v$; $\deg(v)$ is the degree of node $v$, and $\sigma$ is the sigmoid function.", "NFP then acquires the representation of the graph structure as $h_g = \sum_{v,t} \mathrm{softmax}(W_t h^t_v)$ (11), where $W_t$ is a weight matrix.", "GGNN first obtains the representation $h^t_v$ by using Gated Recurrent Unit (GRU)-based recurrent neural networks as follows: $m^{t+1}_v = \sum_{w \in N(v)} A_{e_{vw}} h^t_w$ (12), $h^{t+1}_v = \mathrm{GRU}([h^t_v; m^{t+1}_v])$ (13), where $A_{e_{vw}}$ is a weight for the bond type of each edge $e_{vw}$.", "GGNN then acquires the representation of the graph structure as $h_g = \sum_v \sigma(i([h^T_v; h^0_v])) \odot j(h^T_v)$ (14), where $i$ and $j$ are linear layers and $\odot$ is the element-wise product.", "We obtain the representation of a molecular pair by concatenating the molecular graph representations of drugs $g_1$ and $g_2$, i.e., $h_m = [h_{g_1}; h_{g_2}]$.", "We get a prediction $\hat{y}_m$ as follows: $h^{(1)}_m = \mathrm{relu}(W^{(1)}_m h_m + b^{(1)}_m)$ (15), $\hat{y}_m = \mathrm{softmax}(W^{(2)}_m h^{(1)}_m + b^{(2)}_m)$ (16), where $W^{(1)}_m$ and $W^{(2)}_m$ are weights and $b^{(1)}_m$ and $b^{(2)}_m$ are bias terms.", "DDI Extraction from Texts Using Molecular Structures We realize the simultaneous use of textual and molecular information by concatenating the text-based and molecule-based vectors: $h_{all} = [h_t; h_m]$.", "We normalize the molecule-based vectors.", "We then use $h_{all}$ instead of $h_t$ in Equation (7).", "In training, we first train the molecule-based DDI classification model.", "The molecule-based classification is performed by minimizing the loss function $L_m = -\sum y_m \log \hat{y}_m$.", "We then fix the parameters of the GCNs and train the text-based DDI extraction model by minimizing the loss function $L_t = -\sum y_t \log \hat{y}_t$.", "Experimental Settings In this section, we explain the textual and molecular data, the task settings, and the training settings.", "Text Corpus and Task Setting We followed the task setting of Task 9.2 in the DDIExtraction 2013 shared task for the evaluation.", "This data set is composed of documents annotated with drug mentions and their four types of interactions:
Mechanism, Effect, Advice and Int.", "For the data statistics, please refer to the supplementary materials.", "The task is a multi-class classification task, i.e., to classify a given pair of drugs into one of the four interaction types or no interaction.", "We evaluated the performance with micro-averaged precision (P), recall (R), and F-score (F) on all the interaction types.", "(Figure 2: Associating DrugBank entries with texts and molecular graph structures.)", "We used the official evaluation script provided by the task organizers.", "As preprocessing, we split sentences into words using the GENIA tagger (Tsuruoka et al., 2005).", "We replaced the drug mentions of the target pair with DRUG1 and DRUG2 according to their order of appearance.", "We also replaced other drug mentions with DRUGOTHER.", "We did not employ negative instance filtering, unlike other existing methods, e.g., Liu et al. (2016), since our focus is to evaluate the effect of the molecular information on texts.", "We linked mentions in texts to DrugBank entries by string matching.", "We lowercased the mentions and the names in the entries and chose the entries with the most overlaps.", "As a result, 92.15% and 93.09% of the drug mentions in the train and test data sets, respectively, matched DrugBank entries.", "Data and Task for Molecular Structures We extracted 255,229 interacting (positive) pairs from DrugBank.", "We note that, unlike text-based interactions, DrugBank only contains the information of interacting pairs; there are no detailed labels and no information for non-interacting (negative) pairs.", "We thus generated the same number of pseudo-negative pairs by randomly pairing drugs and removing those that appear in positive pairs.", "To avoid overestimating the performance, we also deleted drug pairs mentioned in the test set of the text corpus.", "We split the positive and negative pairs 4:1 into training and test data, and we evaluated the classification accuracy using only the molecular information.", "To obtain the graph of a drug molecule, we took the SMILES (Weininger, 1988) string encoding of the molecule from DrugBank and then converted it into the graph using RDKit (Landrum, 2016), as illustrated in Figure 2.", "For the atom features, we used randomly embedded vectors for each atom type (i.e., C, O, N, ...).", "We also used 4 bond types: single, double, triple, or aromatic.", "Training Settings We employed mini-batch training using the Adam optimizer (Kingma and Ba, 2015).", "We used L2 regularization to avoid over-fitting.", "We tuned the bias term $b^{(2)}_t$ for negative examples in the final softmax layer.", "For the hyper-parameters, please refer to the supplementary materials.", "We employed pre-trained word embeddings trained using the word2vec tool (Mikolov et al., 2013) on the 2014 MEDLINE/PubMed baseline distribution.", "The vocabulary size was 215,840.", "The embeddings of the drug placeholders, i.e., DRUG1 and DRUG2, were initialized with the pre-trained embedding of the word drug.", "The embeddings of training words that did not appear in the pre-trained embeddings were initialized with the average of all pre-trained word embeddings.", "Words that appeared only once in the training data were replaced with an UNK word during training, and the embeddings of words in the test data set that appeared in neither the training data nor the pre-trained embeddings were set to the embedding of the UNK word.", "Word position embeddings were initialized with random values drawn from a uniform distribution.", "We set the molecule-based vectors of unmatched entities to zero vectors.",
"Results Table 1 shows the performance of the DDI extraction models.", "We show the performance without negative instance filtering or ensembling for a fair comparison.", "We observe an increase in recall and F-score by using molecular information; the improvements with both GCNs were statistically significant (p < 0.05 for NFP and p < 0.005 for GGNN) under a randomized shuffling test.", "Table 2 shows F-scores on individual DDI types.", "The molecular information improves F-scores, especially on the types Mechanism and Effect.", "We also evaluated the accuracy of binary classification on DrugBank pairs by using only the molecular information in Table 3.", "The performance is high, although the accuracy is evaluated on automatically generated negative instances.", "Finally, we applied the molecule-based DDI classification model trained on DrugBank to the DDIExtraction 2013 task data set.", "Since DrugBank has no detailed labels, we mapped all four types of interactions to positive interactions and evaluated the classification performance.", "The results in Table 4 show that GCNs produce higher recall than precision and the overall performance is low considering the high performance on DrugBank pairs.", "This might be because the interactions of drugs are not always mentioned in texts even if the drugs can interact with each other, and because hedged DDI mentions are annotated as DDIs in the text data set.", "We also trained the DDI extraction model only with molecular information by replacing $h_{all}$ with $h_m$, but the F-scores were quite low (< 5%).", "These results show that we cannot predict textual relations only with molecular information.", "Related Work Various feature-based methods have been proposed during and after the DDIExtraction 2013 shared task.", "Kim et al. (2015) proposed a two-phase SVM-based approach that employed a linear SVM with rich features that consist of word, word pair, dependency graph, parse tree, and noun phrase-based constrained coordination features.", "Zheng et al. (2016) proposed a context vector graph kernel to exploit various types of contexts.", "Raihani and Laachfoubi (2017) also employed a two-phase SVM-based approach using non-linear kernels, and they proposed five groups of features: word, drug, drug pair, main verb, and negative sentence features.", "Our model does not use any features or kernels.", "Various neural DDI extraction models have recently been proposed using CNNs and Recurrent Neural Networks (RNNs).", "Liu et al. (2016) built a CNN-based model based on word and position embeddings.", "Zheng et al. (2017) proposed a Bidirectional Long Short-Term Memory RNN (Bi-LSTM)-based model with an input attention mechanism, which obtained target drug-specific word representations before the Bi-LSTM.", "Lim et al. (2018) proposed a recursive neural network-based model with a subtree containment feature and an ensemble method.", "This model showed the state-of-the-art performance on the DDIExtraction 2013 shared task data set if systems do not use negative instance filtering.", "These approaches did not consider molecular information, and they can also be enhanced by the molecular information.", "Vilar et al. (2017) focused on detecting DDIs from different sources such as pharmacovigilance sources, scientific biomedical literature, and social media.", "They did not use deep neural networks and did not consider molecular information.", "Learning representations of graphs is widely studied in several tasks such as knowledge base completion, drug discovery, and material science (Gilmer et al., 2017).",
"Several graph convolutional neural networks have been proposed, such as NFP (Duvenaud et al., 2015), GGNN, and Molecular Graph Convolutions (Kearnes et al., 2016), but they have not been applied to DDI extraction.", "Conclusions We proposed a novel neural method for DDI extraction using both textual and molecular information.", "The results show that DDIs can be predicted with high accuracy from molecular structure information and that the molecular information can improve DDI extraction from texts by 2.39 percentage points in F-score on the data set of the DDIExtraction 2013 shared task.", "As future work, we would like to seek a way to model the textual and molecular representations jointly while alleviating the differences in their labels.", "We will also investigate the use of other information in DrugBank." ] }
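The training settings above prescribe a specific embedding initialization: singleton training words become UNK, train-only words start from the average pre-trained vector, and the DRUG1/DRUG2 placeholders start from the vector of the word "drug". A sketch follows; the UNK initialization range is our assumption, since the paper only specifies a uniform distribution for position embeddings.

import numpy as np

def build_embeddings(train_counts, pretrained, dim, rng=np.random.default_rng(0)):
    """train_counts: word -> frequency in training data;
    pretrained: word -> (dim,) word2vec vector."""
    avg = np.mean(list(pretrained.values()), axis=0)
    vocab = {"UNK": rng.uniform(-0.1, 0.1, dim)}     # assumed init range
    for w, c in train_counts.items():
        if c <= 1:
            continue                                 # singleton -> treated as UNK
        vocab[w] = pretrained.get(w, avg.copy())     # train-only word -> average vector
    for d in ("DRUG1", "DRUG2"):
        vocab[d] = pretrained.get("drug", avg).copy()
    return vocab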
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Text-based DDI Extraction", "Molecular Structure-based DDI Classification", "DDI Extraction from Texts Using Molecular Structures", "Experimental Settings", "Text Corpus and Task Setting", "Data and Task for Molecular Structures", "Training Settings", "Results", "Related Work", "Conclusions" ] }
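Equations (12)-(14) above define the GGNN variant: bond-type-specific matrices aggregate neighbor messages, a GRU updates each atom state, and a gated readout produces the graph vector. The sketch below uses a minimal hand-rolled GRU and treats bonds as undirected; both choices, and all weight shapes, are our assumptions rather than the authors' exact formulation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru(h, m, P):
    """Minimal GRU cell for Eq. (13); P['z'], P['r'], P['n'] have shape (2d, d)."""
    x = np.concatenate([h, m], axis=-1)
    z = sigmoid(x @ P["z"])                          # update gate
    r = sigmoid(x @ P["r"])                          # reset gate
    n = np.tanh(np.concatenate([r * h, m], -1) @ P["n"])
    return (1 - z) * h + z * n

def ggnn_readout(h0, edges, A, P, i_lin, j_lin, T=2):
    """h0: (num_atoms, d); edges: list of (v, w, bond_type);
    A[bond_type]: (d, d); i_lin: (2d, d'); j_lin: (d, d')."""
    h = h0.copy()
    for _ in range(T):
        m = np.zeros_like(h)
        for v, w, b in edges:                        # Eq. (12): typed neighbor sum
            m[v] += h[w] @ A[b]
            m[w] += h[v] @ A[b]                      # undirected-bond assumption
        h = gru(h, m, P)                             # Eq. (13)
    gate = sigmoid(np.concatenate([h, h0], -1) @ i_lin)
    return (gate * (h @ j_lin)).sum(axis=0)          # Eq. (14): gated sum over atoms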
GEM-SciDuet-train-61#paper-1120#slide-7
Data for Pre-training GCNs
We extracted 255,229 interacting (positive) pairs from DrugBank and generated the same number of pseudo-negative pairs by randomly pairing DrugBank drugs. We deleted drug pairs mentioned in the test set of the text corpus.
We extracted 255,229 interacting (positive) pairs from DrugBank and generated the same number of pseudo-negative pairs by randomly pairing DrugBank drugs. We deleted drug pairs mentioned in the test set of the text corpus.
[]
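The pseudo-negative generation described above (random pairing, minus known positives and pairs seen in the text test set) can be sketched directly. The DrugBank IDs in the example are illustrative only, and the loop assumes enough candidate pairs exist.

import random

def sample_negatives(drugs, positives, test_pairs, seed=0):
    """Generate as many pseudo-negative pairs as positives by random pairing,
    skipping known positive pairs and pairs from the text-corpus test set."""
    rng = random.Random(seed)
    pos = {frozenset(p) for p in positives}
    banned = pos | {frozenset(p) for p in test_pairs}
    negatives = set()
    while len(negatives) < len(pos):
        pair = frozenset(rng.sample(drugs, 2))
        if pair not in banned:
            negatives.add(pair)
    return [tuple(sorted(p)) for p in negatives]

drugs = ["DB00201", "DB00277", "DB01137", "DB00316", "DB00945"]
positives = [("DB00201", "DB00277")]
print(sample_negatives(drugs, positives, test_pairs=[("DB00316", "DB00945")]))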
GEM-SciDuet-train-61#paper-1120#slide-8
1120
Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs with high accuracy and that the molecular information can enhance text-based DDI extraction by 2.39 percentage points in F-score on the DDIExtraction 2013 shared task data set.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction When drugs are concomitantly administered to a patient, the effects of the drugs may be enhanced or weakened, which may also cause side effects.", "These kinds of interactions are called Drug-Drug Interactions (DDIs).", "Several drug databases have been maintained to summarize drug and DDI information, such as DrugBank (Law et al., 2014), the Therapeutic Target Database, and PharmGKB (Thorn et al., 2013).", "Automatic DDI extraction from texts is expected to support the maintenance of databases with high coverage and quick updates to help medical experts.", "Deep neural network-based methods have recently drawn considerable attention (Liu et al., 2016; Sahu and Anand, 2017; Zheng et al., 2017; Lim et al., 2018) since they show state-of-the-art performance without manual feature engineering.", "In parallel to the progress in DDI extraction from texts, Graph Convolutional Networks (GCNs) have been proposed and applied to estimate physical and chemical properties of molecular graphs such as solubility and toxicity (Duvenaud et al., 2015; Gilmer et al., 2017).", "In this study, we propose a novel method to utilize both textual and molecular information for DDI extraction from texts.", "We illustrate the overview of the proposed model in Figure 1.", "(Figure 1: Overview of the proposed model.)", "We obtain the representations of drug pairs in molecular graph structures using GCNs and concatenate them with the representations of the textual mention pairs obtained by convolutional neural networks (CNNs).", "We trained the molecule-based model using interacting pairs mentioned in the DrugBank database and then trained the entire model using the labeled pairs in the text data set of the DDIExtraction 2013 shared task (SemEval-2013 Task 9) (Segura-Bedmar et al., 2013).", "In the experiments, we show that GCNs can predict DDIs from molecular graphs with high accuracy.", "We also show that molecular information can enhance the performance of DDI extraction from texts by 2.39 percentage points in F-score.", "The contribution of this paper is three-fold: • We propose a novel neural method to extract DDIs from texts with the related molecular structure information.", "• We apply GCNs to pairwise drug molecules for the first time and show GCNs can predict DDIs between drug molecular structures with high accuracy.", "• We show the molecular information is useful in extracting DDIs from texts.", "Methods Text-based DDI Extraction Our model for extracting DDIs from texts is based on the CNN model by Zeng et al. (2014).", "When an input sentence $S = (w_1, w_2, \cdots, w_N)$ is given, we prepare the word embedding $w^w_i$ of $w_i$ and word position embeddings $w^p_{i,1}$ and $w^p_{i,2}$ that correspond to the relative positions from the first and second target entities, respectively.", "We concatenate these embeddings as in Equation (1), and we use the resulting vector as the input to the subsequent convolution layer: $w_i = [w^w_i; w^p_{i,1}; w^p_{i,2}]$ (1), where $[;]$ denotes concatenation.", "We calculate the expression for each filter $j$ with the window size $k_l$.",
"$z_{i,l} = [w_{i-(k_l-1)/2}; \cdots; w_{i+(k_l-1)/2}]$ (2), $m_{i,j,l} = \mathrm{relu}(W^{\mathrm{conv}}_j z_{i,l} + b^{\mathrm{conv}})$ (3), $m_{j,l} = \max_i m_{i,j,l}$ (4), where $L$ is the number of window sizes, $W^{\mathrm{conv}}_j$ and $b^{\mathrm{conv}}$ are the weight and bias of the CNN, and $\max$ indicates max pooling (Boureau et al., 2010).", "We convert the output of the convolution layer into a fixed-size vector that represents a textual pair as follows: $m_l = [m_{1,l}, \cdots, m_{J,l}]$ (5), $h_t = [m_1; \ldots; m_L]$ (6), where $J$ is the number of filters.", "We get a prediction $\hat{y}_t$ by the following fully connected neural networks: $h^{(1)}_t = \mathrm{relu}(W^{(1)}_t h_t + b^{(1)}_t)$ (7), $\hat{y}_t = \mathrm{softmax}(W^{(2)}_t h^{(1)}_t + b^{(2)}_t)$ (8), where $W^{(1)}_t$ and $W^{(2)}_t$ are weights and $b^{(1)}_t$ and $b^{(2)}_t$ are bias terms.", "Molecular Structure-based DDI Classification We represent drug pairs in molecular graph structures using two GCN methods: CNNs for fingerprints (NFP) (Duvenaud et al., 2015) and Gated Graph Neural Networks (GGNN).", "They both convert a drug molecule graph $G$ into a fixed-size vector $h_g$ by aggregating the representation $h^T_v$ of each atom node $v$ in $G$. We represent atoms as nodes and bonds as edges in the graph.", "NFP first obtains the representation $h^t_v$ by the following equations (Duvenaud et al., 2015): $m^{t+1}_v = h^t_v + \sum_{w \in N(v)} h^t_w$ (9), $h^{t+1}_v = \sigma(H^{\deg(v)}_t m^{t+1}_v)$ (10), where $h^t_v$ is the representation of $v$ in the $t$-th step, $N(v)$ is the set of neighbors of $v$, and $H^{\deg(v)}_t$ is a weight parameter.", "$h^0_v$ is initialized by the atom features of $v$; $\deg(v)$ is the degree of node $v$, and $\sigma$ is the sigmoid function.", "NFP then acquires the representation of the graph structure as $h_g = \sum_{v,t} \mathrm{softmax}(W_t h^t_v)$ (11), where $W_t$ is a weight matrix.", "GGNN first obtains the representation $h^t_v$ by using Gated Recurrent Unit (GRU)-based recurrent neural networks as follows: $m^{t+1}_v = \sum_{w \in N(v)} A_{e_{vw}} h^t_w$ (12), $h^{t+1}_v = \mathrm{GRU}([h^t_v; m^{t+1}_v])$ (13), where $A_{e_{vw}}$ is a weight for the bond type of each edge $e_{vw}$.", "GGNN then acquires the representation of the graph structure as $h_g = \sum_v \sigma(i([h^T_v; h^0_v])) \odot j(h^T_v)$ (14), where $i$ and $j$ are linear layers and $\odot$ is the element-wise product.", "We obtain the representation of a molecular pair by concatenating the molecular graph representations of drugs $g_1$ and $g_2$, i.e., $h_m = [h_{g_1}; h_{g_2}]$.", "We get a prediction $\hat{y}_m$ as follows: $h^{(1)}_m = \mathrm{relu}(W^{(1)}_m h_m + b^{(1)}_m)$ (15), $\hat{y}_m = \mathrm{softmax}(W^{(2)}_m h^{(1)}_m + b^{(2)}_m)$ (16), where $W^{(1)}_m$ and $W^{(2)}_m$ are weights and $b^{(1)}_m$ and $b^{(2)}_m$ are bias terms.", "DDI Extraction from Texts Using Molecular Structures We realize the simultaneous use of textual and molecular information by concatenating the text-based and molecule-based vectors: $h_{all} = [h_t; h_m]$.", "We normalize the molecule-based vectors.", "We then use $h_{all}$ instead of $h_t$ in Equation (7).", "In training, we first train the molecule-based DDI classification model.", "The molecule-based classification is performed by minimizing the loss function $L_m = -\sum y_m \log \hat{y}_m$.", "We then fix the parameters of the GCNs and train the text-based DDI extraction model by minimizing the loss function $L_t = -\sum y_t \log \hat{y}_t$.", "Experimental Settings In this section, we explain the textual and molecular data, the task settings, and the training settings.", "Text Corpus and Task Setting We followed the task setting of Task 9.2 in the DDIExtraction 2013 shared task for the evaluation.", "This data set is composed of documents annotated with drug mentions and their four types of interactions:
Mechanism, Effect, Advice and Int.", "For the data statistics, please refer to the supplementary materials.", "The task is a multi-class classification task, i.e., to classify a given pair of drugs into one of the four interaction types or no interaction.", "We evaluated the performance with micro-averaged precision (P), recall (R), and F-score (F) on all the interaction types.", "(Figure 2: Associating DrugBank entries with texts and molecular graph structures.)", "We used the official evaluation script provided by the task organizers.", "As preprocessing, we split sentences into words using the GENIA tagger (Tsuruoka et al., 2005).", "We replaced the drug mentions of the target pair with DRUG1 and DRUG2 according to their order of appearance.", "We also replaced other drug mentions with DRUGOTHER.", "We did not employ negative instance filtering, unlike other existing methods, e.g., Liu et al. (2016), since our focus is to evaluate the effect of the molecular information on texts.", "We linked mentions in texts to DrugBank entries by string matching.", "We lowercased the mentions and the names in the entries and chose the entries with the most overlaps.", "As a result, 92.15% and 93.09% of the drug mentions in the train and test data sets, respectively, matched DrugBank entries.", "Data and Task for Molecular Structures We extracted 255,229 interacting (positive) pairs from DrugBank.", "We note that, unlike text-based interactions, DrugBank only contains the information of interacting pairs; there are no detailed labels and no information for non-interacting (negative) pairs.", "We thus generated the same number of pseudo-negative pairs by randomly pairing drugs and removing those that appear in positive pairs.", "To avoid overestimating the performance, we also deleted drug pairs mentioned in the test set of the text corpus.", "We split the positive and negative pairs 4:1 into training and test data, and we evaluated the classification accuracy using only the molecular information.", "To obtain the graph of a drug molecule, we took the SMILES (Weininger, 1988) string encoding of the molecule from DrugBank and then converted it into the graph using RDKit (Landrum, 2016), as illustrated in Figure 2.", "For the atom features, we used randomly embedded vectors for each atom type (i.e., C, O, N, ...).", "We also used 4 bond types: single, double, triple, or aromatic.", "Training Settings We employed mini-batch training using the Adam optimizer (Kingma and Ba, 2015).", "We used L2 regularization to avoid over-fitting.", "We tuned the bias term $b^{(2)}_t$ for negative examples in the final softmax layer.", "For the hyper-parameters, please refer to the supplementary materials.", "We employed pre-trained word embeddings trained using the word2vec tool (Mikolov et al., 2013) on the 2014 MEDLINE/PubMed baseline distribution.", "The vocabulary size was 215,840.", "The embeddings of the drug placeholders, i.e., DRUG1 and DRUG2, were initialized with the pre-trained embedding of the word drug.", "The embeddings of training words that did not appear in the pre-trained embeddings were initialized with the average of all pre-trained word embeddings.", "Words that appeared only once in the training data were replaced with an UNK word during training, and the embeddings of words in the test data set that appeared in neither the training data nor the pre-trained embeddings were set to the embedding of the UNK word.", "Word position embeddings were initialized with random values drawn from a uniform distribution.", "We set the molecule-based vectors of unmatched entities to zero vectors.",
"Results Table 1 shows the performance of the DDI extraction models.", "We show the performance without negative instance filtering or ensembling for a fair comparison.", "We observe an increase in recall and F-score by using molecular information; the improvements with both GCNs were statistically significant (p < 0.05 for NFP and p < 0.005 for GGNN) under a randomized shuffling test.", "Table 2 shows F-scores on individual DDI types.", "The molecular information improves F-scores, especially on the types Mechanism and Effect.", "We also evaluated the accuracy of binary classification on DrugBank pairs by using only the molecular information in Table 3.", "The performance is high, although the accuracy is evaluated on automatically generated negative instances.", "Finally, we applied the molecule-based DDI classification model trained on DrugBank to the DDIExtraction 2013 task data set.", "Since DrugBank has no detailed labels, we mapped all four types of interactions to positive interactions and evaluated the classification performance.", "The results in Table 4 show that GCNs produce higher recall than precision and the overall performance is low considering the high performance on DrugBank pairs.", "This might be because the interactions of drugs are not always mentioned in texts even if the drugs can interact with each other, and because hedged DDI mentions are annotated as DDIs in the text data set.", "We also trained the DDI extraction model only with molecular information by replacing $h_{all}$ with $h_m$, but the F-scores were quite low (< 5%).", "These results show that we cannot predict textual relations only with molecular information.", "Related Work Various feature-based methods have been proposed during and after the DDIExtraction 2013 shared task.", "Kim et al. (2015) proposed a two-phase SVM-based approach that employed a linear SVM with rich features that consist of word, word pair, dependency graph, parse tree, and noun phrase-based constrained coordination features.", "Zheng et al. (2016) proposed a context vector graph kernel to exploit various types of contexts.", "Raihani and Laachfoubi (2017) also employed a two-phase SVM-based approach using non-linear kernels, and they proposed five groups of features: word, drug, drug pair, main verb, and negative sentence features.", "Our model does not use any features or kernels.", "Various neural DDI extraction models have recently been proposed using CNNs and Recurrent Neural Networks (RNNs).", "Liu et al. (2016) built a CNN-based model based on word and position embeddings.", "Zheng et al. (2017) proposed a Bidirectional Long Short-Term Memory RNN (Bi-LSTM)-based model with an input attention mechanism, which obtained target drug-specific word representations before the Bi-LSTM.", "Lim et al. (2018) proposed a recursive neural network-based model with a subtree containment feature and an ensemble method.", "This model showed the state-of-the-art performance on the DDIExtraction 2013 shared task data set if systems do not use negative instance filtering.", "These approaches did not consider molecular information, and they can also be enhanced by the molecular information.", "Vilar et al. (2017) focused on detecting DDIs from different sources such as pharmacovigilance sources, scientific biomedical literature, and social media.", "They did not use deep neural networks and did not consider molecular information.", "Learning representations of graphs is widely studied in several tasks such as knowledge base completion, drug discovery, and material science (Gilmer et al., 2017).",
"Several graph convolutional neural networks have been proposed, such as NFP (Duvenaud et al., 2015), GGNN, and Molecular Graph Convolutions (Kearnes et al., 2016), but they have not been applied to DDI extraction.", "Conclusions We proposed a novel neural method for DDI extraction using both textual and molecular information.", "The results show that DDIs can be predicted with high accuracy from molecular structure information and that the molecular information can improve DDI extraction from texts by 2.39 percentage points in F-score on the data set of the DDIExtraction 2013 shared task.", "As future work, we would like to seek a way to model the textual and molecular representations jointly while alleviating the differences in their labels.", "We will also investigate the use of other information in DrugBank." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Text-based DDI Extraction", "Molecular Structure-based DDI Classification", "DDI Extraction from Texts Using Molecular Structures", "Experimental Settings", "Text Corpus and Task Setting", "Data and Task for Molecular Structures", "Training Settings", "Results", "Related Work", "Conclusions" ] }
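Section 2.3 above concatenates the text vector $h_t$ with the normalized molecular pair vector $h_m$ and feeds $h_{all}$ to the classifier, with the GCN pre-trained on DrugBank and then frozen. A PyTorch-flavored sketch follows; `text_encoder` and `gcn` are placeholders for the components described earlier, and the hidden size 128 is our assumption (the paper defers hyper-parameters to supplementary materials).

import torch
import torch.nn as nn

class TextMolDDI(nn.Module):
    """Fusion sketch: concatenate the CNN text vector h_t with the (frozen,
    pre-trained) GCN pair vector h_m before the final classifier."""
    def __init__(self, text_encoder, gcn, d_text, d_mol, n_classes=5):
        super().__init__()
        self.text_encoder, self.gcn = text_encoder, gcn
        for p in self.gcn.parameters():     # fix GCN parameters (Sec. 2.3)
            p.requires_grad = False
        self.fc1 = nn.Linear(d_text + 2 * d_mol, 128)
        self.out = nn.Linear(128, n_classes)  # four DDI types + no interaction

    def forward(self, sent, mol1, mol2):
        h_t = self.text_encoder(sent)                        # (B, d_text)
        h_m = torch.cat([self.gcn(mol1), self.gcn(mol2)], dim=-1)
        h_m = nn.functional.normalize(h_m, dim=-1)           # normalized, per the paper
        h_all = torch.cat([h_t, h_m], dim=-1)
        return self.out(torch.relu(self.fc1(h_all)))

Training would follow the two-stage schedule in the text: first minimize $L_m$ on DrugBank pairs to pre-train the GCN, then minimize $L_t$ on the shared-task data with the GCN frozen.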
GEM-SciDuet-train-61#paper-1120#slide-8
Molecular Structure Features
To obtain the graph of a drug molecule, we took as input the SMILES string encoding of the molecule from DrugBank and then converted it into the 2D graph structure using RDKit. For the initial atom (node) vectors, we used randomly embedded vectors for atoms, i.e., C, O, N, .... We also used 4 bond (edge) types: single, double, triple, and aromatic.
To obtain the graph of a drug molecule, we took as input the SMILES string encoding of the molecule from DrugBank and then converted it into the 2D graph structure using RDKit. For the initial atom (node) vectors, we used randomly embedded vectors for atoms, i.e., C, O, N, .... We also used 4 bond (edge) types: single, double, triple, and aromatic.
[]
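The slide above summarizes the molecular input pipeline: SMILES from DrugBank, converted to a 2D graph with RDKit, with embedded atom symbols and four bond types. A minimal sketch of that conversion follows; the paper gives no code, so the function name and the output format are ours.

from rdkit import Chem

BOND_TYPES = {
    Chem.BondType.SINGLE: 0,
    Chem.BondType.DOUBLE: 1,
    Chem.BondType.TRIPLE: 2,
    Chem.BondType.AROMATIC: 3,
}

def smiles_to_graph(smiles):
    """Parse a SMILES string into (atom symbols, typed edge list): the graph
    form the GCNs consume; atom symbols would then be looked up in a learned
    embedding table."""
    mol = Chem.MolFromSmiles(smiles)
    atoms = [a.GetSymbol() for a in mol.GetAtoms()]
    edges = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx(), BOND_TYPES[b.GetBondType()])
             for b in mol.GetBonds()]
    return atoms, edges

print(smiles_to_graph("CN1C=NC2=C1C(=O)N(C)C(=O)N2C"))  # caffeine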
GEM-SciDuet-train-61#paper-1120#slide-9
1120
Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs with high accuracy and that the molecular information can enhance text-based DDI extraction by 2.39 percentage points in F-score on the DDIExtraction 2013 shared task data set.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction When drugs are concomitantly administered to a patient, the effects of the drugs may be enhanced or weakened, which may also cause side effects.", "These kinds of interactions are called Drug-Drug Interactions (DDIs).", "Several drug databases have been maintained to summarize drug and DDI information, such as DrugBank (Law et al., 2014), the Therapeutic Target Database, and PharmGKB (Thorn et al., 2013).", "Automatic DDI extraction from texts is expected to support the maintenance of databases with high coverage and quick updates to help medical experts.", "Deep neural network-based methods have recently drawn considerable attention (Liu et al., 2016; Sahu and Anand, 2017; Zheng et al., 2017; Lim et al., 2018) since they show state-of-the-art performance without manual feature engineering.", "In parallel to the progress in DDI extraction from texts, Graph Convolutional Networks (GCNs) have been proposed and applied to estimate physical and chemical properties of molecular graphs such as solubility and toxicity (Duvenaud et al., 2015; Gilmer et al., 2017).", "In this study, we propose a novel method to utilize both textual and molecular information for DDI extraction from texts.", "We illustrate the overview of the proposed model in Figure 1.", "(Figure 1: Overview of the proposed model.)", "We obtain the representations of drug pairs in molecular graph structures using GCNs and concatenate them with the representations of the textual mention pairs obtained by convolutional neural networks (CNNs).", "We trained the molecule-based model using interacting pairs mentioned in the DrugBank database and then trained the entire model using the labeled pairs in the text data set of the DDIExtraction 2013 shared task (SemEval-2013 Task 9) (Segura-Bedmar et al., 2013).", "In the experiments, we show that GCNs can predict DDIs from molecular graphs with high accuracy.", "We also show that molecular information can enhance the performance of DDI extraction from texts by 2.39 percentage points in F-score.", "The contribution of this paper is three-fold: • We propose a novel neural method to extract DDIs from texts with the related molecular structure information.", "• We apply GCNs to pairwise drug molecules for the first time and show GCNs can predict DDIs between drug molecular structures with high accuracy.", "• We show the molecular information is useful in extracting DDIs from texts.", "Methods Text-based DDI Extraction Our model for extracting DDIs from texts is based on the CNN model by Zeng et al. (2014).", "When an input sentence $S = (w_1, w_2, \cdots, w_N)$ is given, we prepare the word embedding $w^w_i$ of $w_i$ and word position embeddings $w^p_{i,1}$ and $w^p_{i,2}$ that correspond to the relative positions from the first and second target entities, respectively.", "We concatenate these embeddings as in Equation (1), and we use the resulting vector as the input to the subsequent convolution layer: $w_i = [w^w_i; w^p_{i,1}; w^p_{i,2}]$ (1), where $[;]$ denotes concatenation.", "We calculate the expression for each filter $j$ with the window size $k_l$.",
"We calculate the convolution for each filter $j$ with window size $k_l$: $z_{i,l} = [w_{i-(k_l-1)/2}, \cdots, w_{i+(k_l-1)/2}]$ (2), $m_{i,j,l} = \mathrm{relu}(W^{conv}_j z_{i,l} + b^{conv})$ (3), $m_{j,l} = \max_i m_{i,j,l}$ (4), where $L$ is the number of windows, $W^{conv}_j$ and $b^{conv}$ are the weight and bias of the CNN, and $\max$ indicates max pooling (Boureau et al., 2010).",
"We convert the output of the convolution layer into a fixed-size vector that represents a textual pair as follows: $m_l = [m_{1,l}, \cdots, m_{J,l}]$ (5), $h_t = [m_1; \cdots; m_L]$ (6), where $J$ is the number of filters.",
"We get a prediction $\hat{y}_t$ from the following fully connected neural network: $h^{(1)}_t = \mathrm{relu}(W^{(1)}_t h_t + b^{(1)}_t)$ (7), $\hat{y}_t = \mathrm{softmax}(W^{(2)}_t h^{(1)}_t + b^{(2)}_t)$ (8), where $W^{(1)}_t$ and $W^{(2)}_t$ are weights and $b^{(1)}_t$ and $b^{(2)}_t$ are bias terms.",
"Molecular Structure-based DDI Classification We represent drug pairs in molecular graph structures using two GCN methods: CNNs for fingerprints (NFP) (Duvenaud et al., 2015) and Gated Graph Neural Networks (GGNN) (Li et al., 2016).",
"Both convert a drug molecule graph $G$ into a fixed-size vector $h_g$ by aggregating the representation $h^T_v$ of each atom node $v$ in $G$; we represent atoms as nodes and bonds as edges in the graph.",
"NFP first obtains the representation $h^t_v$ by the following equations (Duvenaud et al., 2015): $m^{t+1}_v = h^t_v + \sum_{w \in N(v)} h^t_w$ (9), $h^{t+1}_v = \sigma(H^{deg(v)}_t m^{t+1}_v)$ (10), where $h^t_v$ is the representation of $v$ in the $t$-th step, $N(v)$ is the set of neighbors of $v$, and $H^{deg(v)}_t$ is a weight parameter.",
"$h^0_v$ is initialized with the atom features of $v$, $deg(v)$ is the degree of node $v$, and $\sigma$ is the sigmoid function.",
"NFP then acquires the representation of the graph structure as $h_g = \sum_{v,t} \mathrm{softmax}(W_t h^t_v)$ (11), where $W_t$ is a weight matrix.",
"GGNN first obtains the representation $h^t_v$ by using Gated Recurrent Unit (GRU)-based recurrent neural networks as follows: $m^{t+1}_v = \sum_{w \in N(v)} A_{e_{vw}} h^t_w$ (12), $h^{t+1}_v = \mathrm{GRU}([h^t_v; m^{t+1}_v])$ (13), where $A_{e_{vw}}$ is a weight for the bond type of each edge $e_{vw}$.",
"GGNN then acquires the representation of the graph structure as $h_g = \sum_v \sigma(i([h^T_v; h^0_v])) \odot j(h^T_v)$ (14), where $i$ and $j$ are linear layers and $\odot$ is the element-wise product.",
"We obtain the representation of a molecular pair by concatenating the molecular graph representations of drugs $g_1$ and $g_2$, i.e., $h_m = [h_{g_1}; h_{g_2}]$.",
"We get a prediction $\hat{y}_m$ as follows: $h^{(1)}_m = \mathrm{relu}(W^{(1)}_m h_m + b^{(1)}_m)$ (15), $\hat{y}_m = \mathrm{softmax}(W^{(2)}_m h^{(1)}_m + b^{(2)}_m)$ (16), where $W^{(1)}_m$ and $W^{(2)}_m$ are weights and $b^{(1)}_m$ and $b^{(2)}_m$ are bias terms.",
"DDI Extraction from Texts Using Molecular Structures We realize the simultaneous use of textual and molecular information by concatenating the text-based and molecule-based vectors: $h_{all} = [h_t; h_m]$.",
"We normalize the molecule-based vectors.",
"We then use $h_{all}$ instead of $h_t$ in Equation (7).",
"In training, we first train the molecule-based DDI classification model by minimizing the loss function $L_m = -\sum y_m \log \hat{y}_m$.",
"We then fix the parameters of the GCNs and train the text-based DDI extraction model by minimizing the loss function $L_t = -\sum y_t \log \hat{y}_t$.",
"Experimental Settings In this section, we explain the textual and molecular data, the task settings, and the training settings.",
"Text Corpus and Task Setting We followed the task setting of Task 9.2 in the DDIExtraction 2013 shared task for the evaluation.",
"This data set is composed of documents annotated with drug mentions and their four types of interactions: Mechanism, Effect, Advice, and Int.",
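The NFP propagation in Equations (9)-(11) and the pair concatenation $h_m = [h_{g_1}; h_{g_2}]$ can be sketched in a few lines of NumPy. This sketch simplifies the original formulation under a stated assumption: one shared weight matrix per step stands in for the degree-specific $H^{deg(v)}_t$, and all names and shapes are hypothetical rather than taken from the authors' implementation.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def nfp_encode(adj, feats, H_steps, W_steps):
        """adj: (V, V) 0/1 adjacency matrix; feats: (V, d) atom features.
        H_steps[t]: (d, d) update weights (Eq. 10, degree-agnostic here);
        W_steps[t]: (d, d) readout weights (Eq. 11); len(W_steps) == len(H_steps) + 1."""
        h = feats
        h_g = softmax(h @ W_steps[0]).sum(axis=0)    # readout at step t = 0
        for H, W in zip(H_steps, W_steps[1:]):
            m = h + adj @ h                          # Eq. (9): self + neighbor sum
            h = sigmoid(m @ H)                       # Eq. (10) with a shared weight
            h_g = h_g + softmax(h @ W).sum(axis=0)   # Eq. (11), accumulated over t
        return h_g

    def molecular_pair_vector(g1, g2, H_steps, W_steps):
        """h_m = [h_g1; h_g2], where g1 and g2 are (adj, feats) tuples."""
        return np.concatenate([nfp_encode(*g1, H_steps, W_steps),
                               nfp_encode(*g2, H_steps, W_steps)])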
"For the data statistics, please refer to the supplementary materials.",
"The task is a multi-class classification task, i.e., to classify a given pair of drugs into one of the four interaction types or no interaction.",
"We evaluated the performance with micro-averaged precision (P), recall (R), and F-score (F) on all the interaction types.",
"Figure 2: Associating DrugBank entries with texts and molecular graph structures.",
"We used the official evaluation script provided by the task organizers.",
"As preprocessing, we split sentences into words using the GENIA tagger (Tsuruoka et al., 2005).",
"We replaced the drug mentions of the target pair with DRUG1 and DRUG2 according to their order of appearance.",
"We also replaced other drug mentions with DRUGOTHER.",
"Unlike other existing methods, e.g., Liu et al. (2016), we did not employ negative instance filtering, since our focus is to evaluate the effect of the molecular information on text-based extraction.",
"We linked mentions in texts to DrugBank entries by string matching.",
"We lowercased the mentions and the names in the entries and chose the entries with the most overlaps.",
"As a result, 92.15% and 93.09% of drug mentions in the train and test data sets matched DrugBank entries.",
"Data and Task for Molecular Structures We extracted 255,229 interacting (positive) pairs from DrugBank.",
"We note that, unlike text-based interactions, DrugBank contains only the information of interacting pairs; there are no detailed labels and no information on non-interacting (negative) pairs.",
"We thus generated the same number of pseudo-negative pairs by randomly pairing drugs and removing those that appear in positive pairs.",
"To avoid overestimating the performance, we also deleted drug pairs mentioned in the test set of the text corpus.",
"We split the positive and negative pairs 4:1 into training and test data, and we evaluated the classification accuracy using only the molecular information.",
"To obtain the graph of a drug molecule, we took the SMILES (Weininger, 1988) string encoding of the molecule from DrugBank and then converted it into a graph using RDKit (Landrum, 2016), as illustrated in Figure 2.",
"For the atom features, we used randomly embedded vectors for each atom type (i.e., C, O, N, ...).",
"We also used 4 bond types: single, double, triple, or aromatic.",
"Training Settings We employed mini-batch training using the Adam optimizer (Kingma and Ba, 2015).",
"We used L2 regularization to avoid over-fitting.",
"We tuned the bias term $b^{(2)}_t$ for negative examples in the final softmax layer.",
"For the hyper-parameters, please refer to the supplementary materials.",
"We employed pre-trained word embeddings trained by using the word2vec tool (Mikolov et al., 2013) on the 2014 MEDLINE/PubMed baseline distribution.",
"The vocabulary size was 215,840.",
"The embeddings of the drugs, i.e., DRUG1 and DRUG2, were initialized with the pre-trained embedding of the word drug.",
"The embeddings of training words that did not appear in the pre-trained embeddings were initialized with the average of all pre-trained word embeddings.",
"Words that appeared only once in the training data were replaced with an UNK word during training, and the embeddings of words in the test data set that appeared neither in the training data nor in the pre-trained embeddings were set to the embedding of the UNK word.",
"Word position embeddings were initialized with random values drawn from a uniform distribution.",
"We set the molecule-based vectors of unmatched entities to zero vectors.",
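The SMILES-to-graph conversion described in the experimental settings above is easy to reproduce with RDKit, the toolkit the paper uses. A small sketch follows; the bond-type indexing and the None handling are our own assumptions, not the authors' code, and the aspirin SMILES is only an illustrative input.

    from rdkit import Chem

    BOND_TYPES = {"SINGLE": 0, "DOUBLE": 1, "TRIPLE": 2, "AROMATIC": 3}

    def smiles_to_graph(smiles):
        """Return atom symbols (nodes) and typed edges for one molecule, or None."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            return None  # unparsable SMILES string
        atoms = [atom.GetSymbol() for atom in mol.GetAtoms()]  # graph nodes
        edges = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx(),
                  BOND_TYPES[str(b.GetBondType())]) for b in mol.GetBonds()]
        return atoms, edges

    print(smiles_to_graph("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin, for illustration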
vectors.", "Table 1 shows the performance of DDI extraction models.", "We show the performance without negative instance filtering or ensemble for the fair comparison.", "We observe the increase of recall and F-score by using molecular information, Both GCNs improvements were statistically significant (p < 0.05 for NFP and p < 0.005 for GGNN) with randomized shuffled test.", "Table 2 shows F-scores on individual DDI types.", "The molecular information improves Fscores especially on type Mechanism and Effect.", "Results We also evaluated the accuracy of binary classification on DrugBank pairs by using only the molecular information in Table 3 .", "The performance is high, although the accuracy is evaluated on automatically generated negative instances.", "Finally, we applied the molecular-based DDI classification model trained on DrugBank to the DDIExtraction 2013 task data set.", "Since the Drug-Bank has no detailed labels, we mapped all four types of interactions to positive interactions and evaluated the classification performance.", "The results in Table 4 show that GCNs produce higher recall than precision and the overall performance is low considering the high performance on Drug-Bank pairs.", "This might be because the interactions of drugs are not always mentioned in texts even if the drugs can interact with each other and because hedged DDI mentions are annotated as DDIs in the text data set.", "We also trained the DDI extraction model only with molecular information by replacing h all with h m , but the F-scores were quite low (< 5%).", "These results show that we cannot predict textual relations only with molecular information.", "Related Work Various feature-based methods have been proposed during and after the DDIExtraction-2013 shared task .", "Kim et al.", "(2015) proposed a two-phase SVM-based approach that employed a linear SVM with rich features that consist of word, word pair, dependency graph, parse tree, and noun phrase-based constrained coordination features.", "Zheng et al.", "(2016) proposed a context vector graph kernel to exploit various types of contexts.", "Raihani and Laachfoubi (2017) also employed a two-phase SVM-based approach using non-linear kernels and they proposed five groups of features: word, drug, pair of drug, main verb and negative sentence features.", "Our model does not use any features or kernels.", "Various neural DDI extraction models have been recently proposed using CNNs and Recurrent Neural Networks (RNNs).", "Liu et al.", "(2016) built a CNN-based model based on word and position embeddings.", "Zheng et al.", "(2017) proposed a Bidirectional Long Short-Term Memory RNN (Bi-LSTM)-based model with an input attention mechanism, which obtained target drug-specific word representations before the Bi-LSTM.", "Lim et al.", "(2018) proposed Recursive neural networkbased model with a subtree containment feature and an ensemble method.", "This model showed the state-of-the-art performance on the DDIExtraction 2013 shared task data set if systems do not use negative instance filtering.", "These approaches did not consider molecular information, and they can also be enhanced by the molecular information.", "Vilar et al.", "(2017) focused on detecting DDIs from different sources such as pharmacovigilance sources, scientific biomedical literature and social media.", "They did not use deep neural networks and they did not consider molecular information.", "Learning representations of graphs are widely studied in several tasks such as knowledge base 
"Several graph convolutional neural networks have been proposed, such as NFP (Duvenaud et al., 2015), GGNN (Li et al., 2016), and Molecular Graph Convolutions (Kearnes et al., 2016), but they had not been applied to DDI extraction.",
"Conclusions We proposed a novel neural method for DDI extraction using both textual and molecular information.",
"The results show that DDIs can be predicted with high accuracy from molecular structure information and that the molecular information can improve DDI extraction from texts by 2.39 percentage points in F-score on the data set of the DDIExtraction 2013 shared task.",
"As future work, we would like to seek a way to model the textual and molecular representations jointly while alleviating the differences in labels.",
"We will also investigate the use of other information in DrugBank." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Text-based DDI Extraction", "Molecular Structure-based DDI Classification", "DDI Extraction from Texts Using Molecular Structures", "Experimental Settings", "Text Corpus and Task Setting", "Data and Task for Molecular Structures", "Training Settings", "Results", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-61#paper-1120#slide-9
Differences of Labels in Text and Database Tasks
Interacting drug pairs in the database may not appear as positive instances in the text task. The text task defines 4 detailed types, while the database task has one positive type. Example sentences: "Grepafloxacin inhibits the metabolism of Theophylline." "While the effect of Grepafloxacin on the metabolism of C.P.A substrates is not evaluated, in vitro data suggested similar effects of Grepafloxacin on Theophylline metabolism."
[]
GEM-SciDuet-train-61#paper-1120#slide-10
1120
Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
GEM-SciDuet-train-61#paper-1120#slide-10
Training Settings
Mini-batch training using the Adam optimizer with L2 regularization. Word embeddings trained by the word2vec tool on the 2014 MEDLINE/PubMed baseline distribution. Hyper-parameters for the text-based model. Hyper-parameters for the molecule-based model.
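As a hedged illustration of the first bullet: Adam with L2 regularization corresponds to the weight_decay argument of PyTorch's Adam optimizer. The learning rate, decay strength, and the stand-in module below are placeholders, not the reported hyper-parameters (which the paper defers to its supplementary materials).

    import torch

    model = torch.nn.Linear(10, 5)  # stand-in for the full text + molecule model
    # weight_decay adds an L2 penalty on the parameters during the Adam update
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)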
[]
GEM-SciDuet-train-61#paper-1120#slide-11
1120
Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
GEM-SciDuet-train-61#paper-1120#slide-11
Evaluation on Relaxed String Matching
How many of the drug mentions in texts can be linked to DrugBank entries by relaxed string matching? We lowercased the mentions and the names in the entries and chose the entries with the most overlaps. As a result, 92.15% and 93.09% of drug mentions in the train and test SemEval 2013 data sets matched DrugBank entries.
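The slide leaves "most overlaps" underspecified. One plausible reading, sketched below, lowercases both sides and scores each DrugBank name by its longest common substring with the mention; treat this as an assumption-laden illustration, not the authors' exact matcher.

    from difflib import SequenceMatcher

    def link_mention(mention, entry_names):
        """Pick the entry name with the largest character overlap (assumed metric)."""
        mention = mention.lower()
        def overlap(name):
            name = name.lower()
            match = SequenceMatcher(None, mention, name).find_longest_match(
                0, len(mention), 0, len(name))
            return match.size
        return max(entry_names, key=overlap)

    print(link_mention("Theophylline", ["theophylline", "grepafloxacin"]))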
[]
GEM-SciDuet-train-61#paper-1120#slide-12
1120
Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs in high accuracy and the molecular information can enhance text-based DDI extraction by 2.39 percent points in the F-score on the DDIExtraction 2013 shared task data set.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114 ], "paper_content_text": [ "Introduction When drugs are concomitantly administered to a patient, the effects of the drugs may be enhanced or weakened, which may also cause side effects.", "These kinds of interactions are called Drug-Drug Interactions (DDIs).", "Several drug databases have been maintained to summarize drug and DDI information such as DrugBank (Law et al., 2014) , Therapeutic Target database , and PharmGKB (Thorn et al., 2013) .", "Automatic DDI extraction from texts is expected to support the maintenance of databases with high coverage and quick update to help medical experts.", "Deep neural network-based methods have recently drawn a considerable attention (Liu et al., 2016; Sahu and Anand, 2017; Zheng et al., 2017; Lim et al., 2018) since they show state-of-the-art performance without manual feature engineering.", "In parallel to the progress in DDI extraction from texts, Graph Convolutional Networks (GCNs) have been proposed and applied to estimate physical and chemical properties of molec-ular graphs such as solubility and toxicity (Duvenaud et al., 2015; Gilmer et al., 2017) .", "In this study, we propose a novel method to utilize both textual and molecular information for DDI extraction from texts.", "We illustrate the overview of the proposed model in Figure 1 .", "We obtain the representations of drug pairs in molecular graph structures using GCNs and concatenate the representations with the representations of the textual mention pairs obtained by convolutional neural networks (CNNs).", "We trained the molecule-based model using interacting pairs mentioned in the DrugBank database and then trained the entire model using the labeled pairs in the text data set of the DDIExtraction 2013 shared task (SemEval-2013 Task 9) (Segura .", "In the experiment, we show GCNs can predict DDIs from molecular graphs in a high accuracy.", "We also show molecular information can enhance the performance of DDI extraction from texts in 2.39 percent points in F-score.", "The contribution of this paper is three-fold: • We propose a novel neural method to extract DDIs from texts with the related molecular structure information.", "• We apply GCNs to pairwise drug molecules for the first time and show GCNs can predict DDIs between drug molecular structures in a high accuracy.", "• We show the molecular information is useful in extracting DDIs from texts.", "Methods Text-based DDI Extraction Our model for extracting DDIs from texts is based on the CNN model by Zeng et al.", "(2014) .", "When an input sentence S = (w 1 , w 2 , · · · , w N ) is given, We prepare word embedding w w i of w i and word Figure 1 : Overview of the proposed model position embeddings w p i,1 and w p i,2 that correspond to the relative positions from the first and second target entities, respectively.", "We concatenate these embeddings as in Equation (1) , and we use the resulting vector as the input to the subsequent convolution layer: w i = [w w i ; w p i,1 ; w p i,2 ], (1) where [; ] denotes the concatenation.", "We calculate the 
expression for each filter j with the window size k l .", "z i,l = [w i−(k l −1)/2 , · · · , w i−(k l +1)/2 ], (2) m i,j,l = relu(W conv j z i,l + b conv ), (3) m j,l = max i m i,j,l , (4) where L is the number of windows, W conv j and b conv are the weight and bias of CNN, and max indicates max pooling (Boureau et al., 2010) .", "We convert the output of the convolution layer into a fixed-size vector that represents a textual pair as follows: m l = [m 1,l , · · · , m J,l ], (5) h t = [m 1 ; .", ".", ".", "; m L ], (6) where J is the number of filters.", "We get a predictionŷ t by the following fully connected neural networks: h (1) t = relu(W (1) t h t + b (1) t ), (7) y t = softmax(W (2) t h (1) t + b (2) t ), (8) where W (1) t and W (2) t are weights and b (1) t and b (2) t are bias terms.", "Molecular Structure-based DDI Classification We represent drug pairs in molecular graph structures using two GCN methods: CNNs for fingerprints (NFP) (Duvenaud et al., 2015) and Gated Graph Neural Networks (GGNN) .", "They both convert a drug molecule graph G into a fixed size vector h g by aggregating the representation h T v of an atom node v in G. We represent atoms as nodes and bonds as edges in the graph.", "NFP first obtains the representation h t v by the following equations (Duvenaud et al., 2015) .", "m t+1 v = h t v + w∈N (v) h t w , (9) h t+1 v = σ(H deg(v) t m t+1 v ), (10) where h t v is the representation of v in the t-th step, N (v) is the neighbors of v, and H deg(v) t is a weight parameter.", "h 0 v is initialized by the atom features of v. deg(v) is the degree of a node v and σ is a sigmoid function.", "NFP then acquires the representation of the graph structure h g = v,t softmax(W t h t v ), (11) where W t is a weight matrix.", "GGNN first obtains the representation h t v by using Gated Recurrent Unit (GRU)-based recurrent neural networks as follows: m t+1 v = w∈N (v) A evw h t w (12) h t+1 v = GRU([h t v ; m t+1 v ]), (13) where A evw is a weight for the bond type of each edge e vw .", "GGNN then acquires the representation of the graph structure.", "h g = v σ(i([h T v ; h 0 v ])) (j(h T v )), (14) where i and j are linear layers and is the element-wise product.", "We obtain the representation of a molecular pair by concatenating the molecular graph representations of drugs g 1 and g 2 , i.e., h m = [h g 1 ; h g 2 ].", "We get a predictionŷ m as follows: h (1) m = relu(W (1) m h m + b (1) m ), (15) y m = softmax(W (2) m h (1) m + b (2) m ), (16) where W (1) m and W (2) m are weights and b (1) m and b (2) m are bias terms.", "DDI Extraction from Texts Using Molecular Structures We realize the simultaneous use of textual and molecular information by concatenating a textbased and molecule-based vectors: h all = [h t ; h m ].", "We normalize molecule-based vectors.", "We then use h all instead of h t in Equation 7 .", "In training, we first train the molecular-based DDI classification model.", "The molecular-based classification is performed by minimizing the loss function L m = − y m logŷ m .", "We then fix the parameters for GCNs and train text-based DDI extraction model by minimizing the loss function L t = − y t logŷ t .", "Experimental Settings In this section, we explain the textual and molecular data and task settings and training settings.", "Text Corpus and Task Setting We followed the task setting of Task 9.2 in the DDIExtraction 2013 shared task for the evaluation.", "This data set is composed of documents annotated with drug mentions and their four types of interactions: 
Mechanism, Effect, Advice and Int.", "For the data statistics, please refer to the supplementary materials.", "The task is a multi-class classification task, i.e., to classify a given pair of drugs into the four interaction types or no interaction.", "We evaluated the performance with micro-averaged precision (P), Figure 2 : Associating DrugBank entries with texts and molecular graph structures recall (R), and F-score (F) on all the interaction types.", "We used the official evaluation script provided by the task organizers.", "As preprocessing, we split sentences into words using the GENIA tagger (Tsuruoka et al., 2005) .", "We replaced the drug mentions of the target pair with DRUG1 and DRUG2 according to their order of appearance.", "We also replaced other drug mentions with DRUGOTHER.", "We did not employ negative instance filtering unlike other existing methods, e.g., Liu et al.", "(2016) , since our focus is to evaluate the effect of the molecular information on texts.", "We linked mentions in texts to DrugBank entries by string matching.", "We lowercased the mentions and the names in the entries and chose the entries with the most overlaps.", "As a result, 92.15% and 93.09% of drug mentions in train and test data set matched the DrugBank entries.", "Data and Task for Molecular Structures We extracted 255,229 interacting (positive) pairs from DrugBank.", "We note that, unlike text-based interactions, DrugBank only contains the information of interacting pairs; there are no detailed labels and no information for non-interacting (negative) pairs.", "We thus generated the same number of pseudo negative pairs by randomly pairing drugs and removing those in positive pairs.", "To avoid overestimation of the performance, we also deleted drug pairs mentioned in the test set of the text corpus.", "We split positive and negative pairs into 4:1 for training and test data, and we evaluated the classification accuracy using only the molecular information.", "To obtain the graph of a drug molecule, we took (Weininger, 1988) string encoding of the molecule from DrugBank and then converted it into the graph using RDKit (Landrum, 2016) as illustrated in Figure 2 .", "For the atom features, we used randomly embedded vectors for each atoms (i.e., C, O, N, ...).", "We also used 4 bond types: single, double, triple, or aromatic.", "Training Settings We employed mini-batch training using the Adam optimizer (Kingma and Ba, 2015) .", "We used L2 regularization to avoid over-fitting.", "We tuned the bias term b (2) t for negative examples in the final softmax layer.", "For the hyper-parameters, please refer to the supplementary materials.", "We employed pre-trained word embeddings trained by using the word2vec tool (Mikolov et al., 2013) on the 2014 MEDLINE/PubMed baseline distribution.", "The vocabulary size was 215,840.", "The embedding of the drugs, i.e., DRUG1 and DRUG2 were initialized with the pre-trained embedding of the word drug.", "The embeddings of training words that did not appear in the pretrained embeddings were initialized with the average of all pre-trained word embeddings.", "Words that appeared only once in the training data were replaced with an UNK word during training, and the embedding of words in the test data set that did not appear in both training and pre-trained embeddings were set to the embedding of the UNK word.", "Word position embeddings are initialized with random values drawn from a uniform distribution.", "We set the molecule-based vectors of unmatched entities to zero 
vectors.", "Table 1 shows the performance of DDI extraction models.", "We show the performance without negative instance filtering or ensemble for the fair comparison.", "We observe the increase of recall and F-score by using molecular information, Both GCNs improvements were statistically significant (p < 0.05 for NFP and p < 0.005 for GGNN) with randomized shuffled test.", "Table 2 shows F-scores on individual DDI types.", "The molecular information improves Fscores especially on type Mechanism and Effect.", "Results We also evaluated the accuracy of binary classification on DrugBank pairs by using only the molecular information in Table 3 .", "The performance is high, although the accuracy is evaluated on automatically generated negative instances.", "Finally, we applied the molecular-based DDI classification model trained on DrugBank to the DDIExtraction 2013 task data set.", "Since the Drug-Bank has no detailed labels, we mapped all four types of interactions to positive interactions and evaluated the classification performance.", "The results in Table 4 show that GCNs produce higher recall than precision and the overall performance is low considering the high performance on Drug-Bank pairs.", "This might be because the interactions of drugs are not always mentioned in texts even if the drugs can interact with each other and because hedged DDI mentions are annotated as DDIs in the text data set.", "We also trained the DDI extraction model only with molecular information by replacing h all with h m , but the F-scores were quite low (< 5%).", "These results show that we cannot predict textual relations only with molecular information.", "Related Work Various feature-based methods have been proposed during and after the DDIExtraction-2013 shared task .", "Kim et al.", "(2015) proposed a two-phase SVM-based approach that employed a linear SVM with rich features that consist of word, word pair, dependency graph, parse tree, and noun phrase-based constrained coordination features.", "Zheng et al.", "(2016) proposed a context vector graph kernel to exploit various types of contexts.", "Raihani and Laachfoubi (2017) also employed a two-phase SVM-based approach using non-linear kernels and they proposed five groups of features: word, drug, pair of drug, main verb and negative sentence features.", "Our model does not use any features or kernels.", "Various neural DDI extraction models have been recently proposed using CNNs and Recurrent Neural Networks (RNNs).", "Liu et al.", "(2016) built a CNN-based model based on word and position embeddings.", "Zheng et al.", "(2017) proposed a Bidirectional Long Short-Term Memory RNN (Bi-LSTM)-based model with an input attention mechanism, which obtained target drug-specific word representations before the Bi-LSTM.", "Lim et al.", "(2018) proposed Recursive neural networkbased model with a subtree containment feature and an ensemble method.", "This model showed the state-of-the-art performance on the DDIExtraction 2013 shared task data set if systems do not use negative instance filtering.", "These approaches did not consider molecular information, and they can also be enhanced by the molecular information.", "Vilar et al.", "(2017) focused on detecting DDIs from different sources such as pharmacovigilance sources, scientific biomedical literature and social media.", "They did not use deep neural networks and they did not consider molecular information.", "Learning representations of graphs are widely studied in several tasks such as knowledge base 
completion, drug discovery, and material science (Gilmer et al., 2017).", "Several graph convolutional neural networks have been proposed, such as NFP (Duvenaud et al., 2015), GGNN, and Molecular Graph Convolutions (Kearnes et al., 2016), but they have not been applied to DDI extraction.", "Conclusions We proposed a novel neural method for DDI extraction using both textual and molecular information.", "The results show that DDIs can be predicted with high accuracy from molecular structure information and that the molecular information can improve DDI extraction from texts by 2.39 percentage points in F-score on the data set of the DDIExtraction 2013 shared task.", "As future work, we would like to seek a way to model the textual and molecular representations jointly while alleviating the differences in their labels.", "We will also investigate the use of other information in DrugBank." ] }
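To make the text-based encoder above (Equations 1-8) concrete, here is a minimal PyTorch sketch: word and position embeddings are concatenated, convolved with several window sizes, max-pooled and classified by a two-layer network. All sizes, the class count and the padding scheme are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    # Sketch of Eqs. (1)-(8); all dimensions below are assumptions.
    def __init__(self, vocab=30000, max_dist=200, d_word=200, d_pos=20,
                 windows=(3, 5, 7), n_filters=100, n_classes=5):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, d_word)
        # Position embeddings; distances are assumed shifted to be non-negative.
        self.pos_emb1 = nn.Embedding(2 * max_dist, d_pos)  # distance to DRUG1
        self.pos_emb2 = nn.Embedding(2 * max_dist, d_pos)  # distance to DRUG2
        d_in = d_word + 2 * d_pos
        # One Conv1d per window size k_l (Eqs. 2-3); odd k with padding=k//2 keeps length.
        self.convs = nn.ModuleList(
            nn.Conv1d(d_in, n_filters, k, padding=k // 2) for k in windows)
        self.fc1 = nn.Linear(n_filters * len(windows), n_filters)  # Eq. (7)
        self.fc2 = nn.Linear(n_filters, n_classes)                 # Eq. (8)

    def forward(self, words, dist1, dist2):
        # Eq. (1): concatenate word and the two position embeddings.
        x = torch.cat([self.word_emb(words),
                       self.pos_emb1(dist1),
                       self.pos_emb2(dist2)], dim=-1)        # (B, N, d_in)
        x = x.transpose(1, 2)                                # (B, d_in, N)
        # Eqs. (3)-(5): relu convolution, then max pooling over positions i.
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        h_t = torch.cat(pooled, dim=1)                       # Eq. (6)
        return self.fc2(F.relu(self.fc1(h_t)))               # logits; softmax lives in the loss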
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Text-based DDI Extraction", "Molecular Structure-based DDI Classification", "DDI Extraction from Texts Using Molecular Structures", "Experimental Settings", "Text Corpus and Task Setting", "Data and Task for Molecular Structures", "Training Settings", "Results", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-61#paper-1120#slide-12
Evaluation on DDI Extraction from Texts (SemEval 2013 Shared Task)
We observe an increase of the micro F-score by using molecular structures. [Chart legend: Text + Molecular Structure]
We observe an increase of the micro F-score by using molecular structures. [Chart legend: Text + Molecular Structure]
[]
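A minimal sketch of the NFP update in Equations (9)-(11) from the paper content above, written over a dense adjacency matrix; the degree cap, shapes and the list of per-step readout weights are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class NFPLayer(nn.Module):
    # One step of Eqs. (9)-(10); max_degree caps the per-degree weights H_t^{deg(v)}.
    def __init__(self, dim, max_degree=6):
        super().__init__()
        self.H = nn.ModuleList(nn.Linear(dim, dim) for _ in range(max_degree + 1))

    def forward(self, h, adj):
        # Eq. (9): m_v = h_v + sum of neighbor states (adj is a 0/1 matrix of shape (V, V)).
        m = h + adj @ h
        deg = adj.sum(dim=1).long().clamp(max=len(self.H) - 1)
        # Eq. (10): degree-specific linear map followed by a sigmoid.
        out = torch.stack([self.H[int(d)](m_v) for d, m_v in zip(deg, m)])
        return torch.sigmoid(out)

def nfp_readout(states, readouts):
    # Eq. (11): states is a list of (V, dim) tensors h^t, one per step t;
    # readouts is a matching list of linear layers W_t. The softmax is taken
    # over the feature dimension, and everything is summed over v and t.
    return sum(torch.softmax(w(h), dim=-1).sum(dim=0)
               for h, w in zip(states, readouts))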
GEM-SciDuet-train-61#paper-1120#slide-13
1120
Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
GEM-SciDuet-train-61#paper-1120#slide-13
Analysis
Can molecular structures alone represent DDIs in texts? [Slide diagram: "Grepafloxacin inhibits the metabolism of Grepafloxacin" -> GCN -> concat -> interact / not interact] This might be because the drug pairs that interact can appear in textual contexts that do not describe their interactions.
Can molecular structures alone represent DDIs in texts? [Slide diagram: "Grepafloxacin inhibits the metabolism of Grepafloxacin" -> GCN -> concat -> interact / not interact] This might be because the drug pairs that interact can appear in textual contexts that do not describe their interactions.
[]
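The GGNN encoder of Equations (12)-(14) can be sketched as below; the paper's GRU([h_v; m_v]) step is realized here with a GRUCell taking the message as input and the node state as hidden state (the usual GGNN formulation), and all sizes are assumptions.

import torch
import torch.nn as nn

class GGNNEncoder(nn.Module):
    def __init__(self, dim, n_bond_types=4, steps=4):
        super().__init__()
        # One edge weight matrix A_e per bond type (single/double/triple/aromatic).
        self.A = nn.Parameter(torch.randn(n_bond_types, dim, dim) * 0.01)
        self.gru = nn.GRUCell(dim, dim)
        self.i = nn.Linear(2 * dim, dim)   # linear layers i and j of Eq. (14)
        self.j = nn.Linear(dim, dim)
        self.steps = steps

    def forward(self, h0, adj):
        # adj: (n_bond_types, V, V) one-hot adjacency, one slice per bond type.
        h = h0
        for _ in range(self.steps):
            # Eq. (12): messages weighted by the bond-type matrix A_{e_vw}.
            m = sum(adj_t @ (h @ A_t.t()) for adj_t, A_t in zip(adj, self.A))
            h = self.gru(m, h)                                  # Eq. (13)
        # Eq. (14): gated sum readout with an element-wise product.
        gate = torch.sigmoid(self.i(torch.cat([h, h0], dim=-1)))
        return (gate * self.j(h)).sum(dim=0)                    # graph vector h_g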
GEM-SciDuet-train-61#paper-1120#slide-14
1120
Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information
GEM-SciDuet-train-61#paper-1120#slide-14
Conclusions
We proposed a novel neural method for DDI extraction using both textual and molecular information. The molecular information has improved DDI extraction performance. As future work, we will investigate the use of other information in DrugBank.
We proposed a novel neural method for DDI extraction using both textual and molecular information. The molecular information has improved DDI extraction performance. As future work, we will investigate the use of other information in DrugBank.
[]
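The two-stage training described in the paper content above (pre-train the molecular classifier on DrugBank, then freeze the GCN and train the text model on h_all = [h_t; h_m]) could look roughly like this; the data loaders, the encode() method and all hyper-parameters are hypothetical placeholders, not the authors' code.

import torch
import torch.nn.functional as F

def train_two_stage(gcn_clf, text_model, drugbank_loader, ddi_loader, epochs=10):
    # Stage 1: binary interact / not-interact on DrugBank molecular pairs (loss L_m).
    opt_m = torch.optim.Adam(gcn_clf.parameters(), weight_decay=1e-4)  # L2 regularization
    for _ in range(epochs):
        for pair, y in drugbank_loader:
            loss = F.cross_entropy(gcn_clf(pair), y)
            opt_m.zero_grad(); loss.backward(); opt_m.step()

    # Stage 2: freeze the GCN, train the text model on h_all = [h_t; h_m] (loss L_t).
    for p in gcn_clf.parameters():
        p.requires_grad = False
    opt_t = torch.optim.Adam(text_model.parameters(), weight_decay=1e-4)
    for _ in range(epochs):
        for sent, pair, y in ddi_loader:
            h_m = gcn_clf.encode(pair)        # assumed: encoder part of the classifier
            h_m = F.normalize(h_m, dim=-1)    # "We normalize the molecule-based vectors."
            loss = F.cross_entropy(text_model(sent, h_m), y)
            opt_t.zero_grad(); loss.backward(); opt_t.step()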
GEM-SciDuet-train-62#paper-1126#slide-0
1126
Deep Investigation of Cross-Language Plagiarism Detection Methods
This paper is a deep investigation of cross-language plagiarism detection methods on a new recently introduced open dataset, which contains parallel and comparable collections of documents with multiple characteristics (different genres, languages and sizes of texts). We investigate cross-language plagiarism detection methods for 6 language pairs on 2 granularities of text units in order to draw robust conclusions on the best methods while deeply analyzing correlations across document styles and languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154 ], "paper_content_text": [ "Introduction Plagiarism is a very significant problem nowadays, specifically in higher education institutions.", "In monolingual context, this problem is rather well treated by several recent researches (Potthast et al., 2014) .", "Nevertheless, the expansion of the Internet, which facilitates access to documents throughout the world and to increasingly efficient (freely available) machine translation tools, helps to spread cross-language plagiarism.", "Crosslanguage plagiarism means plagiarism by translation, i.e.", "a text has been plagiarized while being translated (manually or automatically).", "The challenge in detecting this kind of plagiarism is that the suspicious document is no longer in the same language of its source.", "In this relatively new field of research, no systematic evaluation of the main methods, on several language pairs, for different text granularities and for different text genres, has been proposed yet.", "This is what we propose in this paper.", "Contribution.", "The paper focus is on crosslanguage semantic textual similarity detection which is the main part (with source retrieval) in cross-language plagiarism detection.", "The evaluation dataset used (Ferrero et al., 2016) allows us to run a large amount of experiments and analyses.", "To our knowledge, this is the first time that full potential of such a diverse dataset is used for benchmarking.", "So, the paper main contribution is a systematic evaluation of cross-language similarity detection methods (using in plagiarism detection) on different languages, sizes and genres of texts through a reproducible evaluation protocol.", "Robust conclusions are derived on the best methods while deeply analyzing correlations across document styles and languages.", "Due to space limitations, we only provide a subset of our experiments in the paper while more result tables and correlation analyses are provided as supplementary material on a Web link 1 .", "Outline.", "After presenting the dataset used for our study in section 2, and reviewing the stateof-the-art methods of cross-language plagiarism detection that we evaluate in section 3, we describe the evaluation protocol employed in section 4.", "Then, section 5.1 presents the correla-tion of the methods across language pairs, while section 5.2 presents a detailed analysis on only English-French pair.", "Finally, section 6 concludes this work and gives a few perspectives.", "Dataset The reference dataset used during our study is the new dataset 2 recently introduced by Ferrero et al.", "(2016) .", "The dataset was specially designed for a rigorous evaluation of cross-language textual similarity detection.", "The different characteristics of the dataset are synthesized in Table 1 , while Table 2 presents the number of aligned units by subcorpus and by granularity.", 
"More precisely, the characteristics of the dataset are the following: • it is multilingual: it contains French, English and Spanish texts; • it proposes cross-language alignment information at different granularities: document level, sentence level and chunk level; • it is based on both parallel and comparable corpora (mix of Wikipedia, scientific conference papers, amazon product reviews, Europarl and JRC); • it contains both human and machine translated texts; • it contains different percentages of named entities; • part of it has been obfuscated (to make the cross-language similarity detection more complicated) while the rest remains without noise; • the documents were written and translated by multiple types of authors (from average to professionals); • it covers various fields.", "Overview of State-of-the-Art Methods Textual similarity detection methods are not exactly methods to detect plagiarism.", "Plagiarism is a statement that someone copied text deliberately without attribution, while these methods only detect textual similarities.", "There is no way 2 https://github.com/FerreroJeremy/ Cross-Language-Dataset of knowing why texts are similar and thus to assimilate these similarities to plagiarism.", "At the moment, there are five classes of approaches for cross-language plagiarism detection.", "The aim of each method is to estimate if two textual units in different languages express the same message or not.", "Figure 1 presents a taxonomy of Potthast et al.", "(2011) , enriched by the study of Danilova (2013) , of the different cross-language plagiarism detection methods grouped by class of approaches.", "We only describe below the state-of-the-art methods that we evaluate in the paper, one for each class of approaches (those in bold in the Figure 1 ).", "Cross-Language Character N-Gram (CL-CnG) is based on Mcnamee and Mayfield (2004) model.", "We use the CL-C3G Potthast et al.", "(2011)'s implementation.", "Only spaces and alphanumeric characters are kept.", "Any other diacritic or symbol is deleted and the texts are lower-cased.", "The texts are then segmented into 3-grams (sequences of 3 contiguous characters) and transformed into tf.idf vectors of character 3-grams.", "The metric used to compare two vectors is the cosine similarity.", "Cross-Language Conceptual Thesaurus-based Similarity (CL-CTS) aims to measure the semantic similarity using abstract concepts from words in textual units.", "We reuse the idea of Pataki (2012) which, for each sentence, build a bag-ofwords by getting all the available translations of each word of the sentence.", "For that, we use a linked lexical resource called DBNary (Sérasset, 2015) .", "The bag-of-words of a sentence is the merge of the bag-of-words of the words of the sentence.", "After, we use the Jaccard distance (Jaccard, 1912) with fuzzy matching between two bag-ofwords to measure the similarity between two sentences.", "Cross-Language Alignment-based Similarity Analysis (CL-ASA) was introduced for the first time by Barrón-Cedeño et al.", "(2008) and developed subsequently by Pinto et al.", "(2009) .", "The model aims to determinate how a textual unit is potentially the translation of another textual unit using bilingual unigram dictionary which contains translations pairs (and their probabilities) extracted from a parallel corpus.", "Our lexical dictionary is calculated applying the IBM-1 model Danilova (2013) , of different approaches for cross-language similarity detection.", "(Brown et al., 1993) on the concatenation of TED 4 
(Cettolo et al., 2012) and News parallel corpora.", "We reuse the implementation of Pinto et al. (2009), which proposed a formula that factored the alignment function.", "Cross-Language Explicit Semantic Analysis (CL-ESA) is based on the explicit semantic analysis model introduced for the first time by Gabrilovich and Markovitch (2007), which represents the meaning of a document by a vector based on the vocabulary derived from Wikipedia, to find a document within a corpus.", "It was reused by Potthast et al. (2008) in the context of cross-language document retrieval.", "Our implementation uses a part of Wikipedia, from which our test data was removed, to build the vector representations of the texts.", "Translation + Monolingual Analysis (T+MA) consists in translating the suspect plagiarized text back into the same language as the source text, in order to operate a monolingual comparison between them.", "We use Muhr et al. (2010)'s implementation, which consists in replacing each word of one text by its most likely translations in the language of the other text, leading to bags-of-words.", "We use DBNary (Sérasset, 2015) to get the translations.", "The metric used to compare two texts is a monolingual matching based on the strict intersection of bags-of-words.", "More recently, SemEval-2016 (Agirre et al., 2016) proposed a new subtask on the evaluation of cross-lingual semantic textual similarity.", "Despite the fact that it was the first year that this subtask was attempted, there were 26 submissions from 10 teams.", "Most of the submissions relied on a machine translation step followed by a monolingual semantic similarity, but 4 teams tried to use learned vector representations (on words or sentences) combined with machine translation confidence (for instance the submissions of Lo et al. (2016) or Ataman et al. (2016)).", "The method that achieved the best performance (Brychcin and Svoboda, 2016) was a supervised system built on a word alignment-based method proposed by Sultan et al. (2015).", "This very recent method is, however, not evaluated in this paper.", "Evaluation Protocol We apply the same evaluation protocol as in Ferrero et al. (2016)'s paper.", "We build a distance matrix of size N × M, with M = 1,000 and N = |S|, where S is the evaluated sub-corpus.", "Each textual unit of S is compared to itself (actually, since this is cross-lingual similarity detection, each source language unit is compared to its corresponding unit in the target language) and to M - 1 other units randomly selected from S. 
The same unit may be selected several times.", "Then, a matching score for each comparison performed is obtained, leading to the distance matrix.", "Thresholding on the matrix is applied to find the threshold giving the best F1 score.", "The F1 score is the harmonic mean of precision and recall.", "Precision is defined as the proportion of relevant matches (similar cross-language units) retrieved among all the matches retrieved.", "Recall is the proportion of relevant matches retrieved among all the relevant matches to retrieve.", "Each method is applied on each sub-corpus for chunk and sentence granularities.", "For each configuration (i.e., a particular method applied on a particular sub-corpus considering a particular granularity), 10 folds are carried out by changing the M selected units.", "Investigation of Cross-Language Similarity Performances 5.1 Across Language Pairs Table 3 brings together the performances of all methods on all sub-corpora for each pair of languages at chunk and sentence level.", "In both sub-tables, at chunk and sentence level, the overall F1 score over all sub-corpora of one method in one particular language pair is given.", "As a preliminary remark, one should note that CL-C3G and CL-ESA lead to the same results for a given language pair (same performance if we reverse the source and target languages) due to their symmetrical property.", "Another remark we can make is that the methods are consistent across language pairs: the best-performing methods are mostly the same, whatever the language pair considered.", "This is confirmed by the calculation of the Pearson correlation between the performances of different pairs of languages, from Table 3 and reported in Table 4.", "Table 4 represents the Pearson correlations between the different language pairs of the overall results of all methods on all sub-corpora.", "This result is interesting because some of these methods depend on the availability of lexical resources whose quality is heterogeneous across languages.", "Despite the variation of the source and target languages, a minimum Pearson correlation of 0.940 for EN→FR vs. FR→ES, and a maximum of 0.998 for EN→FR vs. EN→ES and ES→FR vs. FR→ES at chunk level is observed (see Table 4).", "For the sentence granularity, it is the same order of magnitude: the maximum Pearson correlation is 0.997 for ES→EN vs. EN→ES and ES→FR vs. FR→ES, and the minimum is 0.913 for EN→ES vs. 
FR→ES (see Table 4).", "On average, the language pair EN→FR is 0.975 correlated with the other language pairs (0.980 at chunk level and 0.971 at sentence level), for instance.", "This correlation suggests the possibility of tuning a method on one language and applying it to another language if needed.", "Table 5 synthesizes the top 3 methods for each language pair observed in Tables 3 and 4.", "No matter the source and target languages or the granularity, CL-C3G generally outperforms the other methods.", "Then CL-ASA, CL-CTS and T+MA are also closely efficient, but their behavior depends on the granularity.", "Generally, CL-ASA is better at the chunk granularity, followed by CL-CTS and T+MA.", "On the contrary, CL-CTS and T+MA are slightly more effective at sentence granularity.", "One explanation for this is that T+MA depends on the quality of machine translation, which may have poor performance on isolated chunks, while a short text unit benefits the CL-CTS and CL-ASA methods because of their formula, which will tend to minimize the number of false positives in this case.", "(Table 4: Pearson correlations of the overall F1 score over all sub-corpora of all methods between the different language pairs (EN: English; FR: French; ES: Spanish).)", "Anyway, despite these differences in ranking, the gap in terms of performance values is small between these closest methods.", "For instance, we can see that when CL-CTS is more efficient than CL-C3G (ES→FR column at sentence level in Table 3 and Table 5 (b)), the difference in performance is very small (0.0068).", "Table 6 shows the Pearson correlations of the results (of all methods on all sub-corpora) by language pair between the chunk and the sentence granularity (correlations calculated from Table 3, between the EN→FR column at chunk level and the EN→FR column at sentence level, and so on).", "We can see a strong Pearson correlation of the performances on the language pair between the chunk and the sentence granularity (an average of 0.9, with 0.907 for the EN→FR pair, for instance).", "This proves that all methods behave along a similar trend at chunk and at sentence level, regardless of the languages on which they are used.", "However, we can see in Table 7 that if we collect correlation scores separately for each method (on all sub-corpora, on all language pairs) between chunk and sentence granularity performances (correlations also calculated from Table 3, between the CL-C3G line at chunk level and the CL-C3G line at sentence level, and so on), we notice that some methods exhibit a different behavior at chunk and sentence granularities: for instance, this is the case for CL-ASA, which seems to be really better at chunk level.", "(Table 7: Pearson correlations of the results on all sub-corpora on all language pairs, between the chunk and the sentence granularity, by method (calculated from Table 3).)", "In conclusion, we can say that the methods presented here may behave slightly differently depending on the text unit considered (chunk or sentence) but they behave practically the same no matter what the languages of the compared texts are (as long as enough lexical resources are available for dealing with these languages).", "Detailed Analysis for English-French The previous sub-section has shown a consistent behavior of methods across language pairs (strongly consistent) and granularities (less strongly consistent).", "For this reason, we now propose a detailed analysis for different sub-corpora, for the English-French language pair 
at chunk and sentence level only.", "Providing these results for all language pairs and granularities would take too much space.", "Moreover, we also ran these state-of-the-art methods on the dataset of the Spanish-English cross-lingual Semantic Textual Similarity task of SemEval-2016 (Agirre et al., 2016) and SemEval-2017 (Cer et al., 2017), and propose a shallower but equally rigorous analysis.", "However, all those results are also made available as supplementary material on our paper Web page.", "Table 8 shows the performances of the methods on the EN→FR sub-corpora.", "As mentioned earlier, CL-C3G is in general the most effective method.", "CL-ESA seems to show better results on comparable corpora, like Wikipedia.", "In contrast, CL-ASA obtains better results on parallel corpora such as the JRC or Europarl collections.", "CL-CTS and T+MA are pretty efficient and versatile too.", "It is also interesting to note that the results of the methods are well correlated between certain types of sub-corpora.", "For instance, the Pearson correlation of the performances of all methods between the TALN sub-corpus and the APR sub-corpus is 0.982 at the chunk level and 0.937 at the sentence level.", "This means that a method could be optimized on a particular corpus (for instance APR) and applied efficiently on another corpus (for instance TALN, which is made of scientific conference papers).", "(Figure 2: Distribution histograms of some state-of-the-art methods for 1000 positive and 1000 negative (mis)matches.", "The X-axis represents the similarity score (in percentage) computed by the method, and the Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).)", "Beyond their capacity to correctly predict a (mis)match, an interesting feature of the methods is their clustering capacity, i.e., their ability to correctly separate the positives (cross-lingual semantically similar units) and the negatives (textual units with different meanings) in order to minimize the doubts on the classification.", "(Table 9: Precision (P), Recall (R) and F1 score, reached at a certain threshold (T), of some state-of-the-art methods for a data subset made of 1000 positive and 1000 negative (mis)matches; 10-fold validation.)", "To verify this phenomenon, we conducted another experiment with a new protocol.", "We built a data subset by concatenating some documents of the previously presented dataset (Ferrero et al., 2016).", "More precisely, we used 200 pairs of each sub-corpus at sentence level only.", "We compared 1000 English textual units to their corresponding unit in French, and to one other (not relevant) French unit.", "So, each English textual unit must strictly lead to one match and one mismatch, i.e., in the end, we have exactly 1000 matches and 1000 mismatches for a run.", "We repeated this experiment 10 times for each method, leading to 10 folds for each method.", "The results of this experiment are reported in Table 9, which shows the average over the 10 folds of the Precision (P), the Recall (R) and the F1 score of some state-of-the-art methods, reached at a certain threshold (T).", "The results are also reported in Figure 2, in the form of distribution histograms of the evaluated methods for 1000 positive and 1000 negative (mis)matches.", "The X-axis represents the similarity score (in percentage) computed 
"The distribution histograms in Figure 2 highlight the fact that each method has its own fingerprint: even if two methods look equivalent in terms of performance (see Table 9), their clustering capacity, and thus the distribution of their (mis)matches, can be different.", "For instance, we can see that a random distribution is a very bad distribution (Figure 2 (a)).", "We can also see that CL-C3G has a narrow distribution of negatives and a broad distribution of positives (Figure 2 (c)), whereas the opposite is true for CL-ASA (Figure 2 (e)).", "Table 9 confirms this phenomenon by the fact that the decision threshold is very different for CL-ASA (0.762) compared to the other methods (around 0.1).", "This means that CL-ASA discriminates the positives more reliably than the negatives, whereas the opposite seems true for the other methods.", "For this reason, we can assume that some methods are complementary, due to their different fingerprints.", "These behaviors suggest that fusion between these methods (notably decision-tree-based fusion) should lead to very promising results.", "Conclusion We conducted a deep investigation of cross-language plagiarism detection methods on a challenging dataset.", "Our results have shown a common behavior of methods across different language pairs.", "We revealed strong correlations across languages but also across the text units considered.", "This means that when a method is more effective than another on a sufficiently large dataset, it is generally more effective in any other case.", "This also means that if a method is effective on a particular language pair, it will be similarly effective on another language pair as long as enough lexical resources are available for these languages.", "We also investigated the behavior of the methods across the different types of texts on a particular language pair: English-French.", "We revealed strong correlations across types of texts.", "This means that a method could be optimized on a particular corpus and applied efficiently on another corpus.", "Finally, we have shown that methods behave differently in clustering matched and mismatched units, even if they seem similar in performance.", "This opens new possibilities for their combination or fusion.", "More results supporting these facts are provided as supplementary material." ] }
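To make the thresholding step of this protocol concrete, here is a minimal sketch of how the best-F1 threshold T of Table 9 can be found, assuming each method outputs similarity scores in [0, 1] for the 1000 positive and 1000 negative pairs; the function names and the 0.01-step grid are illustrative choices, not the authors' implementation.

```python
import numpy as np

def prf_at_threshold(pos_scores, neg_scores, t):
    # Pairs scoring >= t are predicted as cross-language matches.
    tp = int(np.sum(np.asarray(pos_scores) >= t))   # true matches retrieved
    fp = int(np.sum(np.asarray(neg_scores) >= t))   # mismatches wrongly retrieved
    fn = len(pos_scores) - tp                       # true matches missed
    p = tp / (tp + fp) if (tp + fp) else 0.0
    r = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

def best_threshold(pos_scores, neg_scores, grid=np.linspace(0.0, 1.0, 101)):
    # Sweep candidate thresholds and keep the one that maximizes F1.
    rows = [(t, *prf_at_threshold(pos_scores, neg_scores, t)) for t in grid]
    return max(rows, key=lambda row: row[3])        # (T, P, R, F1)
```

Averaging the (T, P, R, F1) tuples returned over the 10 folds would yield Table 9-style figures.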
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.2", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Overview of State-of-the-Art Methods", "Evaluation Protocol", "Investigation of Cross-Language", "Detailed Analysis for English-French", "Conclusion" ] }
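The Pearson correlations used throughout the analysis above (for example, 0.982 between the TALN and APR sub-corpora at chunk level) compare, method by method, the F1 scores obtained under two conditions. A small sketch follows; the score vectors are illustrative placeholders, not values from the paper.

```python
import numpy as np

def pearson(x, y):
    # Pearson correlation between two vectors of per-method F1 scores.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical per-method F1 scores on two sub-corpora (illustrative values only):
f1_taln = [0.83, 0.62, 0.58, 0.41, 0.66]   # e.g. CL-C3G, CL-CTS, CL-ASA, CL-ESA, T+MA
f1_apr  = [0.87, 0.65, 0.60, 0.44, 0.70]
print(round(pearson(f1_taln, f1_apr), 3))
```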
GEM-SciDuet-train-62#paper-1126#slide-0
What is Cross Language Plagiarism Detection
Cross-Language Plagiarism is plagiarism by translation, i.e. a text has been plagiarized while being translated (manually or automatically). From a text in a language L, we must find similar passage(s) in other text(s) from a set of candidate texts in another language L' (cross-language textual similarity).
Cross-Language Plagiarism is plagiarism by translation, i.e. a text has been plagiarized while being translated (manually or automatically). From a text in a language L, we must find similar passage(s) in other text(s) from a set of candidate texts in another language L' (cross-language textual similarity).
[]
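As a concrete illustration of the cross-language textual similarity defined above, here is a minimal sketch of the CL-C3G comparison described in the methods overview earlier in the paper: lowercase the texts, keep only alphanumerics and spaces, extract character 3-grams and compare the resulting vectors with cosine similarity. For brevity, 3-grams are weighted by raw counts here rather than the tf.idf weighting the paper uses, and the example strings are invented.

```python
import re
from collections import Counter
from math import sqrt

def char_3grams(text):
    # Keep only spaces and alphanumerics, lowercase, then slide a 3-character window.
    cleaned = re.sub(r"[^0-9a-z ]", "", text.lower())
    return Counter(cleaned[i:i + 3] for i in range(len(cleaned) - 2))

def cosine(a, b):
    num = sum(a[g] * b[g] for g in a.keys() & b.keys())
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Shared character 3-grams often survive translation between related languages:
en = char_3grams("plagiarism detection across languages")
fr = char_3grams("detection du plagiat entre les langues")
print(round(cosine(en, fr), 3))
```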
GEM-SciDuet-train-62#paper-1126#slide-1
1126
Deep Investigation of Cross-Language Plagiarism Detection Methods
This paper is a deep investigation of cross-language plagiarism detection methods on a recently introduced open dataset, which contains parallel and comparable collections of documents with multiple characteristics (different genres, languages and sizes of texts). We investigate cross-language plagiarism detection methods for 6 language pairs on 2 granularities of text units in order to draw robust conclusions on the best methods while deeply analyzing correlations across document styles and languages.
GEM-SciDuet-train-62#paper-1126#slide-1
Why is it so important
- McCabe, D. (2010). Students cheating takes a high-tech turn. In Rutgers Business School. - Josephson Institute. (2011). What would honest Abe Lincoln say?
- McCabe, D. (2010). Students cheating takes a high-tech turn. In Rutgers Business School. - Josephson Institute. (2011). What would honest Abe Lincoln say?
[]
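In the same spirit, a toy sketch of the CL-CTS idea from the methods overview: build a bag-of-words from all available translations of each word and compare the bags with a Jaccard measure. The mini-dictionary below is hypothetical (the paper relies on the DBNary lexical resource), and the fuzzy matching of the original is reduced to exact set overlap here.

```python
def translation_bag(words, dictionary):
    # Merge all available translations of every word into one bag (a set).
    bag = set()
    for w in words:
        bag.update(dictionary.get(w, {w}))   # keep the word itself if untranslated
    return bag

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical EN->FR mini-dictionary, for illustration only:
en_fr = {"detection": {"detection"}, "of": {"de", "du"}, "plagiarism": {"plagiat"}}
bag_en = translation_bag("detection of plagiarism".split(), en_fr)
bag_fr = set("detection du plagiat".split())
print(round(jaccard(bag_en, bag_fr), 2))   # 0.75 on this toy example
```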
GEM-SciDuet-train-62#paper-1126#slide-2
1126
Deep Investigation of Cross-Language Plagiarism Detection Methods
This paper is a deep investigation of cross-language plagiarism detection methods on a recently introduced open dataset, which contains parallel and comparable collections of documents with multiple characteristics (different genres, languages and sizes of texts). We investigate cross-language plagiarism detection methods for 6 language pairs on 2 granularities of text units in order to draw robust conclusions on the best methods while deeply analyzing correlations across document styles and languages.
GEM-SciDuet-train-62#paper-1126#slide-2
Research Questions
How do the state-of-the-art methods behave according to the characteristics of the compared texts? Do the methods depend on the characteristics of the compared texts? And if so, on which characteristics? Are the state-of-the-art methods complementary?
How do the state-of-the-art methods behave according to the characteristics of the compared texts? Do the methods depend on the characteristics of the compared texts? And if so, on which characteristics? Are the state-of-the-art methods complementary?
[]
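The last research question, on complementarity, is what the Figure 2 "fingerprints" discussed earlier probe. A short sketch of how such per-method distribution histograms could be computed from the 1000 positive and 1000 negative score lists (scores again assumed in [0, 1]; the bin count is an arbitrary choice):

```python
import numpy as np

def score_histograms(pos_scores, neg_scores, bins=20):
    # Bin similarity scores separately for matches (positives) and mismatches
    # (negatives); well-separated histograms indicate good clustering capacity.
    edges = np.linspace(0.0, 1.0, bins + 1)
    pos_hist, _ = np.histogram(pos_scores, bins=edges)
    neg_hist, _ = np.histogram(neg_scores, bins=edges)
    return edges, pos_hist, neg_hist

def histogram_overlap(pos_hist, neg_hist):
    # Rough measure of how confusable a method's matches and mismatches are.
    return int(np.minimum(pos_hist, neg_hist).sum())
```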
GEM-SciDuet-train-62#paper-1126#slide-3
1126
Deep Investigation of Cross-Language Plagiarism Detection Methods
This paper is a deep investigation of cross-language plagiarism detection methods on a recently introduced open dataset, which contains parallel and comparable collections of documents with multiple characteristics (different genres, languages and sizes of texts). We investigate cross-language plagiarism detection methods for 6 language pairs on 2 granularities of text units in order to draw robust conclusions on the best methods while deeply analyzing correlations across document styles and languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154 ], "paper_content_text": [ "Introduction Plagiarism is a very significant problem nowadays, specifically in higher education institutions.", "In monolingual context, this problem is rather well treated by several recent researches (Potthast et al., 2014) .", "Nevertheless, the expansion of the Internet, which facilitates access to documents throughout the world and to increasingly efficient (freely available) machine translation tools, helps to spread cross-language plagiarism.", "Crosslanguage plagiarism means plagiarism by translation, i.e.", "a text has been plagiarized while being translated (manually or automatically).", "The challenge in detecting this kind of plagiarism is that the suspicious document is no longer in the same language of its source.", "In this relatively new field of research, no systematic evaluation of the main methods, on several language pairs, for different text granularities and for different text genres, has been proposed yet.", "This is what we propose in this paper.", "Contribution.", "The paper focus is on crosslanguage semantic textual similarity detection which is the main part (with source retrieval) in cross-language plagiarism detection.", "The evaluation dataset used (Ferrero et al., 2016) allows us to run a large amount of experiments and analyses.", "To our knowledge, this is the first time that full potential of such a diverse dataset is used for benchmarking.", "So, the paper main contribution is a systematic evaluation of cross-language similarity detection methods (using in plagiarism detection) on different languages, sizes and genres of texts through a reproducible evaluation protocol.", "Robust conclusions are derived on the best methods while deeply analyzing correlations across document styles and languages.", "Due to space limitations, we only provide a subset of our experiments in the paper while more result tables and correlation analyses are provided as supplementary material on a Web link 1 .", "Outline.", "After presenting the dataset used for our study in section 2, and reviewing the stateof-the-art methods of cross-language plagiarism detection that we evaluate in section 3, we describe the evaluation protocol employed in section 4.", "Then, section 5.1 presents the correla-tion of the methods across language pairs, while section 5.2 presents a detailed analysis on only English-French pair.", "Finally, section 6 concludes this work and gives a few perspectives.", "Dataset The reference dataset used during our study is the new dataset 2 recently introduced by Ferrero et al.", "(2016) .", "The dataset was specially designed for a rigorous evaluation of cross-language textual similarity detection.", "The different characteristics of the dataset are synthesized in Table 1 , while Table 2 presents the number of aligned units by subcorpus and by granularity.", 
"More precisely, the characteristics of the dataset are the following: • it is multilingual: it contains French, English and Spanish texts; • it proposes cross-language alignment information at different granularities: document level, sentence level and chunk level; • it is based on both parallel and comparable corpora (mix of Wikipedia, scientific conference papers, amazon product reviews, Europarl and JRC); • it contains both human and machine translated texts; • it contains different percentages of named entities; • part of it has been obfuscated (to make the cross-language similarity detection more complicated) while the rest remains without noise; • the documents were written and translated by multiple types of authors (from average to professionals); • it covers various fields.", "Overview of State-of-the-Art Methods Textual similarity detection methods are not exactly methods to detect plagiarism.", "Plagiarism is a statement that someone copied text deliberately without attribution, while these methods only detect textual similarities.", "There is no way 2 https://github.com/FerreroJeremy/ Cross-Language-Dataset of knowing why texts are similar and thus to assimilate these similarities to plagiarism.", "At the moment, there are five classes of approaches for cross-language plagiarism detection.", "The aim of each method is to estimate if two textual units in different languages express the same message or not.", "Figure 1 presents a taxonomy of Potthast et al.", "(2011) , enriched by the study of Danilova (2013) , of the different cross-language plagiarism detection methods grouped by class of approaches.", "We only describe below the state-of-the-art methods that we evaluate in the paper, one for each class of approaches (those in bold in the Figure 1 ).", "Cross-Language Character N-Gram (CL-CnG) is based on Mcnamee and Mayfield (2004) model.", "We use the CL-C3G Potthast et al.", "(2011)'s implementation.", "Only spaces and alphanumeric characters are kept.", "Any other diacritic or symbol is deleted and the texts are lower-cased.", "The texts are then segmented into 3-grams (sequences of 3 contiguous characters) and transformed into tf.idf vectors of character 3-grams.", "The metric used to compare two vectors is the cosine similarity.", "Cross-Language Conceptual Thesaurus-based Similarity (CL-CTS) aims to measure the semantic similarity using abstract concepts from words in textual units.", "We reuse the idea of Pataki (2012) which, for each sentence, build a bag-ofwords by getting all the available translations of each word of the sentence.", "For that, we use a linked lexical resource called DBNary (Sérasset, 2015) .", "The bag-of-words of a sentence is the merge of the bag-of-words of the words of the sentence.", "After, we use the Jaccard distance (Jaccard, 1912) with fuzzy matching between two bag-ofwords to measure the similarity between two sentences.", "Cross-Language Alignment-based Similarity Analysis (CL-ASA) was introduced for the first time by Barrón-Cedeño et al.", "(2008) and developed subsequently by Pinto et al.", "(2009) .", "The model aims to determinate how a textual unit is potentially the translation of another textual unit using bilingual unigram dictionary which contains translations pairs (and their probabilities) extracted from a parallel corpus.", "Our lexical dictionary is calculated applying the IBM-1 model Danilova (2013) , of different approaches for cross-language similarity detection.", "(Brown et al., 1993) on the concatenation of TED 4 
(Cettolo et al., 2012) and News parallel corpora.", "We reuse the implementation of Pinto et al.", "(2009), which proposed a formula that factors the alignment function.", "Cross-Language Explicit Semantic Analysis (CL-ESA) is based on the explicit semantic analysis model introduced for the first time by Gabrilovich and Markovitch (2007), which represents the meaning of a document by a vector based on the vocabulary derived from Wikipedia, in order to find a document within a corpus.", "It was reused by Potthast et al.", "(2008) in the context of cross-language document retrieval.", "Our implementation uses a part of Wikipedia, from which our test data was removed, to build the vector representations of the texts.", "Translation + Monolingual Analysis (T+MA), the MT-based model, consists of translating the suspect plagiarized text into the language of the source text, in order to operate a monolingual comparison between them.", "We use Muhr et al.", "(2010)'s implementation, which consists of replacing each word of one text by its most likely translations in the language of the other text, leading to bags-of-words.", "We use DBNary (Sérasset, 2015) to get the translations.", "The metric used to compare two texts is a monolingual matching based on the strict intersection of bags-of-words.", "More recently, SemEval-2016 (Agirre et al., 2016) proposed a new subtask on the evaluation of cross-lingual semantic textual similarity.", "Despite the fact that it was the first year that this subtask was attempted, there were 26 submissions from 10 teams.", "Most of the submissions relied on a machine translation step followed by a monolingual semantic similarity measure, but 4 teams tried to use learned vector representations (on words or sentences) combined with machine translation confidence (for instance the submission of Lo et al.", "(2016) or Ataman et al.", "(2016)).", "The method that achieved the best performance (Brychcin and Svoboda, 2016) was a supervised system built on a word alignment-based method proposed by Sultan et al.", "(2015).", "This very recent method is, however, not evaluated in this paper.", "Evaluation Protocol We apply the same evaluation protocol as in Ferrero et al.", "(2016)'s paper.", "We build a distance matrix of size N x M, with M = 1,000 and N = |S|, where S is the evaluated sub-corpus.", "Each textual unit of S is compared to itself (actually, since this is cross-lingual similarity detection, each source language unit is compared to its corresponding unit in the target language) and to M - 1 other units randomly selected from S. 
The same unit may be selected several times.", "Then, a matching score for each comparison performed is obtained, leading to the distance matrix.", "Thresholding on the matrix is applied to find the threshold giving the best F1 score.", "The F1 score is the harmonic mean of precision and recall.", "Precision is defined as the proportion of relevant matches (similar cross-language units) retrieved among all the matches retrieved.", "Recall is the proportion of relevant matches retrieved among all the relevant matches to retrieve.", "Each method is applied on each sub-corpus for chunk and sentence granularities.", "For each configuration (i.e.", "a particular method applied on a particular sub-corpus considering a particular granularity), 10 folds are carried out by changing the M selected units.", "Investigation of Cross-Language Similarity Performances 5.1 Across Language Pairs Table 3 brings together the performances of all methods on all sub-corpora for each pair of languages at chunk and sentence level.", "In both sub-tables, at chunk and sentence level, the overall F1 score over all sub-corpora of one method in one particular language pair is given.", "As a preliminary remark, one should note that CL-C3G and CL-ESA lead to the same results for a given language pair (same performance if we reverse source and target languages) due to their symmetrical property.", "Another remark we can make is that methods are consistent across language pairs: the best-performing methods are mostly the same, whatever the language pair considered.", "This is confirmed by the calculation of the Pearson correlation between performances of different pairs of languages, from Table 3 and reported in Table 4.", "Table 4 represents the Pearson correlations between the different language pairs of the overall results of all methods on all sub-corpora.", "This result is interesting because some of these methods depend on the availability of lexical resources whose quality is heterogeneous across languages.", "Despite the variation of the source and target languages, a minimum Pearson correlation of 0.940 for EN→FR vs. FR→ES, and a maximum of 0.998 for EN→FR vs. EN→ES and ES→FR vs. FR→ES at chunk level is observed (see Table 4).", "For the sentence granularity, it is the same order of magnitude: the maximum Pearson correlation is 0.997 for ES→EN vs. EN→ES and ES→FR vs. FR→ES, and the minimum is 0.913 for EN→ES vs. 
FR→ES (see Table 4).", "On average, the language pair EN→FR is 0.975 correlated with the other language pairs (0.980 at chunk level and 0.971 at sentence level), for instance.", "This correlation suggests the possibility of tuning a method on one language pair and applying it to another if needed.", "Table 5 synthesizes the top 3 methods for each language pair observed in Tables 3 and 4.", "No matter the source and target languages or the granularity, CL-C3G generally outperforms the other methods.", "CL-ASA, CL-CTS and T+MA are also close in effectiveness, but their behavior depends on the granularity.", "Generally, CL-ASA is better at the chunk granularity, followed by CL-CTS and T+MA.", "On the contrary, CL-CTS and T+MA are slightly more effective at sentence granularity.", "One explanation for this is that T+MA depends on the quality of machine translation, which may have poor performance on isolated chunks, while a short text unit benefits the CL-CTS and CL-ASA methods because of their formulas, which will tend to minimize the number of false positives in this case.", "Table 4: Pearson correlations of the overall F1 score over all sub-corpora of all methods between the different language pairs (EN: English; FR: French; ES: Spanish).", "Anyway, despite these differences in ranking, the gap in terms of performance values is small between these closest methods.", "For instance, we can see that when CL-CTS is more efficient than CL-C3G (ES→FR column at sentence level in Table 3 and Table 5 (b)), the difference in performance is very small (0.0068).", "Table 6 shows the Pearson correlations of the results (of all methods on all sub-corpora) by language pair between the chunk and the sentence granularity (correlations calculated from Table 3, between the EN→FR column at chunk level and the EN→FR column at sentence level, and so on).", "We can see a strong Pearson correlation of the performances on the language pair between the chunk and the sentence granularity (an average of 0.9, with 0.907 for the EN→FR pair, for instance).", "This proves that all methods behave along a similar trend at chunk and at sentence level, regardless of the languages on which they are used.", "However, we can see in Table 7 (Pearson correlations of the results on all sub-corpora and all language pairs, between the chunk and the sentence granularity, by method, calculated from Table 3) that if we collect correlation scores separately for each method (on all sub-corpora, on all language pairs) between chunk", "and sentence granularity performances (correlations also calculated from Table 3, between the CL-C3G line at chunk level and the CL-C3G line at sentence level, and so on), we notice that some methods exhibit a different behavior at the chunk and the sentence granularities: for instance, this is the case for CL-ASA, which seems to be really better at chunk level.", "In conclusion, we can say that the methods presented here may behave slightly differently depending on the text unit considered (chunk or sentence) but they behave practically the same no matter what the languages of the compared texts are (as long as enough lexical resources are available for dealing with these languages).", "Detailed Analysis for English-French The previous sub-section has shown a consistent behavior of methods across language pairs (strongly consistent) and granularities (less strongly consistent).", "For this reason, we now propose a detailed analysis for different sub-corpora, for the English-French language pair 
- at chunk and sentence level - only.", "Providing these results for all language pairs and granularities would take too much space.", "Moreover, we also run those state-of-the-art methods on the dataset of the Spanish-English cross-lingual Semantic Textual Similarity task of SemEval-2016 (Agirre et al., 2016) and SemEval-2017 (Cer et al., 2017), and propose a shallower but equally rigorous analysis.", "However, all those results are also made available as supplementary material on our paper Web page.", "Table 8 shows the performances of the methods on the EN→FR sub-corpora.", "As mentioned earlier, CL-C3G is in general the most effective method.", "CL-ESA seems to show better results on comparable corpora, like Wikipedia.", "In contrast, CL-ASA obtains better results on parallel corpora such as the JRC or Europarl collections.", "CL-CTS and T+MA are pretty efficient and versatile too.", "It is also interesting to note that the results of the methods are well correlated between certain types of sub-corpora.", "For instance, the Pearson correlation of the performances of all methods between the TALN sub-corpus and the APR sub-corpus is 0.982 at the chunk level, and 0.937 at the sentence level.", "This means that a method could be optimized on a particular corpus (for instance APR) and applied efficiently on another corpus (for instance TALN, which is made of scientific conference papers).", "Figure 2: Distribution histograms of some state-of-the-art methods for 1000 positive and 1000 negative (mis)matches.", "The X-axis represents the similarity score (in percentage) computed by the method, and the Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "Beyond their capacity to correctly predict a (mis)match, an interesting feature of the methods is their clustering capacity, i.e.", "their ability to correctly separate the positives (cross-lingual semantically similar textual units) and the negatives (textual units with different meanings) in order to minimize", "the doubts on the classification (Table 9: Precision (P), Recall (R) and F1 score, reached at a certain threshold (T), of some state-of-the-art methods for a data subset made of 1000 positive and 1000 negative (mis)matches; 10-fold validation).", "To verify this phenomenon, we conducted another experiment with a new protocol.", "We built a data subset by concatenating some documents of the previously presented dataset (Ferrero et al., 2016).", "More precisely, we used 200 pairs from each sub-corpus at sentence level only.", "We compared 1000 English textual units to their corresponding unit in French, and to one other (not relevant) French unit.", "So, each English textual unit strictly leads to one match and one mismatch, i.e.", "in the end, we have exactly 1000 matches and 1000 mismatches for a run.", "We repeat this experiment 10 times for each method, leading to 10 folds for each method.", "The results of this experiment are reported in Table 9, which shows the average over the 10 folds of the Precision (P), the Recall (R) and the F1 score of some state-of-the-art methods, reached at a certain threshold (T).", "The results are also reported in Figure 2, in the form of distribution histograms of the evaluated methods for 1000 positive and 1000 negative (mis)matches.", "The X-axis represents the similarity score (in percentage) computed 
by the method, and the Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "The distribution histograms in Figure 2 highlight the fact that each method has its own fingerprint: even if two methods look equivalent in terms of performance (see Table 9), their clustering capacity, and thus the distribution of their (mis)matches, can be different.", "For instance, we can see that a random distribution is a very bad distribution (Figure 2 (a)).", "We can also see that CL-C3G has a narrow distribution of negatives and a broad distribution for positives (Figure 2 (c)), whereas the opposite is true for CL-ASA (Figure 2 (e)).", "Table 9 confirms this phenomenon by the fact that the decision threshold is very different for CL-ASA (0.762) compared to the other methods (around 0.1).", "This means that CL-ASA discriminates the positives more correctly than the negatives, whereas it seems to be the opposite for the other methods.", "For this reason, we can make the assumption that some methods are complementary, due to their different fingerprints.", "These behaviors suggest that fusion between these methods (notably decision-tree-based fusion) should lead to very promising results.", "Conclusion We conducted a deep investigation of cross-language plagiarism detection methods on a challenging dataset.", "Our results have shown a common behavior of methods across different language pairs.", "We revealed strong correlations across languages but also across the text units considered.", "This means that when a method is more effective than another on a sufficiently large dataset, it is generally more effective in any other case.", "This also means that if a method is efficient on a particular language pair, it will be similarly efficient on another language pair as long as enough lexical resources are available for these languages.", "We also investigated the behavior of the methods across the different types of texts for a particular language pair: English-French.", "We revealed strong correlations across types of texts.", "This means that a method could be optimized on a particular corpus and applied efficiently on another corpus.", "Finally, we have shown that methods behave differently in clustering matched and mismatched units, even if they seem similar in performance.", "This opens new possibilities for their combination or fusion.", "More results supporting these facts are provided as supplementary material." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.2", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Overview of State-of-the-Art Methods", "Evaluation Protocol", "Investigation of Cross-Language", "Detailed Analysis for English-French", "Conclusion" ] }
GEM-SciDuet-train-62#paper-1126#slide-3
State of the Art Methods
Length Model, CL-CnG [Potthast et al., 2011], Cognateness; MT-Based Models: Translation + Monolingual Analysis [Muhr et al., 2010]
Length Model, CL-CnG [Potthast et al., 2011], Cognateness; MT-Based Models: Translation + Monolingual Analysis [Muhr et al., 2010]
[]
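The CL-CTS method from the taxonomy this slide covers (and described in the paper content above) can also be sketched in a few lines: build a translation bag-of-words per sentence and compare the bags with a Jaccard-style score. The tiny lexicon below is a hypothetical stand-in for the DBNary resource, and the sketch uses strict matching where the paper additionally applies fuzzy matching.

```python
# Hypothetical mini-lexicon standing in for DBNary (Sérasset, 2015).
lexicon = {
    "the": {"le", "la", "les"},
    "cat": {"chat"},
    "eats": {"mange", "manger"},
    "fish": {"poisson"},
}

def bag_of_words(sentence, lexicon):
    # Merge all available translations of each word of the sentence;
    # unknown words are kept as-is.
    bag = set()
    for word in sentence.lower().split():
        bag |= lexicon.get(word, {word})
    return bag

def jaccard_similarity(bag_a, bag_b):
    # 1 minus the Jaccard distance (Jaccard, 1912); strict matching only.
    union = bag_a | bag_b
    return len(bag_a & bag_b) / len(union) if union else 0.0

english_bag = bag_of_words("the cat eats the fish", lexicon)
french_bag = set("le chat mange le poisson".split())
print(jaccard_similarity(english_bag, french_bag))  # 4/7 for this toy pair
```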
GEM-SciDuet-train-62#paper-1126#slide-4
1126
Deep Investigation of Cross-Language Plagiarism Detection Methods
This paper is a deep investigation of cross-language plagiarism detection methods on a recently introduced open dataset, which contains parallel and comparable collections of documents with multiple characteristics (different genres, languages and sizes of texts). We investigate cross-language plagiarism detection methods for 6 language pairs on 2 granularities of text units in order to draw robust conclusions on the best methods while deeply analyzing correlations across document styles and languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154 ], "paper_content_text": [ "Introduction Plagiarism is a very significant problem nowadays, specifically in higher education institutions.", "In monolingual context, this problem is rather well treated by several recent researches (Potthast et al., 2014) .", "Nevertheless, the expansion of the Internet, which facilitates access to documents throughout the world and to increasingly efficient (freely available) machine translation tools, helps to spread cross-language plagiarism.", "Crosslanguage plagiarism means plagiarism by translation, i.e.", "a text has been plagiarized while being translated (manually or automatically).", "The challenge in detecting this kind of plagiarism is that the suspicious document is no longer in the same language of its source.", "In this relatively new field of research, no systematic evaluation of the main methods, on several language pairs, for different text granularities and for different text genres, has been proposed yet.", "This is what we propose in this paper.", "Contribution.", "The paper focus is on crosslanguage semantic textual similarity detection which is the main part (with source retrieval) in cross-language plagiarism detection.", "The evaluation dataset used (Ferrero et al., 2016) allows us to run a large amount of experiments and analyses.", "To our knowledge, this is the first time that full potential of such a diverse dataset is used for benchmarking.", "So, the paper main contribution is a systematic evaluation of cross-language similarity detection methods (using in plagiarism detection) on different languages, sizes and genres of texts through a reproducible evaluation protocol.", "Robust conclusions are derived on the best methods while deeply analyzing correlations across document styles and languages.", "Due to space limitations, we only provide a subset of our experiments in the paper while more result tables and correlation analyses are provided as supplementary material on a Web link 1 .", "Outline.", "After presenting the dataset used for our study in section 2, and reviewing the stateof-the-art methods of cross-language plagiarism detection that we evaluate in section 3, we describe the evaluation protocol employed in section 4.", "Then, section 5.1 presents the correla-tion of the methods across language pairs, while section 5.2 presents a detailed analysis on only English-French pair.", "Finally, section 6 concludes this work and gives a few perspectives.", "Dataset The reference dataset used during our study is the new dataset 2 recently introduced by Ferrero et al.", "(2016) .", "The dataset was specially designed for a rigorous evaluation of cross-language textual similarity detection.", "The different characteristics of the dataset are synthesized in Table 1 , while Table 2 presents the number of aligned units by subcorpus and by granularity.", 
"More precisely, the characteristics of the dataset are the following: • it is multilingual: it contains French, English and Spanish texts; • it proposes cross-language alignment information at different granularities: document level, sentence level and chunk level; • it is based on both parallel and comparable corpora (mix of Wikipedia, scientific conference papers, amazon product reviews, Europarl and JRC); • it contains both human and machine translated texts; • it contains different percentages of named entities; • part of it has been obfuscated (to make the cross-language similarity detection more complicated) while the rest remains without noise; • the documents were written and translated by multiple types of authors (from average to professionals); • it covers various fields.", "Overview of State-of-the-Art Methods Textual similarity detection methods are not exactly methods to detect plagiarism.", "Plagiarism is a statement that someone copied text deliberately without attribution, while these methods only detect textual similarities.", "There is no way 2 https://github.com/FerreroJeremy/ Cross-Language-Dataset of knowing why texts are similar and thus to assimilate these similarities to plagiarism.", "At the moment, there are five classes of approaches for cross-language plagiarism detection.", "The aim of each method is to estimate if two textual units in different languages express the same message or not.", "Figure 1 presents a taxonomy of Potthast et al.", "(2011) , enriched by the study of Danilova (2013) , of the different cross-language plagiarism detection methods grouped by class of approaches.", "We only describe below the state-of-the-art methods that we evaluate in the paper, one for each class of approaches (those in bold in the Figure 1 ).", "Cross-Language Character N-Gram (CL-CnG) is based on Mcnamee and Mayfield (2004) model.", "We use the CL-C3G Potthast et al.", "(2011)'s implementation.", "Only spaces and alphanumeric characters are kept.", "Any other diacritic or symbol is deleted and the texts are lower-cased.", "The texts are then segmented into 3-grams (sequences of 3 contiguous characters) and transformed into tf.idf vectors of character 3-grams.", "The metric used to compare two vectors is the cosine similarity.", "Cross-Language Conceptual Thesaurus-based Similarity (CL-CTS) aims to measure the semantic similarity using abstract concepts from words in textual units.", "We reuse the idea of Pataki (2012) which, for each sentence, build a bag-ofwords by getting all the available translations of each word of the sentence.", "For that, we use a linked lexical resource called DBNary (Sérasset, 2015) .", "The bag-of-words of a sentence is the merge of the bag-of-words of the words of the sentence.", "After, we use the Jaccard distance (Jaccard, 1912) with fuzzy matching between two bag-ofwords to measure the similarity between two sentences.", "Cross-Language Alignment-based Similarity Analysis (CL-ASA) was introduced for the first time by Barrón-Cedeño et al.", "(2008) and developed subsequently by Pinto et al.", "(2009) .", "The model aims to determinate how a textual unit is potentially the translation of another textual unit using bilingual unigram dictionary which contains translations pairs (and their probabilities) extracted from a parallel corpus.", "Our lexical dictionary is calculated applying the IBM-1 model Danilova (2013) , of different approaches for cross-language similarity detection.", "(Brown et al., 1993) on the concatenation of TED 4 
(Cettolo et al., 2012) and News 5 parallel corpora.", "We reuse the implementation of Pinto et al.", "(2009) that proposed a formula that factored the alignment function.", "MT-Based Models Cross-Language Explicit Semantic Analysis (CL-ESA) is based on the explicit semantic analysis model introduced for the first time by Gabrilovich and Markovitch (2007) , which represents the meaning of a document by a vector based on the vocabulary derived from Wikipedia, to find a document within a corpus.", "It was reused by Potthast et al.", "(2008) in the context of cross-language document retrieval.", "Our implementation uses a part of Wikipedia, from which our test data was removed, to build the vector representations of the texts.", "Translation + Monolingual Analysis (T+MA) consists in translating suspect plagiarized text back into the same language of source text, in order to operate a monolingual comparison between them.", "We use the Muhr et al.", "(2010) 's implementation which consists in replacing each word of one text by its most likely translations in the language of the other text, leading to a bags-of-words.", "We use DBNary (Sérasset, 2015) to get the translations.", "The metric used to compare two texts is a monolingual matching based on strict intersection of bags-of-words.", "More recently, SemEval-2016 (Agirre et al., 2016) proposed a new subtask on evaluation of cross-lingual semantic textual similarity.", "Despite the fact that it was the first year that this subtask was attempted, there were 26 submissions from 10 teams.", "Most of the submissions relied on a machine translation step followed by a monolingual semantic similarity, but 4 teams tried to use learned vector representations (on words or sentences) combined with machine translation confidence (for instance the submission of Lo et al.", "(2016) or Ataman et al.", "(2016) ).", "The method that achieved the best performance (Brychcin and Svoboda, 2016 ) was a supervised system built on a word alignment-based method proposed by Sultan et al.", "(2015) .", "This very recent method is, however, not evaluated in this paper.", "Evaluation Protocol We apply the same evaluation protocol as in Ferrero et al.", "(2016)'s paper.", "We build a distance matrix of size N x M , with M = 1,000 and N = |S| where S is the evaluated sub-corpus.", "Each textual unit of S is compared to itself (actually, since this is cross-lingual similarity detection, each source language unit is compared to its corresponding unit in the target language) and to M -1 other units randomly selected from S. 
The same unit may be selected several times.", "Then, a matching score for each comparison performed is obtained, leading to the distance matrix.", "Thresholding on the matrix is applied to find the threshold giving the best F 1 score.", "The F 1 score is the harmonic mean of precision and recall.", "Precision is defined as the proportion of relevant matches (similar crosslanguage units) retrieved among all the matches retrieved.", "Recall is the proportion of relevant matches retrieved among all the relevant matches to retrieve.", "Each method is applied on each subcorpus for chunk and sentence granularities.", "For each configuration (i.e.", "a particular method applied on a particular sub-corpus considering a particular granularity), 10 folds are carried out by changing the M selected units.", "Investigation of Cross-Language Similarity Performances 5.1 Across Language Pairs Table 3 brings together the performances of all methods on all sub-corpora for each pair of languages at chunk and sentence level.", "In both sub-tables, at chunk and sentence level, the overall F 1 score over all sub-corpora of one method in one particular language pair is given.", "As a preliminary remark, one should note that CL-C3G and CL-ESA lead to the same results for a given language pair (same performance if we reverse source and target languages) due to their symmetrical property.", "Another remark we can make is that methods are consistent across language pairs: best performing methods are mostly the same, whatever the language pair considered.", "This is confirmed by the calculation of the Pearson correlation between performances of different pairs of languages, from Table 3 and reported in Table 4 .", "Table 4 represents the Pearson correlations between the different language pairs of the overall results of all methods on all sub-corpora.", "This result is interesting because some of these methods depend on the availability of lexical resources whose quality is heterogeneous across languages.", "Despite the variation of the source and target languages, a minimum Pearson correlation of 0.940 for EN→FR vs. FR→ES, and a maximum of 0.998 for EN→FR vs. EN→ES and ES→FR vs. FR→ES at chunk level is observed (see Table 4 ).", "For the sentence granularity, it is the same order of magnitude: the maximum Pearson correlation is 0.997 for ES→EN vs. EN→ES and ES→FR vs. FR→ES, and the minimum is 0.913 for EN→ES vs. 
FR→ES (see Table 4 ).", "In average the language pair EN→FR is 0.975 correlated with the other language pairs (0.980 at chunk-level and 0.971 at sentence-level), for instance.", "This correlation suggests the possibility to tune a method on one language and apply it to another language if needed.", "Table 5 synthesizes the top 3 methods for each language pair observed in Tables 3 and 4 .", "No matter the source and target languages or the granularity, CL-C3G generally outperforms the other methods.", "Then CL-ASA, CL-CTS and T+MA are also closely efficient but their behavior depends on the granularity.", "Generally, CL-ASA is better at the chunk granularity, followed by CL-CTS and T+MA.", "On the contrary, CL-CTS and T+MA are slightly more effective at sentence granularity.", "One explanation for this is that T+MA depends on the quality of machine translation, which may have poor performance on isolated chunks, while a short length text unit benefits the CL-CTS and CL-ASA methods because of their formula which Table 4 : Pearson correlations of the overall F 1 score over all sub-corpora of all methods between the different language pairs (EN: English; FR: French; ES: Spanish).", "will tend to minimize the number of false positives in this case.", "Anyway, despite these differences in ranking, the gap in term of performance values is small between these closest methods.", "For instance, we can see that when CL-CTS is more efficient than CL-C3G (ES→FR column at sentence level in Table 3 and Table 5 (b)), the difference of performance is very small (0.0068).", "Table 6 shows the Pearson correlations of the results (of all methods on all sub-corpora) by language pair between the chunk and the sentence granularity (correlations calculated from Table 3 , between the EN→FR column at chunk level with the EN→FR column at sentence level, and so on).", "We can see a strong Pearson correlation of the performances on the language pair between the chunk and the sentence granularity (an average of 0.9, with 0.907 for the EN→FR pair, for instance).", "This proves that all methods behave along a simi- lar trend at chunk and at sentence level, regardless of the languages on which they are used.", "However, we can see in Table 7 that if we collect correlation scores separately for each method (on all sub-corpora, on all language pairs) between chunk Table 7 : Pearson correlations of the results on all sub-corpora on all language pairs, between the chunk and the sentence granularity, by methods (calculated from Table 3 ).", "and sentence granularity performances (correlations also calculated from Table 3 , between the CL-C3G line at chunk level with the CL-C3G line at sentence level, and so on), we notice that some methods exhibit a different behavior at both chunk and sentence granularities: for instance, this is the case for CL-ASA which seems to be really better at chunk level.", "In conclusion, we can say that the methods presented here may behave slightly differently depending on the text unit considered (chunk or sentence) but they behave practically the same no matter the languages of the compared texts are (as long as enough lexical resources are available for dealing with these languages).", "Detailed Analysis for English-French The previous sub-section has shown a consistent behavior of methods across language pairs (strongly consistent) and granularities (less strongly consistent).", "For this reason, we now propose a detailed analysis for different sub-corpora, for the English-French language pair 
-at chunk and sentence level -only.", "Providing these results for all language pairs and granularities would take too much space.", "Moreover, we also run those state-of-the-art methods on the dataset of the Spanish-English cross-lingual Semantic Textual Similarity task of SemEval-2016 (Agirre et al., 2016) and SemEval-2017 (Cer et al., 2017 , and propose a shallower but equally rigorous analysis.", "However, all those results are also made available as supplementary material on our paper Web page.", "Table 8 shows the performances of methods on the EN→FR sub-corpora.", "As mentioned earlier, CL-C3G is in general the most effective method.", "CL-ESA seems to show better results on comparable corpora, like Wikipedia.", "In contrast, CL-ASA obtains better results on parallel corpora such as JRC or Europarl collections.", "CL-CTS and T+MA are pretty efficient and versatile too.", "It is also interesting to note that the results of the methods are well correlated between certain types of sub-corpora.", "For instance, the Pearson correlation of the performances of all methods between the TALN sub-corpus and the APR sub-corpus, is 0.982 at the chunk level, and 0.937 at the sentence level.", "This means that a method could be optimized on a particular corpus (for instance APR) and applied efficiently on another corpus (for instance TALN which is made of scientific conference papers).", "Figure 2 : Distribution histograms of some state-of-the-art methods for 1000 positives and 1000 negatives (mis)matches.", "X-axis represents the similarity score (in percentage) computed by the method, and Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "Beyond their capacity to correctly predict a (mis)match, an interesting feature of the methods is their clustering capacity, i.e.", "their ability to cor-rectly separate the positives (cross-lingual semantic textual similar units) and the negatives (textual units with different meaning) in order to minimize Table 9 : Precision (P), Recall (R) and F 1 score, reached at a certain threshold (T), of some stateof-the-art methods for a data subset made with 1000 positives and 1000 negatives (mis)matches -10 folds validation.", "the doubts on the classification.", "To verify this phenomenon, we conducted another experience with a new protocol.", "We built a data subset by concatenating some documents of the previously presented dataset (Ferrero et al., 2016) .", "More precisely we used 200 pairs of each sub-corpora at sentence level only.", "We compared 1000 English textual units to their corresponding unit in French, and to one other (not relevant) French unit.", "So, each English textual unit must strictly leads to one match and one mismatch, i.e.", "in the end, we have exactly 1000 matches and 1000 mismatches for a run.", "We repeat this experiment 10 times for each method, leading to 10 folds for each method.", "The results of this experiment are reported on Table 9 , that shows the average for the 10 folds of the Precision (P), the Recall (R) and the F 1 score of some state-of-the-art methods, reached at a certain threshold (T).", "The results are also reported in Figure 2 , in the form of distribution histograms of the evaluated methods for 1000 positives and 1000 negatives (mis)matches.", "X-axis represents the similarity score (in percentage) computed 
by the method, and Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "Distribution histograms on Figure 2 highlights the fact that each method has its own fingerprint: even if two methods looks equivalent in term of performances (see Table 9 ), their clustering capacity, and so the distribution of their (mis)matches can be different.", "For instance, we can see that a random distribution is a very bad distribution (Figure 2 (a) ).", "We can also see that CL-C3G has a narrow distribution of negatives and a broad distribution for positives (Figure 2 (c) ), whereas the opposite is true for CL-ASA (Figure 2 (e) ).", "Table 9 confirms this phenomenon by the fact that the decision threshold is very different for CL-ASA (0.762) compared to the other methods (around 0.1).", "This means that CL-ASA discriminates more correctly the positives that the negatives, when it seems to be the opposite for the other methods.", "For this reason, we can make the assumption that some methods are complementary, due to their different fingerprint.", "These behaviors suggest that fusion between these methods (notably decision tree based fusion) should lead to very promising results.", "Conclusion We conducted a deep investigation of crosslanguage plagiarism detection methods on a challenging dataset.", "Our results have shown a common behavior of methods across different language pairs.", "We revealed strong correlations across languages but also across text units considered.", "This means that when a method is more effective than another on a sufficiently large dataset, it is generally more effective in any other case.", "This also means that if a method is efficient on a particular language pair, it will be similarly efficient on another language pair as long as enough lexical resources are available for these languages.", "We also investigated the behavior of the methods through the different types of texts on a particular language pair: English-French.", "We revealed strong correlations across types of texts.", "This means that a method could be optimized on a particular corpus and applied efficiently on another corpus.", "Finally, we have shown that methods behave differently in clustering match and mismatched units, even if they seem similar in performance.", "This opens new possibilities for their combination or fusion.", "More results supporting these facts are provided as supplementary material 6 ." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.2", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Overview of State-of-the-Art Methods", "Evaluation Protocol", "Investigation of Cross-Language", "Detailed Analysis for English-French", "Conclusion" ] }
GEM-SciDuet-train-62#paper-1126#slide-4
Evaluation Dataset Ferrero et al 2016
French, English and Spanish; Parallel and comparable (mix of Wikipedia, conference papers, product reviews, Europarl and JRC); Different granularities: document level, sentence level and chunk level; Human and machine-translated texts; Obfuscated (to make the similarity detection more complicated) and without added noise; Written and translated by multiple types of authors; A Multilingual, Multi-style and Multi-granularity Dataset for Cross-language Textual Similarity Detection. In Proceedings of LREC 2016.
French, English and Spanish; Parallel and comparable (mix of Wikipedia, conference papers, product reviews, Europarl and JRC); Different granularities: document level, sentence level and chunk level; Human and machine-translated texts; Obfuscated (to make the similarity detection more complicated) and without added noise; Written and translated by multiple types of authors; A Multilingual, Multi-style and Multi-granularity Dataset for Cross-language Textual Similarity Detection. In Proceedings of LREC 2016.
[]
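One more method from the paper content above lends itself to a short sketch before the next slide entry: T+MA, which translates the suspect text and then compares monolingually. The one-best dictionary and the normalization by the smaller bag are our own illustrative choices; the paper only specifies a strict intersection of bags-of-words.

```python
# Hypothetical one-best translation dictionary (the paper uses DBNary).
most_likely = {"el": "the", "gato": "cat", "come": "eats", "pescado": "fish"}

def translate_bag(sentence, dictionary):
    # Replace each word by its most likely translation; keep unknown words.
    return {dictionary.get(word, word) for word in sentence.lower().split()}

def overlap_score(bag_a, bag_b):
    # Strict bag-of-words intersection; normalizing by the smaller bag keeps
    # the score in [0, 1] (the normalization choice is ours, not the paper's).
    if not bag_a or not bag_b:
        return 0.0
    return len(bag_a & bag_b) / min(len(bag_a), len(bag_b))

suspect = translate_bag("el gato come el pescado", most_likely)
source = set("the cat eats the fish".split())
print(overlap_score(suspect, source))  # 1.0 for this toy pair
```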
GEM-SciDuet-train-62#paper-1126#slide-5
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154 ], "paper_content_text": [ "Introduction Plagiarism is a very significant problem nowadays, specifically in higher education institutions.", "In monolingual context, this problem is rather well treated by several recent researches (Potthast et al., 2014) .", "Nevertheless, the expansion of the Internet, which facilitates access to documents throughout the world and to increasingly efficient (freely available) machine translation tools, helps to spread cross-language plagiarism.", "Crosslanguage plagiarism means plagiarism by translation, i.e.", "a text has been plagiarized while being translated (manually or automatically).", "The challenge in detecting this kind of plagiarism is that the suspicious document is no longer in the same language of its source.", "In this relatively new field of research, no systematic evaluation of the main methods, on several language pairs, for different text granularities and for different text genres, has been proposed yet.", "This is what we propose in this paper.", "Contribution.", "The paper focus is on crosslanguage semantic textual similarity detection which is the main part (with source retrieval) in cross-language plagiarism detection.", "The evaluation dataset used (Ferrero et al., 2016) allows us to run a large amount of experiments and analyses.", "To our knowledge, this is the first time that full potential of such a diverse dataset is used for benchmarking.", "So, the paper main contribution is a systematic evaluation of cross-language similarity detection methods (using in plagiarism detection) on different languages, sizes and genres of texts through a reproducible evaluation protocol.", "Robust conclusions are derived on the best methods while deeply analyzing correlations across document styles and languages.", "Due to space limitations, we only provide a subset of our experiments in the paper while more result tables and correlation analyses are provided as supplementary material on a Web link 1 .", "Outline.", "After presenting the dataset used for our study in section 2, and reviewing the stateof-the-art methods of cross-language plagiarism detection that we evaluate in section 3, we describe the evaluation protocol employed in section 4.", "Then, section 5.1 presents the correla-tion of the methods across language pairs, while section 5.2 presents a detailed analysis on only English-French pair.", "Finally, section 6 concludes this work and gives a few perspectives.", "Dataset The reference dataset used during our study is the new dataset 2 recently introduced by Ferrero et al.", "(2016) .", "The dataset was specially designed for a rigorous evaluation of cross-language textual similarity detection.", "The different characteristics of the dataset are synthesized in Table 1 , while Table 2 presents the number of aligned units by subcorpus and by granularity.", 
"More precisely, the characteristics of the dataset are the following: • it is multilingual: it contains French, English and Spanish texts; • it proposes cross-language alignment information at different granularities: document level, sentence level and chunk level; • it is based on both parallel and comparable corpora (mix of Wikipedia, scientific conference papers, amazon product reviews, Europarl and JRC); • it contains both human and machine translated texts; • it contains different percentages of named entities; • part of it has been obfuscated (to make the cross-language similarity detection more complicated) while the rest remains without noise; • the documents were written and translated by multiple types of authors (from average to professionals); • it covers various fields.", "Overview of State-of-the-Art Methods Textual similarity detection methods are not exactly methods to detect plagiarism.", "Plagiarism is a statement that someone copied text deliberately without attribution, while these methods only detect textual similarities.", "There is no way 2 https://github.com/FerreroJeremy/ Cross-Language-Dataset of knowing why texts are similar and thus to assimilate these similarities to plagiarism.", "At the moment, there are five classes of approaches for cross-language plagiarism detection.", "The aim of each method is to estimate if two textual units in different languages express the same message or not.", "Figure 1 presents a taxonomy of Potthast et al.", "(2011) , enriched by the study of Danilova (2013) , of the different cross-language plagiarism detection methods grouped by class of approaches.", "We only describe below the state-of-the-art methods that we evaluate in the paper, one for each class of approaches (those in bold in the Figure 1 ).", "Cross-Language Character N-Gram (CL-CnG) is based on Mcnamee and Mayfield (2004) model.", "We use the CL-C3G Potthast et al.", "(2011)'s implementation.", "Only spaces and alphanumeric characters are kept.", "Any other diacritic or symbol is deleted and the texts are lower-cased.", "The texts are then segmented into 3-grams (sequences of 3 contiguous characters) and transformed into tf.idf vectors of character 3-grams.", "The metric used to compare two vectors is the cosine similarity.", "Cross-Language Conceptual Thesaurus-based Similarity (CL-CTS) aims to measure the semantic similarity using abstract concepts from words in textual units.", "We reuse the idea of Pataki (2012) which, for each sentence, build a bag-ofwords by getting all the available translations of each word of the sentence.", "For that, we use a linked lexical resource called DBNary (Sérasset, 2015) .", "The bag-of-words of a sentence is the merge of the bag-of-words of the words of the sentence.", "After, we use the Jaccard distance (Jaccard, 1912) with fuzzy matching between two bag-ofwords to measure the similarity between two sentences.", "Cross-Language Alignment-based Similarity Analysis (CL-ASA) was introduced for the first time by Barrón-Cedeño et al.", "(2008) and developed subsequently by Pinto et al.", "(2009) .", "The model aims to determinate how a textual unit is potentially the translation of another textual unit using bilingual unigram dictionary which contains translations pairs (and their probabilities) extracted from a parallel corpus.", "Our lexical dictionary is calculated applying the IBM-1 model Danilova (2013) , of different approaches for cross-language similarity detection.", "(Brown et al., 1993) on the concatenation of TED 4 
(Cettolo et al., 2012) and News 5 parallel corpora.", "We reuse the implementation of Pinto et al.", "(2009) that proposed a formula that factored the alignment function.", "MT-Based Models Cross-Language Explicit Semantic Analysis (CL-ESA) is based on the explicit semantic analysis model introduced for the first time by Gabrilovich and Markovitch (2007) , which represents the meaning of a document by a vector based on the vocabulary derived from Wikipedia, to find a document within a corpus.", "It was reused by Potthast et al.", "(2008) in the context of cross-language document retrieval.", "Our implementation uses a part of Wikipedia, from which our test data was removed, to build the vector representations of the texts.", "Translation + Monolingual Analysis (T+MA) consists in translating suspect plagiarized text back into the same language of source text, in order to operate a monolingual comparison between them.", "We use the Muhr et al.", "(2010) 's implementation which consists in replacing each word of one text by its most likely translations in the language of the other text, leading to a bags-of-words.", "We use DBNary (Sérasset, 2015) to get the translations.", "The metric used to compare two texts is a monolingual matching based on strict intersection of bags-of-words.", "More recently, SemEval-2016 (Agirre et al., 2016) proposed a new subtask on evaluation of cross-lingual semantic textual similarity.", "Despite the fact that it was the first year that this subtask was attempted, there were 26 submissions from 10 teams.", "Most of the submissions relied on a machine translation step followed by a monolingual semantic similarity, but 4 teams tried to use learned vector representations (on words or sentences) combined with machine translation confidence (for instance the submission of Lo et al.", "(2016) or Ataman et al.", "(2016) ).", "The method that achieved the best performance (Brychcin and Svoboda, 2016 ) was a supervised system built on a word alignment-based method proposed by Sultan et al.", "(2015) .", "This very recent method is, however, not evaluated in this paper.", "Evaluation Protocol We apply the same evaluation protocol as in Ferrero et al.", "(2016)'s paper.", "We build a distance matrix of size N x M , with M = 1,000 and N = |S| where S is the evaluated sub-corpus.", "Each textual unit of S is compared to itself (actually, since this is cross-lingual similarity detection, each source language unit is compared to its corresponding unit in the target language) and to M -1 other units randomly selected from S. 
The same unit may be selected several times.", "Then, a matching score for each comparison performed is obtained, leading to the distance matrix.", "Thresholding on the matrix is applied to find the threshold giving the best F 1 score.", "The F 1 score is the harmonic mean of precision and recall.", "Precision is defined as the proportion of relevant matches (similar crosslanguage units) retrieved among all the matches retrieved.", "Recall is the proportion of relevant matches retrieved among all the relevant matches to retrieve.", "Each method is applied on each subcorpus for chunk and sentence granularities.", "For each configuration (i.e.", "a particular method applied on a particular sub-corpus considering a particular granularity), 10 folds are carried out by changing the M selected units.", "Investigation of Cross-Language Similarity Performances 5.1 Across Language Pairs Table 3 brings together the performances of all methods on all sub-corpora for each pair of languages at chunk and sentence level.", "In both sub-tables, at chunk and sentence level, the overall F 1 score over all sub-corpora of one method in one particular language pair is given.", "As a preliminary remark, one should note that CL-C3G and CL-ESA lead to the same results for a given language pair (same performance if we reverse source and target languages) due to their symmetrical property.", "Another remark we can make is that methods are consistent across language pairs: best performing methods are mostly the same, whatever the language pair considered.", "This is confirmed by the calculation of the Pearson correlation between performances of different pairs of languages, from Table 3 and reported in Table 4 .", "Table 4 represents the Pearson correlations between the different language pairs of the overall results of all methods on all sub-corpora.", "This result is interesting because some of these methods depend on the availability of lexical resources whose quality is heterogeneous across languages.", "Despite the variation of the source and target languages, a minimum Pearson correlation of 0.940 for EN→FR vs. FR→ES, and a maximum of 0.998 for EN→FR vs. EN→ES and ES→FR vs. FR→ES at chunk level is observed (see Table 4 ).", "For the sentence granularity, it is the same order of magnitude: the maximum Pearson correlation is 0.997 for ES→EN vs. EN→ES and ES→FR vs. FR→ES, and the minimum is 0.913 for EN→ES vs. 
FR→ES (see Table 4 ).", "In average the language pair EN→FR is 0.975 correlated with the other language pairs (0.980 at chunk-level and 0.971 at sentence-level), for instance.", "This correlation suggests the possibility to tune a method on one language and apply it to another language if needed.", "Table 5 synthesizes the top 3 methods for each language pair observed in Tables 3 and 4 .", "No matter the source and target languages or the granularity, CL-C3G generally outperforms the other methods.", "Then CL-ASA, CL-CTS and T+MA are also closely efficient but their behavior depends on the granularity.", "Generally, CL-ASA is better at the chunk granularity, followed by CL-CTS and T+MA.", "On the contrary, CL-CTS and T+MA are slightly more effective at sentence granularity.", "One explanation for this is that T+MA depends on the quality of machine translation, which may have poor performance on isolated chunks, while a short length text unit benefits the CL-CTS and CL-ASA methods because of their formula which Table 4 : Pearson correlations of the overall F 1 score over all sub-corpora of all methods between the different language pairs (EN: English; FR: French; ES: Spanish).", "will tend to minimize the number of false positives in this case.", "Anyway, despite these differences in ranking, the gap in term of performance values is small between these closest methods.", "For instance, we can see that when CL-CTS is more efficient than CL-C3G (ES→FR column at sentence level in Table 3 and Table 5 (b)), the difference of performance is very small (0.0068).", "Table 6 shows the Pearson correlations of the results (of all methods on all sub-corpora) by language pair between the chunk and the sentence granularity (correlations calculated from Table 3 , between the EN→FR column at chunk level with the EN→FR column at sentence level, and so on).", "We can see a strong Pearson correlation of the performances on the language pair between the chunk and the sentence granularity (an average of 0.9, with 0.907 for the EN→FR pair, for instance).", "This proves that all methods behave along a simi- lar trend at chunk and at sentence level, regardless of the languages on which they are used.", "However, we can see in Table 7 that if we collect correlation scores separately for each method (on all sub-corpora, on all language pairs) between chunk Table 7 : Pearson correlations of the results on all sub-corpora on all language pairs, between the chunk and the sentence granularity, by methods (calculated from Table 3 ).", "and sentence granularity performances (correlations also calculated from Table 3 , between the CL-C3G line at chunk level with the CL-C3G line at sentence level, and so on), we notice that some methods exhibit a different behavior at both chunk and sentence granularities: for instance, this is the case for CL-ASA which seems to be really better at chunk level.", "In conclusion, we can say that the methods presented here may behave slightly differently depending on the text unit considered (chunk or sentence) but they behave practically the same no matter the languages of the compared texts are (as long as enough lexical resources are available for dealing with these languages).", "Detailed Analysis for English-French The previous sub-section has shown a consistent behavior of methods across language pairs (strongly consistent) and granularities (less strongly consistent).", "For this reason, we now propose a detailed analysis for different sub-corpora, for the English-French language pair 
- at chunk and sentence level - only.", "Providing these results for all language pairs and granularities would take too much space.", "Moreover, we also run those state-of-the-art methods on the dataset of the Spanish-English cross-lingual Semantic Textual Similarity task of SemEval-2016 (Agirre et al., 2016) and SemEval-2017 (Cer et al., 2017), and propose a shallower but equally rigorous analysis.", "However, all those results are also made available as supplementary material on our paper Web page.", "Table 8 shows the performances of the methods on the EN→FR sub-corpora.", "As mentioned earlier, CL-C3G is in general the most effective method.", "CL-ESA seems to show better results on comparable corpora, like Wikipedia.", "In contrast, CL-ASA obtains better results on parallel corpora such as the JRC or Europarl collections.", "CL-CTS and T+MA are quite efficient and versatile too.", "It is also interesting to note that the results of the methods are well correlated between certain types of sub-corpora.", "For instance, the Pearson correlation of the performances of all methods between the TALN sub-corpus and the APR sub-corpus is 0.982 at the chunk level, and 0.937 at the sentence level.", "This means that a method could be optimized on a particular corpus (for instance APR) and applied efficiently on another corpus (for instance TALN, which is made of scientific conference papers).", "Figure 2: Distribution histograms of some state-of-the-art methods for 1000 positive and 1000 negative (mis)matches.", "The X-axis represents the similarity score (in percentage) computed by the method, and the Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "Beyond their capacity to correctly predict a (mis)match, an interesting feature of the methods is their clustering capacity, i.e.", "their ability to correctly separate the positives (cross-lingual semantically similar textual units) from the negatives (textual units with different meanings) in order to minimize the doubt in the classification.", "Table 9: Precision (P), Recall (R) and F1 score, reached at a certain threshold (T), of some state-of-the-art methods for a data subset made with 1000 positive and 1000 negative (mis)matches - 10-fold validation.", "To verify this phenomenon, we conducted another experiment with a new protocol.", "We built a data subset by concatenating some documents of the previously presented dataset (Ferrero et al., 2016).", "More precisely, we used 200 pairs from each sub-corpus at sentence level only.", "We compared 1000 English textual units to their corresponding unit in French, and to one other (not relevant) French unit.", "So, each English textual unit must lead to exactly one match and one mismatch, i.e.", "in the end, we have exactly 1000 matches and 1000 mismatches for a run.", "We repeat this experiment 10 times for each method, leading to 10 folds for each method.", "The results of this experiment are reported in Table 9, which shows the average over the 10 folds of the Precision (P), the Recall (R) and the F1 score of some state-of-the-art methods, reached at a certain threshold (T).", "The results are also reported in Figure 2, in the form of distribution histograms of the evaluated methods for 1000 positive and 1000 negative (mis)matches.", "The X-axis represents the similarity score (in percentage) computed by the method, and the Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "The distribution histograms in Figure 2 highlight the fact that each method has its own fingerprint: even if two methods look equivalent in terms of performance (see Table 9), their clustering capacity, and thus the distribution of their (mis)matches, can be different.", "For instance, we can see that a random distribution is a very bad distribution (Figure 2 (a)).", "We can also see that CL-C3G has a narrow distribution of negatives and a broad distribution for positives (Figure 2 (c)), whereas the opposite is true for CL-ASA (Figure 2 (e)).", "Table 9 confirms this phenomenon by the fact that the decision threshold is very different for CL-ASA (0.762) compared to the other methods (around 0.1).", "This means that CL-ASA discriminates the positives more accurately than the negatives, whereas it seems to be the opposite for the other methods.", "For this reason, we can make the assumption that some methods are complementary, due to their different fingerprints.", "These behaviors suggest that fusion between these methods (notably decision-tree-based fusion) should lead to very promising results.", "Conclusion We conducted a deep investigation of cross-language plagiarism detection methods on a challenging dataset.", "Our results have shown a common behavior of methods across different language pairs.", "We revealed strong correlations across languages but also across the text units considered.", "This means that when a method is more effective than another on a sufficiently large dataset, it is generally more effective in any other case.", "This also means that if a method is efficient on a particular language pair, it will be similarly efficient on another language pair as long as enough lexical resources are available for these languages.", "We also investigated the behavior of the methods across the different types of texts on a particular language pair: English-French.", "We revealed strong correlations across types of texts.", "This means that a method could be optimized on a particular corpus and applied efficiently on another corpus.", "Finally, we have shown that methods behave differently in clustering matched and mismatched units, even if they seem similar in performance.", "This opens new possibilities for their combination or fusion.", "More results supporting these facts are provided as supplementary material 6 ." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.2", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Overview of State-of-the-Art Methods", "Evaluation Protocol", "Investigation of Cross-Language", "Detailed Analysis for English-French", "Conclusion" ] }
GEM-SciDuet-train-62#paper-1126#slide-5
First Experiment: Evaluation Protocol
We compared each textual unit to its corresponding unit in another language and to 999 other randomly selected units; we threshold the obtained distance matrix to find the threshold giving the best F1 score; we repeat these two steps 10 times, leading to a 10-fold validation; the final value is the average of the 10 F1 scores.
We compared each textual unit to its corresponding unit in another language and to 999 other randomly selected units; we threshold the obtained distance matrix to find the threshold giving the best F1 score; we repeat these two steps 10 times, leading to a 10-fold validation; the final value is the average of the 10 F1 scores.
[]
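The evaluation protocol summarized in the slide above can be expressed as a short driver. The sketch below is one possible reading of that protocol, not the authors' implementation: it reuses best_threshold_prf from the earlier snippet, `similarity` is a placeholder for any of the compared methods (CL-C3G, CL-CTS, etc.), and units_src/units_tgt are assumed to be aligned lists of textual units.

```python
import random
import numpy as np

def evaluate_method(units_src, units_tgt, similarity, m=1000, folds=10, seed=0):
    # One configuration of the protocol: each source unit is compared to its
    # aligned target unit and to m-1 randomly selected target units (the same
    # unit may be drawn several times); the score matrix is then thresholded
    # at the value giving the best F1. Returns the average F1 over the folds.
    rng = random.Random(seed)
    fold_f1 = []
    for _ in range(folds):
        scores, labels = [], []
        for i, src in enumerate(units_src):
            candidates = [i] + [rng.randrange(len(units_tgt)) for _ in range(m - 1)]
            for j in candidates:
                scores.append(similarity(src, units_tgt[j]))
                labels.append(1 if j == i else 0)
        # best_threshold_prf is defined in the earlier sketch.
        _, _, _, f1 = best_threshold_prf(np.array(scores), np.array(labels))
        fold_f1.append(f1)
    return float(np.mean(fold_f1))
```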
GEM-SciDuet-train-62#paper-1126#slide-6
1126
Deep Investigation of Cross-Language Plagiarism Detection Methods
This paper is a deep investigation of cross-language plagiarism detection methods on a recently introduced open dataset, which contains parallel and comparable collections of documents with multiple characteristics (different genres, languages and sizes of texts). We investigate cross-language plagiarism detection methods for 6 language pairs on 2 granularities of text units in order to draw robust conclusions on the best methods, while deeply analyzing correlations across document styles and languages.
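As a concrete illustration of CL-C3G, the overall best-performing method evaluated in this paper, here is a minimal scikit-learn sketch. The normalization follows the paper's description (keep only spaces and alphanumeric characters, lower-case the text, delete any other diacritic or symbol); fitting a single vectorizer on the union of both sides is a choice of this sketch, not a detail stated by the authors.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def normalize(text):
    # Keep only alphanumeric characters and spaces; lower-case everything.
    return re.sub(r"[^0-9a-z ]", "", text.lower())

def cl_c3g_scores(src_units, tgt_units):
    # tf.idf vectors of character 3-grams, compared with cosine similarity.
    # Returns a matrix: one row per source unit, one column per target unit.
    vec = TfidfVectorizer(analyzer="char", ngram_range=(3, 3))
    vec.fit([normalize(u) for u in src_units + tgt_units])
    return cosine_similarity(
        vec.transform([normalize(u) for u in src_units]),
        vec.transform([normalize(u) for u in tgt_units]),
    )
```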
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154 ], "paper_content_text": [ "Introduction Plagiarism is a very significant problem nowadays, specifically in higher education institutions.", "In monolingual context, this problem is rather well treated by several recent researches (Potthast et al., 2014) .", "Nevertheless, the expansion of the Internet, which facilitates access to documents throughout the world and to increasingly efficient (freely available) machine translation tools, helps to spread cross-language plagiarism.", "Crosslanguage plagiarism means plagiarism by translation, i.e.", "a text has been plagiarized while being translated (manually or automatically).", "The challenge in detecting this kind of plagiarism is that the suspicious document is no longer in the same language of its source.", "In this relatively new field of research, no systematic evaluation of the main methods, on several language pairs, for different text granularities and for different text genres, has been proposed yet.", "This is what we propose in this paper.", "Contribution.", "The paper focus is on crosslanguage semantic textual similarity detection which is the main part (with source retrieval) in cross-language plagiarism detection.", "The evaluation dataset used (Ferrero et al., 2016) allows us to run a large amount of experiments and analyses.", "To our knowledge, this is the first time that full potential of such a diverse dataset is used for benchmarking.", "So, the paper main contribution is a systematic evaluation of cross-language similarity detection methods (using in plagiarism detection) on different languages, sizes and genres of texts through a reproducible evaluation protocol.", "Robust conclusions are derived on the best methods while deeply analyzing correlations across document styles and languages.", "Due to space limitations, we only provide a subset of our experiments in the paper while more result tables and correlation analyses are provided as supplementary material on a Web link 1 .", "Outline.", "After presenting the dataset used for our study in section 2, and reviewing the stateof-the-art methods of cross-language plagiarism detection that we evaluate in section 3, we describe the evaluation protocol employed in section 4.", "Then, section 5.1 presents the correla-tion of the methods across language pairs, while section 5.2 presents a detailed analysis on only English-French pair.", "Finally, section 6 concludes this work and gives a few perspectives.", "Dataset The reference dataset used during our study is the new dataset 2 recently introduced by Ferrero et al.", "(2016) .", "The dataset was specially designed for a rigorous evaluation of cross-language textual similarity detection.", "The different characteristics of the dataset are synthesized in Table 1 , while Table 2 presents the number of aligned units by subcorpus and by granularity.", 
"More precisely, the characteristics of the dataset are the following: • it is multilingual: it contains French, English and Spanish texts; • it proposes cross-language alignment information at different granularities: document level, sentence level and chunk level; • it is based on both parallel and comparable corpora (mix of Wikipedia, scientific conference papers, amazon product reviews, Europarl and JRC); • it contains both human and machine translated texts; • it contains different percentages of named entities; • part of it has been obfuscated (to make the cross-language similarity detection more complicated) while the rest remains without noise; • the documents were written and translated by multiple types of authors (from average to professionals); • it covers various fields.", "Overview of State-of-the-Art Methods Textual similarity detection methods are not exactly methods to detect plagiarism.", "Plagiarism is a statement that someone copied text deliberately without attribution, while these methods only detect textual similarities.", "There is no way 2 https://github.com/FerreroJeremy/ Cross-Language-Dataset of knowing why texts are similar and thus to assimilate these similarities to plagiarism.", "At the moment, there are five classes of approaches for cross-language plagiarism detection.", "The aim of each method is to estimate if two textual units in different languages express the same message or not.", "Figure 1 presents a taxonomy of Potthast et al.", "(2011) , enriched by the study of Danilova (2013) , of the different cross-language plagiarism detection methods grouped by class of approaches.", "We only describe below the state-of-the-art methods that we evaluate in the paper, one for each class of approaches (those in bold in the Figure 1 ).", "Cross-Language Character N-Gram (CL-CnG) is based on Mcnamee and Mayfield (2004) model.", "We use the CL-C3G Potthast et al.", "(2011)'s implementation.", "Only spaces and alphanumeric characters are kept.", "Any other diacritic or symbol is deleted and the texts are lower-cased.", "The texts are then segmented into 3-grams (sequences of 3 contiguous characters) and transformed into tf.idf vectors of character 3-grams.", "The metric used to compare two vectors is the cosine similarity.", "Cross-Language Conceptual Thesaurus-based Similarity (CL-CTS) aims to measure the semantic similarity using abstract concepts from words in textual units.", "We reuse the idea of Pataki (2012) which, for each sentence, build a bag-ofwords by getting all the available translations of each word of the sentence.", "For that, we use a linked lexical resource called DBNary (Sérasset, 2015) .", "The bag-of-words of a sentence is the merge of the bag-of-words of the words of the sentence.", "After, we use the Jaccard distance (Jaccard, 1912) with fuzzy matching between two bag-ofwords to measure the similarity between two sentences.", "Cross-Language Alignment-based Similarity Analysis (CL-ASA) was introduced for the first time by Barrón-Cedeño et al.", "(2008) and developed subsequently by Pinto et al.", "(2009) .", "The model aims to determinate how a textual unit is potentially the translation of another textual unit using bilingual unigram dictionary which contains translations pairs (and their probabilities) extracted from a parallel corpus.", "Our lexical dictionary is calculated applying the IBM-1 model Danilova (2013) , of different approaches for cross-language similarity detection.", "(Brown et al., 1993) on the concatenation of TED 4 
(Cettolo et al., 2012) and News 5 parallel corpora.", "We reuse the implementation of Pinto et al.", "(2009) that proposed a formula that factored the alignment function.", "MT-Based Models Cross-Language Explicit Semantic Analysis (CL-ESA) is based on the explicit semantic analysis model introduced for the first time by Gabrilovich and Markovitch (2007) , which represents the meaning of a document by a vector based on the vocabulary derived from Wikipedia, to find a document within a corpus.", "It was reused by Potthast et al.", "(2008) in the context of cross-language document retrieval.", "Our implementation uses a part of Wikipedia, from which our test data was removed, to build the vector representations of the texts.", "Translation + Monolingual Analysis (T+MA) consists in translating suspect plagiarized text back into the same language of source text, in order to operate a monolingual comparison between them.", "We use the Muhr et al.", "(2010) 's implementation which consists in replacing each word of one text by its most likely translations in the language of the other text, leading to a bags-of-words.", "We use DBNary (Sérasset, 2015) to get the translations.", "The metric used to compare two texts is a monolingual matching based on strict intersection of bags-of-words.", "More recently, SemEval-2016 (Agirre et al., 2016) proposed a new subtask on evaluation of cross-lingual semantic textual similarity.", "Despite the fact that it was the first year that this subtask was attempted, there were 26 submissions from 10 teams.", "Most of the submissions relied on a machine translation step followed by a monolingual semantic similarity, but 4 teams tried to use learned vector representations (on words or sentences) combined with machine translation confidence (for instance the submission of Lo et al.", "(2016) or Ataman et al.", "(2016) ).", "The method that achieved the best performance (Brychcin and Svoboda, 2016 ) was a supervised system built on a word alignment-based method proposed by Sultan et al.", "(2015) .", "This very recent method is, however, not evaluated in this paper.", "Evaluation Protocol We apply the same evaluation protocol as in Ferrero et al.", "(2016)'s paper.", "We build a distance matrix of size N x M , with M = 1,000 and N = |S| where S is the evaluated sub-corpus.", "Each textual unit of S is compared to itself (actually, since this is cross-lingual similarity detection, each source language unit is compared to its corresponding unit in the target language) and to M -1 other units randomly selected from S. 
The same unit may be selected several times.", "Then, a matching score for each comparison performed is obtained, leading to the distance matrix.", "Thresholding on the matrix is applied to find the threshold giving the best F 1 score.", "The F 1 score is the harmonic mean of precision and recall.", "Precision is defined as the proportion of relevant matches (similar crosslanguage units) retrieved among all the matches retrieved.", "Recall is the proportion of relevant matches retrieved among all the relevant matches to retrieve.", "Each method is applied on each subcorpus for chunk and sentence granularities.", "For each configuration (i.e.", "a particular method applied on a particular sub-corpus considering a particular granularity), 10 folds are carried out by changing the M selected units.", "Investigation of Cross-Language Similarity Performances 5.1 Across Language Pairs Table 3 brings together the performances of all methods on all sub-corpora for each pair of languages at chunk and sentence level.", "In both sub-tables, at chunk and sentence level, the overall F 1 score over all sub-corpora of one method in one particular language pair is given.", "As a preliminary remark, one should note that CL-C3G and CL-ESA lead to the same results for a given language pair (same performance if we reverse source and target languages) due to their symmetrical property.", "Another remark we can make is that methods are consistent across language pairs: best performing methods are mostly the same, whatever the language pair considered.", "This is confirmed by the calculation of the Pearson correlation between performances of different pairs of languages, from Table 3 and reported in Table 4 .", "Table 4 represents the Pearson correlations between the different language pairs of the overall results of all methods on all sub-corpora.", "This result is interesting because some of these methods depend on the availability of lexical resources whose quality is heterogeneous across languages.", "Despite the variation of the source and target languages, a minimum Pearson correlation of 0.940 for EN→FR vs. FR→ES, and a maximum of 0.998 for EN→FR vs. EN→ES and ES→FR vs. FR→ES at chunk level is observed (see Table 4 ).", "For the sentence granularity, it is the same order of magnitude: the maximum Pearson correlation is 0.997 for ES→EN vs. EN→ES and ES→FR vs. FR→ES, and the minimum is 0.913 for EN→ES vs. 
FR→ES (see Table 4 ).", "In average the language pair EN→FR is 0.975 correlated with the other language pairs (0.980 at chunk-level and 0.971 at sentence-level), for instance.", "This correlation suggests the possibility to tune a method on one language and apply it to another language if needed.", "Table 5 synthesizes the top 3 methods for each language pair observed in Tables 3 and 4 .", "No matter the source and target languages or the granularity, CL-C3G generally outperforms the other methods.", "Then CL-ASA, CL-CTS and T+MA are also closely efficient but their behavior depends on the granularity.", "Generally, CL-ASA is better at the chunk granularity, followed by CL-CTS and T+MA.", "On the contrary, CL-CTS and T+MA are slightly more effective at sentence granularity.", "One explanation for this is that T+MA depends on the quality of machine translation, which may have poor performance on isolated chunks, while a short length text unit benefits the CL-CTS and CL-ASA methods because of their formula which Table 4 : Pearson correlations of the overall F 1 score over all sub-corpora of all methods between the different language pairs (EN: English; FR: French; ES: Spanish).", "will tend to minimize the number of false positives in this case.", "Anyway, despite these differences in ranking, the gap in term of performance values is small between these closest methods.", "For instance, we can see that when CL-CTS is more efficient than CL-C3G (ES→FR column at sentence level in Table 3 and Table 5 (b)), the difference of performance is very small (0.0068).", "Table 6 shows the Pearson correlations of the results (of all methods on all sub-corpora) by language pair between the chunk and the sentence granularity (correlations calculated from Table 3 , between the EN→FR column at chunk level with the EN→FR column at sentence level, and so on).", "We can see a strong Pearson correlation of the performances on the language pair between the chunk and the sentence granularity (an average of 0.9, with 0.907 for the EN→FR pair, for instance).", "This proves that all methods behave along a simi- lar trend at chunk and at sentence level, regardless of the languages on which they are used.", "However, we can see in Table 7 that if we collect correlation scores separately for each method (on all sub-corpora, on all language pairs) between chunk Table 7 : Pearson correlations of the results on all sub-corpora on all language pairs, between the chunk and the sentence granularity, by methods (calculated from Table 3 ).", "and sentence granularity performances (correlations also calculated from Table 3 , between the CL-C3G line at chunk level with the CL-C3G line at sentence level, and so on), we notice that some methods exhibit a different behavior at both chunk and sentence granularities: for instance, this is the case for CL-ASA which seems to be really better at chunk level.", "In conclusion, we can say that the methods presented here may behave slightly differently depending on the text unit considered (chunk or sentence) but they behave practically the same no matter the languages of the compared texts are (as long as enough lexical resources are available for dealing with these languages).", "Detailed Analysis for English-French The previous sub-section has shown a consistent behavior of methods across language pairs (strongly consistent) and granularities (less strongly consistent).", "For this reason, we now propose a detailed analysis for different sub-corpora, for the English-French language pair 
-at chunk and sentence level -only.", "Providing these results for all language pairs and granularities would take too much space.", "Moreover, we also run those state-of-the-art methods on the dataset of the Spanish-English cross-lingual Semantic Textual Similarity task of SemEval-2016 (Agirre et al., 2016) and SemEval-2017 (Cer et al., 2017 , and propose a shallower but equally rigorous analysis.", "However, all those results are also made available as supplementary material on our paper Web page.", "Table 8 shows the performances of methods on the EN→FR sub-corpora.", "As mentioned earlier, CL-C3G is in general the most effective method.", "CL-ESA seems to show better results on comparable corpora, like Wikipedia.", "In contrast, CL-ASA obtains better results on parallel corpora such as JRC or Europarl collections.", "CL-CTS and T+MA are pretty efficient and versatile too.", "It is also interesting to note that the results of the methods are well correlated between certain types of sub-corpora.", "For instance, the Pearson correlation of the performances of all methods between the TALN sub-corpus and the APR sub-corpus, is 0.982 at the chunk level, and 0.937 at the sentence level.", "This means that a method could be optimized on a particular corpus (for instance APR) and applied efficiently on another corpus (for instance TALN which is made of scientific conference papers).", "Figure 2 : Distribution histograms of some state-of-the-art methods for 1000 positives and 1000 negatives (mis)matches.", "X-axis represents the similarity score (in percentage) computed by the method, and Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "Beyond their capacity to correctly predict a (mis)match, an interesting feature of the methods is their clustering capacity, i.e.", "their ability to cor-rectly separate the positives (cross-lingual semantic textual similar units) and the negatives (textual units with different meaning) in order to minimize Table 9 : Precision (P), Recall (R) and F 1 score, reached at a certain threshold (T), of some stateof-the-art methods for a data subset made with 1000 positives and 1000 negatives (mis)matches -10 folds validation.", "the doubts on the classification.", "To verify this phenomenon, we conducted another experience with a new protocol.", "We built a data subset by concatenating some documents of the previously presented dataset (Ferrero et al., 2016) .", "More precisely we used 200 pairs of each sub-corpora at sentence level only.", "We compared 1000 English textual units to their corresponding unit in French, and to one other (not relevant) French unit.", "So, each English textual unit must strictly leads to one match and one mismatch, i.e.", "in the end, we have exactly 1000 matches and 1000 mismatches for a run.", "We repeat this experiment 10 times for each method, leading to 10 folds for each method.", "The results of this experiment are reported on Table 9 , that shows the average for the 10 folds of the Precision (P), the Recall (R) and the F 1 score of some state-of-the-art methods, reached at a certain threshold (T).", "The results are also reported in Figure 2 , in the form of distribution histograms of the evaluated methods for 1000 positives and 1000 negatives (mis)matches.", "X-axis represents the similarity score (in percentage) computed 
by the method, and Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "Distribution histograms on Figure 2 highlights the fact that each method has its own fingerprint: even if two methods looks equivalent in term of performances (see Table 9 ), their clustering capacity, and so the distribution of their (mis)matches can be different.", "For instance, we can see that a random distribution is a very bad distribution (Figure 2 (a) ).", "We can also see that CL-C3G has a narrow distribution of negatives and a broad distribution for positives (Figure 2 (c) ), whereas the opposite is true for CL-ASA (Figure 2 (e) ).", "Table 9 confirms this phenomenon by the fact that the decision threshold is very different for CL-ASA (0.762) compared to the other methods (around 0.1).", "This means that CL-ASA discriminates more correctly the positives that the negatives, when it seems to be the opposite for the other methods.", "For this reason, we can make the assumption that some methods are complementary, due to their different fingerprint.", "These behaviors suggest that fusion between these methods (notably decision tree based fusion) should lead to very promising results.", "Conclusion We conducted a deep investigation of crosslanguage plagiarism detection methods on a challenging dataset.", "Our results have shown a common behavior of methods across different language pairs.", "We revealed strong correlations across languages but also across text units considered.", "This means that when a method is more effective than another on a sufficiently large dataset, it is generally more effective in any other case.", "This also means that if a method is efficient on a particular language pair, it will be similarly efficient on another language pair as long as enough lexical resources are available for these languages.", "We also investigated the behavior of the methods through the different types of texts on a particular language pair: English-French.", "We revealed strong correlations across types of texts.", "This means that a method could be optimized on a particular corpus and applied efficiently on another corpus.", "Finally, we have shown that methods behave differently in clustering match and mismatched units, even if they seem similar in performance.", "This opens new possibilities for their combination or fusion.", "More results supporting these facts are provided as supplementary material 6 ." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.2", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Overview of State-of-the-Art Methods", "Evaluation Protocol", "Investigation of Cross-Language", "Detailed Analysis for English-French", "Conclusion" ] }
GEM-SciDuet-train-62#paper-1126#slide-6
Results Across Language Pairs
Methods ENFR FREN ENES ESEN ESFR FRES Table 1: Overall F1 score over all sub-corpora of the state-of-the-art methods for each language pair (EN: English; FR: French; ES: Spanish). (a) Chunk granularity (b) Sentence granularity Table 2: Top 3 methods by source and target language. Strong correlation between languages! ENFR FREN ENES ESEN ESFR FRES Overall Lang. Pair ENFR FREN ENES ESEN ESFR FRES Table 3: Pearson correlations of the overall F1 score over all sub-corpora of all methods between the different language pairs (EN: English; FR: French; ES: Spanish). Strong correlation between granularities! Table 4: Pearson correlations of the results of all methods on all sub-corpora, between the chunk and the sentence granularity, by language pair (EN: English; FR: French; ES: Spanish) Table 5: Pearson correlations of the results on all sub-corpora on all language pairs, between the chunk and the sentence granularity, by methods (calculated from Table 1).
Methods ENFR FREN ENES ESEN ESFR FRES Table 1: Overall F1 score over all sub-corpora of the state-of-the-art methods for each language pair (EN: English; FR: French; ES: Spanish). (a) Chunk granularity (b) Sentence granularity Table 2: Top 3 methods by source and target language. Strong correlation between languages! ENFR FREN ENES ESEN ESFR FRES Overall Lang. Pair ENFR FREN ENES ESEN ESFR FRES Table 3: Pearson correlations of the overall F1 score over all sub-corpora of all methods between the different language pairs (EN: English; FR: French; ES: Spanish). Strong correlation between granularities! Table 4: Pearson correlations of the results of all methods on all sub-corpora, between the chunk and the sentence granularity, by language pair (EN: English; FR: French; ES: Spanish) Table 5: Pearson correlations of the results on all sub-corpora on all language pairs, between the chunk and the sentence granularity, by methods (calculated from Table 1).
[]
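The cross-language-pair consistency highlighted in the slide above is measured with the Pearson correlation between the per-method F1 vectors of two language pairs. A toy check follows; the numbers are illustrative placeholders, not values taken from the paper's tables.

```python
from scipy.stats import pearsonr

# Per-method overall F1 scores for two language pairs, in the same method
# order (e.g., CL-C3G, CL-CTS, CL-ASA, CL-ESA, T+MA). Illustrative values only.
f1_en_fr = [0.83, 0.76, 0.75, 0.61, 0.74]
f1_en_es = [0.82, 0.75, 0.74, 0.60, 0.73]

r, p_value = pearsonr(f1_en_fr, f1_en_es)
print(f"Pearson correlation, EN-FR vs. EN-ES methods: {r:.3f} (p = {p_value:.3g})")
```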
GEM-SciDuet-train-62#paper-1126#slide-7
1126
Deep Investigation of Cross-Language Plagiarism Detection Methods
This paper is a deep investigation of cross-language plagiarism detection methods on a recently introduced open dataset, which contains parallel and comparable collections of documents with multiple characteristics (different genres, languages and sizes of texts). We investigate cross-language plagiarism detection methods for 6 language pairs on 2 granularities of text units in order to draw robust conclusions on the best methods, while deeply analyzing correlations across document styles and languages.
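For the conceptual-thesaurus approach (CL-CTS) evaluated in this paper, a stripped-down sketch follows. The bilingual dictionaries are stand-ins for a DBNary-like resource mapping each word to its set of translations in a shared pivot vocabulary, and the fuzzy matching used by the authors is replaced by a strict intersection for brevity.

```python
def translation_bag(sentence, bilingual_dict):
    # Bag-of-words built from all available translations of each word;
    # words with no dictionary entry are kept as-is (a simplifying choice).
    bag = set()
    for word in sentence.lower().split():
        bag |= set(bilingual_dict.get(word, {word}))
    return bag

def cl_cts_score(sentence_a, sentence_b, dict_a, dict_b):
    # Jaccard similarity between the two translation bags (the paper uses
    # the Jaccard distance with fuzzy matching; this is the strict variant).
    bag_a = translation_bag(sentence_a, dict_a)
    bag_b = translation_bag(sentence_b, dict_b)
    union = bag_a | bag_b
    return len(bag_a & bag_b) / len(union) if union else 0.0
```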
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154 ], "paper_content_text": [ "Introduction Plagiarism is a very significant problem nowadays, specifically in higher education institutions.", "In monolingual context, this problem is rather well treated by several recent researches (Potthast et al., 2014) .", "Nevertheless, the expansion of the Internet, which facilitates access to documents throughout the world and to increasingly efficient (freely available) machine translation tools, helps to spread cross-language plagiarism.", "Crosslanguage plagiarism means plagiarism by translation, i.e.", "a text has been plagiarized while being translated (manually or automatically).", "The challenge in detecting this kind of plagiarism is that the suspicious document is no longer in the same language of its source.", "In this relatively new field of research, no systematic evaluation of the main methods, on several language pairs, for different text granularities and for different text genres, has been proposed yet.", "This is what we propose in this paper.", "Contribution.", "The paper focus is on crosslanguage semantic textual similarity detection which is the main part (with source retrieval) in cross-language plagiarism detection.", "The evaluation dataset used (Ferrero et al., 2016) allows us to run a large amount of experiments and analyses.", "To our knowledge, this is the first time that full potential of such a diverse dataset is used for benchmarking.", "So, the paper main contribution is a systematic evaluation of cross-language similarity detection methods (using in plagiarism detection) on different languages, sizes and genres of texts through a reproducible evaluation protocol.", "Robust conclusions are derived on the best methods while deeply analyzing correlations across document styles and languages.", "Due to space limitations, we only provide a subset of our experiments in the paper while more result tables and correlation analyses are provided as supplementary material on a Web link 1 .", "Outline.", "After presenting the dataset used for our study in section 2, and reviewing the stateof-the-art methods of cross-language plagiarism detection that we evaluate in section 3, we describe the evaluation protocol employed in section 4.", "Then, section 5.1 presents the correla-tion of the methods across language pairs, while section 5.2 presents a detailed analysis on only English-French pair.", "Finally, section 6 concludes this work and gives a few perspectives.", "Dataset The reference dataset used during our study is the new dataset 2 recently introduced by Ferrero et al.", "(2016) .", "The dataset was specially designed for a rigorous evaluation of cross-language textual similarity detection.", "The different characteristics of the dataset are synthesized in Table 1 , while Table 2 presents the number of aligned units by subcorpus and by granularity.", 
"More precisely, the characteristics of the dataset are the following: • it is multilingual: it contains French, English and Spanish texts; • it proposes cross-language alignment information at different granularities: document level, sentence level and chunk level; • it is based on both parallel and comparable corpora (mix of Wikipedia, scientific conference papers, amazon product reviews, Europarl and JRC); • it contains both human and machine translated texts; • it contains different percentages of named entities; • part of it has been obfuscated (to make the cross-language similarity detection more complicated) while the rest remains without noise; • the documents were written and translated by multiple types of authors (from average to professionals); • it covers various fields.", "Overview of State-of-the-Art Methods Textual similarity detection methods are not exactly methods to detect plagiarism.", "Plagiarism is a statement that someone copied text deliberately without attribution, while these methods only detect textual similarities.", "There is no way 2 https://github.com/FerreroJeremy/ Cross-Language-Dataset of knowing why texts are similar and thus to assimilate these similarities to plagiarism.", "At the moment, there are five classes of approaches for cross-language plagiarism detection.", "The aim of each method is to estimate if two textual units in different languages express the same message or not.", "Figure 1 presents a taxonomy of Potthast et al.", "(2011) , enriched by the study of Danilova (2013) , of the different cross-language plagiarism detection methods grouped by class of approaches.", "We only describe below the state-of-the-art methods that we evaluate in the paper, one for each class of approaches (those in bold in the Figure 1 ).", "Cross-Language Character N-Gram (CL-CnG) is based on Mcnamee and Mayfield (2004) model.", "We use the CL-C3G Potthast et al.", "(2011)'s implementation.", "Only spaces and alphanumeric characters are kept.", "Any other diacritic or symbol is deleted and the texts are lower-cased.", "The texts are then segmented into 3-grams (sequences of 3 contiguous characters) and transformed into tf.idf vectors of character 3-grams.", "The metric used to compare two vectors is the cosine similarity.", "Cross-Language Conceptual Thesaurus-based Similarity (CL-CTS) aims to measure the semantic similarity using abstract concepts from words in textual units.", "We reuse the idea of Pataki (2012) which, for each sentence, build a bag-ofwords by getting all the available translations of each word of the sentence.", "For that, we use a linked lexical resource called DBNary (Sérasset, 2015) .", "The bag-of-words of a sentence is the merge of the bag-of-words of the words of the sentence.", "After, we use the Jaccard distance (Jaccard, 1912) with fuzzy matching between two bag-ofwords to measure the similarity between two sentences.", "Cross-Language Alignment-based Similarity Analysis (CL-ASA) was introduced for the first time by Barrón-Cedeño et al.", "(2008) and developed subsequently by Pinto et al.", "(2009) .", "The model aims to determinate how a textual unit is potentially the translation of another textual unit using bilingual unigram dictionary which contains translations pairs (and their probabilities) extracted from a parallel corpus.", "Our lexical dictionary is calculated applying the IBM-1 model Danilova (2013) , of different approaches for cross-language similarity detection.", "(Brown et al., 1993) on the concatenation of TED 4 
(Cettolo et al., 2012) and News 5 parallel corpora.", "We reuse the implementation of Pinto et al.", "(2009) that proposed a formula that factored the alignment function.", "MT-Based Models Cross-Language Explicit Semantic Analysis (CL-ESA) is based on the explicit semantic analysis model introduced for the first time by Gabrilovich and Markovitch (2007) , which represents the meaning of a document by a vector based on the vocabulary derived from Wikipedia, to find a document within a corpus.", "It was reused by Potthast et al.", "(2008) in the context of cross-language document retrieval.", "Our implementation uses a part of Wikipedia, from which our test data was removed, to build the vector representations of the texts.", "Translation + Monolingual Analysis (T+MA) consists in translating suspect plagiarized text back into the same language of source text, in order to operate a monolingual comparison between them.", "We use the Muhr et al.", "(2010) 's implementation which consists in replacing each word of one text by its most likely translations in the language of the other text, leading to a bags-of-words.", "We use DBNary (Sérasset, 2015) to get the translations.", "The metric used to compare two texts is a monolingual matching based on strict intersection of bags-of-words.", "More recently, SemEval-2016 (Agirre et al., 2016) proposed a new subtask on evaluation of cross-lingual semantic textual similarity.", "Despite the fact that it was the first year that this subtask was attempted, there were 26 submissions from 10 teams.", "Most of the submissions relied on a machine translation step followed by a monolingual semantic similarity, but 4 teams tried to use learned vector representations (on words or sentences) combined with machine translation confidence (for instance the submission of Lo et al.", "(2016) or Ataman et al.", "(2016) ).", "The method that achieved the best performance (Brychcin and Svoboda, 2016 ) was a supervised system built on a word alignment-based method proposed by Sultan et al.", "(2015) .", "This very recent method is, however, not evaluated in this paper.", "Evaluation Protocol We apply the same evaluation protocol as in Ferrero et al.", "(2016)'s paper.", "We build a distance matrix of size N x M , with M = 1,000 and N = |S| where S is the evaluated sub-corpus.", "Each textual unit of S is compared to itself (actually, since this is cross-lingual similarity detection, each source language unit is compared to its corresponding unit in the target language) and to M -1 other units randomly selected from S. 
The same unit may be selected several times.", "Then, a matching score for each comparison performed is obtained, leading to the distance matrix.", "Thresholding on the matrix is applied to find the threshold giving the best F 1 score.", "The F 1 score is the harmonic mean of precision and recall.", "Precision is defined as the proportion of relevant matches (similar crosslanguage units) retrieved among all the matches retrieved.", "Recall is the proportion of relevant matches retrieved among all the relevant matches to retrieve.", "Each method is applied on each subcorpus for chunk and sentence granularities.", "For each configuration (i.e.", "a particular method applied on a particular sub-corpus considering a particular granularity), 10 folds are carried out by changing the M selected units.", "Investigation of Cross-Language Similarity Performances 5.1 Across Language Pairs Table 3 brings together the performances of all methods on all sub-corpora for each pair of languages at chunk and sentence level.", "In both sub-tables, at chunk and sentence level, the overall F 1 score over all sub-corpora of one method in one particular language pair is given.", "As a preliminary remark, one should note that CL-C3G and CL-ESA lead to the same results for a given language pair (same performance if we reverse source and target languages) due to their symmetrical property.", "Another remark we can make is that methods are consistent across language pairs: best performing methods are mostly the same, whatever the language pair considered.", "This is confirmed by the calculation of the Pearson correlation between performances of different pairs of languages, from Table 3 and reported in Table 4 .", "Table 4 represents the Pearson correlations between the different language pairs of the overall results of all methods on all sub-corpora.", "This result is interesting because some of these methods depend on the availability of lexical resources whose quality is heterogeneous across languages.", "Despite the variation of the source and target languages, a minimum Pearson correlation of 0.940 for EN→FR vs. FR→ES, and a maximum of 0.998 for EN→FR vs. EN→ES and ES→FR vs. FR→ES at chunk level is observed (see Table 4 ).", "For the sentence granularity, it is the same order of magnitude: the maximum Pearson correlation is 0.997 for ES→EN vs. EN→ES and ES→FR vs. FR→ES, and the minimum is 0.913 for EN→ES vs. 
FR→ES (see Table 4 ).", "In average the language pair EN→FR is 0.975 correlated with the other language pairs (0.980 at chunk-level and 0.971 at sentence-level), for instance.", "This correlation suggests the possibility to tune a method on one language and apply it to another language if needed.", "Table 5 synthesizes the top 3 methods for each language pair observed in Tables 3 and 4 .", "No matter the source and target languages or the granularity, CL-C3G generally outperforms the other methods.", "Then CL-ASA, CL-CTS and T+MA are also closely efficient but their behavior depends on the granularity.", "Generally, CL-ASA is better at the chunk granularity, followed by CL-CTS and T+MA.", "On the contrary, CL-CTS and T+MA are slightly more effective at sentence granularity.", "One explanation for this is that T+MA depends on the quality of machine translation, which may have poor performance on isolated chunks, while a short length text unit benefits the CL-CTS and CL-ASA methods because of their formula which Table 4 : Pearson correlations of the overall F 1 score over all sub-corpora of all methods between the different language pairs (EN: English; FR: French; ES: Spanish).", "will tend to minimize the number of false positives in this case.", "Anyway, despite these differences in ranking, the gap in term of performance values is small between these closest methods.", "For instance, we can see that when CL-CTS is more efficient than CL-C3G (ES→FR column at sentence level in Table 3 and Table 5 (b)), the difference of performance is very small (0.0068).", "Table 6 shows the Pearson correlations of the results (of all methods on all sub-corpora) by language pair between the chunk and the sentence granularity (correlations calculated from Table 3 , between the EN→FR column at chunk level with the EN→FR column at sentence level, and so on).", "We can see a strong Pearson correlation of the performances on the language pair between the chunk and the sentence granularity (an average of 0.9, with 0.907 for the EN→FR pair, for instance).", "This proves that all methods behave along a simi- lar trend at chunk and at sentence level, regardless of the languages on which they are used.", "However, we can see in Table 7 that if we collect correlation scores separately for each method (on all sub-corpora, on all language pairs) between chunk Table 7 : Pearson correlations of the results on all sub-corpora on all language pairs, between the chunk and the sentence granularity, by methods (calculated from Table 3 ).", "and sentence granularity performances (correlations also calculated from Table 3 , between the CL-C3G line at chunk level with the CL-C3G line at sentence level, and so on), we notice that some methods exhibit a different behavior at both chunk and sentence granularities: for instance, this is the case for CL-ASA which seems to be really better at chunk level.", "In conclusion, we can say that the methods presented here may behave slightly differently depending on the text unit considered (chunk or sentence) but they behave practically the same no matter the languages of the compared texts are (as long as enough lexical resources are available for dealing with these languages).", "Detailed Analysis for English-French The previous sub-section has shown a consistent behavior of methods across language pairs (strongly consistent) and granularities (less strongly consistent).", "For this reason, we now propose a detailed analysis for different sub-corpora, for the English-French language pair 
-at chunk and sentence level -only.", "Providing these results for all language pairs and granularities would take too much space.", "Moreover, we also run those state-of-the-art methods on the dataset of the Spanish-English cross-lingual Semantic Textual Similarity task of SemEval-2016 (Agirre et al., 2016) and SemEval-2017 (Cer et al., 2017 , and propose a shallower but equally rigorous analysis.", "However, all those results are also made available as supplementary material on our paper Web page.", "Table 8 shows the performances of methods on the EN→FR sub-corpora.", "As mentioned earlier, CL-C3G is in general the most effective method.", "CL-ESA seems to show better results on comparable corpora, like Wikipedia.", "In contrast, CL-ASA obtains better results on parallel corpora such as JRC or Europarl collections.", "CL-CTS and T+MA are pretty efficient and versatile too.", "It is also interesting to note that the results of the methods are well correlated between certain types of sub-corpora.", "For instance, the Pearson correlation of the performances of all methods between the TALN sub-corpus and the APR sub-corpus, is 0.982 at the chunk level, and 0.937 at the sentence level.", "This means that a method could be optimized on a particular corpus (for instance APR) and applied efficiently on another corpus (for instance TALN which is made of scientific conference papers).", "Figure 2 : Distribution histograms of some state-of-the-art methods for 1000 positives and 1000 negatives (mis)matches.", "X-axis represents the similarity score (in percentage) computed by the method, and Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "Beyond their capacity to correctly predict a (mis)match, an interesting feature of the methods is their clustering capacity, i.e.", "their ability to cor-rectly separate the positives (cross-lingual semantic textual similar units) and the negatives (textual units with different meaning) in order to minimize Table 9 : Precision (P), Recall (R) and F 1 score, reached at a certain threshold (T), of some stateof-the-art methods for a data subset made with 1000 positives and 1000 negatives (mis)matches -10 folds validation.", "the doubts on the classification.", "To verify this phenomenon, we conducted another experience with a new protocol.", "We built a data subset by concatenating some documents of the previously presented dataset (Ferrero et al., 2016) .", "More precisely we used 200 pairs of each sub-corpora at sentence level only.", "We compared 1000 English textual units to their corresponding unit in French, and to one other (not relevant) French unit.", "So, each English textual unit must strictly leads to one match and one mismatch, i.e.", "in the end, we have exactly 1000 matches and 1000 mismatches for a run.", "We repeat this experiment 10 times for each method, leading to 10 folds for each method.", "The results of this experiment are reported on Table 9 , that shows the average for the 10 folds of the Precision (P), the Recall (R) and the F 1 score of some state-of-the-art methods, reached at a certain threshold (T).", "The results are also reported in Figure 2 , in the form of distribution histograms of the evaluated methods for 1000 positives and 1000 negatives (mis)matches.", "X-axis represents the similarity score (in percentage) computed 
by the method, and Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "Distribution histograms on Figure 2 highlights the fact that each method has its own fingerprint: even if two methods looks equivalent in term of performances (see Table 9 ), their clustering capacity, and so the distribution of their (mis)matches can be different.", "For instance, we can see that a random distribution is a very bad distribution (Figure 2 (a) ).", "We can also see that CL-C3G has a narrow distribution of negatives and a broad distribution for positives (Figure 2 (c) ), whereas the opposite is true for CL-ASA (Figure 2 (e) ).", "Table 9 confirms this phenomenon by the fact that the decision threshold is very different for CL-ASA (0.762) compared to the other methods (around 0.1).", "This means that CL-ASA discriminates more correctly the positives that the negatives, when it seems to be the opposite for the other methods.", "For this reason, we can make the assumption that some methods are complementary, due to their different fingerprint.", "These behaviors suggest that fusion between these methods (notably decision tree based fusion) should lead to very promising results.", "Conclusion We conducted a deep investigation of crosslanguage plagiarism detection methods on a challenging dataset.", "Our results have shown a common behavior of methods across different language pairs.", "We revealed strong correlations across languages but also across text units considered.", "This means that when a method is more effective than another on a sufficiently large dataset, it is generally more effective in any other case.", "This also means that if a method is efficient on a particular language pair, it will be similarly efficient on another language pair as long as enough lexical resources are available for these languages.", "We also investigated the behavior of the methods through the different types of texts on a particular language pair: English-French.", "We revealed strong correlations across types of texts.", "This means that a method could be optimized on a particular corpus and applied efficiently on another corpus.", "Finally, we have shown that methods behave differently in clustering match and mismatched units, even if they seem similar in performance.", "This opens new possibilities for their combination or fusion.", "More results supporting these facts are provided as supplementary material 6 ." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.2", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Overview of State-of-the-Art Methods", "Evaluation Protocol", "Investigation of Cross-Language", "Detailed Analysis for English-French", "Conclusion" ] }
GEM-SciDuet-train-62#paper-1126#slide-7
Results: Detailed Analysis for English-French
CL-CTS CL-ASA CL-ESA T+MA Table 6: Average F1 scores and confidence intervals of methods applied on ENFR sub-corpora at chunk and sentence level - 10-fold validation.
CL-CTS CL-ASA CL-ESA T+MA Table 6: Average F1 scores and confidence intervals of methods applied on ENFR sub-corpora at chunk and sentence level - 10-fold validation.
[]
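Table 6 in the slide above reports average F1 scores with confidence intervals over the validation folds. The paper does not state how its intervals were computed; the sketch below uses a normal approximation as one plausible choice, with made-up per-fold scores.

```python
import math

def mean_and_ci(fold_scores, z=1.96):
    # Mean and normal-approximation 95% confidence interval over the folds
    # (10 folds in the paper's protocol).
    n = len(fold_scores)
    mean = sum(fold_scores) / n
    var = sum((x - mean) ** 2 for x in fold_scores) / (n - 1)
    half = z * math.sqrt(var / n)
    return mean, (mean - half, mean + half)

folds = [0.81, 0.83, 0.80, 0.82, 0.84, 0.81, 0.82, 0.80, 0.83, 0.82]  # illustrative
m, (lo, hi) = mean_and_ci(folds)
print(f"F1 = {m:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```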
GEM-SciDuet-train-62#paper-1126#slide-8
1126
Deep Investigation of Cross-Language Plagiarism Detection Methods
This paper is a deep investigation of cross-language plagiarism detection methods on a recently introduced open dataset, which contains parallel and comparable collections of documents with multiple characteristics (different genres, languages and sizes of texts). We investigate cross-language plagiarism detection methods for 6 language pairs on 2 granularities of text units in order to draw robust conclusions on the best methods, while deeply analyzing correlations across document styles and languages.
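The paper's conclusion suggests, as a perspective, fusing methods with complementary fingerprints, notably through decision-tree-based fusion. The authors do not implement this, so the sketch below is only one plausible reading of that idea, built on scikit-learn with hypothetical score matrices.

```python
from sklearn.tree import DecisionTreeClassifier

def fit_fusion(method_scores, labels, max_depth=4):
    # Each row stacks the similarity scores that the individual methods
    # (e.g., CL-C3G, CL-ASA, CL-CTS, T+MA) assign to one candidate pair;
    # the tree learns a joint match/mismatch decision from them.
    clf = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    clf.fit(method_scores, labels)  # method_scores: (n_pairs, n_methods)
    return clf

# Usage on held-out pairs:
# fused = fit_fusion(train_scores, train_labels).predict(test_scores)
```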
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154 ], "paper_content_text": [ "Introduction Plagiarism is a very significant problem nowadays, specifically in higher education institutions.", "In monolingual context, this problem is rather well treated by several recent researches (Potthast et al., 2014) .", "Nevertheless, the expansion of the Internet, which facilitates access to documents throughout the world and to increasingly efficient (freely available) machine translation tools, helps to spread cross-language plagiarism.", "Crosslanguage plagiarism means plagiarism by translation, i.e.", "a text has been plagiarized while being translated (manually or automatically).", "The challenge in detecting this kind of plagiarism is that the suspicious document is no longer in the same language of its source.", "In this relatively new field of research, no systematic evaluation of the main methods, on several language pairs, for different text granularities and for different text genres, has been proposed yet.", "This is what we propose in this paper.", "Contribution.", "The paper focus is on crosslanguage semantic textual similarity detection which is the main part (with source retrieval) in cross-language plagiarism detection.", "The evaluation dataset used (Ferrero et al., 2016) allows us to run a large amount of experiments and analyses.", "To our knowledge, this is the first time that full potential of such a diverse dataset is used for benchmarking.", "So, the paper main contribution is a systematic evaluation of cross-language similarity detection methods (using in plagiarism detection) on different languages, sizes and genres of texts through a reproducible evaluation protocol.", "Robust conclusions are derived on the best methods while deeply analyzing correlations across document styles and languages.", "Due to space limitations, we only provide a subset of our experiments in the paper while more result tables and correlation analyses are provided as supplementary material on a Web link 1 .", "Outline.", "After presenting the dataset used for our study in section 2, and reviewing the stateof-the-art methods of cross-language plagiarism detection that we evaluate in section 3, we describe the evaluation protocol employed in section 4.", "Then, section 5.1 presents the correla-tion of the methods across language pairs, while section 5.2 presents a detailed analysis on only English-French pair.", "Finally, section 6 concludes this work and gives a few perspectives.", "Dataset The reference dataset used during our study is the new dataset 2 recently introduced by Ferrero et al.", "(2016) .", "The dataset was specially designed for a rigorous evaluation of cross-language textual similarity detection.", "The different characteristics of the dataset are synthesized in Table 1 , while Table 2 presents the number of aligned units by subcorpus and by granularity.", 
"More precisely, the characteristics of the dataset are the following: • it is multilingual: it contains French, English and Spanish texts; • it proposes cross-language alignment information at different granularities: document level, sentence level and chunk level; • it is based on both parallel and comparable corpora (mix of Wikipedia, scientific conference papers, amazon product reviews, Europarl and JRC); • it contains both human and machine translated texts; • it contains different percentages of named entities; • part of it has been obfuscated (to make the cross-language similarity detection more complicated) while the rest remains without noise; • the documents were written and translated by multiple types of authors (from average to professionals); • it covers various fields.", "Overview of State-of-the-Art Methods Textual similarity detection methods are not exactly methods to detect plagiarism.", "Plagiarism is a statement that someone copied text deliberately without attribution, while these methods only detect textual similarities.", "There is no way 2 https://github.com/FerreroJeremy/ Cross-Language-Dataset of knowing why texts are similar and thus to assimilate these similarities to plagiarism.", "At the moment, there are five classes of approaches for cross-language plagiarism detection.", "The aim of each method is to estimate if two textual units in different languages express the same message or not.", "Figure 1 presents a taxonomy of Potthast et al.", "(2011) , enriched by the study of Danilova (2013) , of the different cross-language plagiarism detection methods grouped by class of approaches.", "We only describe below the state-of-the-art methods that we evaluate in the paper, one for each class of approaches (those in bold in the Figure 1 ).", "Cross-Language Character N-Gram (CL-CnG) is based on Mcnamee and Mayfield (2004) model.", "We use the CL-C3G Potthast et al.", "(2011)'s implementation.", "Only spaces and alphanumeric characters are kept.", "Any other diacritic or symbol is deleted and the texts are lower-cased.", "The texts are then segmented into 3-grams (sequences of 3 contiguous characters) and transformed into tf.idf vectors of character 3-grams.", "The metric used to compare two vectors is the cosine similarity.", "Cross-Language Conceptual Thesaurus-based Similarity (CL-CTS) aims to measure the semantic similarity using abstract concepts from words in textual units.", "We reuse the idea of Pataki (2012) which, for each sentence, build a bag-ofwords by getting all the available translations of each word of the sentence.", "For that, we use a linked lexical resource called DBNary (Sérasset, 2015) .", "The bag-of-words of a sentence is the merge of the bag-of-words of the words of the sentence.", "After, we use the Jaccard distance (Jaccard, 1912) with fuzzy matching between two bag-ofwords to measure the similarity between two sentences.", "Cross-Language Alignment-based Similarity Analysis (CL-ASA) was introduced for the first time by Barrón-Cedeño et al.", "(2008) and developed subsequently by Pinto et al.", "(2009) .", "The model aims to determinate how a textual unit is potentially the translation of another textual unit using bilingual unigram dictionary which contains translations pairs (and their probabilities) extracted from a parallel corpus.", "Our lexical dictionary is calculated applying the IBM-1 model Danilova (2013) , of different approaches for cross-language similarity detection.", "(Brown et al., 1993) on the concatenation of TED 4 
(Cettolo et al., 2012) and News 5 parallel corpora.", "We reuse the implementation of Pinto et al.", "(2009) that proposed a formula that factored the alignment function.", "MT-Based Models Cross-Language Explicit Semantic Analysis (CL-ESA) is based on the explicit semantic analysis model introduced for the first time by Gabrilovich and Markovitch (2007) , which represents the meaning of a document by a vector based on the vocabulary derived from Wikipedia, to find a document within a corpus.", "It was reused by Potthast et al.", "(2008) in the context of cross-language document retrieval.", "Our implementation uses a part of Wikipedia, from which our test data was removed, to build the vector representations of the texts.", "Translation + Monolingual Analysis (T+MA) consists in translating suspect plagiarized text back into the same language of source text, in order to operate a monolingual comparison between them.", "We use the Muhr et al.", "(2010) 's implementation which consists in replacing each word of one text by its most likely translations in the language of the other text, leading to a bags-of-words.", "We use DBNary (Sérasset, 2015) to get the translations.", "The metric used to compare two texts is a monolingual matching based on strict intersection of bags-of-words.", "More recently, SemEval-2016 (Agirre et al., 2016) proposed a new subtask on evaluation of cross-lingual semantic textual similarity.", "Despite the fact that it was the first year that this subtask was attempted, there were 26 submissions from 10 teams.", "Most of the submissions relied on a machine translation step followed by a monolingual semantic similarity, but 4 teams tried to use learned vector representations (on words or sentences) combined with machine translation confidence (for instance the submission of Lo et al.", "(2016) or Ataman et al.", "(2016) ).", "The method that achieved the best performance (Brychcin and Svoboda, 2016 ) was a supervised system built on a word alignment-based method proposed by Sultan et al.", "(2015) .", "This very recent method is, however, not evaluated in this paper.", "Evaluation Protocol We apply the same evaluation protocol as in Ferrero et al.", "(2016)'s paper.", "We build a distance matrix of size N x M , with M = 1,000 and N = |S| where S is the evaluated sub-corpus.", "Each textual unit of S is compared to itself (actually, since this is cross-lingual similarity detection, each source language unit is compared to its corresponding unit in the target language) and to M -1 other units randomly selected from S. 
The same unit may be selected several times.", "Then, a matching score for each comparison performed is obtained, leading to the distance matrix.", "Thresholding on the matrix is applied to find the threshold giving the best F 1 score.", "The F 1 score is the harmonic mean of precision and recall.", "Precision is defined as the proportion of relevant matches (similar crosslanguage units) retrieved among all the matches retrieved.", "Recall is the proportion of relevant matches retrieved among all the relevant matches to retrieve.", "Each method is applied on each subcorpus for chunk and sentence granularities.", "For each configuration (i.e.", "a particular method applied on a particular sub-corpus considering a particular granularity), 10 folds are carried out by changing the M selected units.", "Investigation of Cross-Language Similarity Performances 5.1 Across Language Pairs Table 3 brings together the performances of all methods on all sub-corpora for each pair of languages at chunk and sentence level.", "In both sub-tables, at chunk and sentence level, the overall F 1 score over all sub-corpora of one method in one particular language pair is given.", "As a preliminary remark, one should note that CL-C3G and CL-ESA lead to the same results for a given language pair (same performance if we reverse source and target languages) due to their symmetrical property.", "Another remark we can make is that methods are consistent across language pairs: best performing methods are mostly the same, whatever the language pair considered.", "This is confirmed by the calculation of the Pearson correlation between performances of different pairs of languages, from Table 3 and reported in Table 4 .", "Table 4 represents the Pearson correlations between the different language pairs of the overall results of all methods on all sub-corpora.", "This result is interesting because some of these methods depend on the availability of lexical resources whose quality is heterogeneous across languages.", "Despite the variation of the source and target languages, a minimum Pearson correlation of 0.940 for EN→FR vs. FR→ES, and a maximum of 0.998 for EN→FR vs. EN→ES and ES→FR vs. FR→ES at chunk level is observed (see Table 4 ).", "For the sentence granularity, it is the same order of magnitude: the maximum Pearson correlation is 0.997 for ES→EN vs. EN→ES and ES→FR vs. FR→ES, and the minimum is 0.913 for EN→ES vs. 
FR→ES (see Table 4 ).", "In average the language pair EN→FR is 0.975 correlated with the other language pairs (0.980 at chunk-level and 0.971 at sentence-level), for instance.", "This correlation suggests the possibility to tune a method on one language and apply it to another language if needed.", "Table 5 synthesizes the top 3 methods for each language pair observed in Tables 3 and 4 .", "No matter the source and target languages or the granularity, CL-C3G generally outperforms the other methods.", "Then CL-ASA, CL-CTS and T+MA are also closely efficient but their behavior depends on the granularity.", "Generally, CL-ASA is better at the chunk granularity, followed by CL-CTS and T+MA.", "On the contrary, CL-CTS and T+MA are slightly more effective at sentence granularity.", "One explanation for this is that T+MA depends on the quality of machine translation, which may have poor performance on isolated chunks, while a short length text unit benefits the CL-CTS and CL-ASA methods because of their formula which Table 4 : Pearson correlations of the overall F 1 score over all sub-corpora of all methods between the different language pairs (EN: English; FR: French; ES: Spanish).", "will tend to minimize the number of false positives in this case.", "Anyway, despite these differences in ranking, the gap in term of performance values is small between these closest methods.", "For instance, we can see that when CL-CTS is more efficient than CL-C3G (ES→FR column at sentence level in Table 3 and Table 5 (b)), the difference of performance is very small (0.0068).", "Table 6 shows the Pearson correlations of the results (of all methods on all sub-corpora) by language pair between the chunk and the sentence granularity (correlations calculated from Table 3 , between the EN→FR column at chunk level with the EN→FR column at sentence level, and so on).", "We can see a strong Pearson correlation of the performances on the language pair between the chunk and the sentence granularity (an average of 0.9, with 0.907 for the EN→FR pair, for instance).", "This proves that all methods behave along a simi- lar trend at chunk and at sentence level, regardless of the languages on which they are used.", "However, we can see in Table 7 that if we collect correlation scores separately for each method (on all sub-corpora, on all language pairs) between chunk Table 7 : Pearson correlations of the results on all sub-corpora on all language pairs, between the chunk and the sentence granularity, by methods (calculated from Table 3 ).", "and sentence granularity performances (correlations also calculated from Table 3 , between the CL-C3G line at chunk level with the CL-C3G line at sentence level, and so on), we notice that some methods exhibit a different behavior at both chunk and sentence granularities: for instance, this is the case for CL-ASA which seems to be really better at chunk level.", "In conclusion, we can say that the methods presented here may behave slightly differently depending on the text unit considered (chunk or sentence) but they behave practically the same no matter the languages of the compared texts are (as long as enough lexical resources are available for dealing with these languages).", "Detailed Analysis for English-French The previous sub-section has shown a consistent behavior of methods across language pairs (strongly consistent) and granularities (less strongly consistent).", "For this reason, we now propose a detailed analysis for different sub-corpora, for the English-French language pair 
-at chunk and sentence level -only.", "Providing these results for all language pairs and granularities would take too much space.", "Moreover, we also run those state-of-the-art methods on the dataset of the Spanish-English cross-lingual Semantic Textual Similarity task of SemEval-2016 (Agirre et al., 2016) and SemEval-2017 (Cer et al., 2017 , and propose a shallower but equally rigorous analysis.", "However, all those results are also made available as supplementary material on our paper Web page.", "Table 8 shows the performances of methods on the EN→FR sub-corpora.", "As mentioned earlier, CL-C3G is in general the most effective method.", "CL-ESA seems to show better results on comparable corpora, like Wikipedia.", "In contrast, CL-ASA obtains better results on parallel corpora such as JRC or Europarl collections.", "CL-CTS and T+MA are pretty efficient and versatile too.", "It is also interesting to note that the results of the methods are well correlated between certain types of sub-corpora.", "For instance, the Pearson correlation of the performances of all methods between the TALN sub-corpus and the APR sub-corpus, is 0.982 at the chunk level, and 0.937 at the sentence level.", "This means that a method could be optimized on a particular corpus (for instance APR) and applied efficiently on another corpus (for instance TALN which is made of scientific conference papers).", "Figure 2 : Distribution histograms of some state-of-the-art methods for 1000 positives and 1000 negatives (mis)matches.", "X-axis represents the similarity score (in percentage) computed by the method, and Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "Beyond their capacity to correctly predict a (mis)match, an interesting feature of the methods is their clustering capacity, i.e.", "their ability to cor-rectly separate the positives (cross-lingual semantic textual similar units) and the negatives (textual units with different meaning) in order to minimize Table 9 : Precision (P), Recall (R) and F 1 score, reached at a certain threshold (T), of some stateof-the-art methods for a data subset made with 1000 positives and 1000 negatives (mis)matches -10 folds validation.", "the doubts on the classification.", "To verify this phenomenon, we conducted another experience with a new protocol.", "We built a data subset by concatenating some documents of the previously presented dataset (Ferrero et al., 2016) .", "More precisely we used 200 pairs of each sub-corpora at sentence level only.", "We compared 1000 English textual units to their corresponding unit in French, and to one other (not relevant) French unit.", "So, each English textual unit must strictly leads to one match and one mismatch, i.e.", "in the end, we have exactly 1000 matches and 1000 mismatches for a run.", "We repeat this experiment 10 times for each method, leading to 10 folds for each method.", "The results of this experiment are reported on Table 9 , that shows the average for the 10 folds of the Precision (P), the Recall (R) and the F 1 score of some state-of-the-art methods, reached at a certain threshold (T).", "The results are also reported in Figure 2 , in the form of distribution histograms of the evaluated methods for 1000 positives and 1000 negatives (mis)matches.", "X-axis represents the similarity score (in percentage) computed 
by the method, and Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "Distribution histograms on Figure 2 highlights the fact that each method has its own fingerprint: even if two methods looks equivalent in term of performances (see Table 9 ), their clustering capacity, and so the distribution of their (mis)matches can be different.", "For instance, we can see that a random distribution is a very bad distribution (Figure 2 (a) ).", "We can also see that CL-C3G has a narrow distribution of negatives and a broad distribution for positives (Figure 2 (c) ), whereas the opposite is true for CL-ASA (Figure 2 (e) ).", "Table 9 confirms this phenomenon by the fact that the decision threshold is very different for CL-ASA (0.762) compared to the other methods (around 0.1).", "This means that CL-ASA discriminates more correctly the positives that the negatives, when it seems to be the opposite for the other methods.", "For this reason, we can make the assumption that some methods are complementary, due to their different fingerprint.", "These behaviors suggest that fusion between these methods (notably decision tree based fusion) should lead to very promising results.", "Conclusion We conducted a deep investigation of crosslanguage plagiarism detection methods on a challenging dataset.", "Our results have shown a common behavior of methods across different language pairs.", "We revealed strong correlations across languages but also across text units considered.", "This means that when a method is more effective than another on a sufficiently large dataset, it is generally more effective in any other case.", "This also means that if a method is efficient on a particular language pair, it will be similarly efficient on another language pair as long as enough lexical resources are available for these languages.", "We also investigated the behavior of the methods through the different types of texts on a particular language pair: English-French.", "We revealed strong correlations across types of texts.", "This means that a method could be optimized on a particular corpus and applied efficiently on another corpus.", "Finally, we have shown that methods behave differently in clustering match and mismatched units, even if they seem similar in performance.", "This opens new possibilities for their combination or fusion.", "More results supporting these facts are provided as supplementary material 6 ." ] }
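The best-performing method in the paper content above, CL-C3G, is simple enough to sketch end to end: keep spaces and alphanumerics, lower-case, cut the text into character 3-grams, weight them with tf.idf, and compare units with cosine similarity. The sketch below is a minimal illustration, not Potthast et al.'s actual implementation; the cleaning regex and the plain log(N/df) idf are assumptions.

```python
import math
import re
from collections import Counter

def char_3grams(text: str) -> Counter:
    # Keep letters, digits and spaces, lower-case the text (simplified:
    # accented characters are kept as-is), then count every sequence of
    # 3 contiguous characters.
    cleaned = re.sub(r"[^\w\s]", "", text.lower())
    return Counter(cleaned[i:i + 3] for i in range(len(cleaned) - 2))

def tfidf_vectors(units):
    # One sparse tf.idf vector (a dict) per textual unit; a plain
    # log(N/df) idf is assumed here.
    tfs = [char_3grams(u) for u in units]
    n = len(units)
    df = Counter(g for tf in tfs for g in tf)
    idf = {g: math.log(n / df[g]) for g in df}
    return [{g: c * idf[g] for g, c in tf.items()} for tf in tfs]

def cosine(u, v):
    dot = sum(w * v.get(g, 0.0) for g, w in u.items())
    norm = math.sqrt(sum(w * w for w in u.values())) * \
           math.sqrt(sum(w * w for w in v.values()))
    return dot / norm if norm else 0.0

# Character 3-grams often survive translation between related languages:
vecs = tfidf_vectors(["the international conference",
                      "la conference internationale",
                      "a completely unrelated sentence"])
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```

The first cosine score should come out clearly higher than the second, which is exactly the cross-language signal the method exploits.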
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.2", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Overview of State-of-the-Art Methods", "Evaluation Protocol", "Investigation of Cross-Language", "Detailed Analysis for English-French", "Conclusion" ] }
GEM-SciDuet-train-62#paper-1126#slide-8
Second Experiment: Evaluation Protocol
We compare 1000 English textual units to their corresponding unit in French, and to one other (not relevant) French unit; each unit must strictly lead to one match and one mismatch. We repeat these two steps 10 times, leading to a 10-fold validation.
We compare 1000 English textual units to their corresponding unit in French, and to one other (not relevant) French unit; each unit must strictly lead to one match and one mismatch. We repeat these two steps 10 times, leading to a 10-fold validation.
[]
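The thresholding step of the evaluation protocol, including the second experiment summarized in the slide above, amounts to sweeping a decision threshold over the similarity scores and keeping the one that maximizes F1, averaged over 10 folds. A minimal sketch, with random placeholder scores standing in for a real method's outputs:

```python
import random

def best_f1(pos_scores, neg_scores):
    # Try every observed score as a decision threshold and keep the one
    # giving the best F1 (harmonic mean of precision and recall).
    best_f1_value, best_t = -1.0, 0.0
    for t in sorted(set(pos_scores) | set(neg_scores)):
        tp = sum(s >= t for s in pos_scores)
        fp = sum(s >= t for s in neg_scores)
        fn = len(pos_scores) - tp
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        if f1 > best_f1_value:
            best_f1_value, best_t = f1, t
    return best_f1_value, best_t

random.seed(0)
fold_scores = []
for _ in range(10):  # 10 folds, as in the protocol
    # Placeholder scores: 1000 matches drawn higher on average than
    # 1000 mismatches (a real run would use a method's actual scores).
    pos = [random.gauss(0.6, 0.15) for _ in range(1000)]
    neg = [random.gauss(0.3, 0.15) for _ in range(1000)]
    fold_scores.append(best_f1(pos, neg)[0])
print("average F1 over 10 folds:", sum(fold_scores) / len(fold_scores))
```

The exhaustive threshold sweep is quadratic and kept only for clarity; a real implementation would sort the scores once and update counts incrementally.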
GEM-SciDuet-train-62#paper-1126#slide-9
1126
Deep Investigation of Cross-Language Plagiarism Detection Methods
This paper is a deep investigation of cross-language plagiarism detection methods on a new, recently introduced open dataset, which contains parallel and comparable collections of documents with multiple characteristics (different genres, languages and sizes of texts). We investigate cross-language plagiarism detection methods for 6 language pairs on 2 granularities of text units in order to draw robust conclusions on the best methods while deeply analyzing correlations across document styles and languages.
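The cross-language-pair consistency reported in the paper content above is measured with a Pearson correlation between the per-method results of two language pairs. A small self-contained sketch of that computation (the F1 values below are made up for illustration and are not taken from Table 3):

```python
import math

def pearson(xs, ys):
    # Pearson correlation between two equally long score vectors, e.g.
    # the per-method F1 columns of two language pairs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical F1 scores of the same five methods on two language pairs:
en_fr = [0.82, 0.61, 0.55, 0.47, 0.70]
en_es = [0.80, 0.64, 0.52, 0.45, 0.68]
print(round(pearson(en_fr, en_es), 3))  # close to 1.0 -> consistent methods
```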
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154 ], "paper_content_text": [ "Introduction Plagiarism is a very significant problem nowadays, specifically in higher education institutions.", "In monolingual context, this problem is rather well treated by several recent researches (Potthast et al., 2014) .", "Nevertheless, the expansion of the Internet, which facilitates access to documents throughout the world and to increasingly efficient (freely available) machine translation tools, helps to spread cross-language plagiarism.", "Crosslanguage plagiarism means plagiarism by translation, i.e.", "a text has been plagiarized while being translated (manually or automatically).", "The challenge in detecting this kind of plagiarism is that the suspicious document is no longer in the same language of its source.", "In this relatively new field of research, no systematic evaluation of the main methods, on several language pairs, for different text granularities and for different text genres, has been proposed yet.", "This is what we propose in this paper.", "Contribution.", "The paper focus is on crosslanguage semantic textual similarity detection which is the main part (with source retrieval) in cross-language plagiarism detection.", "The evaluation dataset used (Ferrero et al., 2016) allows us to run a large amount of experiments and analyses.", "To our knowledge, this is the first time that full potential of such a diverse dataset is used for benchmarking.", "So, the paper main contribution is a systematic evaluation of cross-language similarity detection methods (using in plagiarism detection) on different languages, sizes and genres of texts through a reproducible evaluation protocol.", "Robust conclusions are derived on the best methods while deeply analyzing correlations across document styles and languages.", "Due to space limitations, we only provide a subset of our experiments in the paper while more result tables and correlation analyses are provided as supplementary material on a Web link 1 .", "Outline.", "After presenting the dataset used for our study in section 2, and reviewing the stateof-the-art methods of cross-language plagiarism detection that we evaluate in section 3, we describe the evaluation protocol employed in section 4.", "Then, section 5.1 presents the correla-tion of the methods across language pairs, while section 5.2 presents a detailed analysis on only English-French pair.", "Finally, section 6 concludes this work and gives a few perspectives.", "Dataset The reference dataset used during our study is the new dataset 2 recently introduced by Ferrero et al.", "(2016) .", "The dataset was specially designed for a rigorous evaluation of cross-language textual similarity detection.", "The different characteristics of the dataset are synthesized in Table 1 , while Table 2 presents the number of aligned units by subcorpus and by granularity.", 
"More precisely, the characteristics of the dataset are the following: • it is multilingual: it contains French, English and Spanish texts; • it proposes cross-language alignment information at different granularities: document level, sentence level and chunk level; • it is based on both parallel and comparable corpora (mix of Wikipedia, scientific conference papers, amazon product reviews, Europarl and JRC); • it contains both human and machine translated texts; • it contains different percentages of named entities; • part of it has been obfuscated (to make the cross-language similarity detection more complicated) while the rest remains without noise; • the documents were written and translated by multiple types of authors (from average to professionals); • it covers various fields.", "Overview of State-of-the-Art Methods Textual similarity detection methods are not exactly methods to detect plagiarism.", "Plagiarism is a statement that someone copied text deliberately without attribution, while these methods only detect textual similarities.", "There is no way 2 https://github.com/FerreroJeremy/ Cross-Language-Dataset of knowing why texts are similar and thus to assimilate these similarities to plagiarism.", "At the moment, there are five classes of approaches for cross-language plagiarism detection.", "The aim of each method is to estimate if two textual units in different languages express the same message or not.", "Figure 1 presents a taxonomy of Potthast et al.", "(2011) , enriched by the study of Danilova (2013) , of the different cross-language plagiarism detection methods grouped by class of approaches.", "We only describe below the state-of-the-art methods that we evaluate in the paper, one for each class of approaches (those in bold in the Figure 1 ).", "Cross-Language Character N-Gram (CL-CnG) is based on Mcnamee and Mayfield (2004) model.", "We use the CL-C3G Potthast et al.", "(2011)'s implementation.", "Only spaces and alphanumeric characters are kept.", "Any other diacritic or symbol is deleted and the texts are lower-cased.", "The texts are then segmented into 3-grams (sequences of 3 contiguous characters) and transformed into tf.idf vectors of character 3-grams.", "The metric used to compare two vectors is the cosine similarity.", "Cross-Language Conceptual Thesaurus-based Similarity (CL-CTS) aims to measure the semantic similarity using abstract concepts from words in textual units.", "We reuse the idea of Pataki (2012) which, for each sentence, build a bag-ofwords by getting all the available translations of each word of the sentence.", "For that, we use a linked lexical resource called DBNary (Sérasset, 2015) .", "The bag-of-words of a sentence is the merge of the bag-of-words of the words of the sentence.", "After, we use the Jaccard distance (Jaccard, 1912) with fuzzy matching between two bag-ofwords to measure the similarity between two sentences.", "Cross-Language Alignment-based Similarity Analysis (CL-ASA) was introduced for the first time by Barrón-Cedeño et al.", "(2008) and developed subsequently by Pinto et al.", "(2009) .", "The model aims to determinate how a textual unit is potentially the translation of another textual unit using bilingual unigram dictionary which contains translations pairs (and their probabilities) extracted from a parallel corpus.", "Our lexical dictionary is calculated applying the IBM-1 model Danilova (2013) , of different approaches for cross-language similarity detection.", "(Brown et al., 1993) on the concatenation of TED 4 
(Cettolo et al., 2012) and News 5 parallel corpora.", "We reuse the implementation of Pinto et al.", "(2009) that proposed a formula that factored the alignment function.", "MT-Based Models Cross-Language Explicit Semantic Analysis (CL-ESA) is based on the explicit semantic analysis model introduced for the first time by Gabrilovich and Markovitch (2007) , which represents the meaning of a document by a vector based on the vocabulary derived from Wikipedia, to find a document within a corpus.", "It was reused by Potthast et al.", "(2008) in the context of cross-language document retrieval.", "Our implementation uses a part of Wikipedia, from which our test data was removed, to build the vector representations of the texts.", "Translation + Monolingual Analysis (T+MA) consists in translating suspect plagiarized text back into the same language of source text, in order to operate a monolingual comparison between them.", "We use the Muhr et al.", "(2010) 's implementation which consists in replacing each word of one text by its most likely translations in the language of the other text, leading to a bags-of-words.", "We use DBNary (Sérasset, 2015) to get the translations.", "The metric used to compare two texts is a monolingual matching based on strict intersection of bags-of-words.", "More recently, SemEval-2016 (Agirre et al., 2016) proposed a new subtask on evaluation of cross-lingual semantic textual similarity.", "Despite the fact that it was the first year that this subtask was attempted, there were 26 submissions from 10 teams.", "Most of the submissions relied on a machine translation step followed by a monolingual semantic similarity, but 4 teams tried to use learned vector representations (on words or sentences) combined with machine translation confidence (for instance the submission of Lo et al.", "(2016) or Ataman et al.", "(2016) ).", "The method that achieved the best performance (Brychcin and Svoboda, 2016 ) was a supervised system built on a word alignment-based method proposed by Sultan et al.", "(2015) .", "This very recent method is, however, not evaluated in this paper.", "Evaluation Protocol We apply the same evaluation protocol as in Ferrero et al.", "(2016)'s paper.", "We build a distance matrix of size N x M , with M = 1,000 and N = |S| where S is the evaluated sub-corpus.", "Each textual unit of S is compared to itself (actually, since this is cross-lingual similarity detection, each source language unit is compared to its corresponding unit in the target language) and to M -1 other units randomly selected from S. 
The same unit may be selected several times.", "Then, a matching score for each comparison performed is obtained, leading to the distance matrix.", "Thresholding on the matrix is applied to find the threshold giving the best F 1 score.", "The F 1 score is the harmonic mean of precision and recall.", "Precision is defined as the proportion of relevant matches (similar crosslanguage units) retrieved among all the matches retrieved.", "Recall is the proportion of relevant matches retrieved among all the relevant matches to retrieve.", "Each method is applied on each subcorpus for chunk and sentence granularities.", "For each configuration (i.e.", "a particular method applied on a particular sub-corpus considering a particular granularity), 10 folds are carried out by changing the M selected units.", "Investigation of Cross-Language Similarity Performances 5.1 Across Language Pairs Table 3 brings together the performances of all methods on all sub-corpora for each pair of languages at chunk and sentence level.", "In both sub-tables, at chunk and sentence level, the overall F 1 score over all sub-corpora of one method in one particular language pair is given.", "As a preliminary remark, one should note that CL-C3G and CL-ESA lead to the same results for a given language pair (same performance if we reverse source and target languages) due to their symmetrical property.", "Another remark we can make is that methods are consistent across language pairs: best performing methods are mostly the same, whatever the language pair considered.", "This is confirmed by the calculation of the Pearson correlation between performances of different pairs of languages, from Table 3 and reported in Table 4 .", "Table 4 represents the Pearson correlations between the different language pairs of the overall results of all methods on all sub-corpora.", "This result is interesting because some of these methods depend on the availability of lexical resources whose quality is heterogeneous across languages.", "Despite the variation of the source and target languages, a minimum Pearson correlation of 0.940 for EN→FR vs. FR→ES, and a maximum of 0.998 for EN→FR vs. EN→ES and ES→FR vs. FR→ES at chunk level is observed (see Table 4 ).", "For the sentence granularity, it is the same order of magnitude: the maximum Pearson correlation is 0.997 for ES→EN vs. EN→ES and ES→FR vs. FR→ES, and the minimum is 0.913 for EN→ES vs. 
FR→ES (see Table 4 ).", "In average the language pair EN→FR is 0.975 correlated with the other language pairs (0.980 at chunk-level and 0.971 at sentence-level), for instance.", "This correlation suggests the possibility to tune a method on one language and apply it to another language if needed.", "Table 5 synthesizes the top 3 methods for each language pair observed in Tables 3 and 4 .", "No matter the source and target languages or the granularity, CL-C3G generally outperforms the other methods.", "Then CL-ASA, CL-CTS and T+MA are also closely efficient but their behavior depends on the granularity.", "Generally, CL-ASA is better at the chunk granularity, followed by CL-CTS and T+MA.", "On the contrary, CL-CTS and T+MA are slightly more effective at sentence granularity.", "One explanation for this is that T+MA depends on the quality of machine translation, which may have poor performance on isolated chunks, while a short length text unit benefits the CL-CTS and CL-ASA methods because of their formula which Table 4 : Pearson correlations of the overall F 1 score over all sub-corpora of all methods between the different language pairs (EN: English; FR: French; ES: Spanish).", "will tend to minimize the number of false positives in this case.", "Anyway, despite these differences in ranking, the gap in term of performance values is small between these closest methods.", "For instance, we can see that when CL-CTS is more efficient than CL-C3G (ES→FR column at sentence level in Table 3 and Table 5 (b)), the difference of performance is very small (0.0068).", "Table 6 shows the Pearson correlations of the results (of all methods on all sub-corpora) by language pair between the chunk and the sentence granularity (correlations calculated from Table 3 , between the EN→FR column at chunk level with the EN→FR column at sentence level, and so on).", "We can see a strong Pearson correlation of the performances on the language pair between the chunk and the sentence granularity (an average of 0.9, with 0.907 for the EN→FR pair, for instance).", "This proves that all methods behave along a simi- lar trend at chunk and at sentence level, regardless of the languages on which they are used.", "However, we can see in Table 7 that if we collect correlation scores separately for each method (on all sub-corpora, on all language pairs) between chunk Table 7 : Pearson correlations of the results on all sub-corpora on all language pairs, between the chunk and the sentence granularity, by methods (calculated from Table 3 ).", "and sentence granularity performances (correlations also calculated from Table 3 , between the CL-C3G line at chunk level with the CL-C3G line at sentence level, and so on), we notice that some methods exhibit a different behavior at both chunk and sentence granularities: for instance, this is the case for CL-ASA which seems to be really better at chunk level.", "In conclusion, we can say that the methods presented here may behave slightly differently depending on the text unit considered (chunk or sentence) but they behave practically the same no matter the languages of the compared texts are (as long as enough lexical resources are available for dealing with these languages).", "Detailed Analysis for English-French The previous sub-section has shown a consistent behavior of methods across language pairs (strongly consistent) and granularities (less strongly consistent).", "For this reason, we now propose a detailed analysis for different sub-corpora, for the English-French language pair 
-at chunk and sentence level -only.", "Providing these results for all language pairs and granularities would take too much space.", "Moreover, we also run those state-of-the-art methods on the dataset of the Spanish-English cross-lingual Semantic Textual Similarity task of SemEval-2016 (Agirre et al., 2016) and SemEval-2017 (Cer et al., 2017 , and propose a shallower but equally rigorous analysis.", "However, all those results are also made available as supplementary material on our paper Web page.", "Table 8 shows the performances of methods on the EN→FR sub-corpora.", "As mentioned earlier, CL-C3G is in general the most effective method.", "CL-ESA seems to show better results on comparable corpora, like Wikipedia.", "In contrast, CL-ASA obtains better results on parallel corpora such as JRC or Europarl collections.", "CL-CTS and T+MA are pretty efficient and versatile too.", "It is also interesting to note that the results of the methods are well correlated between certain types of sub-corpora.", "For instance, the Pearson correlation of the performances of all methods between the TALN sub-corpus and the APR sub-corpus, is 0.982 at the chunk level, and 0.937 at the sentence level.", "This means that a method could be optimized on a particular corpus (for instance APR) and applied efficiently on another corpus (for instance TALN which is made of scientific conference papers).", "Figure 2 : Distribution histograms of some state-of-the-art methods for 1000 positives and 1000 negatives (mis)matches.", "X-axis represents the similarity score (in percentage) computed by the method, and Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "Beyond their capacity to correctly predict a (mis)match, an interesting feature of the methods is their clustering capacity, i.e.", "their ability to cor-rectly separate the positives (cross-lingual semantic textual similar units) and the negatives (textual units with different meaning) in order to minimize Table 9 : Precision (P), Recall (R) and F 1 score, reached at a certain threshold (T), of some stateof-the-art methods for a data subset made with 1000 positives and 1000 negatives (mis)matches -10 folds validation.", "the doubts on the classification.", "To verify this phenomenon, we conducted another experience with a new protocol.", "We built a data subset by concatenating some documents of the previously presented dataset (Ferrero et al., 2016) .", "More precisely we used 200 pairs of each sub-corpora at sentence level only.", "We compared 1000 English textual units to their corresponding unit in French, and to one other (not relevant) French unit.", "So, each English textual unit must strictly leads to one match and one mismatch, i.e.", "in the end, we have exactly 1000 matches and 1000 mismatches for a run.", "We repeat this experiment 10 times for each method, leading to 10 folds for each method.", "The results of this experiment are reported on Table 9 , that shows the average for the 10 folds of the Precision (P), the Recall (R) and the F 1 score of some state-of-the-art methods, reached at a certain threshold (T).", "The results are also reported in Figure 2 , in the form of distribution histograms of the evaluated methods for 1000 positives and 1000 negatives (mis)matches.", "X-axis represents the similarity score (in percentage) computed 
by the method, and Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "Distribution histograms on Figure 2 highlights the fact that each method has its own fingerprint: even if two methods looks equivalent in term of performances (see Table 9 ), their clustering capacity, and so the distribution of their (mis)matches can be different.", "For instance, we can see that a random distribution is a very bad distribution (Figure 2 (a) ).", "We can also see that CL-C3G has a narrow distribution of negatives and a broad distribution for positives (Figure 2 (c) ), whereas the opposite is true for CL-ASA (Figure 2 (e) ).", "Table 9 confirms this phenomenon by the fact that the decision threshold is very different for CL-ASA (0.762) compared to the other methods (around 0.1).", "This means that CL-ASA discriminates more correctly the positives that the negatives, when it seems to be the opposite for the other methods.", "For this reason, we can make the assumption that some methods are complementary, due to their different fingerprint.", "These behaviors suggest that fusion between these methods (notably decision tree based fusion) should lead to very promising results.", "Conclusion We conducted a deep investigation of crosslanguage plagiarism detection methods on a challenging dataset.", "Our results have shown a common behavior of methods across different language pairs.", "We revealed strong correlations across languages but also across text units considered.", "This means that when a method is more effective than another on a sufficiently large dataset, it is generally more effective in any other case.", "This also means that if a method is efficient on a particular language pair, it will be similarly efficient on another language pair as long as enough lexical resources are available for these languages.", "We also investigated the behavior of the methods through the different types of texts on a particular language pair: English-French.", "We revealed strong correlations across types of texts.", "This means that a method could be optimized on a particular corpus and applied efficiently on another corpus.", "Finally, we have shown that methods behave differently in clustering match and mismatched units, even if they seem similar in performance.", "This opens new possibilities for their combination or fusion.", "More results supporting these facts are provided as supplementary material 6 ." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.2", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Overview of State-of-the-Art Methods", "Evaluation Protocol", "Investigation of Cross-Language", "Detailed Analysis for English-French", "Conclusion" ] }
GEM-SciDuet-train-62#paper-1126#slide-9
Complementarity
Figure 1: Distribution histograms of Random Baseline (left) and CL-C3G (right) for 1000 positives (lightgreen) and 1000 negatives (darkred) (mis)matches. Figure 2: Distribution histograms of CL-ASA (left) and CL-C3G (right) for 1000 positives (lightgreen) and 1000 negatives (darkred) (mis)matches.
Figure 1: Distribution histograms of Random Baseline (left) and CL-C3G (right) for 1000 positives (lightgreen) and 1000 negatives (darkred) (mis)matches. Figure 2: Distribution histograms of CL-ASA (left) and CL-C3G (right) for 1000 positives (lightgreen) and 1000 negatives (darkred) (mis)matches.
[]
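The "fingerprint" idea of the Complementarity slide above can be reproduced by bucketing a method's similarity scores for positives and negatives separately, which is what the distribution histograms show. A sketch with invented score samples shaped like the CL-C3G case (negatives narrow and low, positives spread broadly):

```python
from collections import Counter

def fingerprint(scores, bins=10):
    # Bucket similarity scores (assumed to lie in [0, 1]) into `bins`
    # equal-width bins, mimicking the histograms of the figures.
    hist = Counter(min(int(s * bins), bins - 1) for s in scores)
    return [hist.get(b, 0) for b in range(bins)]

# Invented score samples: negatives narrowly clustered near 0,
# positives spread more broadly across the score range.
positives = [0.15, 0.32, 0.48, 0.55, 0.61, 0.70, 0.83, 0.91]
negatives = [0.02, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.11]
print("pos:", fingerprint(positives))
print("neg:", fingerprint(negatives))
```

Two methods with the same F1 can still produce very different bin profiles here, which is the complementarity argument in a nutshell.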
GEM-SciDuet-train-62#paper-1126#slide-10
1126
Deep Investigation of Cross-Language Plagiarism Detection Methods
This paper is a deep investigation of cross-language plagiarism detection methods on a new recently introduced open dataset, which contains parallel and comparable collections of documents with multiple characteristics (different genres, languages and sizes of texts). We investigate cross-language plagiarism detection methods for 6 language pairs on 2 granularities of text units in order to draw robust conclusions on the best methods while deeply analyzing correlations across document styles and languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154 ], "paper_content_text": [ "Introduction Plagiarism is a very significant problem nowadays, specifically in higher education institutions.", "In monolingual context, this problem is rather well treated by several recent researches (Potthast et al., 2014) .", "Nevertheless, the expansion of the Internet, which facilitates access to documents throughout the world and to increasingly efficient (freely available) machine translation tools, helps to spread cross-language plagiarism.", "Crosslanguage plagiarism means plagiarism by translation, i.e.", "a text has been plagiarized while being translated (manually or automatically).", "The challenge in detecting this kind of plagiarism is that the suspicious document is no longer in the same language of its source.", "In this relatively new field of research, no systematic evaluation of the main methods, on several language pairs, for different text granularities and for different text genres, has been proposed yet.", "This is what we propose in this paper.", "Contribution.", "The paper focus is on crosslanguage semantic textual similarity detection which is the main part (with source retrieval) in cross-language plagiarism detection.", "The evaluation dataset used (Ferrero et al., 2016) allows us to run a large amount of experiments and analyses.", "To our knowledge, this is the first time that full potential of such a diverse dataset is used for benchmarking.", "So, the paper main contribution is a systematic evaluation of cross-language similarity detection methods (using in plagiarism detection) on different languages, sizes and genres of texts through a reproducible evaluation protocol.", "Robust conclusions are derived on the best methods while deeply analyzing correlations across document styles and languages.", "Due to space limitations, we only provide a subset of our experiments in the paper while more result tables and correlation analyses are provided as supplementary material on a Web link 1 .", "Outline.", "After presenting the dataset used for our study in section 2, and reviewing the stateof-the-art methods of cross-language plagiarism detection that we evaluate in section 3, we describe the evaluation protocol employed in section 4.", "Then, section 5.1 presents the correla-tion of the methods across language pairs, while section 5.2 presents a detailed analysis on only English-French pair.", "Finally, section 6 concludes this work and gives a few perspectives.", "Dataset The reference dataset used during our study is the new dataset 2 recently introduced by Ferrero et al.", "(2016) .", "The dataset was specially designed for a rigorous evaluation of cross-language textual similarity detection.", "The different characteristics of the dataset are synthesized in Table 1 , while Table 2 presents the number of aligned units by subcorpus and by granularity.", 
"More precisely, the characteristics of the dataset are the following: • it is multilingual: it contains French, English and Spanish texts; • it proposes cross-language alignment information at different granularities: document level, sentence level and chunk level; • it is based on both parallel and comparable corpora (mix of Wikipedia, scientific conference papers, amazon product reviews, Europarl and JRC); • it contains both human and machine translated texts; • it contains different percentages of named entities; • part of it has been obfuscated (to make the cross-language similarity detection more complicated) while the rest remains without noise; • the documents were written and translated by multiple types of authors (from average to professionals); • it covers various fields.", "Overview of State-of-the-Art Methods Textual similarity detection methods are not exactly methods to detect plagiarism.", "Plagiarism is a statement that someone copied text deliberately without attribution, while these methods only detect textual similarities.", "There is no way 2 https://github.com/FerreroJeremy/ Cross-Language-Dataset of knowing why texts are similar and thus to assimilate these similarities to plagiarism.", "At the moment, there are five classes of approaches for cross-language plagiarism detection.", "The aim of each method is to estimate if two textual units in different languages express the same message or not.", "Figure 1 presents a taxonomy of Potthast et al.", "(2011) , enriched by the study of Danilova (2013) , of the different cross-language plagiarism detection methods grouped by class of approaches.", "We only describe below the state-of-the-art methods that we evaluate in the paper, one for each class of approaches (those in bold in the Figure 1 ).", "Cross-Language Character N-Gram (CL-CnG) is based on Mcnamee and Mayfield (2004) model.", "We use the CL-C3G Potthast et al.", "(2011)'s implementation.", "Only spaces and alphanumeric characters are kept.", "Any other diacritic or symbol is deleted and the texts are lower-cased.", "The texts are then segmented into 3-grams (sequences of 3 contiguous characters) and transformed into tf.idf vectors of character 3-grams.", "The metric used to compare two vectors is the cosine similarity.", "Cross-Language Conceptual Thesaurus-based Similarity (CL-CTS) aims to measure the semantic similarity using abstract concepts from words in textual units.", "We reuse the idea of Pataki (2012) which, for each sentence, build a bag-ofwords by getting all the available translations of each word of the sentence.", "For that, we use a linked lexical resource called DBNary (Sérasset, 2015) .", "The bag-of-words of a sentence is the merge of the bag-of-words of the words of the sentence.", "After, we use the Jaccard distance (Jaccard, 1912) with fuzzy matching between two bag-ofwords to measure the similarity between two sentences.", "Cross-Language Alignment-based Similarity Analysis (CL-ASA) was introduced for the first time by Barrón-Cedeño et al.", "(2008) and developed subsequently by Pinto et al.", "(2009) .", "The model aims to determinate how a textual unit is potentially the translation of another textual unit using bilingual unigram dictionary which contains translations pairs (and their probabilities) extracted from a parallel corpus.", "Our lexical dictionary is calculated applying the IBM-1 model Danilova (2013) , of different approaches for cross-language similarity detection.", "(Brown et al., 1993) on the concatenation of TED 4 
(Cettolo et al., 2012) and News 5 parallel corpora.", "We reuse the implementation of Pinto et al.", "(2009) that proposed a formula that factored the alignment function.", "MT-Based Models Cross-Language Explicit Semantic Analysis (CL-ESA) is based on the explicit semantic analysis model introduced for the first time by Gabrilovich and Markovitch (2007) , which represents the meaning of a document by a vector based on the vocabulary derived from Wikipedia, to find a document within a corpus.", "It was reused by Potthast et al.", "(2008) in the context of cross-language document retrieval.", "Our implementation uses a part of Wikipedia, from which our test data was removed, to build the vector representations of the texts.", "Translation + Monolingual Analysis (T+MA) consists in translating suspect plagiarized text back into the same language of source text, in order to operate a monolingual comparison between them.", "We use the Muhr et al.", "(2010) 's implementation which consists in replacing each word of one text by its most likely translations in the language of the other text, leading to a bags-of-words.", "We use DBNary (Sérasset, 2015) to get the translations.", "The metric used to compare two texts is a monolingual matching based on strict intersection of bags-of-words.", "More recently, SemEval-2016 (Agirre et al., 2016) proposed a new subtask on evaluation of cross-lingual semantic textual similarity.", "Despite the fact that it was the first year that this subtask was attempted, there were 26 submissions from 10 teams.", "Most of the submissions relied on a machine translation step followed by a monolingual semantic similarity, but 4 teams tried to use learned vector representations (on words or sentences) combined with machine translation confidence (for instance the submission of Lo et al.", "(2016) or Ataman et al.", "(2016) ).", "The method that achieved the best performance (Brychcin and Svoboda, 2016 ) was a supervised system built on a word alignment-based method proposed by Sultan et al.", "(2015) .", "This very recent method is, however, not evaluated in this paper.", "Evaluation Protocol We apply the same evaluation protocol as in Ferrero et al.", "(2016)'s paper.", "We build a distance matrix of size N x M , with M = 1,000 and N = |S| where S is the evaluated sub-corpus.", "Each textual unit of S is compared to itself (actually, since this is cross-lingual similarity detection, each source language unit is compared to its corresponding unit in the target language) and to M -1 other units randomly selected from S. 
The same unit may be selected several times.", "Then, a matching score for each comparison performed is obtained, leading to the distance matrix.", "Thresholding on the matrix is applied to find the threshold giving the best F 1 score.", "The F 1 score is the harmonic mean of precision and recall.", "Precision is defined as the proportion of relevant matches (similar crosslanguage units) retrieved among all the matches retrieved.", "Recall is the proportion of relevant matches retrieved among all the relevant matches to retrieve.", "Each method is applied on each subcorpus for chunk and sentence granularities.", "For each configuration (i.e.", "a particular method applied on a particular sub-corpus considering a particular granularity), 10 folds are carried out by changing the M selected units.", "Investigation of Cross-Language Similarity Performances 5.1 Across Language Pairs Table 3 brings together the performances of all methods on all sub-corpora for each pair of languages at chunk and sentence level.", "In both sub-tables, at chunk and sentence level, the overall F 1 score over all sub-corpora of one method in one particular language pair is given.", "As a preliminary remark, one should note that CL-C3G and CL-ESA lead to the same results for a given language pair (same performance if we reverse source and target languages) due to their symmetrical property.", "Another remark we can make is that methods are consistent across language pairs: best performing methods are mostly the same, whatever the language pair considered.", "This is confirmed by the calculation of the Pearson correlation between performances of different pairs of languages, from Table 3 and reported in Table 4 .", "Table 4 represents the Pearson correlations between the different language pairs of the overall results of all methods on all sub-corpora.", "This result is interesting because some of these methods depend on the availability of lexical resources whose quality is heterogeneous across languages.", "Despite the variation of the source and target languages, a minimum Pearson correlation of 0.940 for EN→FR vs. FR→ES, and a maximum of 0.998 for EN→FR vs. EN→ES and ES→FR vs. FR→ES at chunk level is observed (see Table 4 ).", "For the sentence granularity, it is the same order of magnitude: the maximum Pearson correlation is 0.997 for ES→EN vs. EN→ES and ES→FR vs. FR→ES, and the minimum is 0.913 for EN→ES vs. 
FR→ES (see Table 4 ).", "In average the language pair EN→FR is 0.975 correlated with the other language pairs (0.980 at chunk-level and 0.971 at sentence-level), for instance.", "This correlation suggests the possibility to tune a method on one language and apply it to another language if needed.", "Table 5 synthesizes the top 3 methods for each language pair observed in Tables 3 and 4 .", "No matter the source and target languages or the granularity, CL-C3G generally outperforms the other methods.", "Then CL-ASA, CL-CTS and T+MA are also closely efficient but their behavior depends on the granularity.", "Generally, CL-ASA is better at the chunk granularity, followed by CL-CTS and T+MA.", "On the contrary, CL-CTS and T+MA are slightly more effective at sentence granularity.", "One explanation for this is that T+MA depends on the quality of machine translation, which may have poor performance on isolated chunks, while a short length text unit benefits the CL-CTS and CL-ASA methods because of their formula which Table 4 : Pearson correlations of the overall F 1 score over all sub-corpora of all methods between the different language pairs (EN: English; FR: French; ES: Spanish).", "will tend to minimize the number of false positives in this case.", "Anyway, despite these differences in ranking, the gap in term of performance values is small between these closest methods.", "For instance, we can see that when CL-CTS is more efficient than CL-C3G (ES→FR column at sentence level in Table 3 and Table 5 (b)), the difference of performance is very small (0.0068).", "Table 6 shows the Pearson correlations of the results (of all methods on all sub-corpora) by language pair between the chunk and the sentence granularity (correlations calculated from Table 3 , between the EN→FR column at chunk level with the EN→FR column at sentence level, and so on).", "We can see a strong Pearson correlation of the performances on the language pair between the chunk and the sentence granularity (an average of 0.9, with 0.907 for the EN→FR pair, for instance).", "This proves that all methods behave along a simi- lar trend at chunk and at sentence level, regardless of the languages on which they are used.", "However, we can see in Table 7 that if we collect correlation scores separately for each method (on all sub-corpora, on all language pairs) between chunk Table 7 : Pearson correlations of the results on all sub-corpora on all language pairs, between the chunk and the sentence granularity, by methods (calculated from Table 3 ).", "and sentence granularity performances (correlations also calculated from Table 3 , between the CL-C3G line at chunk level with the CL-C3G line at sentence level, and so on), we notice that some methods exhibit a different behavior at both chunk and sentence granularities: for instance, this is the case for CL-ASA which seems to be really better at chunk level.", "In conclusion, we can say that the methods presented here may behave slightly differently depending on the text unit considered (chunk or sentence) but they behave practically the same no matter the languages of the compared texts are (as long as enough lexical resources are available for dealing with these languages).", "Detailed Analysis for English-French The previous sub-section has shown a consistent behavior of methods across language pairs (strongly consistent) and granularities (less strongly consistent).", "For this reason, we now propose a detailed analysis for different sub-corpora, for the English-French language pair 
-at chunk and sentence level -only.", "Providing these results for all language pairs and granularities would take too much space.", "Moreover, we also ran those state-of-the-art methods on the dataset of the Spanish-English cross-lingual Semantic Textual Similarity task of SemEval-2016 (Agirre et al., 2016) and SemEval-2017 (Cer et al., 2017), and propose a shallower but equally rigorous analysis.", "However, all those results are also made available as supplementary material on our paper Web page.", "Table 8 shows the performances of the methods on the EN→FR sub-corpora.", "As mentioned earlier, CL-C3G is in general the most effective method.", "CL-ESA seems to show better results on comparable corpora, like Wikipedia.", "In contrast, CL-ASA obtains better results on parallel corpora such as the JRC or Europarl collections.", "CL-CTS and T+MA are pretty efficient and versatile too.", "It is also interesting to note that the results of the methods are well correlated between certain types of sub-corpora.", "For instance, the Pearson correlation of the performances of all methods between the TALN sub-corpus and the APR sub-corpus is 0.982 at the chunk level, and 0.937 at the sentence level.", "This means that a method could be optimized on a particular corpus (for instance APR) and applied efficiently on another corpus (for instance TALN, which is made of scientific conference papers).", "[Figure 2 (caption): Distribution histograms of some state-of-the-art methods for 1000 positives and 1000 negatives (mis)matches.]", "Beyond their capacity to correctly predict a (mis)match, an interesting feature of the methods is their clustering capacity, i.e. their ability to correctly separate the positives (cross-lingual semantically similar textual units) and the negatives (textual units with different meanings) in order to minimize the doubts on the classification.", "[Table 9 (caption): Precision (P), Recall (R) and F1 score, reached at a certain threshold (T), of some state-of-the-art methods for a data subset made of 1000 positives and 1000 negatives (mis)matches (10-fold validation).]", "To verify this phenomenon, we conducted another experiment with a new protocol.", "We built a data subset by concatenating some documents of the previously presented dataset (Ferrero et al., 2016).", "More precisely, we used 200 pairs from each sub-corpus, at sentence level only.", "We compared 1000 English textual units to their corresponding unit in French, and to one other (not relevant) French unit.", "So, each English textual unit must strictly lead to one match and one mismatch, i.e. in the end, we have exactly 1000 matches and 1000 mismatches per run.", "We repeated this experiment 10 times for each method, leading to 10 folds per method.", "The results of this experiment are reported in Table 9, which shows the average over the 10 folds of the Precision (P), the Recall (R) and the F1 score of some state-of-the-art methods, reached at a certain threshold (T).", "The results are also reported in Figure 2, in the form of distribution histograms of the evaluated methods for 1000 positives and 1000 negatives (mis)matches.", "The X-axis represents the similarity score (in percentage) computed by the method, and the Y-axis represents the number of (mis)matches found for a given similarity score.", "In white, in the upper part of the figures, are the positives (units that needed to be matched), and in black, in the lower part, the negatives (units that should not be matched).", "The distribution histograms in Figure 2 highlight the fact that each method has its own fingerprint: even if two methods look equivalent in terms of performance (see Table 9), their clustering capacity, and so the distribution of their (mis)matches, can be different.", "For instance, we can see that a random distribution is a very bad distribution (Figure 2 (a)).", "We can also see that CL-C3G has a narrow distribution of negatives and a broad distribution of positives (Figure 2 (c)), whereas the opposite is true for CL-ASA (Figure 2 (e)).", "Table 9 confirms this phenomenon through the fact that the decision threshold is very different for CL-ASA (0.762) compared to the other methods (around 0.1).", "This means that CL-ASA discriminates the positives more accurately than the negatives, whereas the opposite seems to be the case for the other methods.", "For this reason, we can make the assumption that some methods are complementary, due to their different fingerprints.", "These behaviors suggest that fusion between these methods (notably decision-tree-based fusion) should lead to very promising results.", "Conclusion We conducted a deep investigation of cross-language plagiarism detection methods on a challenging dataset.", "Our results have shown a common behavior of methods across different language pairs.", "We revealed strong correlations across languages but also across the text units considered.", "This means that when a method is more effective than another on a sufficiently large dataset, it is generally more effective in any other case.", "This also means that if a method is efficient on a particular language pair, it will be similarly efficient on another language pair as long as enough lexical resources are available for these languages.", "We also investigated the behavior of the methods through the different types of texts on a particular language pair: English-French.", "We revealed strong correlations across types of texts.", "This means that a method could be optimized on a particular corpus and applied efficiently on another corpus.", "Finally, we have shown that methods behave differently in clustering matched and mismatched units, even if they seem similar in performance.", "This opens new possibilities for their combination or fusion.", "More results supporting these facts are provided as supplementary material." ] }
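The match/mismatch protocol above (Table 9) amounts to sweeping a decision threshold over similarity scores and reporting the precision, recall and F1 reached at the best threshold. Below is a minimal sketch of that computation, assuming the similarity scores for the 1000 positives and 1000 negatives of one fold are already available as arrays; the function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def best_threshold_prf(positive_scores, negative_scores, steps=1000):
    """Sweep a threshold T over [0, 1] and return the best (F1, P, R, T)."""
    pos = np.asarray(positive_scores)
    neg = np.asarray(negative_scores)
    best = (0.0, 0.0, 0.0, 0.0)  # (f1, precision, recall, threshold)
    for t in np.linspace(0.0, 1.0, steps):
        tp = np.sum(pos >= t)   # positives correctly matched at threshold t
        fp = np.sum(neg >= t)   # negatives wrongly matched
        fn = np.sum(pos < t)    # positives missed
        if tp == 0:
            continue
        p = tp / (tp + fp)
        r = tp / (tp + fn)
        f1 = 2 * p * r / (p + r)
        if f1 > best[0]:
            best = (f1, p, r, t)
    return best

# Averaging over the 10 folds of the experiment would look like:
# folds = [(pos_scores_i, neg_scores_i) for i in range(10)]
# results = [best_threshold_prf(p, n) for p, n in folds]
```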
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.2", "6" ], "paper_header_content": [ "Introduction", "Dataset", "Overview of State-of-the-Art Methods", "Evaluation Protocol", "Investigation of Cross-Language", "Detailed Analysis for English-French", "Conclusion" ] }
GEM-SciDuet-train-62#paper-1126#slide-10
Conclusion
Results show a common behavior of methods across different language pairs; Strong correlations across languages, sizes and types of texts; Methods behave differently in clustering, even if they seem similar in performance: combination or fusion? I invite you to come see my poster this afternoon at the SemEval workshop to verify this
Results show a common behavior of methods across different language pairs; Strong correlations across languages, sizes and types of texts; Methods behave differently in clustering, even if they seem similar in performance: combination or fusion? I invite you to come see my poster this afternoon at the SemEval workshop to verify this
[]
GEM-SciDuet-train-63#paper-1130#slide-0
1130
A High Coverage Method for Automatic False Friends Detection for Spanish and Portuguese
False friends are words in two languages that look or sound similar, but have different meanings. They are a common source of confusion among language learners. Methods to detect them automatically do exist; however, they make use of large aligned bilingual corpora, which are hard to find and expensive to build, or encounter problems dealing with infrequent words. In this work we propose a high coverage method that uses word vector representations to build a false friends classifier for any pair of languages, which we apply to the particular case of Spanish and Portuguese. The required resources are a large corpus for each language and a small bilingual lexicon for the pair.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163 ], "paper_content_text": [ "Introduction Closely related languages often share a significant number of similar words which may have different meanings in each language.", "Similar words with different meanings are called false friends, while similar words sharing meaning are called cognates.", "For instance, between Spanish and Portuguese, the amount of cognates reaches the 85% of the total vocabulary (Ulsh, 1971) .", "This fact represents a clear advantage for language learners, but it may also lead to an important number of interferences, since similar words will be interpreted as in the native language, which is not correct in the case of false friends.", "Generally, the expression false friends refers not only to pairs of identical words, but also to pairs of similar words, differing in a few characters.", "Thus, the Spanish verb halagar (\"to flatten\") and the similar Portuguese verb alagar (\"to flood\") are usually considered false friends.", "Besides traditional false friends, that are similar words with different meanings, Humblé (2006) analyses three more types.", "First, he mentions words with similar meanings but used in different contexts, as esclarecer, which is used in a few contexts in Spanish (esclarecer un crimen, \"clarify a crime\"), but not in other contexts where aclarar is used (aclarar una duda, \"clarify a doubt\"), while in Portuguese esclarecer is used in all these contexts.", "Secondly, there are similar words with partial meaning differences, as abrigo, which in Spanish means \"shelter\" and \"coat\", but in Portuguese has just the first meaning.", "Finally, Humblé (2006) also considers false friends as similar words with the same meaning but used in different syntactic structures in each language, as the Spanish verb hablar (\"to speak\"), which does not accept a sentential direct object, and its Portuguese equivalent falar, which does (*yo hablé que .", ".", ".", "/ eu falei que .", ".", ".", ", *\"I spoke that .", ".", ".", "\").", "These non-traditional false friends are more difficult to detect by language learners than traditional ones, because of their subtle differences.", "Having a list of false friends can help native speakers of one language to avoid confusion when speaking and writing in the other language.", "Such a list could be integrated into a writing assistant to prevent the writer when using these words.", "For Spanish/Portuguese, in particular, while there are printed dictionaries that compile false friends (Otero Brabo Cruz, 2004) , we did not find a complete digital false friends list, therefore, an automatic method for false friends detection would be useful.", "Furthermore, it is interesting to study methods which could generate false friends lists for any pair of similar languages, particularly, languages for which 
this phenomenon has not been studied.", "In this work we present an automatic method for false friends detection.", "We focus on the traditional false friends definition (similar words with different meanings) because of the dataset we have available, and also to present our method in a simple context.", "We describe a supervised classifier we constructed to distinguish false friends from cognates based on word embeddings.", "Although for the method development and evaluation we used Spanish and Portuguese, the method could be applied to other language pairs, provided that the resources needed for building the method are available.", "We do not deal with the problem of determining whether two words are similar or not, which is prior to the issue we tackle.", "The paper is organized as follows: in Section 2 we describe some related work, in Section 3 we introduce the word embeddings used in this work, in Section 4 we describe our method, and in Section 5 we present and analyze the experiments carried out.", "Finally, in Section 6, we present our conclusions and sketch some future work.", "Related Work Previous work uses a combination of orthographic, syntactic, semantic and frequency-based features.", "Frunza (2006) worked with French and English, focusing only on orthographic features via a supervised machine learning algorithm.", "While this method can work in some cases - e.g. to detect true cognates with a common root, such as inaccesible in Spanish and inacessível in Portuguese (\"inaccessible\"), which come from the Latin word inaccessibilis - it does not take into account the meanings of the words.", "Mitkov et al. (2007) used both a distributional and a taxonomy-based approach for multiple language pairs: English-French, English-German, English-Spanish and French-Spanish.", "For the former approach, they build vectors based on the words that appear in a window in the corpus, computing the co-occurrence probability.", "Then they defined two methods for classification: one that considers the N nearest neighbors for each word in the pair and computes the Dice coefficient to determine the similarity between both, and another one that is similar but uses syntactically related words instead of the adjacent words.", "Additionally, they evaluated a method which uses a taxonomy to classify false friends, and falls back to the distributional similarity for words not included in the taxonomy.", "They achieved better results under this experiment than when only using the distributional similarity.", "Based on the former technique, Ljubešic et al. (2013) focused on detecting false friends in closely related languages: Slovene and Croatian.", "Likewise, they exploited a distributional technique but also proposed the use of Pointwise Mutual Information (PMI) as an effective way to classify false friends via the frequencies in the corpora.", "Sepúlveda and Aluísio (2011) tackled this task for Portuguese and Spanish, taking the same orthographic approach as Frunza (2006).", "Nonetheless, they carried out an additional experiment in which they added a new feature whose value is the likelihood of one of the words of the pair being a translation of the other one.", "This number was obtained from a probabilistic Spanish-Portuguese dictionary, previously generated from a large sentence-aligned bilingual corpus.", "Word Vector Representations As seen in the previous section, some authors (Mitkov et al., 2007; Ljubešic et al., 2013) represented words as vectors by counting occurrences or by building tf-idf vectors, 
among other techniques.", "Similarly, Mikolov et al.", "(2013a) proposed an unsupervised technique, known as word2vec, to efficiently represent words as vectors from a large unlabeled corpus, which has proven to outperform several other representations in tasks involving text as input (LeCun et al., 2015) .", "As it is a vector-based distributional representation technique, it is based on computing a vector space in which vectors are close if their corresponding words appear frequently in the same contexts in the corpus used to train it.", "Interesting relationships and patterns are learned in particular with this method, e.g.", "the result of the vector calculation vector(\"M adrid ) − vector(\"Spain ) + vector(\"F rance ) is closer to vector(\"P aris ) than to any other word vector (Mikolov et al., 2013a) .", "Additionally, Mikolov et al.", "(2013c) has shown a technique properties.", "The 2D graphs represent Spanish and Portuguese word spaces after applying PCA, scaling and rotating to exaggerate the similarities and emphasize the differences.", "The left graph is the source language vector space (in this case Spanish) and the right one is the target language vector space (Portuguese).", "to detect common phrases such as \"New York\" to be part of the vector space, being able to detect more entities and at the same time enhancing the context of others.", "To exploit multi-language capabilities, Mikolov et al.", "(2013b) developed a method to automatically generate dictionaries and phrase tables from small bilingual data (translation word pairs), based on the calculation of a linear transformation between the vector spaces built with word2vec.", "This is presented as an optimization problem that tries to minimize the sum of the Euclidean distances between the translated source word vectors and the target vectors of each pair, and the translation matrix is obtained by means of stochastic gradient descent.", "We chose this distributional representation technique because of this translation property, which is what our method is mainly based on.", "These concepts around word2vec are shown in Fig.", "1 .", "In the example, the five word vectors corresponding to the numbers from \"one\" to \"five\" are shown, and also the word vector \"carpet\" for each language.", "More related words have closer vectors, while unrelated word vectors are at a greater distance.", "At the same time, groups of words are arranged in a similar way, allowing to build translation candidates.", "Method Description As false friends are word pairs in which one seems to be a translation of the other one, our idea is to compare their vectors using Mikolov et al.", "(2013b) technique.", "Our hypothesis is that a word vector in one language should be close to the cognate word vector in another language when it is transformed using this technique, but far when they are false friends, as described hereafter.", "First, we exploited the Spanish and Portuguese Wikipedia's (containing several hundreds of thousands of words) to build the vector spaces we needed, using Gensim's skip-gram based word2vec implementation (Řehůřek and Sojka, 2010) .", "The preprocessing of the Wikipedia's involved the following steps.", "The text was tokenized based on the alphabet of each language, removing words that contain other characters.", "Numbers were converted to their equivalent words.", "Wikipedia non-article pages were removed (e.g.", "disambiguation pages) and punctuation marks were discarded as well.", "Portuguese was harder to tokenize 
because the hyphen is widely used as part of words in the language.", "For example, bem-vindo (\"welcome\") is a single word, whereas Uruguai-Japão (\"Uruguay-Japan\") in jogo Uruguai-Japão (\"Uruguay-Japan match\") consists of two different words, joined with a hyphen only in some contexts.", "The right option is to treat them as separate tokens, in order to avoid spurious words in the model and to provide more information to existing words (Uruguai and Japão).", "As the word embedding method exploits the text at the level of sentences (and to avoid splitting ambiguous sentences), paragraphs were used as sentences, which still keep semantic relationships.", "A word had to appear at least five times in the corresponding Wikipedia to be considered for construction of the vector space.", "[Figure 2 (caption fragment): The 2D graphs represent the word spaces after applying PCA, scaling and rotating to emphasize the differences. The left graph is the source language vector space (in this case Spanish) and the right one is the target language vector space (Portuguese).]", "Secondly, WordNet (Fellbaum, 1998) was used as the bilingual lexicon to build the linear transformation between the vector spaces, applying the same technique described in (Mikolov et al., 2013b) and taking advantage of the multi-language synset alignment available in NLTK (Bird et al., 2009) between Spanish (Gonzalez-Agirre et al., 2012) and Portuguese (de Paiva and Rademaker, 2012), based on Open Multilingual WordNet (Bond and Paik, 2012).", "We generated this lexicon by iterating through each of the 40,000 WordNet synsets and forming pairs from their most common Spanish word and Portuguese word.", "Note that this is a small figure compared with the corpus sizes, and we show in the next section that it could be considerably lower.", "We also show that the transformation source need not be WordNet (we used it just for convenience), which is an expensive and carefully handcrafted resource; it could be just a bilingual dictionary.", "Finally, we defined a method to distinguish false friends from cognates.", "We built a binary classifier that determines the class, false friends or cognates, for each pair of similar words.", "Given a candidate pair (source_word, target_word), and the corresponding vectors (source_vector, target_vector), the first step consists of transforming source_vector to the space computed for the target language, using the transformation described above.", "Let T(source_vector) be the result of this transformation.", "Then, to determine if source_word and target_word are cognates (if one of them is a possible translation of the other one), we analyzed the relationship between T(source_vector) and target_vector.", "According to Mikolov et al. (2013b), the transformation we compute between the vector spaces keeps semantic relations between words from the source space to the target space.", "So, if (source_word, target_word) is a pair of cognates, then T(source_vector) should be close to target_vector.", "Otherwise, source_word and target_word are false friends.", "The method is illustrated in Fig. 2.", "In the example, the pair (persona, pessoa) are cognates (meaning \"person\" in English) while the pair (afeitar, afectar) are false friends (meaning \"to shave\" and \"to affect\", respectively).", "If we transform the source word vectors (persona and afeitar) and thus obtain vectors in the target vector space, T(persona) and pessoa are close while T(afeitar) and afectar are far from each other (while a 
valid translation of afeitar, barbear, is close to T(afeitar)).", "Following this idea, a threshold needs to be established by which two words are considered cognates.", "In addition to this, we wanted to see whether related properties could help to constitute an acceptable division.", "Hence, we trained and tested, by means of cross-validation, a supervised binary Support Vector Machine classifier based on three features (see the feature-extraction sketch after the paper text below): • Feature 1: the cosine distance between T(source_vector) and target_vector.", "• Feature 2: the number of word vectors in the target vector space closer to target_vector than T(source_vector), using the cosine distance.", "We believe that in some cases the distance for cognates may be larger, but what counts is whether the transformed vector lies among the closest ones to the target vector.", "• Feature 3: the sum of the distances between target_vector and T(source_vector_i) for the five word vectors source_vector_i nearest to source_vector, using the cosine distance.", "The idea here is that the first feature may be error prone since it only considers one vector, so considering more vectors (by taking both the context from the source vector and the one from its transformed vector) should reduce the variance, as neighbor word vectors of the source word should be neighbors of the target word.", "We carried out different experiments alternating the language we used as the source and the language we used as the target, and also other parameters, which we show in the next section.", "The source code is public and available to use.", "Experimental Analysis Unfortunately, we are not able to compare our method to several others presented by other authors, as they are based not only on non-public code, but also on non-public datasets which are not directly comparable with the one used here.", "Nevertheless, we compare our technique against several methods for the particular case of Spanish and Portuguese and show it is solid.", "First, we set a simple baseline that does the following: it checks if there exists a WordNet synset which contains both words of the pair among its Spanish and Portuguese words, and if it does, then they are considered cognates.", "Then, we compare to the Machine Translation software Apertium: we take one of the pair's words, translate it and check if the translation matches the other word.", "We chose this software since it can be accessed offline and it is freely available.", "Apart from this, we compare with the Sepúlveda and Aluísio (2011, experiments 2 and 3.2) methods and also with a variant of our method that adds a word frequency feature (the relative number of times each word appeared in the corpus).", "Word frequencies are used by other authors and we believe they are a different data source from what the word2vec vectors can provide.", "For these experiments we use the same data set as in (Sepúlveda and Aluísio, 2011).", "This resource is composed of 710 Spanish-Portuguese word pairs: 338 cognates and 372 false friends.", "The word pairs were selected from the following resources: an online Spanish-Brazilian Portuguese dictionary, an online Spanish-Portuguese dictionary, a list of the most frequent words in Portuguese and Spanish, and an online list of different words in Portuguese and Spanish.", "There are no multi-word expressions and roughly half of the pairs are composed of identically spelled words.", "It was annotated by two people.", "It is important to consider that word coverage is a concern in this task, since every method can 
only work when the pair's words are present in its resources (in other words, when they are not out of the method's vocabulary).", "The accuracy thus only takes into account the covered pairs.", "The coverage for the simple baseline can be measured by counting the pairs where both words are present in WordNet.", "Sepúlveda and Aluísio (2011, experiment 2) only considers orthographic and phonetic differences, so it always covers all pairs.", "Sepúlveda and Aluísio (2011, experiment 3.2) uses a dictionary, so the pairs that are in it count towards the coverage.", "The words that could not be translated by Apertium are counted against the coverage of its related method.", "Finally, the pairs that cannot be translated into vectors are counted as not covered by our methods.", "Results are shown in Table 1.", "It can be appreciated that our method provides both high accuracy and coverage, and that the word embedding information can be further improved if additional information, such as word frequencies, is included.", "We also tested a version of our method that only uses Feature 1 via logistic regression, which reduced the accuracy by roughly 3%, showing that the other two features add some missing information that improves the accuracy.", "As an additional experiment, we tried exploiting WordNet to compute taxonomy-based distances as features in the same manner as Mitkov et al. (2007) did, but we did not obtain a significant difference; thus we conclude that it does not add information to what already lies in the features built upon the embeddings.", "As Mikolov et al. (2013b) did, we wondered how our method works under different vector configurations, hence we carried out several experiments varying the vector space dimensions.", "We also experimented with vectors for phrases of up to two words.", "Finally, we evaluated how the choice of the source language, Spanish or Portuguese, affects the results.", "The accuracy obtained for the ten best configurations, and for the experiment with two-word phrase vectors, is presented in Table 2.", "For the experiment we used the vector dimensions 100, 200, 400 and 800; Spanish and Portuguese as the source vector space; and we also tried a single run with two-word phrases (with Spanish as source and 100 as the vector dimension), summing up to 33 configurations in total.", "As can be noted, there are no significant differences in the accuracy of our method when varying the vector sizes.", "Higher dimensions do not provide better results, and they even worsen when the target language dimension is greater than or equal to the source language dimension, as Mikolov et al. (2013b) claimed.", "Taking Spanish as the source language seems to be better; this may be due to the corpus sizes: the corpus used to generate the Spanish vector space is 1.4 times larger than the one used for Portuguese.", "Finally, we can observe that including vectors for two-word phrases does not improve results.", "Linear Transformation Analysis We were curious to know how different qualities and quantities of bilingual lexicon entries would affect our method's performance.", "We show how the accuracy varies according to the bilingual lexicon size and its source in Fig. 3.", "[Figure 3 (caption): Accuracy of our method with respect to different bilingual lexicon sizes and sources. WN is the original approach we take to build the bilingual lexicon; WN all is a method that takes every pair of lemmas from both languages in every WordNet synset; and Apertium uses the translations of the top 50,000 Spanish words by frequency in the Wikipedia (those that could be translated to Portuguese). Note that the usage of Apertium here has nothing to do with the Apertium baseline.]", "WN seems to be slightly better than using Apertium as the source, although they both perform well.", "Also, both rapidly achieve acceptable results, with less than a thousand entries, and yield stable results when the number of entries is larger.", "This is not the case for the method WN all, which needs more word pairs to achieve reasonable results (around 5,000) and is less stable with a larger number of entries.", "Even though we use WordNet to build the lexicon, which is a rich and expensive resource, it could also be built with lower-quality entries, such as those that come from the output of Machine Translation software, or just by having a list of known word translations.", "Furthermore, as our method proved to work with a small number of word pairs, it can be applied to language pairs with scarce bilingual resources.", "Additionally, it is interesting to observe that despite the fact that some test set pairs may appear in the bilingual lexicon on which our method is based, after changing it (by reducing its size or using Apertium) the method still shows great performance.", "This suggests the results are not biased towards the test set used in this work.", "Conclusions and Future Work We have provided an approach to classify false friends and cognates which showed both high accuracy and coverage, studying it for the particular case of Spanish and Portuguese and providing state-of-the-art results for this pair of languages.", "Here we use up-to-date word embedding techniques, which have shown to excel in other tasks, and which can be enriched with other information, such as word frequencies, to enhance the classifier.", "In the future we want to experiment with other word vector representations and state-of-the-art vector space linear transformations such as those of (Artetxe et al., 2017; Artetxe et al., 2018).", "Also, we would like to work on fine-grained classifications; as we mentioned before, there are some word pairs that behave like cognates in some cases but like false friends in others.", "Our method can be applied to any pair of languages, without requiring a large bilingual corpus or taxonomy, which can be hard to find or expensive to build.", "In contrast, large untagged monolingual corpora are easily obtained on the Internet.", "Similar languages, which commonly have a high number of false friends, can benefit from the technique we present in this document, for example by generating a list of false friends pairs automatically based on words that are written in both languages in the same way." ] }
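The following is a minimal sketch of the linear transformation at the core of the method described above (Mikolov et al., 2013b): learn a matrix W mapping Spanish word vectors into the Portuguese space by stochastic gradient descent over the bilingual lexicon pairs. It assumes gensim KeyedVectors models trained on the two Wikipedias and a list of (Spanish, Portuguese) word pairs; all names are illustrative, not the authors' code.

```python
import numpy as np

def train_translation_matrix(es_vectors, pt_vectors, lexicon,
                             lr=0.01, epochs=20, seed=0):
    """Learn W minimising sum_i ||W x_i - z_i||^2 over lexicon pairs via SGD."""
    rng = np.random.default_rng(seed)
    # Keep only lexicon pairs whose words survived the min-count filter.
    # Length-normalizing the vectors beforehand can help convergence.
    pairs = [(es_vectors[es], pt_vectors[pt])
             for es, pt in lexicon
             if es in es_vectors and pt in pt_vectors]
    dim_src = pairs[0][0].shape[0]
    dim_tgt = pairs[0][1].shape[0]
    W = rng.normal(scale=0.01, size=(dim_tgt, dim_src))
    for _ in range(epochs):
        for i in rng.permutation(len(pairs)):  # shuffled pass over the pairs
            x, z = pairs[i]
            err = W @ x - z              # residual in the target space
            W -= lr * np.outer(err, x)   # gradient step on ||Wx - z||^2 / 2
    return W

def translate(W, source_vector):
    """Map a source-language word vector into the target space: T(v)."""
    return W @ source_vector
```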
{ "paper_header_number": [ "1", "2", "3", "4", "5.1", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Word Vector Representations", "Method Description", "Linear Transformation Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-63#paper-1130#slide-0
Introduction
Objective: classify word pairs as false friends or cognates for Spanish and Portuguese. False friends: pairs of words from different languages that are written or pronounced in a similar way, but have different meanings.
Objective: classify word pairs as false friends or cognates for Spanish and Portuguese. False friends: pairs of words from different languages that are written or pronounced in a similar way, but have different meanings.
[]
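For completeness, the simple WordNet baseline used in the experimental comparison above can be sketched with NLTK's Open Multilingual WordNet interface. Coverage handling (pairs where a word is missing from WordNet) is omitted here; the helper name is an assumption, while the 'spa'/'por' language codes follow NLTK's OMW conventions.

```python
# Requires the 'wordnet' and 'omw-1.4' NLTK data packages
# (nltk.download("wordnet"); nltk.download("omw-1.4")).
from nltk.corpus import wordnet as wn

def wordnet_baseline(spanish_word, portuguese_word):
    """Label a pair as cognates if some synset lists both words."""
    for synset in wn.synsets(spanish_word, lang="spa"):
        if portuguese_word in synset.lemma_names(lang="por"):
            return "cognates"
    return "false friends"

# e.g. wordnet_baseline("persona", "pessoa") is expected to return "cognates"
```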
GEM-SciDuet-train-63#paper-1130#slide-1
GEM-SciDuet-train-63#paper-1130#slide-1
Example False Friends
obligado / obrigado; no / no; aceite / aceite; borracha / borracha; cadera / cadeira; desenvolver / desenvolver; propina / propina
obligado / obrigado; no / no; aceite / aceite; borracha / borracha; cadera / cadeira; desenvolver / desenvolver; propina / propina
[]
GEM-SciDuet-train-63#paper-1130#slide-2
1130
A High Coverage Method for Automatic False Friends Detection for Spanish and Portuguese
False friends are words in two languages that look or sound similar, but have different meanings. They are a common source of confusion among language learners. Methods to detect them automatically do exist, however they make use of large aligned bilingual corpora, which are hard to find and expensive to build, or encounter problems dealing with infrequent words. In this work we propose a high coverage method that uses word vector representations to build a false friends classifier for any pair of languages, which we apply to the particular case of Spanish and Portuguese. The required resources are a large corpus for each language and a small bilingual lexicon for the pair. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: https:// creativecommons.org/licenses/by/4.0/.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163 ], "paper_content_text": [ "Introduction Closely related languages often share a significant number of similar words which may have different meanings in each language.", "Similar words with different meanings are called false friends, while similar words sharing meaning are called cognates.", "For instance, between Spanish and Portuguese, the amount of cognates reaches the 85% of the total vocabulary (Ulsh, 1971) .", "This fact represents a clear advantage for language learners, but it may also lead to an important number of interferences, since similar words will be interpreted as in the native language, which is not correct in the case of false friends.", "Generally, the expression false friends refers not only to pairs of identical words, but also to pairs of similar words, differing in a few characters.", "Thus, the Spanish verb halagar (\"to flatten\") and the similar Portuguese verb alagar (\"to flood\") are usually considered false friends.", "Besides traditional false friends, that are similar words with different meanings, Humblé (2006) analyses three more types.", "First, he mentions words with similar meanings but used in different contexts, as esclarecer, which is used in a few contexts in Spanish (esclarecer un crimen, \"clarify a crime\"), but not in other contexts where aclarar is used (aclarar una duda, \"clarify a doubt\"), while in Portuguese esclarecer is used in all these contexts.", "Secondly, there are similar words with partial meaning differences, as abrigo, which in Spanish means \"shelter\" and \"coat\", but in Portuguese has just the first meaning.", "Finally, Humblé (2006) also considers false friends as similar words with the same meaning but used in different syntactic structures in each language, as the Spanish verb hablar (\"to speak\"), which does not accept a sentential direct object, and its Portuguese equivalent falar, which does (*yo hablé que .", ".", ".", "/ eu falei que .", ".", ".", ", *\"I spoke that .", ".", ".", "\").", "These non-traditional false friends are more difficult to detect by language learners than traditional ones, because of their subtle differences.", "Having a list of false friends can help native speakers of one language to avoid confusion when speaking and writing in the other language.", "Such a list could be integrated into a writing assistant to prevent the writer when using these words.", "For Spanish/Portuguese, in particular, while there are printed dictionaries that compile false friends (Otero Brabo Cruz, 2004) , we did not find a complete digital false friends list, therefore, an automatic method for false friends detection would be useful.", "Furthermore, it is interesting to study methods which could generate false friends lists for any pair of similar languages, particularly, languages for which 
"In this work we present an automatic method for false friends detection.", "We focus on the traditional false friends definition (similar words with different meanings) because of the dataset we have available and also to present our method in a simple context.", "We describe a supervised classifier we constructed to distinguish false friends from cognates based on word embeddings.", "Although for the method development and evaluation we used Spanish and Portuguese, the method could be applied to other language pairs, provided that the resources needed to build the method are available.", "We do not deal with the problem of determining whether two words are similar or not, which is prior to the issue we tackle.", "The paper is organized as follows: in Section 2 we describe some related work, in Section 3 we introduce the word embeddings used in this work, in Section 4 we describe our method, in Section 5 we present and analyze the experiments carried out.", "Finally, in Section 6, we present our conclusions and sketch some future work.", "Related Work Previous work uses a combination of orthographic, syntactic, semantic and frequency-based features.", "Frunza (2006) worked with French and English, focusing only on orthographic features via a supervised machine learning algorithm.", "While this method can work in some cases - e.g. to detect true cognates with a common root, such as inaccesible in Spanish and inacessível in Portuguese (\"inaccessible\"), that come from the Latin word inaccessibilis - it does not take into account the meanings of the words.", "Mitkov et al. (2007) used both a distributional and a taxonomy-based approach for multiple language pairs: English-French, English-German, English-Spanish and French-Spanish.", "For the former approach, they build vectors based on the words that appear in a window in the corpus, computing the co-occurrence probability.", "Then they defined two methods for classification: one that considers the N nearest neighbors of each word in the pair and computes the Dice coefficient to determine the similarity between both, and another one that is similar but uses syntactically related words instead of the adjacent words.", "Additionally, they evaluated a method which uses a taxonomy to classify false friends, and falls back to the distributional similarity for words not included in the taxonomy.", "They achieved better results under this experiment than only using the distributional similarity.", "Based on the former technique, Ljubešic et al. (2013) focused on detecting false friends in closely related languages: Slovene and Croatian.", "Likewise, they exploited a distributional technique but also proposed the use of Pointwise Mutual Information (PMI) as an effective way to classify false friends via the frequencies in the corpora.", "Sepúlveda and Aluísio (2011) tackled this task for Portuguese and Spanish, taking the same orthographic approach as Frunza (2006).", "Nonetheless, they carried out an additional experiment in which they added a new feature whose value is the likelihood of one of the words of the pair being a translation of the other one.", "This number was obtained from a probabilistic Spanish-Portuguese dictionary, previously generated from a large sentence-aligned bilingual corpus.", "Word Vector Representations As seen in the previous section, some authors (Mitkov et al., 2007; Ljubešic et al., 2013) represented words as vectors by counting occurrences or by building tf-idf vectors, among other techniques.",
"Similarly, Mikolov et al. (2013a) proposed an unsupervised technique, known as word2vec, to efficiently represent words as vectors from a large unlabeled corpus, which has proven to outperform several other representations in tasks involving text as input (LeCun et al., 2015).", "As it is a vector-based distributional representation technique, it is based on computing a vector space in which vectors are close if their corresponding words appear frequently in the same contexts in the corpus used to train it.", "Interesting relationships and patterns are learned in particular with this method, e.g. the result of the vector calculation vector(\"Madrid\") − vector(\"Spain\") + vector(\"France\") is closer to vector(\"Paris\") than to any other word vector (Mikolov et al., 2013a).", "Additionally, Mikolov et al. (2013c) have shown a technique to detect common phrases such as \"New York\" to be part of the vector space, being able to detect more entities and at the same time enhancing the context of others.", "To exploit multi-language capabilities, Mikolov et al. (2013b) developed a method to automatically generate dictionaries and phrase tables from small bilingual data (translation word pairs), based on the calculation of a linear transformation between the vector spaces built with word2vec.", "This is presented as an optimization problem that tries to minimize the sum of the Euclidean distances between the translated source word vectors and the target vectors of each pair, and the translation matrix is obtained by means of stochastic gradient descent.", "We chose this distributional representation technique because of this translation property, which is what our method is mainly based on.", "These concepts around word2vec are shown in Fig. 1.", "[Figure 1 caption: The 2D graphs represent Spanish and Portuguese word spaces after applying PCA, scaling and rotating to exaggerate the similarities and emphasize the differences. The left graph is the source language vector space (in this case Spanish) and the right one is the target language vector space (Portuguese).]", "In the example, the five word vectors corresponding to the numbers from \"one\" to \"five\" are shown, and also the word vector \"carpet\" for each language.", "More related words have closer vectors, while unrelated word vectors are at a greater distance.", "At the same time, groups of words are arranged in a similar way, allowing to build translation candidates.", "Method Description As false friends are word pairs in which one seems to be a translation of the other one, our idea is to compare their vectors using the Mikolov et al. (2013b) technique.", "Our hypothesis is that a word vector in one language should be close to the cognate word vector in another language when it is transformed using this technique, but far when they are false friends, as described hereafter.", "First, we exploited the Spanish and Portuguese Wikipedias (containing several hundreds of thousands of words) to build the vector spaces we needed, using Gensim's skip-gram based word2vec implementation (Řehůřek and Sojka, 2010).", "The preprocessing of the Wikipedias involved the following steps.", "The text was tokenized based on the alphabet of each language, removing words that contain other characters.", "Numbers were converted to their equivalent words.", "Wikipedia non-article pages were removed (e.g. disambiguation pages) and punctuation marks were discarded as well.", "Portuguese was harder to tokenize, given that the hyphen is widely used as part of the words in the language.",
"For example, bem-vindo (\"welcome\") is a single word, whereas Uruguai-Japão (\"Uruguay-Japan\") in jogo Uruguai-Japão (\"Uruguay-Japan match\") are two different words, joined with a hyphen only in some contexts.", "The right option is to treat them as separate tokens in order to avoid spurious words in the model and to provide more information to existing words (Uruguai and Japão).", "As the word embedding method exploits the text at the level of sentences (and to avoid splitting ambiguous sentences), paragraphs were used as sentences, which still keep semantic relationships.", "A word had to appear at least five times in the corresponding Wikipedia to be considered for construction of the vector space.", "[Figure 2 caption: The 2D graphs represent the word spaces after applying PCA, scaling and rotating to emphasize the differences. The left graph is the source language vector space (in this case Spanish) and the right one is the target language vector space (Portuguese).]", "Secondly, WordNet (Fellbaum, 1998) was used as the bilingual lexicon to build the linear transformation between the vector spaces by applying the same technique described in (Mikolov et al., 2013b), taking advantage of the multi-language synset alignment available in NLTK (Bird et al., 2009) between Spanish (Gonzalez-Agirre et al., 2012) and Portuguese (de Paiva and Rademaker, 2012), based on Open Multilingual WordNet (Bond and Paik, 2012).", "We generated this lexicon by iterating through each of the 40,000 WordNet synsets and forming pairs taking their most common Spanish word and Portuguese word.", "Note that this is a small figure compared with the corpus sizes, and we show in the next section that it could be considerably lower.", "We also show that the transformation source need not be WordNet (we used it just for convenience), which is an expensive and carefully handcrafted resource; it could be just a bilingual dictionary.", "Finally, we defined a method to distinguish false friends from cognates.", "We defined a binary classifier for determining the class, false friends or cognates, for each pair of similar words.", "Given a candidate pair (source_word, target_word), and the corresponding vectors (source_vector, target_vector), the first step consists of transforming source_vector to the space computed for the target language, using the transformation described above.", "Let T(source_vector) be the result of this transformation.", "Then, to determine if source_word and target_word are cognates (if one of them is a possible translation of the other one), we analyzed the relationship between T(source_vector) and target_vector.", "According to Mikolov et al. (2013b), the transformation we compute between the vector spaces keeps semantic relations between words from the source space to the target space.", "So, if (source_word, target_word) is a pair of cognates, then T(source_vector) should be close to target_vector.", "Otherwise, source_word and target_word are false friends.", "The method is illustrated in Fig. 2.", "In the example, the pair (persona, pessoa) are cognates (meaning \"person\" in English) while the pair (afeitar, afectar) are false friends (meaning \"to shave\" and \"to affect\", respectively).", "If we transform the source word vectors (persona and afeitar) and thus obtain vectors in the target vector space, T(persona) and pessoa are close while T(afeitar) and afectar are far from each other (while a valid translation of afeitar, barbear, is close to T(afeitar)).",
"Following this idea, a threshold needs to be established by which two words are considered cognates.", "In addition to this, we wanted to see whether related vector-space properties could help establish a better decision boundary.", "Hence, we trained and tested by means of cross-validation a supervised binary Support Vector Machines classifier, based on three features: • Feature 1: the cosine distance between T(source_vector) and target_vector.", "• Feature 2: the number of word vectors in the target vector space closer to target_vector than T(source_vector), using the cosine distance.", "We believe that in some cases the distance for cognates may be larger, but what counts is whether the transformed vector lies among the closest ones to the target vector.", "• Feature 3: the sum of the distances between target_vector and T(source_vector_i) for the five word vectors source_vector_i nearest to source_vector, using the cosine distance.", "The idea here is that the first feature may be error-prone since it only considers one vector, so considering more vectors (by taking both the context of the source vector and that of its transformed vector) should reduce the variance, as neighbor word vectors of the source word should be neighbors of the target word.", "We carried out different experiments alternating the language we used as the source and the language we used as the target, and also other parameters, which we show in the next section.", "The source code is public and available to use.", "Experimental Analysis Unfortunately, we are not able to compare our method to several others presented by other authors, as they are based not only on non-public code, but also on non-public datasets which are not directly comparable with the one used here.", "Nevertheless, we compare our technique against several methods for the particular case of Spanish and Portuguese and show it is solid.", "First, we set a simple baseline that does the following: it checks whether there exists a WordNet synset which contains both words of the pair among its Spanish and Portuguese words, and if there does, then they are considered cognates.", "Then, we compare to the Machine Translation software Apertium: we take one of the pair words, translate it and check if the translation matches the other word.", "We chose this software since it can be accessed offline and it is freely available.", "Apart from this, we compare with the Sepúlveda and Aluísio (2011, experiments 2 and 3.2) methods and also with a variant of our method that adds a word frequency feature (the relative number of times each word appeared in the corpus).", "Word frequencies are used by other authors and we believe they are a different data source from what the word2vec vectors can provide.", "For these experiments we use the same data set as in (Sepúlveda and Aluísio, 2011).", "This resource is composed of 710 Spanish-Portuguese word pairs: 338 cognates and 372 false friends.", "The word pairs were selected from the following resources: an online Spanish-Brazilian Portuguese dictionary, an online Spanish-Portuguese dictionary, a list of the most frequent words in Portuguese and Spanish and an online list of different words in Portuguese and Spanish.", "There are no multi-word expressions and roughly half of the pairs are composed of identically spelled words.", "It was annotated by two people.", "It is important to consider that word coverage is a concern in this task, since every method can only work when the words of the pair are present in its resources (in other words, when they are not out of the method's vocabulary).",
"The accuracy thus only takes into account the covered pairs.", "The coverage for the simple baseline can be measured by counting the pairs where both words are present in WordNet.", "Sepúlveda and Aluísio (2011, experiment 2) only considers orthographic and phonetic differences, so it always covers all pairs.", "Sepúlveda and Aluísio (2011, experiment 3.2) uses a dictionary, so the pairs that are in it count towards the coverage.", "The words that could not be translated by Apertium are counted against the coverage of its related method.", "Finally, the pairs that cannot be translated into vectors are counted as not covered by our methods.", "Results are shown in Table 1.", "It can be appreciated that our method provides both high accuracy and coverage, and that the word-embedding-based method can be further improved if additional information, such as the word frequencies, is included.", "We also tested a version of our method that only uses Feature 1 via logistic regression, which reduced the accuracy by roughly 3%, showing that the other two features add some missing information to improve the accuracy.", "As an additional experiment, we tried exploiting WordNet to compute taxonomy-based distances as features in the same manner as Mitkov et al. (2007) did, but we did not obtain a significant difference; thus we conclude that it does not add information to what already lies in the features built upon the embeddings.", "As Mikolov et al. (2013b) did, we wondered how our method works under different vector configurations, hence we carried out several experiments, varying the vector space dimensions.", "We also experimented with vectors for phrases of up to two words.", "Finally, we evaluated how the choice of the source language, Spanish or Portuguese, affects the results.", "The accuracy obtained for the ten best configurations, and for the experiment with two-word vectors, is presented in Table 2.", "For the experiment we used the vector dimensions 100, 200, 400 and 800; source vector space Spanish and Portuguese; and we also tried a single run with two-word phrases (with Spanish as source and 100 as the vector dimension), summing up 33 configurations in total.", "As can be noted, there are no significant differences in the accuracy of our method when varying the vector sizes.", "Higher dimensions do not provide better results, and they even worsen when the target language dimension is greater than or equal to the source language dimension, as Mikolov et al. (2013b) claimed.", "Taking Spanish as the source language seems to be better; this may be due to the corpus sizes: the corpus used to generate the Spanish vector space is 1.4 times larger than the one used for Portuguese.", "Finally, we can observe that including vectors for two-word phrases does not improve results.", "Linear Transformation Analysis We were intrigued to know how different qualities and quantities of bilingual lexicon entries would affect our method's performance.", "We show how the accuracy varies according to the bilingual lexicon size and its source in Fig. 3.", "WN seems to be slightly better than using Apertium as source, albeit they both perform well.", "Also, both rapidly achieve acceptable results, with less than a thousand entries, and yield stable results when the number of entries is larger.", "[Figure 3 caption: Accuracy of our method with respect to different bilingual lexicon sizes and sources. WN is the original approach we take to build the bilingual lexicon, WN all is a method that takes every pair of lemmas from both languages in every WordNet synset, and Apertium uses the translations of the top 50,000 Spanish words by frequency in the Wikipedia (those that could be translated to Portuguese). Note that the usage of Apertium here has nothing to do with the Apertium baseline.]",
"This is not the case for the method WN all, which needs more word pairs to achieve reasonable results (around 5,000) and is less stable with a larger number of entries.", "Even though we use WordNet to build the lexicon, which is a rich and expensive resource, it could also be built with lower-quality entries, such as those that come from the output of Machine Translation software, or just by having a list of known word translations.", "Furthermore, as our method proved to work with a small number of word pairs, it can be applied to language pairs with scarce bilingual resources.", "Additionally, it is interesting to observe that despite the fact that some test set pairs may appear in the bilingual lexicon on which our method is based, when we changed it (by reducing its size or using Apertium), the method still showed great performance.", "This suggests that the results are not biased towards the test set used in this work.", "Conclusions and Future Work We have provided an approach to classify false friends and cognates which was shown to have both high accuracy and coverage, studying it for the particular case of Spanish and Portuguese and providing state-of-the-art results for this pair of languages.", "Here we use up-to-date word embedding techniques, which have been shown to excel in other tasks, and which can be enriched with other information, such as the word frequencies, to enhance the classifier.", "In the future we want to experiment with other word vector representations and state-of-the-art vector space linear transformations such as (Artetxe et al., 2017; Artetxe et al., 2018).", "Also, we would like to work on fine-grained classifications; as we mentioned before, there are some word pairs that behave like cognates in some cases but like false friends in others.", "Our method can be applied to any pair of languages, without requiring a large bilingual corpus or taxonomy, which can be hard to find or expensive to build.", "In contrast, large untagged monolingual corpora are easily obtained on the Internet.", "Similar languages, which commonly have a high number of false friends, can benefit from the technique we present in this document, for example by generating a list of false friends pairs automatically based on words that are written in both languages in the same way." ] }
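The monolingual training step described in the Method Description above (alphabet-based tokenization, paragraphs used as sentences, a minimum frequency of five, Gensim's skip-gram word2vec) can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' code: the file names "eswiki.txt"/"ptwiki.txt", the alphabet patterns and the simplified cleaning (the paper additionally spells out numbers and drops non-article pages) are all illustrative.

```python
# Sketch of training the two monolingual word2vec spaces.
# Assumes pre-extracted plain-text Wikipedia dumps with one paragraph
# per line; names, alphabets and parameters are illustrative.
import re
from gensim.models import Word2Vec

def paragraphs(path, alphabet):
    # Extract alphabetic tokens only (a simplification of the paper's
    # rule of removing words that contain other characters).
    token_re = re.compile(r"[" + alphabet + r"]+")
    with open(path, encoding="utf-8") as f:
        for line in f:
            tokens = token_re.findall(line.lower())
            if tokens:
                yield tokens  # a paragraph is used as a "sentence"

ES_ALPHABET = "a-záéíóúüñ"          # rough Spanish alphabet
PT_ALPHABET = "a-záàâãéêíóôõúüç"    # rough Portuguese alphabet

es_model = Word2Vec(list(paragraphs("eswiki.txt", ES_ALPHABET)),
                    vector_size=100, sg=1, min_count=5, workers=4)
pt_model = Word2Vec(list(paragraphs("ptwiki.txt", PT_ALPHABET)),
                    vector_size=100, sg=1, min_count=5, workers=4)
es_model.wv.save("es.kv")
pt_model.wv.save("pt.kv")
```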
{ "paper_header_number": [ "1", "2", "3", "4", "5.1", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Word Vector Representations", "Method Description", "Linear Transformation Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-63#paper-1130#slide-2
Motivation
False friends make it harder to learn a language or to communicate, especially when it is similar to the mother tongue. Between Spanish and Portuguese, the amount of cognates reaches 85% of the total vocabulary (Ulsh, 1971).
[]
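Continuing the sketch: the three features of the supervised classifier described in the Method section of the paper content above (Feature 1 to Feature 3) could be computed roughly as below. This assumes the `es`, `pt`, `transformed` and `cosine_distance` objects from the previous snippet are in scope; the labelled toy pairs, the query pair and the sampling shortcut in Feature 2 are illustrative, not the authors' implementation (which trains and evaluates by cross-validation on the 710 annotated pairs).

```python
# Sketch of the three SVM features, reusing es, pt, transformed and
# cosine_distance from the previous snippet.
import numpy as np
from sklearn.svm import SVC

def features(source_word, target_word):
    t_vec = transformed(source_word)
    target_vec = pt[target_word]
    # Feature 1: cosine distance between T(source_vector) and target_vector.
    f1 = cosine_distance(t_vec, target_vec)
    # Feature 2: how many target-space vectors are closer to target_vector
    # than T(source_vector) is (computed over a sample here, for speed).
    sample = pt.vectors[:5000]
    dists = 1.0 - (sample @ target_vec) / (
        np.linalg.norm(sample, axis=1) * np.linalg.norm(target_vec))
    f2 = int((dists < f1).sum())
    # Feature 3: summed distance from target_vector to the transformed
    # vectors of the five nearest neighbours of source_word.
    nearest = [w for w, _ in es.most_similar(source_word, topn=5)]
    f3 = sum(cosine_distance(transformed(w), target_vec) for w in nearest)
    return [f1, f2, f3]

# Toy labelled pairs: 1 = false friends, 0 = cognates.
train = [("afeitar", "afectar", 1), ("persona", "pessoa", 0),
         ("halagar", "alagar", 1), ("gato", "gato", 0)]
X = np.array([features(s, t) for s, t, _ in train])
y = np.array([label for _, _, label in train])
clf = SVC().fit(X, y)
print(clf.predict([features("carpeta", "carpete")]))  # hypothetical query
```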
GEM-SciDuet-train-63#paper-1130#slide-3
1130
A High Coverage Method for Automatic False Friends Detection for Spanish and Portuguese
GEM-SciDuet-train-63#paper-1130#slide-3
Related Work
Frunza, 2006: supervised machine learning using orthographic distances as features to classify between cognates, false friends or unrelated. Mitkov et al., 2007: used a combination of distributional and taxonomy-based approaches. Worked with English-French, English-German, English-Spanish and French-Spanish. They use WordNet taxonomy similarities to classify, and if a word is missing they fall back to a distributional method. For the distributional method they build vectors based on word windows, computing the co-occurrence probability. Then, they compared the N closest words of each word in the pair, translated one of them and counted occurrences in the other one. They defined a threshold based on the Dice coefficient. Ljubešic et al., 2013: experiment with several ways to build the vector space (e.g. tf-idf) and measure vector distances (e.g. cosine distance). They also proposed to use PMI. They worked with closely related languages: Slovene and Croatian. Sepúlveda and Aluísio, 2011: false friends resolution for Spanish-Portuguese, highly based on (Frunza, 2006). They added an experiment with a new feature whose value is the likelihood of translation, from a probabilistic dictionary (generated taking a large sentence-aligned bilingual corpus).
[]
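As a point of contrast with the embedding-based method, the Mitkov et al. (2007)-style distributional check summarized in this slide (compare the translated nearest neighbours of one word with the neighbours of the other and threshold a Dice coefficient) could be sketched as follows. The `toy_dict` lookup and the threshold value are placeholders, and `es`/`pt` are the KeyedVectors from the earlier snippets.

```python
# Sketch of the neighbour-overlap baseline described in the slide above.
def dice(a, b):
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

def neighbour_overlap(es_word, pt_word, translate, n=20):
    es_nn = [w for w, _ in es.most_similar(es_word, topn=n)]
    pt_nn = [w for w, _ in pt.most_similar(pt_word, topn=n)]
    translated = [translate(w) for w in es_nn if translate(w) is not None]
    return dice(translated, pt_nn)

toy_dict = {"persona": "pessoa", "gato": "gato"}  # stand-in dictionary
score = neighbour_overlap("persona", "pessoa", toy_dict.get)
# The pair is judged a cognate when the overlap beats a tuned threshold.
print("cognate" if score > 0.1 else "false friend", score)
```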
GEM-SciDuet-train-63#paper-1130#slide-4
1130
A High Coverage Method for Automatic False Friends Detection for Spanish and Portuguese
valid translation of af eitar, barbear, is close to T (af eitar)).", "Following this idea, a threshold needs to be established by which two words are considered cognates.", "In addition to this, we wanted to see if similar properties help to constitute an acceptable division.", "Hence, we trained and tested by means of cross-validation a supervised binary Support Vector Machines classifier, based on three features: • Feature 1: the cosine distance between T (source_vector) and target_vector.", "• Feature 2: the number of word vectors in the target vector space closer to target_vector than T (source_vector), using the cosine distance.", "We believe that in some cases the distance for cognates may be larger but what it counts is if the transformed vector lays within the closest ones to the target vector.", "• Feature 3: the sum of the distances between target_vector and T (source_vector i ) for the five word vectors source_vector i nearest to source_vector, using the cosine distance.", "The idea here is that the first feature may be error prone since it only considers one vector, so considering more vectors (by taking both the context from the source vector and the one from its transformed vector) should reduce the variance, as neighbor word vectors from the source word should be neighbors of the target word.", "We carried out different experiments alternating the language we used as the source and the language we used as the target, and also other parameters, which we show in the next section.", "The source code is public and available to use.", "2 5 Experimental Analysis Unfortunately, we are not able to compare our method to several others presented by other authors as they are not only based on non-public code, but also on non-public datasets which are not directly comparable with the one used here.", "Nevertheless, we compare our technique against several methods, for the particular case of Spanish and Portuguese and show it is solid.", "First, we set a simple baseline that does the following: it checks if there exist a WordNet synset which contains both pair words within the Spanish and Portuguese words of it, and if it is does, then they are considered cognates.", "Then, we compare to the Machine Translation software Apertium 3 : we take one of the pair words, translate it and check if the translation matches the other word.", "We chose this software since it can be accessed offline and it is freely available.", "Apart from this, we compare with Sepúlveda and Aluísio (2011, experiment 2 and 3.2) method and also with a variant of our method that adds a word frequency feature (the relative number of times each word appeared in the corpus).", "Word frequencies are used by other authors and we believe they are a different data source from what the word2vec vectors can provide.", "For these experiments we use the same data set as in (Sepúlveda and Aluísio, 2011) .", "4 This resource is composed by 710 Spanish-Portuguese word pairs: 338 cognates and 372 false friends.", "The word pairs were selected from the following resources: an online Spanish-Brazilian Portuguese dictionary, an online Spanish-Portuguese dictionary, a list of the most frequent words in Portuguese and Spanish and an online list of different words in Portuguese and Spanish.", "There are not multi-word expressions and roughly half of the pairs are composed of identically spelled words.", "It was annotated by two people.", "It is important to consider that the word coverage is a concern in this task since every method can 
only works when the pair words are present in their resources (in other words, they are not out of a method's vocabulary).", "The accuracy thus only takes into account the covered pairs.", "The coverage for the simple baseline can be measured by counting the pairs were both words are present in WordNet.", "Sepúlveda and Aluísio (2011, experiment 2) only considers orthographic and phonetic differences, so always covers all pairs.", "Sepúlveda and Aluísio (2011, experiment 3 .2) uses a dictionary, then the pairs that are in it count towards the coverage.", "The words that could not be translated by Apertium are counted against the coverage of its related method.", "Finally, the pairs that cannot be translated into vectors are counted as not covered by our methods.", "Results are shown in Table 1 .", "It can be appreciated that our method provides both high accuracy and coverage, and that word embedding information can be further improved if additional information, such as the word frequencies, is included.", "We also tested a version of our method that only uses Feature 1 via logistic regression, which reduced the accuracy by 3% roughly, showing that the other two features add some missing information to improve the accuracy.", "As an additional experiment, we tried exploiting WordNet to compute taxonomy-based distances as features in the same manner as Mitkov et al.", "(2007) did, but we did not obtain a significant difference, thus we conclude that it does not add information to what already lays in the features built upon the embeddings.", "As Mikolov et al.", "(2013b) did, we wondered how our method works under different vector configurations, hence we carried out several experiments, varying vector space dimensions.", "We also experimented with vectors for phrases up to two words.", "Finally, we evaluated how the election of the source language, Spanish or Portuguese, affects the results.", "Accuracy obtained for the ten best configurations, and for the experiment with two word vectors are presented in Table 2 .", "For the experiment we used the vector dimensions 100, 200, 400 and 800; source vector space Spanish and Portuguese; and we also tried with a single run with two-word phrases (with Spanish as source and 100 as the vector dimension), summing up 33 configurations in total.", "As it can be noted, there are no significant differences in the accuracy of our method when varying the vector sizes.", "Higher dimensions do not provide better results and they even worsen when the target language dimension is greater than or equal to the source language dimension, as Mikolov et al.", "(2013b) claimed.", "Taking Spanish as the source language seems to be better, maybe this is due to the corpus sizes: the corpus used to generate the Spanish vector space is 1.4 times larger than the one used for Portuguese.", "Finally, we can observe that including vectors for two-word phrases does not improve results.", "Linear Transformation Analysis We were intrigued in knowing how different qualities and quantities of bilingual lexicon entries would affect our method performance.", "We show how the accuracy varies according to the bilingual lexicon size and its source in the Fig.", "3 .", "WN seems to be slightly better than using Apertium as source, albeit they both perform well.", "Also, both rapidly achieve acceptable results, with less than a thousand entries, and : Accuracy of our method with respect to different bilingual lexicon sizes and sources.", "WN is the original approach we take to build 
the bilingual lexicon, WN all is a method that takes every pair of lemmas from both languages in every WordNet synset and Apertium uses the translations of the top 50,000 Spanish words in frequencies from the Wikipedia (and that could be translated to Portuguese).", "Note that the usage of Apertium here has nothing to do with Apertium baseline.", "yield stable results when the number of entries is larger.", "This is not the case for the method WN all, which needs more word pairs to achieve reasonable results (around 5,000) and it is less stable with larger number of entries.", "Even though we use WordNet to build the lexicon, which is a rich and expensive resource, it could also be built with less quality entries, such as those that come from the output of a Machine Translation software or just by having a list of known word translations.", "Furthermore, our method proved to work with a small number of word pairs, it can be applied to language pairs with scarce bilingual resources.", "Additionally, it is interesting to observe that despite the fact that some test set pairs may appear in the bilingual lexicon in which our method is based on, when having changed it (by reducing its size or using Apertium), it still shows great performance.", "This suggest the results are not biased towards the test set used in this work.", "Conclusions and Future Work We have provided an approach to classify false friends and cognates which showed to have both high accuracy and coverage, studying it for the particular case of Spanish and Portuguese and providing state-of-the-art results for this pair of languages.", "Here we use up-to-date word embedding techniques, which have shown to excel in other tasks, and which can be enriched with other information such as the words frequencies to enhance the classifier.", "In the future we want to experiment with other word vector representations and state-of-the-art vector space linear transformation such as (Artetxe et al., 2017; Artetxe et al., 2018) .", "Also, we would like to work on fine-grained classifications, as we mentioned before there are some word pairs that behave like cognates in some cases but like false friends in others.", "Our method can be applied to any pair of languages, without requiring a large bilingual corpus or taxonomy, which can be hard to find or expensive to build.", "In contrast, large untagged monolingual corpora are easily obtained on the Internet.", "Similar languages, that commonly have a high number of false friends, can benefit from the technique we present in this document, for example by generating a list of false friends pairs automatically based on words that are written in both languages in the same way." ] }
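The corpus-to-embeddings step described in the method section above (skip-gram word2vec over the preprocessed Wikipedias, with a minimum word frequency of five) can be sketched as follows. This is a minimal sketch rather than the authors' code: the file names, the one-paragraph-per-line input format, the worker count, and the Gensim 4 parameter names are assumptions.

```python
# Sketch: train one skip-gram word2vec model per language, as described above.
# Assumes preprocessed Wikipedia text with one paragraph per line; file names
# and most hyperparameters are illustrative, not taken from the paper.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

for lang in ("es", "pt"):
    model = Word2Vec(
        sentences=LineSentence(f"wiki_{lang}_paragraphs.txt"),
        sg=1,             # skip-gram, as stated in the paper
        vector_size=100,  # one of the dimensions the authors explore (100/200/400/800)
        min_count=5,      # a word must appear at least five times
        workers=4,
    )
    model.wv.save(f"wiki_{lang}.kv")
```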
{ "paper_header_number": [ "1", "2", "3", "4", "5.1", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Word Vector Representations", "Method Description", "Linear Transformation Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-63#paper-1130#slide-4
Word Vector Representations
Related work crafted their own word vector representations. We propose to use the skip-gram-based word2vec model
Related work crafted their own word vector representations. We propose to use the skip-gram-based word2vec model
[]
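As a quick sanity check of the additive analogy property cited in the paper text above, Gensim's built-in analogy query can reproduce the Madrid/Spain/France example on the Spanish vectors from the earlier sketch. Whether these exact tokens survive the preprocessing, and with which casing, is an assumption that depends on the training corpus.

```python
# Illustration of vector("Madrid") - vector("Spain") + vector("France") ≈ vector("Paris"),
# here with Spanish tokens; the query words are placeholders.
print(es.most_similar(positive=["Madrid", "Francia"], negative=["España"], topn=3))
# Expected: "París" (or a close variant) among the top neighbours.
```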
GEM-SciDuet-train-63#paper-1130#slide-5
1130
A High Coverage Method for Automatic False Friends Detection for Spanish and Portuguese
False friends are words in two languages that look or sound similar, but have different meanings. They are a common source of confusion among language learners. Methods to detect them automatically do exist; however, they make use of large aligned bilingual corpora, which are hard to find and expensive to build, or they encounter problems dealing with infrequent words. In this work we propose a high-coverage method that uses word vector representations to build a false friends classifier for any pair of languages, which we apply to the particular case of Spanish and Portuguese. The required resources are a large corpus for each language and a small bilingual lexicon for the pair.
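The three classifier features defined in the paper's method section (the cosine distance between T(source_vector) and target_vector, the rank of the transformed vector among all target-space vectors, and the summed distance over the five nearest source-side neighbours) could be computed as below, reusing es, pt, W and T from the earlier sketches. This is a naive, unbatched sketch, not the authors' implementation.

```python
import numpy as np

def cos_dist(a, b):
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def features(src_word, tgt_word):
    t_vec = T(src_word)      # T(source_vector)
    tgt_vec = pt[tgt_word]   # target_vector
    # Feature 1: cosine distance between transformed and target vectors.
    f1 = cos_dist(t_vec, tgt_vec)
    # Feature 2: how many target-space vectors are closer to target_vector
    # than T(source_vector) is (a rank; naive full-vocabulary scan).
    norms = np.linalg.norm(pt.vectors, axis=1) * np.linalg.norm(tgt_vec)
    dists = 1.0 - (pt.vectors @ tgt_vec) / norms
    f2 = int(np.sum(dists < f1))
    # Feature 3: summed distance from target_vector to the transforms of the
    # five nearest neighbours of the source word.
    f3 = sum(cos_dist(W @ es[w], tgt_vec)
             for w, _ in es.most_similar(src_word, topn=5))
    return [f1, f2, f3]
```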
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163 ], "paper_content_text": [ "Introduction Closely related languages often share a significant number of similar words which may have different meanings in each language.", "Similar words with different meanings are called false friends, while similar words sharing meaning are called cognates.", "For instance, between Spanish and Portuguese, the amount of cognates reaches the 85% of the total vocabulary (Ulsh, 1971) .", "This fact represents a clear advantage for language learners, but it may also lead to an important number of interferences, since similar words will be interpreted as in the native language, which is not correct in the case of false friends.", "Generally, the expression false friends refers not only to pairs of identical words, but also to pairs of similar words, differing in a few characters.", "Thus, the Spanish verb halagar (\"to flatten\") and the similar Portuguese verb alagar (\"to flood\") are usually considered false friends.", "Besides traditional false friends, that are similar words with different meanings, Humblé (2006) analyses three more types.", "First, he mentions words with similar meanings but used in different contexts, as esclarecer, which is used in a few contexts in Spanish (esclarecer un crimen, \"clarify a crime\"), but not in other contexts where aclarar is used (aclarar una duda, \"clarify a doubt\"), while in Portuguese esclarecer is used in all these contexts.", "Secondly, there are similar words with partial meaning differences, as abrigo, which in Spanish means \"shelter\" and \"coat\", but in Portuguese has just the first meaning.", "Finally, Humblé (2006) also considers false friends as similar words with the same meaning but used in different syntactic structures in each language, as the Spanish verb hablar (\"to speak\"), which does not accept a sentential direct object, and its Portuguese equivalent falar, which does (*yo hablé que .", ".", ".", "/ eu falei que .", ".", ".", ", *\"I spoke that .", ".", ".", "\").", "These non-traditional false friends are more difficult to detect by language learners than traditional ones, because of their subtle differences.", "Having a list of false friends can help native speakers of one language to avoid confusion when speaking and writing in the other language.", "Such a list could be integrated into a writing assistant to prevent the writer when using these words.", "For Spanish/Portuguese, in particular, while there are printed dictionaries that compile false friends (Otero Brabo Cruz, 2004) , we did not find a complete digital false friends list, therefore, an automatic method for false friends detection would be useful.", "Furthermore, it is interesting to study methods which could generate false friends lists for any pair of similar languages, particularly, languages for which 
this phenomenon has not been studied.", "In this work we present an automatic method for false friends detection.", "We focus on the traditional false friends definition (similar words with different meanings) because of the dataset we count with and also to present our method in a simple context.", "We describe a supervised classifier we constructed to distinguish false friends from cognates based on word embeddings.", "Although for the method development and evaluation we used Spanish and Portuguese, the method could be applied to other language pairs, provided that the resources needed for the method building are available.", "We do not deal with the problem of determining if two words are similar or not, which is prior to the issue we tackle.", "The paper is organized as follows: in Section 2 we describe some related work, in Section 3 we introduce the word embeddings used in this work, in Section 4 we describe our method, in Section 5 we present and analyze the experiments carried out.", "Finally, in Section 6, we present our conclusions and sketch some future work.", "Related Work Previous work use a combination of orthographic, syntactic, semantic and frequency-based features.", "Frunza (2006) worked with French and English, focusing only on orthographic features via a supervised machine learning algorithm.", "While this method can work in some cases -e.g.", "to detect true cognates with a common root, such as inaccesible in Spanish and inacessível in Portuguese (\"inaccessible\"), that come from the Latin word inaccessibilis -it does not take into account the meanings of the words.", "Mitkov et al.", "(2007) used both a distributional and taxonomy-based approach to multiple language pairs: English-French, English-German, English-Spanish and French-Spanish.", "For the former approach, they build vectors based on the words that appear in a window in the corpus, computing the co-occurrence probability.", "Then they defined two methods for classification: one that considers the N nearest neighbors for each word in the pair and computes the Dice coefficient to determine the similarity between both 1 , and another one that is similar but using syntactically related words instead of the adjacent words.", "Additionally, they evaluated a method which uses a taxonomy to classify false friends, and fails back to the distributional similarity for words not included in the taxonomy.", "They achieved better results under this experiment than only using the distributional similarity.", "Based on the former technique, Ljubešic et al.", "(2013) focused on detecting false friends in closely related languages: Slovene and Croatian.", "Likewise, they exploited a distributional technique but also propose the use of Pointwise Mutual Information (PMI) as an effective way to classify false friends via the frequencies in the corpora.", "Sepúlveda and Aluísio (2011) tackled this task for Portuguese and Spanish, taking the same orthographic approach as Frunza (2006) .", "Nonetheless, they carried out an additional experiment in which they added a new feature whose value is the likelihood of one of the words of the pair to be a translation of the other one.", "This number was obtained from a probabilistic Spanish-Portuguese dictionary, previously generated taking a large sentence-aligned bilingual corpus.", "Word Vector Representations As seen in the previous section, some authors (Mitkov et al., 2007; Ljubešic et al., 2013) represented words as vectors by counting occurrences or by building tf-idf vectors, 
among other techniques.", "Similarly, Mikolov et al.", "(2013a) proposed an unsupervised technique, known as word2vec, to efficiently represent words as vectors from a large unlabeled corpus, which has proven to outperform several other representations in tasks involving text as input (LeCun et al., 2015) .", "As it is a vector-based distributional representation technique, it is based on computing a vector space in which vectors are close if their corresponding words appear frequently in the same contexts in the corpus used to train it.", "Interesting relationships and patterns are learned in particular with this method, e.g.", "the result of the vector calculation vector(\"M adrid ) − vector(\"Spain ) + vector(\"F rance ) is closer to vector(\"P aris ) than to any other word vector (Mikolov et al., 2013a) .", "Additionally, Mikolov et al.", "(2013c) has shown a technique properties.", "The 2D graphs represent Spanish and Portuguese word spaces after applying PCA, scaling and rotating to exaggerate the similarities and emphasize the differences.", "The left graph is the source language vector space (in this case Spanish) and the right one is the target language vector space (Portuguese).", "to detect common phrases such as \"New York\" to be part of the vector space, being able to detect more entities and at the same time enhancing the context of others.", "To exploit multi-language capabilities, Mikolov et al.", "(2013b) developed a method to automatically generate dictionaries and phrase tables from small bilingual data (translation word pairs), based on the calculation of a linear transformation between the vector spaces built with word2vec.", "This is presented as an optimization problem that tries to minimize the sum of the Euclidean distances between the translated source word vectors and the target vectors of each pair, and the translation matrix is obtained by means of stochastic gradient descent.", "We chose this distributional representation technique because of this translation property, which is what our method is mainly based on.", "These concepts around word2vec are shown in Fig.", "1 .", "In the example, the five word vectors corresponding to the numbers from \"one\" to \"five\" are shown, and also the word vector \"carpet\" for each language.", "More related words have closer vectors, while unrelated word vectors are at a greater distance.", "At the same time, groups of words are arranged in a similar way, allowing to build translation candidates.", "Method Description As false friends are word pairs in which one seems to be a translation of the other one, our idea is to compare their vectors using Mikolov et al.", "(2013b) technique.", "Our hypothesis is that a word vector in one language should be close to the cognate word vector in another language when it is transformed using this technique, but far when they are false friends, as described hereafter.", "First, we exploited the Spanish and Portuguese Wikipedia's (containing several hundreds of thousands of words) to build the vector spaces we needed, using Gensim's skip-gram based word2vec implementation (Řehůřek and Sojka, 2010) .", "The preprocessing of the Wikipedia's involved the following steps.", "The text was tokenized based on the alphabet of each language, removing words that contain other characters.", "Numbers were converted to their equivalent words.", "Wikipedia non-article pages were removed (e.g.", "disambiguation pages) and punctuation marks were discarded as well.", "Portuguese was harder to tokenize 
provided that the hyphen is widely used as part of the words in the language.", "For example, bem-vindo (\"welcome\") is a single word whereas Uruguai-Japão (\"Uruguay-Japan\") in jogo Uruguai-Japão (\"Uruguay-Japan match\") are two different words, used with an hyphen only in some contexts.", "The right option is to treat them as separate tokens in order to avoid spurious words in the model and to provide more information to existing words (Uruguai and Japão).", "As the word embedding method exploits the text at the level of sentences (and to avoid splitting ambiguous sentences), paragraphs were used as sentences, which still keep semantic relationships.", "A word had to appear at least five times in the corresponding Wikipedia to be considered for construction of the vector space.", "The 2D graphs represent the word spaces after applying PCA, scaling and rotating to emphasize the differences.", "The left graph is the source language vector space (in this case Spanish) and the right one is the target language vector space (Portuguese).", "Secondly, WordNet (Fellbaum, 1998) was used as the bilingual lexicon to build the linear transformation between the vector spaces by applying the same technique described in (Mikolov et al., 2013b) , taking advantage of the multi-language synset alignment available in NLTK (Bird et al., 2009) between Spanish (Gonzalez-Agirre et al., 2012) and Portuguese (de Paiva and Rademaker, 2012), based on Open Multilingual WordNet (Bond and Paik, 2012) .", "We generated this lexicon by iterating through each of the 40,000 WordNet synsets and forming pairs taking their most common Spanish word and Portuguese word.", "Note that this is a small figure compared with the corpus sizes, and we show in the next section that it could be considerably lower.", "We also show that the transformation source needs not to be WordNet (we used it just for convenience), which is an expensive and carefully handcrafted resource; it could be just a bilingual dictionary.", "Finally, we defined a method to distinguish false friends from cognates.", "We defined a binary classifier for determining the class, false friends or cognates, for each pair of similar words.", "Given a candidate pair (source_word, target_word), and the corresponding vectors (source_vector, target_vector), the first step consists of transforming source_vector to the space computed for the target language, using the transformation described above.", "Let T (source_vector) be the result of this transformation.", "Then, to determine if source_word and target_word are cognates (if one of them is a possible translation of the other one), we analyzed the relationship between T (source_vector) and target_vector.", "According to Mikolov et al.", "(2013b) , the transformation we compute between the vector spaces keeps semantic relations between words from the source space to the target space.", "So, if (source_word, target_word) is a pair of cognates, then T (source_vector) should be close to target_vector.", "Otherwise, source_word and target_word are false friends.", "The method is illustrated in Fig.", "2 .", "In the example, the pair (persona, pessoa) are cognates (meaning \"person\" in English) while the pair (af eitar, af ectar) are false friends (meaning \"to shave\" and \"that affects\", respectively).", "If we transform the source word vectors (persona and af eitar) and thus obtain vectors in the target vector space, T (persona) and pessoa are close while T (af eitar) and af ectar are far from each other (while a 
valid translation of af eitar, barbear, is close to T (af eitar)).", "Following this idea, a threshold needs to be established by which two words are considered cognates.", "In addition to this, we wanted to see if similar properties help to constitute an acceptable division.", "Hence, we trained and tested by means of cross-validation a supervised binary Support Vector Machines classifier, based on three features: • Feature 1: the cosine distance between T (source_vector) and target_vector.", "• Feature 2: the number of word vectors in the target vector space closer to target_vector than T (source_vector), using the cosine distance.", "We believe that in some cases the distance for cognates may be larger but what it counts is if the transformed vector lays within the closest ones to the target vector.", "• Feature 3: the sum of the distances between target_vector and T (source_vector i ) for the five word vectors source_vector i nearest to source_vector, using the cosine distance.", "The idea here is that the first feature may be error prone since it only considers one vector, so considering more vectors (by taking both the context from the source vector and the one from its transformed vector) should reduce the variance, as neighbor word vectors from the source word should be neighbors of the target word.", "We carried out different experiments alternating the language we used as the source and the language we used as the target, and also other parameters, which we show in the next section.", "The source code is public and available to use.", "2 5 Experimental Analysis Unfortunately, we are not able to compare our method to several others presented by other authors as they are not only based on non-public code, but also on non-public datasets which are not directly comparable with the one used here.", "Nevertheless, we compare our technique against several methods, for the particular case of Spanish and Portuguese and show it is solid.", "First, we set a simple baseline that does the following: it checks if there exist a WordNet synset which contains both pair words within the Spanish and Portuguese words of it, and if it is does, then they are considered cognates.", "Then, we compare to the Machine Translation software Apertium 3 : we take one of the pair words, translate it and check if the translation matches the other word.", "We chose this software since it can be accessed offline and it is freely available.", "Apart from this, we compare with Sepúlveda and Aluísio (2011, experiment 2 and 3.2) method and also with a variant of our method that adds a word frequency feature (the relative number of times each word appeared in the corpus).", "Word frequencies are used by other authors and we believe they are a different data source from what the word2vec vectors can provide.", "For these experiments we use the same data set as in (Sepúlveda and Aluísio, 2011) .", "4 This resource is composed by 710 Spanish-Portuguese word pairs: 338 cognates and 372 false friends.", "The word pairs were selected from the following resources: an online Spanish-Brazilian Portuguese dictionary, an online Spanish-Portuguese dictionary, a list of the most frequent words in Portuguese and Spanish and an online list of different words in Portuguese and Spanish.", "There are not multi-word expressions and roughly half of the pairs are composed of identically spelled words.", "It was annotated by two people.", "It is important to consider that the word coverage is a concern in this task since every method can 
only works when the pair words are present in their resources (in other words, they are not out of a method's vocabulary).", "The accuracy thus only takes into account the covered pairs.", "The coverage for the simple baseline can be measured by counting the pairs were both words are present in WordNet.", "Sepúlveda and Aluísio (2011, experiment 2) only considers orthographic and phonetic differences, so always covers all pairs.", "Sepúlveda and Aluísio (2011, experiment 3 .2) uses a dictionary, then the pairs that are in it count towards the coverage.", "The words that could not be translated by Apertium are counted against the coverage of its related method.", "Finally, the pairs that cannot be translated into vectors are counted as not covered by our methods.", "Results are shown in Table 1 .", "It can be appreciated that our method provides both high accuracy and coverage, and that word embedding information can be further improved if additional information, such as the word frequencies, is included.", "We also tested a version of our method that only uses Feature 1 via logistic regression, which reduced the accuracy by 3% roughly, showing that the other two features add some missing information to improve the accuracy.", "As an additional experiment, we tried exploiting WordNet to compute taxonomy-based distances as features in the same manner as Mitkov et al.", "(2007) did, but we did not obtain a significant difference, thus we conclude that it does not add information to what already lays in the features built upon the embeddings.", "As Mikolov et al.", "(2013b) did, we wondered how our method works under different vector configurations, hence we carried out several experiments, varying vector space dimensions.", "We also experimented with vectors for phrases up to two words.", "Finally, we evaluated how the election of the source language, Spanish or Portuguese, affects the results.", "Accuracy obtained for the ten best configurations, and for the experiment with two word vectors are presented in Table 2 .", "For the experiment we used the vector dimensions 100, 200, 400 and 800; source vector space Spanish and Portuguese; and we also tried with a single run with two-word phrases (with Spanish as source and 100 as the vector dimension), summing up 33 configurations in total.", "As it can be noted, there are no significant differences in the accuracy of our method when varying the vector sizes.", "Higher dimensions do not provide better results and they even worsen when the target language dimension is greater than or equal to the source language dimension, as Mikolov et al.", "(2013b) claimed.", "Taking Spanish as the source language seems to be better, maybe this is due to the corpus sizes: the corpus used to generate the Spanish vector space is 1.4 times larger than the one used for Portuguese.", "Finally, we can observe that including vectors for two-word phrases does not improve results.", "Linear Transformation Analysis We were intrigued in knowing how different qualities and quantities of bilingual lexicon entries would affect our method performance.", "We show how the accuracy varies according to the bilingual lexicon size and its source in the Fig.", "3 .", "WN seems to be slightly better than using Apertium as source, albeit they both perform well.", "Also, both rapidly achieve acceptable results, with less than a thousand entries, and : Accuracy of our method with respect to different bilingual lexicon sizes and sources.", "WN is the original approach we take to build 
the bilingual lexicon, WN all is a method that takes every pair of lemmas from both languages in every WordNet synset and Apertium uses the translations of the top 50,000 Spanish words in frequencies from the Wikipedia (and that could be translated to Portuguese).", "Note that the usage of Apertium here has nothing to do with Apertium baseline.", "yield stable results when the number of entries is larger.", "This is not the case for the method WN all, which needs more word pairs to achieve reasonable results (around 5,000) and it is less stable with larger number of entries.", "Even though we use WordNet to build the lexicon, which is a rich and expensive resource, it could also be built with less quality entries, such as those that come from the output of a Machine Translation software or just by having a list of known word translations.", "Furthermore, our method proved to work with a small number of word pairs, it can be applied to language pairs with scarce bilingual resources.", "Additionally, it is interesting to observe that despite the fact that some test set pairs may appear in the bilingual lexicon in which our method is based on, when having changed it (by reducing its size or using Apertium), it still shows great performance.", "This suggest the results are not biased towards the test set used in this work.", "Conclusions and Future Work We have provided an approach to classify false friends and cognates which showed to have both high accuracy and coverage, studying it for the particular case of Spanish and Portuguese and providing state-of-the-art results for this pair of languages.", "Here we use up-to-date word embedding techniques, which have shown to excel in other tasks, and which can be enriched with other information such as the words frequencies to enhance the classifier.", "In the future we want to experiment with other word vector representations and state-of-the-art vector space linear transformation such as (Artetxe et al., 2017; Artetxe et al., 2018) .", "Also, we would like to work on fine-grained classifications, as we mentioned before there are some word pairs that behave like cognates in some cases but like false friends in others.", "Our method can be applied to any pair of languages, without requiring a large bilingual corpus or taxonomy, which can be hard to find or expensive to build.", "In contrast, large untagged monolingual corpora are easily obtained on the Internet.", "Similar languages, that commonly have a high number of false friends, can benefit from the technique we present in this document, for example by generating a list of false friends pairs automatically based on words that are written in both languages in the same way." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5.1", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Word Vector Representations", "Method Description", "Linear Transformation Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-63#paper-1130#slide-5
Transform between Vector Spaces
Mikolov et al. (2013b): propose a method to map between two word2vec vector spaces via a linear transformation. Used to build dictionaries and phrase tables.
Mikolov et al. (2013b): propose a method to map between two word2vec vector spaces via a linear transformation. Used to build dictionaries and phrase tables.
[]
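Putting the pieces together, the supervised SVM classifier that the paper evaluates by cross-validation could be assembled as follows. The RBF kernel, the feature scaling, and the two-pair toy dataset are assumptions: the paper does not report the SVM hyperparameters, and the real evaluation uses the 710-pair dataset of Sepúlveda and Aluísio (2011).

```python
# Sketch: train the binary SVM over the three features from the sketch above.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Toy stand-in for the labeled dataset (label 1 = cognates, 0 = false friends).
pairs = [("persona", "pessoa"), ("afeitar", "afectar")]
y = np.array([1, 0])
X = np.array([features(s, t) for s, t in pairs])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X))

# With the full dataset one would evaluate by cross-validation instead, e.g.:
# print(cross_val_score(clf, X, y, cv=10).mean())
```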
GEM-SciDuet-train-63#paper-1130#slide-6
1130
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163 ], "paper_content_text": [ "Introduction Closely related languages often share a significant number of similar words which may have different meanings in each language.", "Similar words with different meanings are called false friends, while similar words sharing meaning are called cognates.", "For instance, between Spanish and Portuguese, the amount of cognates reaches the 85% of the total vocabulary (Ulsh, 1971) .", "This fact represents a clear advantage for language learners, but it may also lead to an important number of interferences, since similar words will be interpreted as in the native language, which is not correct in the case of false friends.", "Generally, the expression false friends refers not only to pairs of identical words, but also to pairs of similar words, differing in a few characters.", "Thus, the Spanish verb halagar (\"to flatten\") and the similar Portuguese verb alagar (\"to flood\") are usually considered false friends.", "Besides traditional false friends, that are similar words with different meanings, Humblé (2006) analyses three more types.", "First, he mentions words with similar meanings but used in different contexts, as esclarecer, which is used in a few contexts in Spanish (esclarecer un crimen, \"clarify a crime\"), but not in other contexts where aclarar is used (aclarar una duda, \"clarify a doubt\"), while in Portuguese esclarecer is used in all these contexts.", "Secondly, there are similar words with partial meaning differences, as abrigo, which in Spanish means \"shelter\" and \"coat\", but in Portuguese has just the first meaning.", "Finally, Humblé (2006) also considers false friends as similar words with the same meaning but used in different syntactic structures in each language, as the Spanish verb hablar (\"to speak\"), which does not accept a sentential direct object, and its Portuguese equivalent falar, which does (*yo hablé que .", ".", ".", "/ eu falei que .", ".", ".", ", *\"I spoke that .", ".", ".", "\").", "These non-traditional false friends are more difficult to detect by language learners than traditional ones, because of their subtle differences.", "Having a list of false friends can help native speakers of one language to avoid confusion when speaking and writing in the other language.", "Such a list could be integrated into a writing assistant to prevent the writer when using these words.", "For Spanish/Portuguese, in particular, while there are printed dictionaries that compile false friends (Otero Brabo Cruz, 2004) , we did not find a complete digital false friends list, therefore, an automatic method for false friends detection would be useful.", "Furthermore, it is interesting to study methods which could generate false friends lists for any pair of similar languages, particularly, languages for which 
this phenomenon has not been studied.", "In this work we present an automatic method for false friends detection.", "We focus on the traditional false friends definition (similar words with different meanings) because of the dataset we count with and also to present our method in a simple context.", "We describe a supervised classifier we constructed to distinguish false friends from cognates based on word embeddings.", "Although for the method development and evaluation we used Spanish and Portuguese, the method could be applied to other language pairs, provided that the resources needed for the method building are available.", "We do not deal with the problem of determining if two words are similar or not, which is prior to the issue we tackle.", "The paper is organized as follows: in Section 2 we describe some related work, in Section 3 we introduce the word embeddings used in this work, in Section 4 we describe our method, in Section 5 we present and analyze the experiments carried out.", "Finally, in Section 6, we present our conclusions and sketch some future work.", "Related Work Previous work use a combination of orthographic, syntactic, semantic and frequency-based features.", "Frunza (2006) worked with French and English, focusing only on orthographic features via a supervised machine learning algorithm.", "While this method can work in some cases -e.g.", "to detect true cognates with a common root, such as inaccesible in Spanish and inacessível in Portuguese (\"inaccessible\"), that come from the Latin word inaccessibilis -it does not take into account the meanings of the words.", "Mitkov et al.", "(2007) used both a distributional and taxonomy-based approach to multiple language pairs: English-French, English-German, English-Spanish and French-Spanish.", "For the former approach, they build vectors based on the words that appear in a window in the corpus, computing the co-occurrence probability.", "Then they defined two methods for classification: one that considers the N nearest neighbors for each word in the pair and computes the Dice coefficient to determine the similarity between both 1 , and another one that is similar but using syntactically related words instead of the adjacent words.", "Additionally, they evaluated a method which uses a taxonomy to classify false friends, and fails back to the distributional similarity for words not included in the taxonomy.", "They achieved better results under this experiment than only using the distributional similarity.", "Based on the former technique, Ljubešic et al.", "(2013) focused on detecting false friends in closely related languages: Slovene and Croatian.", "Likewise, they exploited a distributional technique but also propose the use of Pointwise Mutual Information (PMI) as an effective way to classify false friends via the frequencies in the corpora.", "Sepúlveda and Aluísio (2011) tackled this task for Portuguese and Spanish, taking the same orthographic approach as Frunza (2006) .", "Nonetheless, they carried out an additional experiment in which they added a new feature whose value is the likelihood of one of the words of the pair to be a translation of the other one.", "This number was obtained from a probabilistic Spanish-Portuguese dictionary, previously generated taking a large sentence-aligned bilingual corpus.", "Word Vector Representations As seen in the previous section, some authors (Mitkov et al., 2007; Ljubešic et al., 2013) represented words as vectors by counting occurrences or by building tf-idf vectors, 
among other techniques.", "Similarly, Mikolov et al.", "(2013a) proposed an unsupervised technique, known as word2vec, to efficiently represent words as vectors from a large unlabeled corpus, which has proven to outperform several other representations in tasks involving text as input (LeCun et al., 2015) .", "As it is a vector-based distributional representation technique, it is based on computing a vector space in which vectors are close if their corresponding words appear frequently in the same contexts in the corpus used to train it.", "Interesting relationships and patterns are learned in particular with this method, e.g.", "the result of the vector calculation vector(\"M adrid ) − vector(\"Spain ) + vector(\"F rance ) is closer to vector(\"P aris ) than to any other word vector (Mikolov et al., 2013a) .", "Additionally, Mikolov et al.", "(2013c) has shown a technique properties.", "The 2D graphs represent Spanish and Portuguese word spaces after applying PCA, scaling and rotating to exaggerate the similarities and emphasize the differences.", "The left graph is the source language vector space (in this case Spanish) and the right one is the target language vector space (Portuguese).", "to detect common phrases such as \"New York\" to be part of the vector space, being able to detect more entities and at the same time enhancing the context of others.", "To exploit multi-language capabilities, Mikolov et al.", "(2013b) developed a method to automatically generate dictionaries and phrase tables from small bilingual data (translation word pairs), based on the calculation of a linear transformation between the vector spaces built with word2vec.", "This is presented as an optimization problem that tries to minimize the sum of the Euclidean distances between the translated source word vectors and the target vectors of each pair, and the translation matrix is obtained by means of stochastic gradient descent.", "We chose this distributional representation technique because of this translation property, which is what our method is mainly based on.", "These concepts around word2vec are shown in Fig.", "1 .", "In the example, the five word vectors corresponding to the numbers from \"one\" to \"five\" are shown, and also the word vector \"carpet\" for each language.", "More related words have closer vectors, while unrelated word vectors are at a greater distance.", "At the same time, groups of words are arranged in a similar way, allowing to build translation candidates.", "Method Description As false friends are word pairs in which one seems to be a translation of the other one, our idea is to compare their vectors using Mikolov et al.", "(2013b) technique.", "Our hypothesis is that a word vector in one language should be close to the cognate word vector in another language when it is transformed using this technique, but far when they are false friends, as described hereafter.", "First, we exploited the Spanish and Portuguese Wikipedia's (containing several hundreds of thousands of words) to build the vector spaces we needed, using Gensim's skip-gram based word2vec implementation (Řehůřek and Sojka, 2010) .", "The preprocessing of the Wikipedia's involved the following steps.", "The text was tokenized based on the alphabet of each language, removing words that contain other characters.", "Numbers were converted to their equivalent words.", "Wikipedia non-article pages were removed (e.g.", "disambiguation pages) and punctuation marks were discarded as well.", "Portuguese was harder to tokenize 
provided that the hyphen is widely used as part of the words in the language.", "For example, bem-vindo (\"welcome\") is a single word whereas Uruguai-Japão (\"Uruguay-Japan\") in jogo Uruguai-Japão (\"Uruguay-Japan match\") are two different words, used with an hyphen only in some contexts.", "The right option is to treat them as separate tokens in order to avoid spurious words in the model and to provide more information to existing words (Uruguai and Japão).", "As the word embedding method exploits the text at the level of sentences (and to avoid splitting ambiguous sentences), paragraphs were used as sentences, which still keep semantic relationships.", "A word had to appear at least five times in the corresponding Wikipedia to be considered for construction of the vector space.", "The 2D graphs represent the word spaces after applying PCA, scaling and rotating to emphasize the differences.", "The left graph is the source language vector space (in this case Spanish) and the right one is the target language vector space (Portuguese).", "Secondly, WordNet (Fellbaum, 1998) was used as the bilingual lexicon to build the linear transformation between the vector spaces by applying the same technique described in (Mikolov et al., 2013b) , taking advantage of the multi-language synset alignment available in NLTK (Bird et al., 2009) between Spanish (Gonzalez-Agirre et al., 2012) and Portuguese (de Paiva and Rademaker, 2012), based on Open Multilingual WordNet (Bond and Paik, 2012) .", "We generated this lexicon by iterating through each of the 40,000 WordNet synsets and forming pairs taking their most common Spanish word and Portuguese word.", "Note that this is a small figure compared with the corpus sizes, and we show in the next section that it could be considerably lower.", "We also show that the transformation source needs not to be WordNet (we used it just for convenience), which is an expensive and carefully handcrafted resource; it could be just a bilingual dictionary.", "Finally, we defined a method to distinguish false friends from cognates.", "We defined a binary classifier for determining the class, false friends or cognates, for each pair of similar words.", "Given a candidate pair (source_word, target_word), and the corresponding vectors (source_vector, target_vector), the first step consists of transforming source_vector to the space computed for the target language, using the transformation described above.", "Let T (source_vector) be the result of this transformation.", "Then, to determine if source_word and target_word are cognates (if one of them is a possible translation of the other one), we analyzed the relationship between T (source_vector) and target_vector.", "According to Mikolov et al.", "(2013b) , the transformation we compute between the vector spaces keeps semantic relations between words from the source space to the target space.", "So, if (source_word, target_word) is a pair of cognates, then T (source_vector) should be close to target_vector.", "Otherwise, source_word and target_word are false friends.", "The method is illustrated in Fig.", "2 .", "In the example, the pair (persona, pessoa) are cognates (meaning \"person\" in English) while the pair (af eitar, af ectar) are false friends (meaning \"to shave\" and \"that affects\", respectively).", "If we transform the source word vectors (persona and af eitar) and thus obtain vectors in the target vector space, T (persona) and pessoa are close while T (af eitar) and af ectar are far from each other (while a 
"Following this idea, a threshold needs to be established by which two words are considered cognates.", "In addition to this, we wanted to see whether similar properties help to draw a more accurate decision boundary.", "Hence, we trained and tested, by means of cross-validation, a supervised binary Support Vector Machine classifier, based on three features: • Feature 1: the cosine distance between T(source_vector) and target_vector.", "• Feature 2: the number of word vectors in the target vector space closer to target_vector than T(source_vector), using the cosine distance.", "We believe that in some cases the distance for cognates may be larger, but what counts is whether the transformed vector lies among the closest ones to the target vector.", "• Feature 3: the sum of the distances between target_vector and T(source_vector_i) for the five word vectors source_vector_i nearest to source_vector, using the cosine distance.", "The idea here is that the first feature may be error-prone since it only considers one vector, so considering more vectors (by taking both the context from the source vector and the one from its transformed vector) should reduce the variance, as neighbor word vectors of the source word should be neighbors of the target word.", "We carried out different experiments alternating the language we used as the source and the language we used as the target, and also other parameters, which we show in the next section.", "The source code is public and available to use.", "Experimental Analysis Unfortunately, we are not able to compare our method to several others presented by other authors, as they are based not only on non-public code but also on non-public datasets which are not directly comparable with the one used here.", "Nevertheless, we compare our technique against several methods for the particular case of Spanish and Portuguese and show it is solid.", "First, we set a simple baseline that does the following: it checks whether there exists a WordNet synset that contains both words of the pair among its Spanish and Portuguese words, and if it does, then they are considered cognates.", "Then, we compare to the Machine Translation software Apertium: we take one of the pair's words, translate it and check whether the translation matches the other word.", "We chose this software since it can be accessed offline and it is freely available.", "Apart from this, we compare with the methods of Sepúlveda and Aluísio (2011, experiments 2 and 3.2) and also with a variant of our method that adds a word frequency feature (the relative number of times each word appeared in the corpus).", "Word frequencies are used by other authors and we believe they are a different data source from what the word2vec vectors can provide.", "For these experiments we use the same data set as in (Sepúlveda and Aluísio, 2011) .", "This resource is composed of 710 Spanish-Portuguese word pairs: 338 cognates and 372 false friends.", "The word pairs were selected from the following resources: an online Spanish-Brazilian Portuguese dictionary, an online Spanish-Portuguese dictionary, a list of the most frequent words in Portuguese and Spanish and an online list of different words in Portuguese and Spanish.", "There are no multi-word expressions and roughly half of the pairs are composed of identically spelled words.", "It was annotated by two people.", "It is important to consider that word coverage is a concern in this task, since every method can only work when both words of a pair are present in its resources (in other words, when they are not out of the method's vocabulary).",
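As a rough illustration of how the three features can be computed and fed to the classifier, here is a sketch assuming gensim KeyedVectors for both spaces and the fit_translation_matrix helper sketched earlier; the brute-force rank computation for Feature 2 and all helper names are illustrative rather than the authors' actual code.

```python
import numpy as np
from scipy.spatial.distance import cosine
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def pair_features(src_vecs, tgt_vecs, W, source_word, target_word, k=5):
    t_vec = src_vecs[source_word] @ W        # T(source_vector)
    target_vec = tgt_vecs[target_word]

    # Feature 1: cosine distance between T(source_vector) and target_vector.
    f1 = cosine(t_vec, target_vec)

    # Feature 2: number of target-space vectors closer to target_vector than
    # T(source_vector) is (brute-force rank over the whole vocabulary).
    sims = tgt_vecs.vectors @ target_vec / (
        np.linalg.norm(tgt_vecs.vectors, axis=1) * np.linalg.norm(target_vec))
    f2 = int(np.sum((1.0 - sims) < f1))

    # Feature 3: summed distance from target_vector to the transformed
    # vectors of the k nearest neighbours of source_word.
    neighbours = src_vecs.most_similar(source_word, topn=k)
    f3 = sum(cosine(src_vecs[w] @ W, target_vec) for w, _ in neighbours)
    return [f1, f2, f3]

# With X built by stacking pair_features(...) for each labelled pair and
# y holding the cognate / false-friend labels:
# clf = SVC()
# print(cross_val_score(clf, X, y, cv=10).mean())
```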
"The accuracy thus only takes into account the covered pairs.", "The coverage for the simple baseline can be measured by counting the pairs where both words are present in WordNet.", "Sepúlveda and Aluísio (2011, experiment 2) only considers orthographic and phonetic differences, so it always covers all pairs.", "Sepúlveda and Aluísio (2011, experiment 3.2) uses a dictionary, so the pairs that are in it count towards the coverage.", "The words that could not be translated by Apertium are counted against the coverage of its related method.", "Finally, the pairs that cannot be translated into vectors are counted as not covered by our methods.", "Results are shown in Table 1 .", "It can be appreciated that our method provides both high accuracy and coverage, and that word embedding information can be further improved if additional information, such as the word frequencies, is included.", "We also tested a version of our method that only uses Feature 1 via logistic regression, which reduced the accuracy by roughly 3%, showing that the other two features add some missing information to improve the accuracy.", "As an additional experiment, we tried exploiting WordNet to compute taxonomy-based distances as features in the same manner as Mitkov et al.", "(2007) did, but we did not obtain a significant difference; thus we conclude that it does not add information to what already lies in the features built upon the embeddings.", "As Mikolov et al.", "(2013b) did, we wondered how our method works under different vector configurations, hence we carried out several experiments, varying vector space dimensions.", "We also experimented with vectors for phrases up to two words.", "Finally, we evaluated how the choice of the source language, Spanish or Portuguese, affects the results.", "Accuracy obtained for the ten best configurations, and for the experiment with two-word vectors, is presented in Table 2 .", "For the experiment we used the vector dimensions 100, 200, 400 and 800 (varying source and target dimensions independently); source vector space Spanish and Portuguese; and we also tried a single run with two-word phrases (with Spanish as source and 100 as the vector dimension), summing up 33 configurations in total.", "As can be noted, there are no significant differences in the accuracy of our method when varying the vector sizes.", "Higher dimensions do not provide better results, and they even worsen when the target language dimension is greater than or equal to the source language dimension, as Mikolov et al.", "(2013b) claimed.", "Taking Spanish as the source language seems to be better; this may be due to the corpus sizes: the corpus used to generate the Spanish vector space is 1.4 times larger than the one used for Portuguese.", "Finally, we can observe that including vectors for two-word phrases does not improve results.", "Linear Transformation Analysis We were intrigued to know how different qualities and quantities of bilingual lexicon entries would affect our method's performance.", "We show how the accuracy varies according to the bilingual lexicon size and its source in Fig.", "3 .", "WN seems to be slightly better than using Apertium as source, albeit they both perform well.", "Also, both rapidly achieve acceptable results, with less than a thousand entries, and yield stable results when the number of entries is larger.", "(Figure 3 caption: Accuracy of our method with respect to different bilingual lexicon sizes and sources. WN is the original approach we take to build the bilingual lexicon, WN all takes every pair of lemmas from both languages in every WordNet synset, and Apertium uses the translations of the top 50,000 most frequent Spanish words from the Wikipedia (those that could be translated to Portuguese). The usage of Apertium here has nothing to do with the Apertium baseline.)",
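The coverage-aware evaluation just described can be written as a small helper; this is a sketch assuming predict is a callable wrapping the trained classifier, with out-of-vocabulary pairs counting against coverage rather than accuracy.

```python
def evaluate(pairs, labels, src_vecs, tgt_vecs, predict):
    """Report coverage, and accuracy over the covered pairs only."""
    covered = correct = 0
    for (s, t), y in zip(pairs, labels):
        if s not in src_vecs or t not in tgt_vecs:
            continue  # out of the method's vocabulary: counts against coverage
        covered += 1
        correct += int(predict(s, t) == y)
    coverage = covered / len(pairs)
    accuracy = correct / covered if covered else 0.0
    return coverage, accuracy
```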
"This is not the case for the method WN all, which needs more word pairs to achieve reasonable results (around 5,000) and is less stable with a larger number of entries.", "Even though we use WordNet to build the lexicon, which is a rich and expensive resource, it could also be built from lower-quality entries, such as those that come from the output of a Machine Translation software, or just by having a list of known word translations.", "Furthermore, since our method proved to work with a small number of word pairs, it can be applied to language pairs with scarce bilingual resources.", "Additionally, it is interesting to observe that even though some test-set pairs may appear in the bilingual lexicon on which our method is based, when we changed it (by reducing its size or using Apertium) the method still showed great performance.", "This suggests the results are not biased towards the test set used in this work.", "Conclusions and Future Work We have provided an approach to classify false friends and cognates which shows both high accuracy and coverage, studying it for the particular case of Spanish and Portuguese and providing state-of-the-art results for this pair of languages.", "Here we use up-to-date word embedding techniques, which have shown to excel in other tasks, and which can be enriched with other information, such as word frequencies, to enhance the classifier.", "In the future we want to experiment with other word vector representations and state-of-the-art vector space linear transformations such as (Artetxe et al., 2017; Artetxe et al., 2018) .", "Also, we would like to work on fine-grained classifications; as we mentioned before, there are some word pairs that behave like cognates in some cases but like false friends in others.", "Our method can be applied to any pair of languages, without requiring a large bilingual corpus or taxonomy, which can be hard to find or expensive to build.", "In contrast, large untagged monolingual corpora are easily obtained on the Internet.", "Similar languages, which commonly have a high number of false friends, can benefit from the technique we present in this document, for example by generating a list of false friends pairs automatically based on words that are written in both languages in the same way." ] }
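In the same spirit, the lexicon-size analysis of Fig. 3 can be reproduced with a loop like the following sketch; it reuses the fit_translation_matrix helper sketched earlier, and eval_fn stands for whatever per-lexicon evaluation (e.g. cross-validated accuracy of the full pipeline) is run. Both names are assumptions.

```python
import random

def lexicon_size_curve(lexicon, sizes, src_vecs, tgt_vecs, eval_fn, seed=0):
    """Re-fit the mapping on random sub-lexicons of growing size and
    evaluate each, mirroring the analysis behind Fig. 3."""
    rng = random.Random(seed)
    results = []
    for n in sizes:
        sample = rng.sample(lexicon, min(n, len(lexicon)))
        W = fit_translation_matrix(src_vecs, tgt_vecs, sample)
        results.append((n, eval_fn(W)))
    return results
```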
{ "paper_header_number": [ "1", "2", "3", "4", "5.1", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Word Vector Representations", "Method Description", "Linear Transformation Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-63#paper-1130#slide-6
Our Method
Build word2vec vector spaces, find a linear transformation and measure vector distances. Note that we don't cope with related/unrelated; we just focus on cognate/false friends. We used the Wikipedias for the vector spaces. Open Multilingual WordNet (Bond and Paik, 2012) was used as a bilingual lexicon to fit the linear transformation: we iterated over synsets and took lexical units from each language. Then we employed Least Squares. We take one of the word vectors, transform it to the other space and compute: the cosine distance between T(source_vector) and target_vector; the number of word vectors in the target vector space closer to target_vector than T(source_vector); the sum of the distances between target_vector and T(source_vector_i) for the top 5 word vectors source_vector_i nearest to source_vector.
Build word2vec vector spaces, find a linear transformation and measure vector distances. Note that we don't cope with related/unrelated; we just focus on cognate/false friends. We used the Wikipedias for the vector spaces. Open Multilingual WordNet (Bond and Paik, 2012) was used as a bilingual lexicon to fit the linear transformation: we iterated over synsets and took lexical units from each language. Then we employed Least Squares. We take one of the word vectors, transform it to the other space and compute: the cosine distance between T(source_vector) and target_vector; the number of word vectors in the target vector space closer to target_vector than T(source_vector); the sum of the distances between target_vector and T(source_vector_i) for the top 5 word vectors source_vector_i nearest to source_vector.
[]
GEM-SciDuet-train-63#paper-1130#slide-7
GEM-SciDuet-train-63#paper-1130#slide-7
Experiments
We used the (Sepulveda and Aluisio, 2011) dataset, which is composed of 710 pairs (338 cognates and 372 false friends).
We used the (Sepulveda and Aluisio, 2011) dataset, which is composed of 710 pairs (338 cognates and 372 false friends).
[]
GEM-SciDuet-train-63#paper-1130#slide-10
1130
A High Coverage Method for Automatic False Friends Detection for Spanish and Portuguese
False friends are words in two languages that look or sound similar, but have different meanings. They are a common source of confusion among language learners. Methods to detect them automatically do exist, however they make use of large aligned bilingual corpora, which are hard to find and expensive to build, or encounter problems dealing with infrequent words. In this work we propose a high coverage method that uses word vector representations to build a false friends classifier for any pair of languages, which we apply to the particular case of Spanish and Portuguese. The required resources are a large corpus for each language and a small bilingual lexicon for the pair. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: https:// creativecommons.org/licenses/by/4.0/.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163 ], "paper_content_text": [ "Introduction Closely related languages often share a significant number of similar words which may have different meanings in each language.", "Similar words with different meanings are called false friends, while similar words sharing meaning are called cognates.", "For instance, between Spanish and Portuguese, the amount of cognates reaches the 85% of the total vocabulary (Ulsh, 1971) .", "This fact represents a clear advantage for language learners, but it may also lead to an important number of interferences, since similar words will be interpreted as in the native language, which is not correct in the case of false friends.", "Generally, the expression false friends refers not only to pairs of identical words, but also to pairs of similar words, differing in a few characters.", "Thus, the Spanish verb halagar (\"to flatten\") and the similar Portuguese verb alagar (\"to flood\") are usually considered false friends.", "Besides traditional false friends, that are similar words with different meanings, Humblé (2006) analyses three more types.", "First, he mentions words with similar meanings but used in different contexts, as esclarecer, which is used in a few contexts in Spanish (esclarecer un crimen, \"clarify a crime\"), but not in other contexts where aclarar is used (aclarar una duda, \"clarify a doubt\"), while in Portuguese esclarecer is used in all these contexts.", "Secondly, there are similar words with partial meaning differences, as abrigo, which in Spanish means \"shelter\" and \"coat\", but in Portuguese has just the first meaning.", "Finally, Humblé (2006) also considers false friends as similar words with the same meaning but used in different syntactic structures in each language, as the Spanish verb hablar (\"to speak\"), which does not accept a sentential direct object, and its Portuguese equivalent falar, which does (*yo hablé que .", ".", ".", "/ eu falei que .", ".", ".", ", *\"I spoke that .", ".", ".", "\").", "These non-traditional false friends are more difficult to detect by language learners than traditional ones, because of their subtle differences.", "Having a list of false friends can help native speakers of one language to avoid confusion when speaking and writing in the other language.", "Such a list could be integrated into a writing assistant to prevent the writer when using these words.", "For Spanish/Portuguese, in particular, while there are printed dictionaries that compile false friends (Otero Brabo Cruz, 2004) , we did not find a complete digital false friends list, therefore, an automatic method for false friends detection would be useful.", "Furthermore, it is interesting to study methods which could generate false friends lists for any pair of similar languages, particularly, languages for which 
this phenomenon has not been studied.", "In this work we present an automatic method for false friends detection.", "We focus on the traditional false friends definition (similar words with different meanings) because of the dataset we count with and also to present our method in a simple context.", "We describe a supervised classifier we constructed to distinguish false friends from cognates based on word embeddings.", "Although for the method development and evaluation we used Spanish and Portuguese, the method could be applied to other language pairs, provided that the resources needed for the method building are available.", "We do not deal with the problem of determining if two words are similar or not, which is prior to the issue we tackle.", "The paper is organized as follows: in Section 2 we describe some related work, in Section 3 we introduce the word embeddings used in this work, in Section 4 we describe our method, in Section 5 we present and analyze the experiments carried out.", "Finally, in Section 6, we present our conclusions and sketch some future work.", "Related Work Previous work use a combination of orthographic, syntactic, semantic and frequency-based features.", "Frunza (2006) worked with French and English, focusing only on orthographic features via a supervised machine learning algorithm.", "While this method can work in some cases -e.g.", "to detect true cognates with a common root, such as inaccesible in Spanish and inacessível in Portuguese (\"inaccessible\"), that come from the Latin word inaccessibilis -it does not take into account the meanings of the words.", "Mitkov et al.", "(2007) used both a distributional and taxonomy-based approach to multiple language pairs: English-French, English-German, English-Spanish and French-Spanish.", "For the former approach, they build vectors based on the words that appear in a window in the corpus, computing the co-occurrence probability.", "Then they defined two methods for classification: one that considers the N nearest neighbors for each word in the pair and computes the Dice coefficient to determine the similarity between both 1 , and another one that is similar but using syntactically related words instead of the adjacent words.", "Additionally, they evaluated a method which uses a taxonomy to classify false friends, and fails back to the distributional similarity for words not included in the taxonomy.", "They achieved better results under this experiment than only using the distributional similarity.", "Based on the former technique, Ljubešic et al.", "(2013) focused on detecting false friends in closely related languages: Slovene and Croatian.", "Likewise, they exploited a distributional technique but also propose the use of Pointwise Mutual Information (PMI) as an effective way to classify false friends via the frequencies in the corpora.", "Sepúlveda and Aluísio (2011) tackled this task for Portuguese and Spanish, taking the same orthographic approach as Frunza (2006) .", "Nonetheless, they carried out an additional experiment in which they added a new feature whose value is the likelihood of one of the words of the pair to be a translation of the other one.", "This number was obtained from a probabilistic Spanish-Portuguese dictionary, previously generated taking a large sentence-aligned bilingual corpus.", "Word Vector Representations As seen in the previous section, some authors (Mitkov et al., 2007; Ljubešic et al., 2013) represented words as vectors by counting occurrences or by building tf-idf vectors, 
among other techniques.", "Similarly, Mikolov et al.", "(2013a) proposed an unsupervised technique, known as word2vec, to efficiently represent words as vectors from a large unlabeled corpus, which has proven to outperform several other representations in tasks involving text as input (LeCun et al., 2015) .", "As it is a vector-based distributional representation technique, it is based on computing a vector space in which vectors are close if their corresponding words appear frequently in the same contexts in the corpus used to train it.", "Interesting relationships and patterns are learned in particular with this method, e.g.", "the result of the vector calculation vector(\"M adrid ) − vector(\"Spain ) + vector(\"F rance ) is closer to vector(\"P aris ) than to any other word vector (Mikolov et al., 2013a) .", "Additionally, Mikolov et al.", "(2013c) has shown a technique properties.", "The 2D graphs represent Spanish and Portuguese word spaces after applying PCA, scaling and rotating to exaggerate the similarities and emphasize the differences.", "The left graph is the source language vector space (in this case Spanish) and the right one is the target language vector space (Portuguese).", "to detect common phrases such as \"New York\" to be part of the vector space, being able to detect more entities and at the same time enhancing the context of others.", "To exploit multi-language capabilities, Mikolov et al.", "(2013b) developed a method to automatically generate dictionaries and phrase tables from small bilingual data (translation word pairs), based on the calculation of a linear transformation between the vector spaces built with word2vec.", "This is presented as an optimization problem that tries to minimize the sum of the Euclidean distances between the translated source word vectors and the target vectors of each pair, and the translation matrix is obtained by means of stochastic gradient descent.", "We chose this distributional representation technique because of this translation property, which is what our method is mainly based on.", "These concepts around word2vec are shown in Fig.", "1 .", "In the example, the five word vectors corresponding to the numbers from \"one\" to \"five\" are shown, and also the word vector \"carpet\" for each language.", "More related words have closer vectors, while unrelated word vectors are at a greater distance.", "At the same time, groups of words are arranged in a similar way, allowing to build translation candidates.", "Method Description As false friends are word pairs in which one seems to be a translation of the other one, our idea is to compare their vectors using Mikolov et al.", "(2013b) technique.", "Our hypothesis is that a word vector in one language should be close to the cognate word vector in another language when it is transformed using this technique, but far when they are false friends, as described hereafter.", "First, we exploited the Spanish and Portuguese Wikipedia's (containing several hundreds of thousands of words) to build the vector spaces we needed, using Gensim's skip-gram based word2vec implementation (Řehůřek and Sojka, 2010) .", "The preprocessing of the Wikipedia's involved the following steps.", "The text was tokenized based on the alphabet of each language, removing words that contain other characters.", "Numbers were converted to their equivalent words.", "Wikipedia non-article pages were removed (e.g.", "disambiguation pages) and punctuation marks were discarded as well.", "Portuguese was harder to tokenize 
provided that the hyphen is widely used as part of the words in the language.", "For example, bem-vindo (\"welcome\") is a single word whereas Uruguai-Japão (\"Uruguay-Japan\") in jogo Uruguai-Japão (\"Uruguay-Japan match\") are two different words, used with an hyphen only in some contexts.", "The right option is to treat them as separate tokens in order to avoid spurious words in the model and to provide more information to existing words (Uruguai and Japão).", "As the word embedding method exploits the text at the level of sentences (and to avoid splitting ambiguous sentences), paragraphs were used as sentences, which still keep semantic relationships.", "A word had to appear at least five times in the corresponding Wikipedia to be considered for construction of the vector space.", "The 2D graphs represent the word spaces after applying PCA, scaling and rotating to emphasize the differences.", "The left graph is the source language vector space (in this case Spanish) and the right one is the target language vector space (Portuguese).", "Secondly, WordNet (Fellbaum, 1998) was used as the bilingual lexicon to build the linear transformation between the vector spaces by applying the same technique described in (Mikolov et al., 2013b) , taking advantage of the multi-language synset alignment available in NLTK (Bird et al., 2009) between Spanish (Gonzalez-Agirre et al., 2012) and Portuguese (de Paiva and Rademaker, 2012), based on Open Multilingual WordNet (Bond and Paik, 2012) .", "We generated this lexicon by iterating through each of the 40,000 WordNet synsets and forming pairs taking their most common Spanish word and Portuguese word.", "Note that this is a small figure compared with the corpus sizes, and we show in the next section that it could be considerably lower.", "We also show that the transformation source needs not to be WordNet (we used it just for convenience), which is an expensive and carefully handcrafted resource; it could be just a bilingual dictionary.", "Finally, we defined a method to distinguish false friends from cognates.", "We defined a binary classifier for determining the class, false friends or cognates, for each pair of similar words.", "Given a candidate pair (source_word, target_word), and the corresponding vectors (source_vector, target_vector), the first step consists of transforming source_vector to the space computed for the target language, using the transformation described above.", "Let T (source_vector) be the result of this transformation.", "Then, to determine if source_word and target_word are cognates (if one of them is a possible translation of the other one), we analyzed the relationship between T (source_vector) and target_vector.", "According to Mikolov et al.", "(2013b) , the transformation we compute between the vector spaces keeps semantic relations between words from the source space to the target space.", "So, if (source_word, target_word) is a pair of cognates, then T (source_vector) should be close to target_vector.", "Otherwise, source_word and target_word are false friends.", "The method is illustrated in Fig.", "2 .", "In the example, the pair (persona, pessoa) are cognates (meaning \"person\" in English) while the pair (af eitar, af ectar) are false friends (meaning \"to shave\" and \"that affects\", respectively).", "If we transform the source word vectors (persona and af eitar) and thus obtain vectors in the target vector space, T (persona) and pessoa are close while T (af eitar) and af ectar are far from each other (while a 
valid translation of afeitar, barbear, is close to T(afeitar)).", "Following this idea, a threshold needs to be established by which two words are considered cognates.", "In addition to this, we wanted to see whether related properties could help establish an acceptable decision boundary.", "Hence, we trained and tested by means of cross-validation a supervised binary Support Vector Machines classifier, based on three features: • Feature 1: the cosine distance between T(source_vector) and target_vector.", "• Feature 2: the number of word vectors in the target vector space closer to target_vector than T(source_vector), using the cosine distance.", "We believe that in some cases the distance for cognates may be larger, but what counts is whether the transformed vector lies within the closest ones to the target vector.", "• Feature 3: the sum of the distances between target_vector and T(source_vector_i) for the five word vectors source_vector_i nearest to source_vector, using the cosine distance.", "The idea here is that the first feature may be error-prone since it only considers one vector, so considering more vectors (by taking both the context of the source vector and that of its transformed vector) should reduce the variance, as neighbor word vectors of the source word should be neighbors of the target word.", "We carried out different experiments, alternating which language we used as the source and which as the target, and varying other parameters, as shown in the next section.", "The source code is public and available to use.", "Experimental Analysis Unfortunately, we are not able to compare our method to several others presented by other authors, as they are based not only on non-public code but also on non-public datasets which are not directly comparable with the one used here.", "Nevertheless, we compare our technique against several methods for the particular case of Spanish and Portuguese and show that it is solid.", "First, we set a simple baseline that does the following: it checks whether there exists a WordNet synset that contains both words of the pair among its Spanish and Portuguese lemmas, and if it does, the pair is considered cognates.", "Then, we compare to the Machine Translation software Apertium: we take one of the pair words, translate it and check whether the translation matches the other word.", "We chose this software since it can be accessed offline and it is freely available.", "Apart from this, we compare with the Sepúlveda and Aluísio (2011, experiments 2 and 3.2) methods and also with a variant of our method that adds a word frequency feature (the relative number of times each word appeared in the corpus).", "Word frequencies are used by other authors and we believe they provide a different data source from what the word2vec vectors can offer.", "For these experiments we use the same data set as in (Sepúlveda and Aluísio, 2011).", "This resource is composed of 710 Spanish-Portuguese word pairs: 338 cognates and 372 false friends.", "The word pairs were selected from the following resources: an online Spanish-Brazilian Portuguese dictionary, an online Spanish-Portuguese dictionary, a list of the most frequent words in Portuguese and Spanish and an online list of different words in Portuguese and Spanish.", "There are no multi-word expressions and roughly half of the pairs are composed of identically spelled words.", "It was annotated by two people.", "It is important to consider that word coverage is a concern in this task, since every method can only work when the pair words are present in its resources (in other words, when they are not out of the method's vocabulary).", "The accuracy thus only takes into account the covered pairs.", "The coverage for the simple baseline can be measured by counting the pairs where both words are present in WordNet.", "Sepúlveda and Aluísio (2011, experiment 2) only considers orthographic and phonetic differences, so it always covers all pairs.", "Sepúlveda and Aluísio (2011, experiment 3.2) uses a dictionary, so only the pairs that are in it count towards the coverage.", "The words that could not be translated by Apertium are counted against the coverage of its related method.", "Finally, the pairs that cannot be translated into vectors are counted as not covered by our methods.", "Results are shown in Table 1.", "It can be appreciated that our method provides both high accuracy and coverage, and that the word embedding information can be further improved if additional information, such as the word frequencies, is included.", "We also tested a version of our method that only uses Feature 1 via logistic regression, which reduced the accuracy by roughly 3%, showing that the other two features add some missing information that improves the accuracy.", "As an additional experiment, we tried exploiting WordNet to compute taxonomy-based distances as features in the same manner as Mitkov et al. (2007) did, but we did not obtain a significant difference; thus we conclude that it does not add information to what already lies in the features built upon the embeddings.", "As Mikolov et al. (2013b) did, we wondered how our method works under different vector configurations; hence we carried out several experiments, varying vector space dimensions.", "We also experimented with vectors for phrases of up to two words.", "Finally, we evaluated how the choice of the source language, Spanish or Portuguese, affects the results.", "Accuracies obtained for the ten best configurations, and for the experiment with two-word vectors, are presented in Table 2.", "For the experiment we used the vector dimensions 100, 200, 400 and 800; Spanish and Portuguese as the source vector space; and we also tried a single run with two-word phrases (with Spanish as source and 100 as the vector dimension), summing up to 33 configurations in total.", "As can be noted, there are no significant differences in the accuracy of our method when varying the vector sizes.", "Higher dimensions do not provide better results, and they even worsen when the target language dimension is greater than or equal to the source language dimension, as Mikolov et al. (2013b) claimed.", "Taking Spanish as the source language seems to be better; this may be due to the corpus sizes: the corpus used to generate the Spanish vector space is 1.4 times larger than the one used for Portuguese.", "Finally, we can observe that including vectors for two-word phrases does not improve results.", "Linear Transformation Analysis We were intrigued to know how different qualities and quantities of bilingual lexicon entries would affect our method's performance.", "We show how the accuracy varies according to the bilingual lexicon size and its source in Fig. 3.", "Figure 3 caption: Accuracy of our method with respect to different bilingual lexicon sizes and sources. WN is the original approach we take to build the bilingual lexicon, WN all is a method that takes every pair of lemmas from both languages in every WordNet synset, and Apertium uses the translations of the top 50,000 Spanish words by frequency in the Wikipedia (those that could be translated to Portuguese). Note that the usage of Apertium here has nothing to do with the Apertium baseline.", "WN seems to be slightly better than using Apertium as the source, although both perform well.", "Also, both rapidly achieve acceptable results, with less than a thousand entries, and yield stable results when the number of entries is larger.", "This is not the case for the WN all method, which needs more word pairs to achieve reasonable results (around 5,000) and is less stable with larger numbers of entries.", "Even though we use WordNet, a rich and expensive resource, to build the lexicon, it could also be built with lower-quality entries, such as those that come from the output of Machine Translation software, or just from a list of known word translations.", "Furthermore, since our method proved to work with a small number of word pairs, it can be applied to language pairs with scarce bilingual resources.", "Additionally, it is interesting to observe that even though some test set pairs may appear in the bilingual lexicon on which our method is based, the method still shows great performance when we change the lexicon (by reducing its size or using Apertium).", "This suggests the results are not biased towards the test set used in this work.", "Conclusions and Future Work We have provided an approach to classify false friends and cognates which shows both high accuracy and coverage, studying it for the particular case of Spanish and Portuguese and providing state-of-the-art results for this pair of languages.", "Here we use up-to-date word embedding techniques, which have been shown to excel in other tasks, and which can be enriched with other information, such as word frequencies, to enhance the classifier.", "In the future we want to experiment with other word vector representations and state-of-the-art vector space linear transformations such as (Artetxe et al., 2017; Artetxe et al., 2018).", "Also, we would like to work on fine-grained classifications; as we mentioned before, there are some word pairs that behave like cognates in some cases but like false friends in others.", "Our method can be applied to any pair of languages, without requiring a large bilingual corpus or taxonomy, which can be hard to find or expensive to build.", "In contrast, large untagged monolingual corpora are easily obtained on the Internet.", "Similar languages, which commonly have a high number of false friends, can benefit from the technique we present in this document, for example by generating a list of false friend pairs automatically based on words that are written in both languages in the same way." ] }
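To make the three features above concrete, here is a minimal Python sketch, assuming two gensim KeyedVectors models (es for the Spanish source space, pt for the Portuguese target space) and a source-to-target matrix W learned from the bilingual lexicon; these names and the use of gensim/scipy are illustrative assumptions, not the authors' released code.

    import numpy as np
    from scipy.spatial.distance import cosine

    def pair_features(src_word, tgt_word, es, pt, W):
        t = es[src_word] @ W  # T(source_vector), mapped into the target space
        v = pt[tgt_word]
        # Feature 1: cosine distance between T(source_vector) and target_vector.
        f1 = cosine(t, v)
        # Feature 2: number of target-space word vectors closer to target_vector
        # than T(source_vector) is (a rank-style feature).
        vocab = pt.get_normed_vectors()                # unit-length vocabulary matrix
        dists = 1.0 - vocab @ (v / np.linalg.norm(v))  # cosine distances to all words
        f2 = int((dists < f1).sum())
        # Feature 3: summed cosine distances from target_vector to the transformed
        # vectors of the five nearest neighbors of source_word.
        neighbors = [w for w, _ in es.most_similar(src_word, topn=5)]
        f3 = sum(cosine(es[w] @ W, v) for w in neighbors)
        return [f1, f2, f3]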
{ "paper_header_number": [ "1", "2", "3", "4", "5.1", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Word Vector Representations", "Method Description", "Linear Transformation Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-63#paper-1130#slide-10
Conclusions
We have provided a new approach to classify false friends with high accuracy and coverage. We studied it for Spanish-Portuguese and provided state-of-the-art results for the pair. The method doesn't require rich bilingual datasets. It could be easily applied to other language pairs.
We have provided a new approach to classify false friends with high accuracy and coverage. We studied it for Spanish-Portuguese and provided state-of-the-art results for the pair. The method doesn't require rich bilingual datasets. It could be easily applied to other language pairs.
[]
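Gluing the pieces together, the cross-validated SVM from the experiments could look roughly as follows; pair_features, es, pt and W refer to the earlier sketches, labeled_pairs is a placeholder for the 710-pair dataset, and the kernel and fold count are not specified in the paper, so the defaults here are guesses.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # labeled_pairs: iterable of (spanish_word, portuguese_word, label) tuples,
    # with label 1 for cognates and 0 for false friends (hypothetical layout).
    X = np.array([pair_features(s, t, es, pt, W) for s, t, _ in labeled_pairs])
    y = np.array([label for _, _, label in labeled_pairs])

    clf = SVC()
    print(cross_val_score(clf, X, y, cv=10).mean())  # mean accuracy over covered pairs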
GEM-SciDuet-train-63#paper-1130#slide-11
1130
A High Coverage Method for Automatic False Friends Detection for Spanish and Portuguese
False friends are words in two languages that look or sound similar, but have different meanings. They are a common source of confusion among language learners. Methods to detect them automatically do exist; however, they make use of large aligned bilingual corpora, which are hard to find and expensive to build, or encounter problems dealing with infrequent words. In this work we propose a high coverage method that uses word vector representations to build a false friends classifier for any pair of languages, which we apply to the particular case of Spanish and Portuguese. The required resources are a large corpus for each language and a small bilingual lexicon for the pair. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: https://creativecommons.org/licenses/by/4.0/.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163 ], "paper_content_text": [ "Introduction Closely related languages often share a significant number of similar words which may have different meanings in each language.", "Similar words with different meanings are called false friends, while similar words sharing meaning are called cognates.", "For instance, between Spanish and Portuguese, the amount of cognates reaches the 85% of the total vocabulary (Ulsh, 1971) .", "This fact represents a clear advantage for language learners, but it may also lead to an important number of interferences, since similar words will be interpreted as in the native language, which is not correct in the case of false friends.", "Generally, the expression false friends refers not only to pairs of identical words, but also to pairs of similar words, differing in a few characters.", "Thus, the Spanish verb halagar (\"to flatten\") and the similar Portuguese verb alagar (\"to flood\") are usually considered false friends.", "Besides traditional false friends, that are similar words with different meanings, Humblé (2006) analyses three more types.", "First, he mentions words with similar meanings but used in different contexts, as esclarecer, which is used in a few contexts in Spanish (esclarecer un crimen, \"clarify a crime\"), but not in other contexts where aclarar is used (aclarar una duda, \"clarify a doubt\"), while in Portuguese esclarecer is used in all these contexts.", "Secondly, there are similar words with partial meaning differences, as abrigo, which in Spanish means \"shelter\" and \"coat\", but in Portuguese has just the first meaning.", "Finally, Humblé (2006) also considers false friends as similar words with the same meaning but used in different syntactic structures in each language, as the Spanish verb hablar (\"to speak\"), which does not accept a sentential direct object, and its Portuguese equivalent falar, which does (*yo hablé que .", ".", ".", "/ eu falei que .", ".", ".", ", *\"I spoke that .", ".", ".", "\").", "These non-traditional false friends are more difficult to detect by language learners than traditional ones, because of their subtle differences.", "Having a list of false friends can help native speakers of one language to avoid confusion when speaking and writing in the other language.", "Such a list could be integrated into a writing assistant to prevent the writer when using these words.", "For Spanish/Portuguese, in particular, while there are printed dictionaries that compile false friends (Otero Brabo Cruz, 2004) , we did not find a complete digital false friends list, therefore, an automatic method for false friends detection would be useful.", "Furthermore, it is interesting to study methods which could generate false friends lists for any pair of similar languages, particularly, languages for which 
this phenomenon has not been studied.", "In this work we present an automatic method for false friends detection.", "We focus on the traditional false friends definition (similar words with different meanings) because of the dataset we count with and also to present our method in a simple context.", "We describe a supervised classifier we constructed to distinguish false friends from cognates based on word embeddings.", "Although for the method development and evaluation we used Spanish and Portuguese, the method could be applied to other language pairs, provided that the resources needed for the method building are available.", "We do not deal with the problem of determining if two words are similar or not, which is prior to the issue we tackle.", "The paper is organized as follows: in Section 2 we describe some related work, in Section 3 we introduce the word embeddings used in this work, in Section 4 we describe our method, in Section 5 we present and analyze the experiments carried out.", "Finally, in Section 6, we present our conclusions and sketch some future work.", "Related Work Previous work use a combination of orthographic, syntactic, semantic and frequency-based features.", "Frunza (2006) worked with French and English, focusing only on orthographic features via a supervised machine learning algorithm.", "While this method can work in some cases -e.g.", "to detect true cognates with a common root, such as inaccesible in Spanish and inacessível in Portuguese (\"inaccessible\"), that come from the Latin word inaccessibilis -it does not take into account the meanings of the words.", "Mitkov et al.", "(2007) used both a distributional and taxonomy-based approach to multiple language pairs: English-French, English-German, English-Spanish and French-Spanish.", "For the former approach, they build vectors based on the words that appear in a window in the corpus, computing the co-occurrence probability.", "Then they defined two methods for classification: one that considers the N nearest neighbors for each word in the pair and computes the Dice coefficient to determine the similarity between both 1 , and another one that is similar but using syntactically related words instead of the adjacent words.", "Additionally, they evaluated a method which uses a taxonomy to classify false friends, and fails back to the distributional similarity for words not included in the taxonomy.", "They achieved better results under this experiment than only using the distributional similarity.", "Based on the former technique, Ljubešic et al.", "(2013) focused on detecting false friends in closely related languages: Slovene and Croatian.", "Likewise, they exploited a distributional technique but also propose the use of Pointwise Mutual Information (PMI) as an effective way to classify false friends via the frequencies in the corpora.", "Sepúlveda and Aluísio (2011) tackled this task for Portuguese and Spanish, taking the same orthographic approach as Frunza (2006) .", "Nonetheless, they carried out an additional experiment in which they added a new feature whose value is the likelihood of one of the words of the pair to be a translation of the other one.", "This number was obtained from a probabilistic Spanish-Portuguese dictionary, previously generated taking a large sentence-aligned bilingual corpus.", "Word Vector Representations As seen in the previous section, some authors (Mitkov et al., 2007; Ljubešic et al., 2013) represented words as vectors by counting occurrences or by building tf-idf vectors, 
among other techniques.", "Similarly, Mikolov et al.", "(2013a) proposed an unsupervised technique, known as word2vec, to efficiently represent words as vectors from a large unlabeled corpus, which has proven to outperform several other representations in tasks involving text as input (LeCun et al., 2015) .", "As it is a vector-based distributional representation technique, it is based on computing a vector space in which vectors are close if their corresponding words appear frequently in the same contexts in the corpus used to train it.", "Interesting relationships and patterns are learned in particular with this method, e.g.", "the result of the vector calculation vector(\"M adrid ) − vector(\"Spain ) + vector(\"F rance ) is closer to vector(\"P aris ) than to any other word vector (Mikolov et al., 2013a) .", "Additionally, Mikolov et al.", "(2013c) has shown a technique properties.", "The 2D graphs represent Spanish and Portuguese word spaces after applying PCA, scaling and rotating to exaggerate the similarities and emphasize the differences.", "The left graph is the source language vector space (in this case Spanish) and the right one is the target language vector space (Portuguese).", "to detect common phrases such as \"New York\" to be part of the vector space, being able to detect more entities and at the same time enhancing the context of others.", "To exploit multi-language capabilities, Mikolov et al.", "(2013b) developed a method to automatically generate dictionaries and phrase tables from small bilingual data (translation word pairs), based on the calculation of a linear transformation between the vector spaces built with word2vec.", "This is presented as an optimization problem that tries to minimize the sum of the Euclidean distances between the translated source word vectors and the target vectors of each pair, and the translation matrix is obtained by means of stochastic gradient descent.", "We chose this distributional representation technique because of this translation property, which is what our method is mainly based on.", "These concepts around word2vec are shown in Fig.", "1 .", "In the example, the five word vectors corresponding to the numbers from \"one\" to \"five\" are shown, and also the word vector \"carpet\" for each language.", "More related words have closer vectors, while unrelated word vectors are at a greater distance.", "At the same time, groups of words are arranged in a similar way, allowing to build translation candidates.", "Method Description As false friends are word pairs in which one seems to be a translation of the other one, our idea is to compare their vectors using Mikolov et al.", "(2013b) technique.", "Our hypothesis is that a word vector in one language should be close to the cognate word vector in another language when it is transformed using this technique, but far when they are false friends, as described hereafter.", "First, we exploited the Spanish and Portuguese Wikipedia's (containing several hundreds of thousands of words) to build the vector spaces we needed, using Gensim's skip-gram based word2vec implementation (Řehůřek and Sojka, 2010) .", "The preprocessing of the Wikipedia's involved the following steps.", "The text was tokenized based on the alphabet of each language, removing words that contain other characters.", "Numbers were converted to their equivalent words.", "Wikipedia non-article pages were removed (e.g.", "disambiguation pages) and punctuation marks were discarded as well.", "Portuguese was harder to tokenize 
provided that the hyphen is widely used as part of the words in the language.", "For example, bem-vindo (\"welcome\") is a single word whereas Uruguai-Japão (\"Uruguay-Japan\") in jogo Uruguai-Japão (\"Uruguay-Japan match\") are two different words, used with an hyphen only in some contexts.", "The right option is to treat them as separate tokens in order to avoid spurious words in the model and to provide more information to existing words (Uruguai and Japão).", "As the word embedding method exploits the text at the level of sentences (and to avoid splitting ambiguous sentences), paragraphs were used as sentences, which still keep semantic relationships.", "A word had to appear at least five times in the corresponding Wikipedia to be considered for construction of the vector space.", "The 2D graphs represent the word spaces after applying PCA, scaling and rotating to emphasize the differences.", "The left graph is the source language vector space (in this case Spanish) and the right one is the target language vector space (Portuguese).", "Secondly, WordNet (Fellbaum, 1998) was used as the bilingual lexicon to build the linear transformation between the vector spaces by applying the same technique described in (Mikolov et al., 2013b) , taking advantage of the multi-language synset alignment available in NLTK (Bird et al., 2009) between Spanish (Gonzalez-Agirre et al., 2012) and Portuguese (de Paiva and Rademaker, 2012), based on Open Multilingual WordNet (Bond and Paik, 2012) .", "We generated this lexicon by iterating through each of the 40,000 WordNet synsets and forming pairs taking their most common Spanish word and Portuguese word.", "Note that this is a small figure compared with the corpus sizes, and we show in the next section that it could be considerably lower.", "We also show that the transformation source needs not to be WordNet (we used it just for convenience), which is an expensive and carefully handcrafted resource; it could be just a bilingual dictionary.", "Finally, we defined a method to distinguish false friends from cognates.", "We defined a binary classifier for determining the class, false friends or cognates, for each pair of similar words.", "Given a candidate pair (source_word, target_word), and the corresponding vectors (source_vector, target_vector), the first step consists of transforming source_vector to the space computed for the target language, using the transformation described above.", "Let T (source_vector) be the result of this transformation.", "Then, to determine if source_word and target_word are cognates (if one of them is a possible translation of the other one), we analyzed the relationship between T (source_vector) and target_vector.", "According to Mikolov et al.", "(2013b) , the transformation we compute between the vector spaces keeps semantic relations between words from the source space to the target space.", "So, if (source_word, target_word) is a pair of cognates, then T (source_vector) should be close to target_vector.", "Otherwise, source_word and target_word are false friends.", "The method is illustrated in Fig.", "2 .", "In the example, the pair (persona, pessoa) are cognates (meaning \"person\" in English) while the pair (af eitar, af ectar) are false friends (meaning \"to shave\" and \"that affects\", respectively).", "If we transform the source word vectors (persona and af eitar) and thus obtain vectors in the target vector space, T (persona) and pessoa are close while T (af eitar) and af ectar are far from each other (while a 
valid translation of af eitar, barbear, is close to T (af eitar)).", "Following this idea, a threshold needs to be established by which two words are considered cognates.", "In addition to this, we wanted to see if similar properties help to constitute an acceptable division.", "Hence, we trained and tested by means of cross-validation a supervised binary Support Vector Machines classifier, based on three features: • Feature 1: the cosine distance between T (source_vector) and target_vector.", "• Feature 2: the number of word vectors in the target vector space closer to target_vector than T (source_vector), using the cosine distance.", "We believe that in some cases the distance for cognates may be larger but what it counts is if the transformed vector lays within the closest ones to the target vector.", "• Feature 3: the sum of the distances between target_vector and T (source_vector i ) for the five word vectors source_vector i nearest to source_vector, using the cosine distance.", "The idea here is that the first feature may be error prone since it only considers one vector, so considering more vectors (by taking both the context from the source vector and the one from its transformed vector) should reduce the variance, as neighbor word vectors from the source word should be neighbors of the target word.", "We carried out different experiments alternating the language we used as the source and the language we used as the target, and also other parameters, which we show in the next section.", "The source code is public and available to use.", "2 5 Experimental Analysis Unfortunately, we are not able to compare our method to several others presented by other authors as they are not only based on non-public code, but also on non-public datasets which are not directly comparable with the one used here.", "Nevertheless, we compare our technique against several methods, for the particular case of Spanish and Portuguese and show it is solid.", "First, we set a simple baseline that does the following: it checks if there exist a WordNet synset which contains both pair words within the Spanish and Portuguese words of it, and if it is does, then they are considered cognates.", "Then, we compare to the Machine Translation software Apertium 3 : we take one of the pair words, translate it and check if the translation matches the other word.", "We chose this software since it can be accessed offline and it is freely available.", "Apart from this, we compare with Sepúlveda and Aluísio (2011, experiment 2 and 3.2) method and also with a variant of our method that adds a word frequency feature (the relative number of times each word appeared in the corpus).", "Word frequencies are used by other authors and we believe they are a different data source from what the word2vec vectors can provide.", "For these experiments we use the same data set as in (Sepúlveda and Aluísio, 2011) .", "4 This resource is composed by 710 Spanish-Portuguese word pairs: 338 cognates and 372 false friends.", "The word pairs were selected from the following resources: an online Spanish-Brazilian Portuguese dictionary, an online Spanish-Portuguese dictionary, a list of the most frequent words in Portuguese and Spanish and an online list of different words in Portuguese and Spanish.", "There are not multi-word expressions and roughly half of the pairs are composed of identically spelled words.", "It was annotated by two people.", "It is important to consider that the word coverage is a concern in this task since every method can 
only works when the pair words are present in their resources (in other words, they are not out of a method's vocabulary).", "The accuracy thus only takes into account the covered pairs.", "The coverage for the simple baseline can be measured by counting the pairs were both words are present in WordNet.", "Sepúlveda and Aluísio (2011, experiment 2) only considers orthographic and phonetic differences, so always covers all pairs.", "Sepúlveda and Aluísio (2011, experiment 3 .2) uses a dictionary, then the pairs that are in it count towards the coverage.", "The words that could not be translated by Apertium are counted against the coverage of its related method.", "Finally, the pairs that cannot be translated into vectors are counted as not covered by our methods.", "Results are shown in Table 1 .", "It can be appreciated that our method provides both high accuracy and coverage, and that word embedding information can be further improved if additional information, such as the word frequencies, is included.", "We also tested a version of our method that only uses Feature 1 via logistic regression, which reduced the accuracy by 3% roughly, showing that the other two features add some missing information to improve the accuracy.", "As an additional experiment, we tried exploiting WordNet to compute taxonomy-based distances as features in the same manner as Mitkov et al.", "(2007) did, but we did not obtain a significant difference, thus we conclude that it does not add information to what already lays in the features built upon the embeddings.", "As Mikolov et al.", "(2013b) did, we wondered how our method works under different vector configurations, hence we carried out several experiments, varying vector space dimensions.", "We also experimented with vectors for phrases up to two words.", "Finally, we evaluated how the election of the source language, Spanish or Portuguese, affects the results.", "Accuracy obtained for the ten best configurations, and for the experiment with two word vectors are presented in Table 2 .", "For the experiment we used the vector dimensions 100, 200, 400 and 800; source vector space Spanish and Portuguese; and we also tried with a single run with two-word phrases (with Spanish as source and 100 as the vector dimension), summing up 33 configurations in total.", "As it can be noted, there are no significant differences in the accuracy of our method when varying the vector sizes.", "Higher dimensions do not provide better results and they even worsen when the target language dimension is greater than or equal to the source language dimension, as Mikolov et al.", "(2013b) claimed.", "Taking Spanish as the source language seems to be better, maybe this is due to the corpus sizes: the corpus used to generate the Spanish vector space is 1.4 times larger than the one used for Portuguese.", "Finally, we can observe that including vectors for two-word phrases does not improve results.", "Linear Transformation Analysis We were intrigued in knowing how different qualities and quantities of bilingual lexicon entries would affect our method performance.", "We show how the accuracy varies according to the bilingual lexicon size and its source in the Fig.", "3 .", "WN seems to be slightly better than using Apertium as source, albeit they both perform well.", "Also, both rapidly achieve acceptable results, with less than a thousand entries, and : Accuracy of our method with respect to different bilingual lexicon sizes and sources.", "WN is the original approach we take to build 
the bilingual lexicon, WN all is a method that takes every pair of lemmas from both languages in every WordNet synset and Apertium uses the translations of the top 50,000 Spanish words in frequencies from the Wikipedia (and that could be translated to Portuguese).", "Note that the usage of Apertium here has nothing to do with Apertium baseline.", "yield stable results when the number of entries is larger.", "This is not the case for the method WN all, which needs more word pairs to achieve reasonable results (around 5,000) and it is less stable with larger number of entries.", "Even though we use WordNet to build the lexicon, which is a rich and expensive resource, it could also be built with less quality entries, such as those that come from the output of a Machine Translation software or just by having a list of known word translations.", "Furthermore, our method proved to work with a small number of word pairs, it can be applied to language pairs with scarce bilingual resources.", "Additionally, it is interesting to observe that despite the fact that some test set pairs may appear in the bilingual lexicon in which our method is based on, when having changed it (by reducing its size or using Apertium), it still shows great performance.", "This suggest the results are not biased towards the test set used in this work.", "Conclusions and Future Work We have provided an approach to classify false friends and cognates which showed to have both high accuracy and coverage, studying it for the particular case of Spanish and Portuguese and providing state-of-the-art results for this pair of languages.", "Here we use up-to-date word embedding techniques, which have shown to excel in other tasks, and which can be enriched with other information such as the words frequencies to enhance the classifier.", "In the future we want to experiment with other word vector representations and state-of-the-art vector space linear transformation such as (Artetxe et al., 2017; Artetxe et al., 2018) .", "Also, we would like to work on fine-grained classifications, as we mentioned before there are some word pairs that behave like cognates in some cases but like false friends in others.", "Our method can be applied to any pair of languages, without requiring a large bilingual corpus or taxonomy, which can be hard to find or expensive to build.", "In contrast, large untagged monolingual corpora are easily obtained on the Internet.", "Similar languages, that commonly have a high number of false friends, can benefit from the technique we present in this document, for example by generating a list of false friends pairs automatically based on words that are written in both languages in the same way." ] }
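A hedged sketch of the monolingual embedding step described above, using gensim's skip-gram word2vec over a preprocessed Wikipedia dump; iter_paragraphs is a hypothetical helper standing in for the tokenization pipeline described in the text, and vector_size=100 is just one of the dimensions the paper tests.

    from gensim.models import Word2Vec

    # Paragraphs are used as 'sentences' and words seen fewer than five times
    # are dropped, mirroring the preprocessing described above.
    sentences = [paragraph.split() for paragraph in iter_paragraphs("eswiki.txt")]
    model = Word2Vec(sentences, sg=1, vector_size=100, min_count=5)
    es = model.wv  # the KeyedVectors object used in the other sketches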
{ "paper_header_number": [ "1", "2", "3", "4", "5.1", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Word Vector Representations", "Method Description", "Linear Transformation Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-63#paper-1130#slide-11
Future Work
Experiment with other word vector representations and state-of-the-art vector space linear transformations. Work on fine-grained classifications. E.g., partial false friends.
Experiment with other word vector representations and state-of-the-art vector space linear transformations. Work on fine-grained classifications. E.g., partial false friends.
[]
GEM-SciDuet-train-64#paper-1137#slide-0
1137
Introducing a Lexicon of Verbal Polarity Shifters for English
The sentiment polarity of a phrase does not only depend on the polarities of its words, but also on how these are affected by their context. Negation words (e.g. not, no, never) can change the polarity of a phrase. Similarly, verbs and other content words can also act as polarity shifters (e.g. fail, deny, alleviate). While individually more sparse, they are far more numerous. Among verbs alone, there are more than 1200 shifters. However, sentiment analysis systems barely consider polarity shifters other than negation words. A major reason for this is the scarcity of lexicons and corpora that provide information on them. We introduce a lexicon of verbal polarity shifters that covers the entirety of verbs found in WordNet. We provide a fine-grained annotation of individual word senses, as well as information for each verbal shifter on the syntactic scopes that it can affect.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190 ], "paper_content_text": [ "Introduction Polarity shifters are content words that exhibit semantic properties similar to negation.", "For example, the negated statement in (1) can also be achieved by the verbal shifter fail instead of the negation not, as shown in (2).", "(1) Peter did not pass the exam.", "(2) Peter failed shifter to pass the exam.", "As with negation words, polarity shifters change the polarity of a statement.", "This can happen to both positive and negative statements.", "In (3) the positive polarity of scholarship is shifted by denied, resulting in a negative polarity for the phrase.", "Conversely, the overall polarity of (4) is positive despite the negative polarity of pain.", "Polarity shifting is also caused by other content word classes, such as nouns (e.g.", "downfall) and adjectives (e.g.", "devoid).", "However, this work focusses on verbs, due to their importance as minimal semantic units, far-reaching scopes and potential basis for nominal shifter lexicons (see §2.2.).", "Knowledge of polarity shifting is important for a variety of tasks, especially sentiment analysis (Wiegand et al., 2010; Liu, 2012; Wilson et al., 2005) , as well as relation extraction (Sanchez-Graillet and Poesio, 2007) and textual entailment recognition (Harabagiu et al., 2006) .", "The majority of research into polarity shifting for sentiment analysis has focussed on negation words (Wiegand et al., 2010; Schouten and Frasincar, 2016; Pak and Paroubek, 2010) .", "Negation words (e.g.", "not, no, never) are mostly function words, of which only a small number exists, so exhaustive coverage is comparatively simple.", "Content word classes, such as verbs, are considerably more difficult to cover comprehensively due to their sheer number.", "For example, WordNet (Miller et al., 1990) contains over 10k verbal lemmas.", "Most verbs are also far less frequent than common negation words, making individual verbal shifters seem less important.", "However, overall, verbal shifter lemmas occur 2.6 times as often as negation words (see §4.).", "Most existing resources on negation and polarity shifting cover few to no instances of verbal shifters (see §2.3.).", "To remedy this, we introduce a complete lexicon of verbal shifters with annotations of polarity shifters and their shifting scope for each word sense.", "Our contributions are as follows: (i) A complete lexicon of verbal polarity shifters, covering all verbs found in WordNet 3.1.", "(ii) A fine grained annotation, labelling every sense of a verb separately.", "(iii) Annotations for shifter scope, indicating which parts of a sentence are affected by the shifting.", "The entire dataset is publicly 
available.", "1 Background In this section we will provide a formal definition of polarity shifters ( §2.1.", "), motivate our focus on verbal shifters ( §2.2.)", "and discuss related work ( §2.3.).", "Polarity Shifters The notion of valence or polarity shifting was brought to broad awareness in the research community by the work of Polanyi and Zaenen (2006) .", "Those authors drew attention to the fact that the basic valence of individual lexical items may be shifted in context due to (a) the presence of certain other lexical items, (b) the genre type and discourse structure of the text and (c) cultural factors.", "In subsequent research, the term shifter has since mostly been applied to the case of lexical items that influence polarity.", "Further, the notion of shifting is most prototypically used for situations where a discrete polarity switch occurs between the classes positive, negative and neutral.", "However, for other authors, including Polanyi and Zaenen (2006) , intensification (e.g.", "very disappointing) and downtoning (e.g.", "somewhat disappointing) of polar intensity also falls within the scope of shifting.", "We partially follow this view in that we consider downtoning to be shifting, as it moves the polarity of a word in the opposite direction, i.e.", "making a positive expression less positive (e.g.", "hardly satisfying) and a negative one less negative (e.g.", "slightly problematic).", "We do not consider intensifiers as shifters, as they support the already existing polarity.", "In most research, shifters are commonly illustrated and enumerated rather than formally defined.", "Polanyi and Zaenen (2006) for instance list negation words, intensifiers, modals and presuppositional items as lexical contextual polarity shifters.", "Setting aside downtoners for now, the common denominator of shifting is negation.", "Negation marks contexts in which a situation that the speaker expected fails to occur or hold.", "When this situation is part of a binary opposition (dead -alive), one can firmly conclude that the complementary state of affairs holds (not dead ⇒ alive).", "In cases where the negation affects a scalar notion, which is common in evaluative contexts, the understanding that arises depends on which kinds of scalar inferences and default assumptions are made in the context (Paradis and Willners, 2006) .", "Thus, not good denies the applicability of an evaluation in the region of good or better, but leaves open just how far in the direction of badness the actual interpretation lies: \"It wasn't good\" may be continued with \"but it was ok\" to yield a neutral or mildly positive evaluation or with \"in fact, it was terrible\" to yield a strongly negative one.", "2 While downtoners (e.g.", "somewhat) applied to scalar predicates such as good do not directly express contradiction, they do give rise to negative entailments and inferences.", "Moreover, the structure of scales intrinsically provides shifting.", "Thus, while something being good allows it to be even more positive (\"The movie was good.", "In fact, it was excellent.", "\"), something being somewhat good bounds its positiveness and opens up more negative meanings (\"The performance was somewhat good, but overall rather disappointing\").", "Considering these properties of scales, one can see shifting at work even in the case of downtoning.", "Verbal Shifters While the inclusion of shifting and scalar semantics in semantic representations is not limited to lexical items of particular parts-of-speech -we also find 
shifter adjectives (e.g.", "devoid) and adverbs (e.g.", "barely) -we limit our work to verbal shifters for several reasons.", "As shown by the work of Schneider et al.", "(2016) , verbs, together with nouns, are the most important minimal semantic units in text and thus are prime candidates for being tackled first.", "Verbs are usually the main syntactic predicates of clauses and sentences and thus verbal shifters can be expected to project far-reaching scopes.", "Most nominal shifters (e.g.", "failure, loss), on the other hand, have morphologically related verbs (e.g.", "fail, lose) and we expect that this connection can be exploited to spread shifter classification from verbs to nouns in the future.", "Related to this, the grammar of verbs, for instance with respect to the diversity of scope types, is more complex than that of nouns and so we expect it to be easier to project from verbs to nouns rather than in the opposite direction.", "Related Work Existing lexicons and corpora that cover polarity shifting focus almost exclusively on negation words.", "The most complex negation lexicon for sentiment analysis (Wilson et al., 2005) includes a mere 12 verbal shifters.", "In contrast, our resource covers over 1200 verbal shifter lemmas.", "Corpora used as training data for negation processing, such as the Sentiment Treebank (Socher et al., 2013) or the BioScope corpus (Szarvas et al., 2008) , are fairly small datasets, so only the most frequent negation words appear.", "The BioScope corpus, for example, contains only 6 verbal shifters (Morante, 2010) .", "Schulder et al.", "(2017) show that state-of-the-art systems trained on such data do not reliably detect polarity shifting and should profit from explicit knowledge of verbal shifters.", "The only work to date that covers a larger number of verbal shifters is Schulder et al.", "(2017) , who annotate a sample of the English verbs found in WordNet for whether they exhibit polarity shifting.", "They start by manually annotating an initial 2000 verbs.", "These verbs are used to train an SVM classifier using linguistic features and common language resources.", "The classifier is then run on the remaining WordNet verbs to bootstrap a list of additional likely shifters.", "This list is then checked by a human annotator to detect false positives.", "Combining the initial annotation and the result of the bootstrapping process, they create a list of 3043 verbs.", "While the lexicon by Schulder et al.", "(2017) is an important step towards full coverage of verbal polarity shifters, there are several aspects that we seek to improve upon.", "First of all, their lexicon covers less than a third of the verbs found in WordNet, likely missing a number of verbal shifters.", "Schulder et al.", "(2017) argue that their bootstrap process should cover the majority of shifters, however, this would mean that only 9% of all verbs are shifters.", "3 Their initial annotation of 2000 randomly selected verbs puts the shifter ratio at 15% instead.", "Another issue with their lexicon is that it only labels lemma forms, but does not differentiate between word senses.", "Many verbs do not actually exhibit shifting in all of their senses, so this information will be important for contextual classification.", "Lastly, they forgo the question of shifter scope, i.e.", "which argument of a verb can be affected by its polarity shift.", "Data We treat this annotation effort as a binary labelling task where a word can either cause polarites to shift or not.", "However, instead of 
assigning a single label to an entire verb lemma, as Schulder et al.", "(2017) did, we label individual word senses.", "We outline the rationale for this in §3.1.", "In addition we explicitly specify the syntactic scope of the shifting.", "This is motivated and explained in §3.2.", "§3.3.", "describes the annotation process.", "§3.4.", "describes the data format of our main lexicon.", "Based on this main lexicon we also derive two auxiliary lexicons in §3.5., providing complete labelled lists of all WordNet verb lemmas and all WordNet verb synsets respectively.", "Word Senses Many words that shift polarities only do so for some of their word senses.", "For example, mark down acts as a shifter in (5) , where it has the sense of \"reducing the value of something\", but the sense of \"writing something down to have a record of it\" in (6) causes no shifting.", "In our work we found that among shifter lemmas with multiple word senses, only 23% caused shifting in each of their senses.", "An annotation on the basis of individual word senses is therefore required.", "To differentiate the senses of a verb, we use its synset affiliations found in WordNet.", "Words within the same synset share a shifter label.", "Shifter scope, on the other hand, can differ among words of the same synset (see §3.2.).", "The annotation introduced in §3.3.", "is therefore applied to individual lemma-sense pairs to capture the best of both worlds.", "Shifter Scope A verbal shifter usually only affects the parts of a sentence that are syntactically governed by the verb through its valency.", "However, not every argument of a verbal shifter is subject to polarity shifting.", "Which argument is affected by polarity shifting depends on the verb in question.", "In (7) , surrender shifts only the polarity of its subject, but does not affect the object.", "Conversely, defeat only shifts its object in (8).", "The polarity of the subject of defeat does not play a role in this, as can be seen in (9).", "The given scopes assume that verb phrases are in their active form.", "In passive phrases, subject and object roles are inverted.", "To avoid this issue, sentence structure normalization should be performed before computing shifter scope.", "Synsets in WordNet only capture the semantic similarity of words, but almost no syntactic properties (Ruppenhofer and Brandes, 2015) .", "The shifter scope of a verb depends on its syntactic arguments, which can differ between verbs of the same synset.", "For example, discard and dispose share the sense \"throw or cast away\", but while discard shifts its direct object (10), dispose requires a prepositional object (11).", "For this reason we annotate lemma-synset pairs individually, instead of assigning scope labels to an entire synset.", "We also consider cases where a verbal shifter has more than one potential scope for the same lemma-sense pair.", "For example, infringe can shift its direct object or various prepositional objects, as seen in (12) -(14) .", "Therefore, infringe receives the scope labels dobj, pobj on and pobj upon.", "A verbal shifter will only ever shift the polarity of one of its scopes.", "Which scope is affected by the shifting depends on the given sentence.", "Annotation The entire dataset was labelled by an expert annotator with experience in linguistics and annotation work.", "To measure inter-annotator agreement, a second annotator re-annotated 400 word senses for their shifter label.", "They achieved an agreement of κ = 0.73, indicating substantial agreement 
(Landis and Koch, 1977) .", "The annotation progressed as follows: Given a complete list of WordNet verb lemmas, the annotator would inspect one lemma at a time.", "For this lemma, all senses were looked up.", "For each such lemma-sense pair, the annotator decided whether it is a shifter or not.", "Decisions were based on the sense definition of the synset and whether sentences using this sense of the lemma cause shifting.", "If a word sense was labelled as a shifter, it was subsequently also annotated for its potential shifter scopes.", "In cases where label conflicts between different lemmasense pairs of the same sense were encountered, these labels were reconsidered.", "This introduced an additional robustness to the annotation as it let the annotator revisit challenging cases from a new perspective.", "The resulting list of lemma-sense pairs provides more finegrained information than either an annotation for only word lemmas or only synsets could (see §3.1.", "and §3.2.).", "Main Lexicon File Format We provide our main lexicon as a comma-separated value (csv) file in which each line represents a specific lemmasense-scope triple of a verbal shifter.", "Each line follows the format \"LEMMA,SYNSET,SCOPE\".", "The fields are defined as follows: LEMMA: The lemma form of the verb.", "SYNSET: The numeric identifier of the synset, commonly referred to as offset or database location.", "It consists of 8 digits, including leading zeroes (e.g.", "00334568).", "SCOPE: The scope of the shifting.", "Given as subj for subject position, dobj for direct object position and comp for clausal complements.", "Prepositional object positions are given as pobj * , where * is replaced by the preposition in question, e.g.", "pobj from for objects with the preposition \"from\" or prep of for the preposition \"of\".", "When a lemma has multiple word senses, a separate entry is provided for each lemma-sense pair.", "When a lemma-sense pair has multiple potential shifting scopes, a separate entry is provided for each scope.", "Any combinations not provided are considered not to exhibit shifting.", "Take, for example, the set of entries for \"blow out\": (15) blow out,00436247,subj blow out,02767855,dobj It tells us that blow out in the sense 00436247 (\"melt, break, or become otherwise unusable\") is a shifter that affects its subject.", "The sense 02767855 (\"put out, as of fires, flames, or lights\") also exhibits shifting, but this time affects the direct object.", "It is, however, not a shifter for sense 02766970 (\"erupt in an uncontrolled manner\").", "For an example of multiple scopes for the same word sense, consider cramp: Its sense 00237139 (\"prevent the progress or free movement of\") can shift the polarity of either its direct object (e.g.", "\"it cramped his progress\") or that of a prepositional object with the preposition \"in\" (e.g.", "\"he was cramped in his progress\").", "The three other senses of cramp given by WordNet are not considered shifters.", "Auxiliary Lexicons Our main lexicon is labelled at the lemma-sense pair level to provide the most fine-grained level of information possible.", "It can, however, easily applied to more coarse-grained applications.", "As a convenience, we provide lemma-and synset-level auxiliary lexicons that list all WordNet lemmas and all WordNet synsets, respectively, accompanied with their shifter label.", "A lemma is labelled as a shifter if at least one of its senses is considered a shifter in our main lexicon.", "Similarly, synsets are labelled as shifters 
if at least one of its lemma-realizations is a shifter.", "Statistics In Table 1 we present the ratio of shifters among the verbs contained in WordNet.", "While only about 10% of verbs are shifters, this still results in 1220 lemmas and 924 synsets, more than covered in any other resource (see §2.3.).", "49% of verbs in WordNet are polysemous, i.e.", "they have multiple meanings.", "Among verbal shifters, this ratio is considerably higher, reaching 73%.", "Of these, only 23% are shifters in all of their word senses.", "To get an idea of how common verbal shifters are in actual use, we computed lemma frequencies over the Amazon Product Review Data corpus (Jindal and Liu, 2008) , which comprises over 5.8 million reviews.", "We found this corpus suitable due to its size, sentiment-related content and use in related tasks (Schulder et al., 2017) .", "We observe 1163 different verbal shifter lemmas with an overall total of 34 million occurrences.", "Correcting for nonshifter senses of shifter lemmas 4 , we still estimate 13 million occurrences, accounting for 5% of all verb occurrences in the corpus.", "To compare, the 15 negation words found in the valence shifter lexicon by Wilson et al.", "(2005) occur 13 million times as well.", "While the frequency of individual negation (function) words is unsurprisingly higher, the total number of verbal shifter occurrences highlights that verbal shifters are just as frequent and should not be ignored.", "Statistics on the distribution of shifter scopes can be found in Table 2 .", "74% of verbal shifters have a direct object scope and 10% a prepositional object scope.", "Among these, \"from\" is the most common preposition at 51%, followed by \"of\" with 22%.", "19% shift the polarity of their subject and only 1.5% shift that of a clausal complement.", "This distribution shows that shifting cannot be trivially assumed to always affect the direct object and that explicit knowledge of shifter scopes will be useful for judging the polarity of a phrase.", "Conclusion We introduced a lexicon of verbal polarity shifters that covers the entire verb vocabulary of WordNet.", "Our annotation labels each individual word sense of a verb, providing more fine-grained information than annotations on the lemmalevel would.", "In addition, we also label the syntactic scopes of each verbal shifter that can be affected by the shifting.", "This is a clear improvement over the list of verbal shifters provided by Schulder et al.", "(2017) , which only provides labels at the lemma-level rather than for individual word senses and gives no information regarding shifting scope.", "It also only has human expert annotation for 30% of the verb vocabulary of WordNet, as opposed to our full coverage.", "We hope this resource will help improve fine-grained sentiment analysis systems by providing explicit information on where polarities may shift in a sentence.", "We also hope our work will encourage the creation of similar polarity shifter lexicons for nouns and adjectives.", "As they are more numerous than verbs (WordNet contains 20k adjectival and 110k nominal lemmas), creating such resources will come with its own challenges, especially in the case of nouns." ] }
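To illustrate the "LEMMA,SYNSET,SCOPE" format described above, here is a hedged sketch of loading the main lexicon and querying a verb sense; the file name is a placeholder, and scope strings are stored verbatim (e.g. "subj", "dobj", "pobj from").

    import csv
    from collections import defaultdict

    def load_shifter_lexicon(path="shifter_lexicon.csv"):
        # Maps (lemma, synset_offset) -> set of possible shifting scopes.
        lexicon = defaultdict(set)
        with open(path, newline="", encoding="utf-8") as f:
            for lemma, synset, scope in csv.reader(f):
                lexicon[(lemma, synset)].add(scope)
        return lexicon

    lex = load_shifter_lexicon()
    # Per example (15) above, this should print {'subj'} for the sense
    # "melt, break, or become otherwise unusable" of "blow out".
    print(lex.get(("blow out", "00436247"), "not a shifter"))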
{ "paper_header_number": [ "1.", "2.", "2.1.", "2.2.", "2.3.", "3.", "3.1.", "3.2.", "3.3.", "3.4.", "3.5.", "4.", "5." ], "paper_header_content": [ "Introduction", "Background", "Polarity Shifters", "Verbal Shifters", "Related Work", "Data", "Word Senses", "Shifter Scope", "Annotation", "Main Lexicon File Format", "Auxiliary Lexicons", "Statistics", "Conclusion" ] }
GEM-SciDuet-train-64#paper-1137#slide-0
What are Polarity Shifters
Marc Schulder Saarland University
Marc Schulder Saarland University
[]
GEM-SciDuet-train-64#paper-1137#slide-1
1137
Introducing a Lexicon of Verbal Polarity Shifters for English
The sentiment polarity of a phrase does not only depend on the polarities of its words, but also on how these are affected by their context. Negation words (e.g. not, no, never) can change the polarity of a phrase. Similarly, verbs and other content words can also act as polarity shifters (e.g. fail, deny, alleviate). While individually more sparse, they are far more numerous. Among verbs alone, there are more than 1200 shifters. However, sentiment analysis systems barely consider polarity shifters other than negation words. A major reason for this is the scarcity of lexicons and corpora that provide information on them. We introduce a lexicon of verbal polarity shifters that covers the entirety of verbs found in WordNet. We provide a fine-grained annotation of individual word senses, as well as information for each verbal shifter on the syntactic scopes that it can affect.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190 ], "paper_content_text": [ "Introduction Polarity shifters are content words that exhibit semantic properties similar to negation.", "For example, the negated statement in (1) can also be achieved by the verbal shifter fail instead of the negation not, as shown in (2).", "(1) Peter did not pass the exam.", "(2) Peter failed shifter to pass the exam.", "As with negation words, polarity shifters change the polarity of a statement.", "This can happen to both positive and negative statements.", "In (3) the positive polarity of scholarship is shifted by denied, resulting in a negative polarity for the phrase.", "Conversely, the overall polarity of (4) is positive despite the negative polarity of pain.", "Polarity shifting is also caused by other content word classes, such as nouns (e.g.", "downfall) and adjectives (e.g.", "devoid).", "However, this work focusses on verbs, due to their importance as minimal semantic units, far-reaching scopes and potential basis for nominal shifter lexicons (see §2.2.).", "Knowledge of polarity shifting is important for a variety of tasks, especially sentiment analysis (Wiegand et al., 2010; Liu, 2012; Wilson et al., 2005) , as well as relation extraction (Sanchez-Graillet and Poesio, 2007) and textual entailment recognition (Harabagiu et al., 2006) .", "The majority of research into polarity shifting for sentiment analysis has focussed on negation words (Wiegand et al., 2010; Schouten and Frasincar, 2016; Pak and Paroubek, 2010) .", "Negation words (e.g.", "not, no, never) are mostly function words, of which only a small number exists, so exhaustive coverage is comparatively simple.", "Content word classes, such as verbs, are considerably more difficult to cover comprehensively due to their sheer number.", "For example, WordNet (Miller et al., 1990) contains over 10k verbal lemmas.", "Most verbs are also far less frequent than common negation words, making individual verbal shifters seem less important.", "However, overall, verbal shifter lemmas occur 2.6 times as often as negation words (see §4.).", "Most existing resources on negation and polarity shifting cover few to no instances of verbal shifters (see §2.3.).", "To remedy this, we introduce a complete lexicon of verbal shifters with annotations of polarity shifters and their shifting scope for each word sense.", "Our contributions are as follows: (i) A complete lexicon of verbal polarity shifters, covering all verbs found in WordNet 3.1.", "(ii) A fine grained annotation, labelling every sense of a verb separately.", "(iii) Annotations for shifter scope, indicating which parts of a sentence are affected by the shifting.", "The entire dataset is publicly 
available.", "1 Background In this section we will provide a formal definition of polarity shifters ( §2.1.", "), motivate our focus on verbal shifters ( §2.2.)", "and discuss related work ( §2.3.).", "Polarity Shifters The notion of valence or polarity shifting was brought to broad awareness in the research community by the work of Polanyi and Zaenen (2006) .", "Those authors drew attention to the fact that the basic valence of individual lexical items may be shifted in context due to (a) the presence of certain other lexical items, (b) the genre type and discourse structure of the text and (c) cultural factors.", "In subsequent research, the term shifter has since mostly been applied to the case of lexical items that influence polarity.", "Further, the notion of shifting is most prototypically used for situations where a discrete polarity switch occurs between the classes positive, negative and neutral.", "However, for other authors, including Polanyi and Zaenen (2006) , intensification (e.g.", "very disappointing) and downtoning (e.g.", "somewhat disappointing) of polar intensity also falls within the scope of shifting.", "We partially follow this view in that we consider downtoning to be shifting, as it moves the polarity of a word in the opposite direction, i.e.", "making a positive expression less positive (e.g.", "hardly satisfying) and a negative one less negative (e.g.", "slightly problematic).", "We do not consider intensifiers as shifters, as they support the already existing polarity.", "In most research, shifters are commonly illustrated and enumerated rather than formally defined.", "Polanyi and Zaenen (2006) for instance list negation words, intensifiers, modals and presuppositional items as lexical contextual polarity shifters.", "Setting aside downtoners for now, the common denominator of shifting is negation.", "Negation marks contexts in which a situation that the speaker expected fails to occur or hold.", "When this situation is part of a binary opposition (dead -alive), one can firmly conclude that the complementary state of affairs holds (not dead ⇒ alive).", "In cases where the negation affects a scalar notion, which is common in evaluative contexts, the understanding that arises depends on which kinds of scalar inferences and default assumptions are made in the context (Paradis and Willners, 2006) .", "Thus, not good denies the applicability of an evaluation in the region of good or better, but leaves open just how far in the direction of badness the actual interpretation lies: \"It wasn't good\" may be continued with \"but it was ok\" to yield a neutral or mildly positive evaluation or with \"in fact, it was terrible\" to yield a strongly negative one.", "2 While downtoners (e.g.", "somewhat) applied to scalar predicates such as good do not directly express contradiction, they do give rise to negative entailments and inferences.", "Moreover, the structure of scales intrinsically provides shifting.", "Thus, while something being good allows it to be even more positive (\"The movie was good.", "In fact, it was excellent.", "\"), something being somewhat good bounds its positiveness and opens up more negative meanings (\"The performance was somewhat good, but overall rather disappointing\").", "Considering these properties of scales, one can see shifting at work even in the case of downtoning.", "Verbal Shifters While the inclusion of shifting and scalar semantics in semantic representations is not limited to lexical items of particular parts-of-speech -we also find 
shifter adjectives (e.g.", "devoid) and adverbs (e.g.", "barely) - we limit our work to verbal shifters for several reasons.", "As shown by the work of Schneider et al.", "(2016) , verbs, together with nouns, are the most important minimal semantic units in text and thus are prime candidates for being tackled first.", "Verbs are usually the main syntactic predicates of clauses and sentences and thus verbal shifters can be expected to project far-reaching scopes.", "Most nominal shifters (e.g.", "failure, loss), on the other hand, have morphologically related verbs (e.g.", "fail, lose) and we expect that this connection can be exploited to spread shifter classification from verbs to nouns in the future.", "Related to this, the grammar of verbs, for instance with respect to the diversity of scope types, is more complex than that of nouns and so we expect it to be easier to project from verbs to nouns rather than in the opposite direction.", "Related Work Existing lexicons and corpora that cover polarity shifting focus almost exclusively on negation words.", "The most complex negation lexicon for sentiment analysis (Wilson et al., 2005) includes a mere 12 verbal shifters.", "In contrast, our resource covers over 1200 verbal shifter lemmas.", "Corpora used as training data for negation processing, such as the Sentiment Treebank (Socher et al., 2013) or the BioScope corpus (Szarvas et al., 2008) , are fairly small datasets, so only the most frequent negation words appear.", "The BioScope corpus, for example, contains only 6 verbal shifters (Morante, 2010) .", "Schulder et al.", "(2017) show that state-of-the-art systems trained on such data do not reliably detect polarity shifting and should profit from explicit knowledge of verbal shifters.", "The only work to date that covers a larger number of verbal shifters is Schulder et al.", "(2017) , who annotate a sample of the English verbs found in WordNet for whether they exhibit polarity shifting.", "They start by manually annotating an initial 2000 verbs.", "These verbs are used to train an SVM classifier using linguistic features and common language resources.", "The classifier is then run on the remaining WordNet verbs to bootstrap a list of additional likely shifters.", "This list is then checked by a human annotator to detect false positives.", "Combining the initial annotation and the result of the bootstrapping process, they create a list of 3043 verbs.", "While the lexicon by Schulder et al.", "(2017) is an important step towards full coverage of verbal polarity shifters, there are several aspects that we seek to improve upon.", "First of all, their lexicon covers less than a third of the verbs found in WordNet, likely missing a number of verbal shifters.", "Schulder et al.", "(2017) argue that their bootstrap process should cover the majority of shifters; however, this would mean that only 9% of all verbs are shifters.", "3 Their initial annotation of 2000 randomly selected verbs puts the shifter ratio at 15% instead.", "Another issue with their lexicon is that it only labels lemma forms, but does not differentiate between word senses.", "Many verbs do not actually exhibit shifting in all of their senses, so this information will be important for contextual classification.", "Lastly, they forgo the question of shifter scope, i.e.", "which argument of a verb can be affected by its polarity shift.", "Data We treat this annotation effort as a binary labelling task where a word can either cause polarities to shift or not.", "However, instead of
assigning a single label to an entire verb lemma, as Schulder et al.", "(2017) did, we label individual word senses.", "We outline the rationale for this in §3.1.", "In addition we explicitly specify the syntactic scope of the shifting.", "This is motivated and explained in §3.2.", "§3.3.", "describes the annotation process.", "§3.4.", "describes the data format of our main lexicon.", "Based on this main lexicon we also derive two auxiliary lexicons in §3.5., providing complete labelled lists of all WordNet verb lemmas and all WordNet verb synsets respectively.", "Word Senses Many words that shift polarities only do so for some of their word senses.", "For example, mark down acts as a shifter in (5) , where it has the sense of \"reducing the value of something\", but the sense of \"writing something down to have a record of it\" in (6) causes no shifting.", "In our work we found that among shifter lemmas with multiple word senses, only 23% caused shifting in each of their senses.", "An annotation on the basis of individual word senses is therefore required.", "To differentiate the senses of a verb, we use its synset affiliations found in WordNet.", "Words within the same synset share a shifter label.", "Shifter scope, on the other hand, can differ among words of the same synset (see §3.2.).", "The annotation introduced in §3.3.", "is therefore applied to individual lemma-sense pairs to capture the best of both worlds.", "Shifter Scope A verbal shifter usually only affects the parts of a sentence that are syntactically governed by the verb through its valency.", "However, not every argument of a verbal shifter is subject to polarity shifting.", "Which argument is affected by polarity shifting depends on the verb in question.", "In (7) , surrender shifts only the polarity of its subject, but does not affect the object.", "Conversely, defeat only shifts its object in (8).", "The polarity of the subject of defeat does not play a role in this, as can be seen in (9).", "The given scopes assume that verb phrases are in their active form.", "In passive phrases, subject and object roles are inverted.", "To avoid this issue, sentence structure normalization should be performed before computing shifter scope.", "Synsets in WordNet only capture the semantic similarity of words, but almost no syntactic properties (Ruppenhofer and Brandes, 2015) .", "The shifter scope of a verb depends on its syntactic arguments, which can differ between verbs of the same synset.", "For example, discard and dispose share the sense \"throw or cast away\", but while discard shifts its direct object (10), dispose requires a prepositional object (11).", "For this reason we annotate lemma-synset pairs individually, instead of assigning scope labels to an entire synset.", "We also consider cases where a verbal shifter has more than one potential scope for the same lemma-sense pair.", "For example, infringe can shift its direct object or various prepositional objects, as seen in (12) -(14) .", "Therefore, infringe receives the scope labels dobj, pobj on and pobj upon.", "A verbal shifter will only ever shift the polarity of one of its scopes.", "Which scope is affected by the shifting depends on the given sentence.", "Annotation The entire dataset was labelled by an expert annotator with experience in linguistics and annotation work.", "To measure inter-annotator agreement, a second annotator re-annotated 400 word senses for their shifter label.", "They achieved an agreement of κ = 0.73, indicating substantial agreement 
(Landis and Koch, 1977) .", "The annotation progressed as follows: Given a complete list of WordNet verb lemmas, the annotator would inspect one lemma at a time.", "For this lemma, all senses were looked up.", "For each such lemma-sense pair, the annotator decided whether it is a shifter or not.", "Decisions were based on the sense definition of the synset and whether sentences using this sense of the lemma cause shifting.", "If a word sense was labelled as a shifter, it was subsequently also annotated for its potential shifter scopes.", "In cases where label conflicts between different lemma-sense pairs of the same sense were encountered, these labels were reconsidered.", "This introduced an additional robustness to the annotation as it let the annotator revisit challenging cases from a new perspective.", "The resulting list of lemma-sense pairs provides more fine-grained information than either an annotation for only word lemmas or only synsets could (see §3.1.", "and §3.2.).", "Main Lexicon File Format We provide our main lexicon as a comma-separated value (csv) file in which each line represents a specific lemma-sense-scope triple of a verbal shifter.", "Each line follows the format \"LEMMA,SYNSET,SCOPE\".", "The fields are defined as follows: LEMMA: The lemma form of the verb.", "SYNSET: The numeric identifier of the synset, commonly referred to as offset or database location.", "It consists of 8 digits, including leading zeroes (e.g.", "00334568).", "SCOPE: The scope of the shifting.", "Given as subj for subject position, dobj for direct object position and comp for clausal complements.", "Prepositional object positions are given as pobj * , where * is replaced by the preposition in question, e.g.", "pobj from for objects with the preposition \"from\" or pobj of for the preposition \"of\".", "When a lemma has multiple word senses, a separate entry is provided for each lemma-sense pair.", "When a lemma-sense pair has multiple potential shifting scopes, a separate entry is provided for each scope.", "Any combinations not provided are considered not to exhibit shifting.", "Take, for example, the set of entries for \"blow out\": (15) blow out,00436247,subj blow out,02767855,dobj It tells us that blow out in the sense 00436247 (\"melt, break, or become otherwise unusable\") is a shifter that affects its subject.", "The sense 02767855 (\"put out, as of fires, flames, or lights\") also exhibits shifting, but this time affects the direct object.", "It is, however, not a shifter for sense 02766970 (\"erupt in an uncontrolled manner\").", "For an example of multiple scopes for the same word sense, consider cramp: Its sense 00237139 (\"prevent the progress or free movement of\") can shift the polarity of either its direct object (e.g.", "\"it cramped his progress\") or that of a prepositional object with the preposition \"in\" (e.g.", "\"he was cramped in his progress\").", "The three other senses of cramp given by WordNet are not considered shifters.", "Auxiliary Lexicons Our main lexicon is labelled at the lemma-sense pair level to provide the most fine-grained level of information possible.", "It can, however, easily be applied to more coarse-grained applications.", "As a convenience, we provide lemma- and synset-level auxiliary lexicons that list all WordNet lemmas and all WordNet synsets, respectively, accompanied with their shifter label.", "A lemma is labelled as a shifter if at least one of its senses is considered a shifter in our main lexicon.", "Similarly, synsets are labelled as shifters
if at least one of its lemma-realizations is a shifter.", "Statistics In Table 1 we present the ratio of shifters among the verbs contained in WordNet.", "While only about 10% of verbs are shifters, this still results in 1220 lemmas and 924 synsets, more than covered in any other resource (see §2.3.).", "49% of verbs in WordNet are polysemous, i.e.", "they have multiple meanings.", "Among verbal shifters, this ratio is considerably higher, reaching 73%.", "Of these, only 23% are shifters in all of their word senses.", "To get an idea of how common verbal shifters are in actual use, we computed lemma frequencies over the Amazon Product Review Data corpus (Jindal and Liu, 2008) , which comprises over 5.8 million reviews.", "We found this corpus suitable due to its size, sentiment-related content and use in related tasks (Schulder et al., 2017) .", "We observe 1163 different verbal shifter lemmas with an overall total of 34 million occurrences.", "Correcting for non-shifter senses of shifter lemmas 4 , we still estimate 13 million occurrences, accounting for 5% of all verb occurrences in the corpus.", "To compare, the 15 negation words found in the valence shifter lexicon by Wilson et al.", "(2005) occur 13 million times as well.", "While the frequency of individual negation (function) words is unsurprisingly higher, the total number of verbal shifter occurrences highlights that verbal shifters are just as frequent and should not be ignored.", "Statistics on the distribution of shifter scopes can be found in Table 2 .", "74% of verbal shifters have a direct object scope and 10% a prepositional object scope.", "Among these, \"from\" is the most common preposition at 51%, followed by \"of\" with 22%.", "19% shift the polarity of their subject and only 1.5% shift that of a clausal complement.", "This distribution shows that shifting cannot be trivially assumed to always affect the direct object and that explicit knowledge of shifter scopes will be useful for judging the polarity of a phrase.", "Conclusion We introduced a lexicon of verbal polarity shifters that covers the entire verb vocabulary of WordNet.", "Our annotation labels each individual word sense of a verb, providing more fine-grained information than annotations on the lemma-level would.", "In addition, we also label the syntactic scopes of each verbal shifter that can be affected by the shifting.", "This is a clear improvement over the list of verbal shifters provided by Schulder et al.", "(2017) , which only provides labels at the lemma-level rather than for individual word senses and gives no information regarding shifting scope.", "It also only has human expert annotation for 30% of the verb vocabulary of WordNet, as opposed to our full coverage.", "We hope this resource will help improve fine-grained sentiment analysis systems by providing explicit information on where polarities may shift in a sentence.", "We also hope our work will encourage the creation of similar polarity shifter lexicons for nouns and adjectives.", "As they are more numerous than verbs (WordNet contains 20k adjectival and 110k nominal lemmas), creating such resources will come with its own challenges, especially in the case of nouns." ] }
{ "paper_header_number": [ "1.", "2.", "2.1.", "2.2.", "2.3.", "3.", "3.1.", "3.2.", "3.3.", "3.4.", "3.5.", "4.", "5." ], "paper_header_content": [ "Introduction", "Background", "Polarity Shifters", "Verbal Shifters", "Related Work", "Data", "Word Senses", "Shifter Scope", "Annotation", "Main Lexicon File Format", "Auxiliary Lexicons", "Statistics", "Conclusion" ] }
GEM-SciDuet-train-64#paper-1137#slide-1
Shifters vs Negation
Word type: closed class vs. open class. Existing polarity classifiers can process negation, but fail to detect polarity shifters due to a lack of resources.
[]
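The Shifter Scope section of the paper argues that a verbal shifter only ever shifts one of its scopes per sentence, and that passive phrases should be normalized to active voice before scopes are computed. The sketch below illustrates how a sentiment system might apply a looked-up scope set to a parsed clause; it is not the authors' system, and the simple (dependency label, lemma, polarity) argument triples are a hypothetical input format.

def apply_verbal_shifter(verb_scopes, arguments):
    """Flip the polarity of the first argument that falls in a shifter scope.

    verb_scopes: scopes licensed for this verb sense, e.g. {"dobj", "pobj from"}
    arguments:   list of (dep_label, lemma, polarity) triples from a parse,
                 already normalized to active voice; polarity in {-1, 0, +1}.
    Per the paper, a shifter only ever shifts one of its scopes.
    """
    shifted = []
    done = False
    for dep, lemma, pol in arguments:
        if not done and dep in verb_scopes and pol != 0:
            pol = -pol  # e.g. "denied the scholarship": positive becomes negative
            done = True
        shifted.append((dep, lemma, pol))
    return shifted

# Hypothetical example for "denied the scholarship" (denied shifts its dobj):
# apply_verbal_shifter({"dobj"}, [("subj", "committee", 0), ("dobj", "scholarship", 1)])
# -> [("subj", "committee", 0), ("dobj", "scholarship", -1)]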
GEM-SciDuet-train-64#paper-1137#slide-2
1137
Introducing a Lexicon of Verbal Polarity Shifters for English
The sentiment polarity of a phrase does not only depend on the polarities of its words, but also on how these are affected by their context. Negation words (e.g. not, no, never) can change the polarity of a phrase. Similarly, verbs and other content words can also act as polarity shifters (e.g. fail, deny, alleviate). While individually more sparse, they are far more numerous. Among verbs alone, there are more than 1200 shifters. However, sentiment analysis systems barely consider polarity shifters other than negation words. A major reason for this is the scarcity of lexicons and corpora that provide information on them. We introduce a lexicon of verbal polarity shifters that covers the entirety of verbs found in WordNet. We provide a fine-grained annotation of individual word senses, as well as information for each verbal shifter on the syntactic scopes that it can affect.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190 ], "paper_content_text": [ "Introduction Polarity shifters are content words that exhibit semantic properties similar to negation.", "For example, the negated statement in (1) can also be achieved by the verbal shifter fail instead of the negation not, as shown in (2).", "(1) Peter did not pass the exam.", "(2) Peter failed shifter to pass the exam.", "As with negation words, polarity shifters change the polarity of a statement.", "This can happen to both positive and negative statements.", "In (3) the positive polarity of scholarship is shifted by denied, resulting in a negative polarity for the phrase.", "Conversely, the overall polarity of (4) is positive despite the negative polarity of pain.", "Polarity shifting is also caused by other content word classes, such as nouns (e.g.", "downfall) and adjectives (e.g.", "devoid).", "However, this work focusses on verbs, due to their importance as minimal semantic units, far-reaching scopes and potential basis for nominal shifter lexicons (see §2.2.).", "Knowledge of polarity shifting is important for a variety of tasks, especially sentiment analysis (Wiegand et al., 2010; Liu, 2012; Wilson et al., 2005) , as well as relation extraction (Sanchez-Graillet and Poesio, 2007) and textual entailment recognition (Harabagiu et al., 2006) .", "The majority of research into polarity shifting for sentiment analysis has focussed on negation words (Wiegand et al., 2010; Schouten and Frasincar, 2016; Pak and Paroubek, 2010) .", "Negation words (e.g.", "not, no, never) are mostly function words, of which only a small number exists, so exhaustive coverage is comparatively simple.", "Content word classes, such as verbs, are considerably more difficult to cover comprehensively due to their sheer number.", "For example, WordNet (Miller et al., 1990) contains over 10k verbal lemmas.", "Most verbs are also far less frequent than common negation words, making individual verbal shifters seem less important.", "However, overall, verbal shifter lemmas occur 2.6 times as often as negation words (see §4.).", "Most existing resources on negation and polarity shifting cover few to no instances of verbal shifters (see §2.3.).", "To remedy this, we introduce a complete lexicon of verbal shifters with annotations of polarity shifters and their shifting scope for each word sense.", "Our contributions are as follows: (i) A complete lexicon of verbal polarity shifters, covering all verbs found in WordNet 3.1.", "(ii) A fine grained annotation, labelling every sense of a verb separately.", "(iii) Annotations for shifter scope, indicating which parts of a sentence are affected by the shifting.", "The entire dataset is publicly 
available.", "1 Background In this section we will provide a formal definition of polarity shifters ( §2.1.", "), motivate our focus on verbal shifters ( §2.2.)", "and discuss related work ( §2.3.).", "Polarity Shifters The notion of valence or polarity shifting was brought to broad awareness in the research community by the work of Polanyi and Zaenen (2006) .", "Those authors drew attention to the fact that the basic valence of individual lexical items may be shifted in context due to (a) the presence of certain other lexical items, (b) the genre type and discourse structure of the text and (c) cultural factors.", "In subsequent research, the term shifter has since mostly been applied to the case of lexical items that influence polarity.", "Further, the notion of shifting is most prototypically used for situations where a discrete polarity switch occurs between the classes positive, negative and neutral.", "However, for other authors, including Polanyi and Zaenen (2006) , intensification (e.g.", "very disappointing) and downtoning (e.g.", "somewhat disappointing) of polar intensity also falls within the scope of shifting.", "We partially follow this view in that we consider downtoning to be shifting, as it moves the polarity of a word in the opposite direction, i.e.", "making a positive expression less positive (e.g.", "hardly satisfying) and a negative one less negative (e.g.", "slightly problematic).", "We do not consider intensifiers as shifters, as they support the already existing polarity.", "In most research, shifters are commonly illustrated and enumerated rather than formally defined.", "Polanyi and Zaenen (2006) for instance list negation words, intensifiers, modals and presuppositional items as lexical contextual polarity shifters.", "Setting aside downtoners for now, the common denominator of shifting is negation.", "Negation marks contexts in which a situation that the speaker expected fails to occur or hold.", "When this situation is part of a binary opposition (dead -alive), one can firmly conclude that the complementary state of affairs holds (not dead ⇒ alive).", "In cases where the negation affects a scalar notion, which is common in evaluative contexts, the understanding that arises depends on which kinds of scalar inferences and default assumptions are made in the context (Paradis and Willners, 2006) .", "Thus, not good denies the applicability of an evaluation in the region of good or better, but leaves open just how far in the direction of badness the actual interpretation lies: \"It wasn't good\" may be continued with \"but it was ok\" to yield a neutral or mildly positive evaluation or with \"in fact, it was terrible\" to yield a strongly negative one.", "2 While downtoners (e.g.", "somewhat) applied to scalar predicates such as good do not directly express contradiction, they do give rise to negative entailments and inferences.", "Moreover, the structure of scales intrinsically provides shifting.", "Thus, while something being good allows it to be even more positive (\"The movie was good.", "In fact, it was excellent.", "\"), something being somewhat good bounds its positiveness and opens up more negative meanings (\"The performance was somewhat good, but overall rather disappointing\").", "Considering these properties of scales, one can see shifting at work even in the case of downtoning.", "Verbal Shifters While the inclusion of shifting and scalar semantics in semantic representations is not limited to lexical items of particular parts-of-speech -we also find 
shifter adjectives (e.g.", "devoid) and adverbs (e.g.", "barely) -we limit our work to verbal shifters for several reasons.", "As shown by the work of Schneider et al.", "(2016) , verbs, together with nouns, are the most important minimal semantic units in text and thus are prime candidates for being tackled first.", "Verbs are usually the main syntactic predicates of clauses and sentences and thus verbal shifters can be expected to project far-reaching scopes.", "Most nominal shifters (e.g.", "failure, loss), on the other hand, have morphologically related verbs (e.g.", "fail, lose) and we expect that this connection can be exploited to spread shifter classification from verbs to nouns in the future.", "Related to this, the grammar of verbs, for instance with respect to the diversity of scope types, is more complex than that of nouns and so we expect it to be easier to project from verbs to nouns rather than in the opposite direction.", "Related Work Existing lexicons and corpora that cover polarity shifting focus almost exclusively on negation words.", "The most complex negation lexicon for sentiment analysis (Wilson et al., 2005) includes a mere 12 verbal shifters.", "In contrast, our resource covers over 1200 verbal shifter lemmas.", "Corpora used as training data for negation processing, such as the Sentiment Treebank (Socher et al., 2013) or the BioScope corpus (Szarvas et al., 2008) , are fairly small datasets, so only the most frequent negation words appear.", "The BioScope corpus, for example, contains only 6 verbal shifters (Morante, 2010) .", "Schulder et al.", "(2017) show that state-of-the-art systems trained on such data do not reliably detect polarity shifting and should profit from explicit knowledge of verbal shifters.", "The only work to date that covers a larger number of verbal shifters is Schulder et al.", "(2017) , who annotate a sample of the English verbs found in WordNet for whether they exhibit polarity shifting.", "They start by manually annotating an initial 2000 verbs.", "These verbs are used to train an SVM classifier using linguistic features and common language resources.", "The classifier is then run on the remaining WordNet verbs to bootstrap a list of additional likely shifters.", "This list is then checked by a human annotator to detect false positives.", "Combining the initial annotation and the result of the bootstrapping process, they create a list of 3043 verbs.", "While the lexicon by Schulder et al.", "(2017) is an important step towards full coverage of verbal polarity shifters, there are several aspects that we seek to improve upon.", "First of all, their lexicon covers less than a third of the verbs found in WordNet, likely missing a number of verbal shifters.", "Schulder et al.", "(2017) argue that their bootstrap process should cover the majority of shifters, however, this would mean that only 9% of all verbs are shifters.", "3 Their initial annotation of 2000 randomly selected verbs puts the shifter ratio at 15% instead.", "Another issue with their lexicon is that it only labels lemma forms, but does not differentiate between word senses.", "Many verbs do not actually exhibit shifting in all of their senses, so this information will be important for contextual classification.", "Lastly, they forgo the question of shifter scope, i.e.", "which argument of a verb can be affected by its polarity shift.", "Data We treat this annotation effort as a binary labelling task where a word can either cause polarites to shift or not.", "However, instead of 
assigning a single label to an entire verb lemma, as Schulder et al.", "(2017) did, we label individual word senses.", "We outline the rationale for this in §3.1.", "In addition we explicitly specify the syntactic scope of the shifting.", "This is motivated and explained in §3.2.", "§3.3.", "describes the annotation process.", "§3.4.", "describes the data format of our main lexicon.", "Based on this main lexicon we also derive two auxiliary lexicons in §3.5., providing complete labelled lists of all WordNet verb lemmas and all WordNet verb synsets respectively.", "Word Senses Many words that shift polarities only do so for some of their word senses.", "For example, mark down acts as a shifter in (5) , where it has the sense of \"reducing the value of something\", but the sense of \"writing something down to have a record of it\" in (6) causes no shifting.", "In our work we found that among shifter lemmas with multiple word senses, only 23% caused shifting in each of their senses.", "An annotation on the basis of individual word senses is therefore required.", "To differentiate the senses of a verb, we use its synset affiliations found in WordNet.", "Words within the same synset share a shifter label.", "Shifter scope, on the other hand, can differ among words of the same synset (see §3.2.).", "The annotation introduced in §3.3.", "is therefore applied to individual lemma-sense pairs to capture the best of both worlds.", "Shifter Scope A verbal shifter usually only affects the parts of a sentence that are syntactically governed by the verb through its valency.", "However, not every argument of a verbal shifter is subject to polarity shifting.", "Which argument is affected by polarity shifting depends on the verb in question.", "In (7) , surrender shifts only the polarity of its subject, but does not affect the object.", "Conversely, defeat only shifts its object in (8).", "The polarity of the subject of defeat does not play a role in this, as can be seen in (9).", "The given scopes assume that verb phrases are in their active form.", "In passive phrases, subject and object roles are inverted.", "To avoid this issue, sentence structure normalization should be performed before computing shifter scope.", "Synsets in WordNet only capture the semantic similarity of words, but almost no syntactic properties (Ruppenhofer and Brandes, 2015) .", "The shifter scope of a verb depends on its syntactic arguments, which can differ between verbs of the same synset.", "For example, discard and dispose share the sense \"throw or cast away\", but while discard shifts its direct object (10), dispose requires a prepositional object (11).", "For this reason we annotate lemma-synset pairs individually, instead of assigning scope labels to an entire synset.", "We also consider cases where a verbal shifter has more than one potential scope for the same lemma-sense pair.", "For example, infringe can shift its direct object or various prepositional objects, as seen in (12) -(14) .", "Therefore, infringe receives the scope labels dobj, pobj on and pobj upon.", "A verbal shifter will only ever shift the polarity of one of its scopes.", "Which scope is affected by the shifting depends on the given sentence.", "Annotation The entire dataset was labelled by an expert annotator with experience in linguistics and annotation work.", "To measure inter-annotator agreement, a second annotator re-annotated 400 word senses for their shifter label.", "They achieved an agreement of κ = 0.73, indicating substantial agreement 
(Landis and Koch, 1977) .", "The annotation progressed as follows: Given a complete list of WordNet verb lemmas, the annotator would inspect one lemma at a time.", "For this lemma, all senses were looked up.", "For each such lemma-sense pair, the annotator decided whether it is a shifter or not.", "Decisions were based on the sense definition of the synset and whether sentences using this sense of the lemma cause shifting.", "If a word sense was labelled as a shifter, it was subsequently also annotated for its potential shifter scopes.", "In cases where label conflicts between different lemmasense pairs of the same sense were encountered, these labels were reconsidered.", "This introduced an additional robustness to the annotation as it let the annotator revisit challenging cases from a new perspective.", "The resulting list of lemma-sense pairs provides more finegrained information than either an annotation for only word lemmas or only synsets could (see §3.1.", "and §3.2.).", "Main Lexicon File Format We provide our main lexicon as a comma-separated value (csv) file in which each line represents a specific lemmasense-scope triple of a verbal shifter.", "Each line follows the format \"LEMMA,SYNSET,SCOPE\".", "The fields are defined as follows: LEMMA: The lemma form of the verb.", "SYNSET: The numeric identifier of the synset, commonly referred to as offset or database location.", "It consists of 8 digits, including leading zeroes (e.g.", "00334568).", "SCOPE: The scope of the shifting.", "Given as subj for subject position, dobj for direct object position and comp for clausal complements.", "Prepositional object positions are given as pobj * , where * is replaced by the preposition in question, e.g.", "pobj from for objects with the preposition \"from\" or prep of for the preposition \"of\".", "When a lemma has multiple word senses, a separate entry is provided for each lemma-sense pair.", "When a lemma-sense pair has multiple potential shifting scopes, a separate entry is provided for each scope.", "Any combinations not provided are considered not to exhibit shifting.", "Take, for example, the set of entries for \"blow out\": (15) blow out,00436247,subj blow out,02767855,dobj It tells us that blow out in the sense 00436247 (\"melt, break, or become otherwise unusable\") is a shifter that affects its subject.", "The sense 02767855 (\"put out, as of fires, flames, or lights\") also exhibits shifting, but this time affects the direct object.", "It is, however, not a shifter for sense 02766970 (\"erupt in an uncontrolled manner\").", "For an example of multiple scopes for the same word sense, consider cramp: Its sense 00237139 (\"prevent the progress or free movement of\") can shift the polarity of either its direct object (e.g.", "\"it cramped his progress\") or that of a prepositional object with the preposition \"in\" (e.g.", "\"he was cramped in his progress\").", "The three other senses of cramp given by WordNet are not considered shifters.", "Auxiliary Lexicons Our main lexicon is labelled at the lemma-sense pair level to provide the most fine-grained level of information possible.", "It can, however, easily applied to more coarse-grained applications.", "As a convenience, we provide lemma-and synset-level auxiliary lexicons that list all WordNet lemmas and all WordNet synsets, respectively, accompanied with their shifter label.", "A lemma is labelled as a shifter if at least one of its senses is considered a shifter in our main lexicon.", "Similarly, synsets are labelled as shifters 
if at least one of its lemma-realizations is a shifter.", "Statistics In Table 1 we present the ratio of shifters among the verbs contained in WordNet.", "While only about 10% of verbs are shifters, this still results in 1220 lemmas and 924 synsets, more than covered in any other resource (see §2.3.).", "49% of verbs in WordNet are polysemous, i.e.", "they have multiple meanings.", "Among verbal shifters, this ratio is considerably higher, reaching 73%.", "Of these, only 23% are shifters in all of their word senses.", "To get an idea of how common verbal shifters are in actual use, we computed lemma frequencies over the Amazon Product Review Data corpus (Jindal and Liu, 2008) , which comprises over 5.8 million reviews.", "We found this corpus suitable due to its size, sentiment-related content and use in related tasks (Schulder et al., 2017) .", "We observe 1163 different verbal shifter lemmas with an overall total of 34 million occurrences.", "Correcting for nonshifter senses of shifter lemmas 4 , we still estimate 13 million occurrences, accounting for 5% of all verb occurrences in the corpus.", "To compare, the 15 negation words found in the valence shifter lexicon by Wilson et al.", "(2005) occur 13 million times as well.", "While the frequency of individual negation (function) words is unsurprisingly higher, the total number of verbal shifter occurrences highlights that verbal shifters are just as frequent and should not be ignored.", "Statistics on the distribution of shifter scopes can be found in Table 2 .", "74% of verbal shifters have a direct object scope and 10% a prepositional object scope.", "Among these, \"from\" is the most common preposition at 51%, followed by \"of\" with 22%.", "19% shift the polarity of their subject and only 1.5% shift that of a clausal complement.", "This distribution shows that shifting cannot be trivially assumed to always affect the direct object and that explicit knowledge of shifter scopes will be useful for judging the polarity of a phrase.", "Conclusion We introduced a lexicon of verbal polarity shifters that covers the entire verb vocabulary of WordNet.", "Our annotation labels each individual word sense of a verb, providing more fine-grained information than annotations on the lemmalevel would.", "In addition, we also label the syntactic scopes of each verbal shifter that can be affected by the shifting.", "This is a clear improvement over the list of verbal shifters provided by Schulder et al.", "(2017) , which only provides labels at the lemma-level rather than for individual word senses and gives no information regarding shifting scope.", "It also only has human expert annotation for 30% of the verb vocabulary of WordNet, as opposed to our full coverage.", "We hope this resource will help improve fine-grained sentiment analysis systems by providing explicit information on where polarities may shift in a sentence.", "We also hope our work will encourage the creation of similar polarity shifter lexicons for nouns and adjectives.", "As they are more numerous than verbs (WordNet contains 20k adjectival and 110k nominal lemmas), creating such resources will come with its own challenges, especially in the case of nouns." ] }
{ "paper_header_number": [ "1.", "2.", "2.1.", "2.2.", "2.3.", "3.", "3.1.", "3.2.", "3.3.", "3.4.", "3.5.", "4.", "5." ], "paper_header_content": [ "Introduction", "Background", "Polarity Shifters", "Verbal Shifters", "Related Work", "Data", "Word Senses", "Shifter Scope", "Annotation", "Main Lexicon File Format", "Auxiliary Lexicons", "Statistics", "Conclusion" ] }
Goal
Main sentence predicate; far-reaching scope.
[]
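The Statistics section of the paper estimates corpus frequency of verbal shifters after "correcting for non-shifter senses of shifter lemmas"; the exact correction is given in a footnote that is not reproduced in this dump. As a stand-in, the sketch below uses one plausible heuristic, weighting each lemma's count by the fraction of its senses that are shifters. It should not be read as the paper's method, and all three input dictionaries are assumed inputs.

def estimate_shifter_occurrences(lemma_counts, shifter_senses, total_senses):
    """Heuristic occurrence estimate: weight counts by the shifter-sense ratio.

    lemma_counts:   {lemma: corpus frequency} for verb lemmas
    shifter_senses: {lemma: number of senses labelled as shifters}
    total_senses:   {lemma: total number of WordNet senses}
    Assumes senses are used uniformly often, which real corpora violate;
    this is an illustrative approximation, not the paper's footnoted correction.
    """
    total = 0.0
    for lemma, count in lemma_counts.items():
        n_shift = shifter_senses.get(lemma, 0)
        n_total = total_senses.get(lemma, 0)
        if n_shift and n_total:
            total += count * (n_shift / n_total)
    return total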
GEM-SciDuet-train-64#paper-1137#slide-3
GEM-SciDuet-train-64#paper-1137#slide-3
Related Work
Input Resource: WordNet vs. WordNet; Shifter Labels: Lemma vs. Word Sense
Input Resource: WordNet vs. WordNet; Shifter Labels: Lemma vs. Word Sense
[]
GEM-SciDuet-train-64#paper-1137#slide-4
1137
Introducing a Lexicon of Verbal Polarity Shifters for English
The sentiment polarity of a phrase does not only depend on the polarities of its words, but also on how these are affected by their context. Negation words (e.g. not, no, never) can change the polarity of a phrase. Similarly, verbs and other content words can also act as polarity shifters (e.g. fail, deny, alleviate). While individually more sparse, they are far more numerous. Among verbs alone, there are more than 1200 shifters. However, sentiment analysis systems barely consider polarity shifters other than negation words. A major reason for this is the scarcity of lexicons and corpora that provide information on them. We introduce a lexicon of verbal polarity shifters that covers the entirety of verbs found in WordNet. We provide a fine-grained annotation of individual word senses, as well as information for each verbal shifter on the syntactic scopes that it can affect.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190 ], "paper_content_text": [ "Introduction Polarity shifters are content words that exhibit semantic properties similar to negation.", "For example, the negated statement in (1) can also be achieved by the verbal shifter fail instead of the negation not, as shown in (2).", "(1) Peter did not pass the exam.", "(2) Peter failed shifter to pass the exam.", "As with negation words, polarity shifters change the polarity of a statement.", "This can happen to both positive and negative statements.", "In (3) the positive polarity of scholarship is shifted by denied, resulting in a negative polarity for the phrase.", "Conversely, the overall polarity of (4) is positive despite the negative polarity of pain.", "Polarity shifting is also caused by other content word classes, such as nouns (e.g.", "downfall) and adjectives (e.g.", "devoid).", "However, this work focusses on verbs, due to their importance as minimal semantic units, far-reaching scopes and potential basis for nominal shifter lexicons (see §2.2.).", "Knowledge of polarity shifting is important for a variety of tasks, especially sentiment analysis (Wiegand et al., 2010; Liu, 2012; Wilson et al., 2005) , as well as relation extraction (Sanchez-Graillet and Poesio, 2007) and textual entailment recognition (Harabagiu et al., 2006) .", "The majority of research into polarity shifting for sentiment analysis has focussed on negation words (Wiegand et al., 2010; Schouten and Frasincar, 2016; Pak and Paroubek, 2010) .", "Negation words (e.g.", "not, no, never) are mostly function words, of which only a small number exists, so exhaustive coverage is comparatively simple.", "Content word classes, such as verbs, are considerably more difficult to cover comprehensively due to their sheer number.", "For example, WordNet (Miller et al., 1990) contains over 10k verbal lemmas.", "Most verbs are also far less frequent than common negation words, making individual verbal shifters seem less important.", "However, overall, verbal shifter lemmas occur 2.6 times as often as negation words (see §4.).", "Most existing resources on negation and polarity shifting cover few to no instances of verbal shifters (see §2.3.).", "To remedy this, we introduce a complete lexicon of verbal shifters with annotations of polarity shifters and their shifting scope for each word sense.", "Our contributions are as follows: (i) A complete lexicon of verbal polarity shifters, covering all verbs found in WordNet 3.1.", "(ii) A fine grained annotation, labelling every sense of a verb separately.", "(iii) Annotations for shifter scope, indicating which parts of a sentence are affected by the shifting.", "The entire dataset is publicly 
available.", "Background In this section we will provide a formal definition of polarity shifters ( §2.1.", "), motivate our focus on verbal shifters ( §2.2.)", "and discuss related work ( §2.3.).", "Polarity Shifters The notion of valence or polarity shifting was brought to broad awareness in the research community by the work of Polanyi and Zaenen (2006) .", "Those authors drew attention to the fact that the basic valence of individual lexical items may be shifted in context due to (a) the presence of certain other lexical items, (b) the genre type and discourse structure of the text and (c) cultural factors.", "In subsequent research, the term shifter has since mostly been applied to the case of lexical items that influence polarity.", "Further, the notion of shifting is most prototypically used for situations where a discrete polarity switch occurs between the classes positive, negative and neutral.", "However, for other authors, including Polanyi and Zaenen (2006) , intensification (e.g.", "very disappointing) and downtoning (e.g.", "somewhat disappointing) of polar intensity also falls within the scope of shifting.", "We partially follow this view in that we consider downtoning to be shifting, as it moves the polarity of a word in the opposite direction, i.e.", "making a positive expression less positive (e.g.", "hardly satisfying) and a negative one less negative (e.g.", "slightly problematic).", "We do not consider intensifiers as shifters, as they support the already existing polarity.", "In most research, shifters are commonly illustrated and enumerated rather than formally defined.", "Polanyi and Zaenen (2006) for instance list negation words, intensifiers, modals and presuppositional items as lexical contextual polarity shifters.", "Setting aside downtoners for now, the common denominator of shifting is negation.", "Negation marks contexts in which a situation that the speaker expected fails to occur or hold.", "When this situation is part of a binary opposition (dead - alive), one can firmly conclude that the complementary state of affairs holds (not dead ⇒ alive).", "In cases where the negation affects a scalar notion, which is common in evaluative contexts, the understanding that arises depends on which kinds of scalar inferences and default assumptions are made in the context (Paradis and Willners, 2006) .", "Thus, not good denies the applicability of an evaluation in the region of good or better, but leaves open just how far in the direction of badness the actual interpretation lies: \"It wasn't good\" may be continued with \"but it was ok\" to yield a neutral or mildly positive evaluation or with \"in fact, it was terrible\" to yield a strongly negative one.", "While downtoners (e.g.", "somewhat) applied to scalar predicates such as good do not directly express contradiction, they do give rise to negative entailments and inferences.", "Moreover, the structure of scales intrinsically provides shifting.", "Thus, while something being good allows it to be even more positive (\"The movie was good.", "In fact, it was excellent.", "\"), something being somewhat good bounds its positiveness and opens up more negative meanings (\"The performance was somewhat good, but overall rather disappointing\").", "Considering these properties of scales, one can see shifting at work even in the case of downtoning.", "Verbal Shifters While the inclusion of shifting and scalar semantics in semantic representations is not limited to lexical items of particular parts-of-speech - we also find
shifter adjectives (e.g.", "devoid) and adverbs (e.g.", "barely) - we limit our work to verbal shifters for several reasons.", "As shown by the work of Schneider et al.", "(2016) , verbs, together with nouns, are the most important minimal semantic units in text and thus are prime candidates for being tackled first.", "Verbs are usually the main syntactic predicates of clauses and sentences and thus verbal shifters can be expected to project far-reaching scopes.", "Most nominal shifters (e.g.", "failure, loss), on the other hand, have morphologically related verbs (e.g.", "fail, lose) and we expect that this connection can be exploited to spread shifter classification from verbs to nouns in the future.", "Related to this, the grammar of verbs, for instance with respect to the diversity of scope types, is more complex than that of nouns and so we expect it to be easier to project from verbs to nouns rather than in the opposite direction.", "Related Work Existing lexicons and corpora that cover polarity shifting focus almost exclusively on negation words.", "The most complex negation lexicon for sentiment analysis (Wilson et al., 2005) includes a mere 12 verbal shifters.", "In contrast, our resource covers over 1200 verbal shifter lemmas.", "Corpora used as training data for negation processing, such as the Sentiment Treebank (Socher et al., 2013) or the BioScope corpus (Szarvas et al., 2008) , are fairly small datasets, so only the most frequent negation words appear.", "The BioScope corpus, for example, contains only 6 verbal shifters (Morante, 2010) .", "Schulder et al.", "(2017) show that state-of-the-art systems trained on such data do not reliably detect polarity shifting and should profit from explicit knowledge of verbal shifters.", "The only work to date that covers a larger number of verbal shifters is Schulder et al.", "(2017) , who annotate a sample of the English verbs found in WordNet for whether they exhibit polarity shifting.", "They start by manually annotating an initial 2000 verbs.", "These verbs are used to train an SVM classifier using linguistic features and common language resources.", "The classifier is then run on the remaining WordNet verbs to bootstrap a list of additional likely shifters.", "This list is then checked by a human annotator to detect false positives.", "Combining the initial annotation and the result of the bootstrapping process, they create a list of 3043 verbs.", "While the lexicon by Schulder et al.", "(2017) is an important step towards full coverage of verbal polarity shifters, there are several aspects that we seek to improve upon.", "First of all, their lexicon covers less than a third of the verbs found in WordNet, likely missing a number of verbal shifters.", "Schulder et al.", "(2017) argue that their bootstrap process should cover the majority of shifters, however, this would mean that only 9% of all verbs are shifters.", "Their initial annotation of 2000 randomly selected verbs puts the shifter ratio at 15% instead.", "Another issue with their lexicon is that it only labels lemma forms, but does not differentiate between word senses.", "Many verbs do not actually exhibit shifting in all of their senses, so this information will be important for contextual classification.", "Lastly, they forgo the question of shifter scope, i.e.", "which argument of a verb can be affected by its polarity shift.", "Data We treat this annotation effort as a binary labelling task where a word can either cause polarities to shift or not.", "However, instead of
assigning a single label to an entire verb lemma, as Schulder et al.", "(2017) did, we label individual word senses.", "We outline the rationale for this in §3.1.", "In addition we explicitly specify the syntactic scope of the shifting.", "This is motivated and explained in §3.2.", "§3.3.", "describes the annotation process.", "§3.4.", "describes the data format of our main lexicon.", "Based on this main lexicon we also derive two auxiliary lexicons in §3.5., providing complete labelled lists of all WordNet verb lemmas and all WordNet verb synsets respectively.", "Word Senses Many words that shift polarities only do so for some of their word senses.", "For example, mark down acts as a shifter in (5) , where it has the sense of \"reducing the value of something\", but the sense of \"writing something down to have a record of it\" in (6) causes no shifting.", "In our work we found that among shifter lemmas with multiple word senses, only 23% caused shifting in each of their senses.", "An annotation on the basis of individual word senses is therefore required.", "To differentiate the senses of a verb, we use its synset affiliations found in WordNet.", "Words within the same synset share a shifter label.", "Shifter scope, on the other hand, can differ among words of the same synset (see §3.2.).", "The annotation introduced in §3.3.", "is therefore applied to individual lemma-sense pairs to capture the best of both worlds.", "Shifter Scope A verbal shifter usually only affects the parts of a sentence that are syntactically governed by the verb through its valency.", "However, not every argument of a verbal shifter is subject to polarity shifting.", "Which argument is affected by polarity shifting depends on the verb in question.", "In (7) , surrender shifts only the polarity of its subject, but does not affect the object.", "Conversely, defeat only shifts its object in (8).", "The polarity of the subject of defeat does not play a role in this, as can be seen in (9).", "The given scopes assume that verb phrases are in their active form.", "In passive phrases, subject and object roles are inverted.", "To avoid this issue, sentence structure normalization should be performed before computing shifter scope.", "Synsets in WordNet only capture the semantic similarity of words, but almost no syntactic properties (Ruppenhofer and Brandes, 2015) .", "The shifter scope of a verb depends on its syntactic arguments, which can differ between verbs of the same synset.", "For example, discard and dispose share the sense \"throw or cast away\", but while discard shifts its direct object (10), dispose requires a prepositional object (11).", "For this reason we annotate lemma-synset pairs individually, instead of assigning scope labels to an entire synset.", "We also consider cases where a verbal shifter has more than one potential scope for the same lemma-sense pair.", "For example, infringe can shift its direct object or various prepositional objects, as seen in (12) -(14) .", "Therefore, infringe receives the scope labels dobj, pobj on and pobj upon.", "A verbal shifter will only ever shift the polarity of one of its scopes.", "Which scope is affected by the shifting depends on the given sentence.", "Annotation The entire dataset was labelled by an expert annotator with experience in linguistics and annotation work.", "To measure inter-annotator agreement, a second annotator re-annotated 400 word senses for their shifter label.", "They achieved an agreement of κ = 0.73, indicating substantial agreement 
(Landis and Koch, 1977) .", "The annotation progressed as follows: Given a complete list of WordNet verb lemmas, the annotator would inspect one lemma at a time.", "For this lemma, all senses were looked up.", "For each such lemma-sense pair, the annotator decided whether it is a shifter or not.", "Decisions were based on the sense definition of the synset and whether sentences using this sense of the lemma cause shifting.", "If a word sense was labelled as a shifter, it was subsequently also annotated for its potential shifter scopes.", "In cases where label conflicts between different lemma-sense pairs of the same sense were encountered, these labels were reconsidered.", "This introduced an additional robustness to the annotation as it let the annotator revisit challenging cases from a new perspective.", "The resulting list of lemma-sense pairs provides more fine-grained information than either an annotation for only word lemmas or only synsets could (see §3.1.", "and §3.2.).", "Main Lexicon File Format We provide our main lexicon as a comma-separated value (csv) file in which each line represents a specific lemma-sense-scope triple of a verbal shifter.", "Each line follows the format \"LEMMA,SYNSET,SCOPE\".", "The fields are defined as follows: LEMMA: The lemma form of the verb.", "SYNSET: The numeric identifier of the synset, commonly referred to as offset or database location.", "It consists of 8 digits, including leading zeroes (e.g.", "00334568).", "SCOPE: The scope of the shifting.", "Given as subj for subject position, dobj for direct object position and comp for clausal complements.", "Prepositional object positions are given as pobj * , where * is replaced by the preposition in question, e.g.", "pobj from for objects with the preposition \"from\" or pobj of for the preposition \"of\".", "When a lemma has multiple word senses, a separate entry is provided for each lemma-sense pair.", "When a lemma-sense pair has multiple potential shifting scopes, a separate entry is provided for each scope.", "Any combinations not provided are considered not to exhibit shifting.", "Take, for example, the set of entries for \"blow out\": (15) blow out,00436247,subj; blow out,02767855,dobj. It tells us that blow out in the sense 00436247 (\"melt, break, or become otherwise unusable\") is a shifter that affects its subject.", "The sense 02767855 (\"put out, as of fires, flames, or lights\") also exhibits shifting, but this time affects the direct object.", "It is, however, not a shifter for sense 02766970 (\"erupt in an uncontrolled manner\").", "For an example of multiple scopes for the same word sense, consider cramp: Its sense 00237139 (\"prevent the progress or free movement of\") can shift the polarity of either its direct object (e.g.", "\"it cramped his progress\") or that of a prepositional object with the preposition \"in\" (e.g.", "\"he was cramped in his progress\").", "The three other senses of cramp given by WordNet are not considered shifters.", "Auxiliary Lexicons Our main lexicon is labelled at the lemma-sense pair level to provide the most fine-grained level of information possible.", "It can, however, easily be applied to more coarse-grained applications.", "As a convenience, we provide lemma- and synset-level auxiliary lexicons that list all WordNet lemmas and all WordNet synsets, respectively, accompanied with their shifter label.", "A lemma is labelled as a shifter if at least one of its senses is considered a shifter in our main lexicon.", "Similarly, synsets are labelled as shifters
if at least one of its lemma-realizations is a shifter.", "Statistics In Table 1 we present the ratio of shifters among the verbs contained in WordNet.", "While only about 10% of verbs are shifters, this still results in 1220 lemmas and 924 synsets, more than are covered in any other resource (see §2.3.).", "49% of verbs in WordNet are polysemous, i.e.", "they have multiple meanings.", "Among verbal shifters, this ratio is considerably higher, reaching 73%.", "Of these, only 23% are shifters in all of their word senses.", "To get an idea of how common verbal shifters are in actual use, we computed lemma frequencies over the Amazon Product Review Data corpus (Jindal and Liu, 2008) , which comprises over 5.8 million reviews.", "We found this corpus suitable due to its size, sentiment-related content and use in related tasks (Schulder et al., 2017) .", "We observe 1163 different verbal shifter lemmas with an overall total of 34 million occurrences.", "Correcting for non-shifter senses of shifter lemmas, we still estimate 13 million occurrences, accounting for 5% of all verb occurrences in the corpus.", "To compare, the 15 negation words found in the valence shifter lexicon by Wilson et al.", "(2005) occur 13 million times as well.", "While the frequency of individual negation (function) words is unsurprisingly higher, the total number of verbal shifter occurrences highlights that verbal shifters are just as frequent and should not be ignored.", "Statistics on the distribution of shifter scopes can be found in Table 2 .", "74% of verbal shifters have a direct object scope and 10% a prepositional object scope.", "Among these, \"from\" is the most common preposition at 51%, followed by \"of\" with 22%.", "19% shift the polarity of their subject and only 1.5% shift that of a clausal complement.", "This distribution shows that shifting cannot be trivially assumed to always affect the direct object and that explicit knowledge of shifter scopes will be useful for judging the polarity of a phrase.", "Conclusion We introduced a lexicon of verbal polarity shifters that covers the entire verb vocabulary of WordNet.", "Our annotation labels each individual word sense of a verb, providing more fine-grained information than annotations on the lemma-level would.", "In addition, we also label the syntactic scopes of each verbal shifter that can be affected by the shifting.", "This is a clear improvement over the list of verbal shifters provided by Schulder et al.", "(2017) , which only provides labels at the lemma-level rather than for individual word senses and gives no information regarding shifting scope.", "It also only has human expert annotation for 30% of the verb vocabulary of WordNet, as opposed to our full coverage.", "We hope this resource will help improve fine-grained sentiment analysis systems by providing explicit information on where polarities may shift in a sentence.", "We also hope our work will encourage the creation of similar polarity shifter lexicons for nouns and adjectives.", "As they are more numerous than verbs (WordNet contains 20k adjectival and 110k nominal lemmas), creating such resources will come with its own challenges, especially in the case of nouns." ] }
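The LEMMA,SYNSET,SCOPE format and the lemma-level derivation rule described in the passages above map directly onto a small loader. The following is only an illustrative Python sketch, not part of the released resource: it assumes the csv file ships without a header row, and the filename verb_shifters.csv is a placeholder.

```python
import csv
from collections import defaultdict

def load_shifter_lexicon(path):
    # Main lexicon: one LEMMA,SYNSET,SCOPE triple per line, no header row
    # (an assumption; the paper only specifies the triple format).
    entries = defaultdict(set)
    with open(path, newline="", encoding="utf-8") as f:
        for lemma, synset, scope in csv.reader(f):
            entries[(lemma, synset)].add(scope)
    return entries

def derive_lemma_lexicon(entries):
    # Auxiliary lemma-level view: a lemma counts as a shifter if at
    # least one of its senses appears in the main lexicon.
    return {lemma for lemma, _synset in entries}

lexicon = load_shifter_lexicon("verb_shifters.csv")     # hypothetical filename
print(lexicon.get(("blow out", "02767855")))            # e.g. {'dobj'}, cf. example (15)
print("blow out" in derive_lemma_lexicon(lexicon))      # True
```

Because the main lexicon only lists shifters, any lemma-sense pair absent from it can be treated as a non-shifter, which is why the lemma-level lexicon reduces to the set of lemmas present.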
{ "paper_header_number": [ "1.", "2.", "2.1.", "2.2.", "2.3.", "3.", "3.1.", "3.2.", "3.3.", "3.4.", "3.5.", "4.", "5." ], "paper_header_content": [ "Introduction", "Background", "Polarity Shifters", "Verbal Shifters", "Related Work", "Data", "Word Senses", "Shifter Scope", "Annotation", "Main Lexicon File Format", "Auxiliary Lexicons", "Statistics", "Conclusion" ] }
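The lemma-level figures in the Statistics section can be approximated with NLTK's WordNet interface, reusing load_shifter_lexicon and derive_lemma_lexicon from the sketch above. Note that NLTK currently ships WordNet 3.0 while the lexicon is annotated against 3.1, so the counts will not match the paper exactly; this is a rough sketch, not the authors' tooling.

```python
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

# WordNet lemma names use underscores ("blow_out"); the lexicon uses spaces.
shifters = {l.replace(" ", "_") for l in derive_lemma_lexicon(lexicon)}

verbs = set(wn.all_lemma_names(pos="v"))
polysemous = {v for v in verbs if len(wn.synsets(v, pos="v")) > 1}

print(f"verb lemmas:       {len(verbs)}")
print(f"shifter ratio:     {len(shifters & verbs) / len(verbs):.1%}")              # ~10%, cf. Table 1
print(f"polysemy (all):    {len(polysemous) / len(verbs):.1%}")                    # ~49%
print(f"polysemy (shift):  {len(polysemous & shifters) / len(shifters & verbs):.1%}")  # ~73%
```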
GEM-SciDuet-train-64#paper-1137#slide-4
Word Sense Ambiguity
50% of verbs are polysemous. 12% of verbs are shifters in at least one word sense. Among polysemous verbal shifters, only 23% are shifters in all their word senses. Mark down: Reduce in price Shifter The agency [marked down [their assets]+]-. Mark down: Write down No Shifter She [marked down [his confession of guilt]-]-.
50% of verbs are polysemous. 12% of verbs are shifters in at least one word sense. Among polysemous verbal shifters, only 23% are shifters in all their word senses. Mark down: Reduce in price Shifter The agency [marked down [their assets]+]-. Mark down: Write down No Shifter She [marked down [his confession of guilt]-]-.
[]
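The slide above illustrates why sense-level labels matter: only some senses of a lemma shift. Looking a verb's senses up against the main lexicon is straightforward with NLTK, formatting its integer synset offsets as the lexicon's 8-digit ids. This sketch reuses the lexicon dict loaded earlier; since the release refers to WordNet 3.1 offsets and NLTK ships 3.0, a 3.0-to-3.1 offset mapping would be needed in practice.

```python
from nltk.corpus import wordnet as wn

def shifter_senses(lemma, lexicon):
    # Yield (synset, scopes) for every sense of `lemma` that the main
    # lexicon marks as a shifter; senses not listed are non-shifters.
    for ss in wn.synsets(lemma.replace(" ", "_"), pos="v"):
        key = (lemma, f"{ss.offset():08d}")  # 8-digit zero-padded synset id
        if key in lexicon:
            yield ss, lexicon[key]

for ss, scopes in shifter_senses("blow out", lexicon):
    print(ss.name(), "|", ss.definition(), "|", scopes)
```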
GEM-SciDuet-train-64#paper-1137#slide-5
GEM-SciDuet-train-64#paper-1137#slide-5
Shifter Scope
When a phrase contains a polarity shifter, you need to know what part of the phrase it can affect. (Wiegand et al., 2017, GSCL) [The villain]- defeated [the hero]+. [The villain]- surrendered [to the hero]+. subj Scope annotated for dependency relations.
When a phrase contains a polarity shifter, you need to know what part of the phrase it can affect. (Wiegand et al., 2017, GSCL) [The villain]- defeated [the hero]+. [The villain]- surrendered [to the hero]+. subj Scope annotated for dependency relations.
[]
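The scope labels shown on this slide only become useful once a parser has identified the corresponding argument. Below is a small spaCy sketch of that step; the SCOPES table is a toy stand-in for the lexicon's subj/dobj labels mapped onto spaCy's nsubj/dobj dependency relations, and, as the paper notes, passive sentences would first need to be normalized since subject and object are inverted there.

```python
import spacy  # requires the en_core_web_sm model to be installed

nlp = spacy.load("en_core_web_sm")
SCOPES = {"defeat": {"dobj"}, "surrender": {"nsubj"}}  # toy lemma-level scopes

def shifted_arguments(sentence):
    # For each verbal shifter, yield the argument span its scope points to.
    doc = nlp(sentence)
    for tok in doc:
        if tok.pos_ == "VERB" and tok.lemma_ in SCOPES:
            for child in tok.children:
                if child.dep_ in SCOPES[tok.lemma_]:
                    # take the argument's full subtree as the shifted span
                    yield tok.lemma_, doc[child.left_edge.i : child.right_edge.i + 1]

for verb, span in shifted_arguments("The villain surrendered to the hero."):
    print(verb, "shifts:", span.text)  # surrender shifts: The villain
```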
GEM-SciDuet-train-64#paper-1137#slide-6
1137
Introducing a Lexicon of Verbal Polarity Shifters for English
The sentiment polarity of a phrase does not only depend on the polarities of its words, but also on how these are affected by their context. Negation words (e.g. not, no, never) can change the polarity of a phrase. Similarly, verbs and other content words can also act as polarity shifters (e.g. fail, deny, alleviate). While individually more sparse, they are far more numerous. Among verbs alone, there are more than 1200 shifters. However, sentiment analysis systems barely consider polarity shifters other than negation words. A major reason for this is the scarcity of lexicons and corpora that provide information on them. We introduce a lexicon of verbal polarity shifters that covers the entirety of verbs found in WordNet. We provide a fine-grained annotation of individual word senses, as well as information for each verbal shifter on the syntactic scopes that it can affect.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190 ], "paper_content_text": [ "Introduction Polarity shifters are content words that exhibit semantic properties similar to negation.", "For example, the negated statement in (1) can also be achieved by the verbal shifter fail instead of the negation not, as shown in (2).", "(1) Peter did not pass the exam.", "(2) Peter failed shifter to pass the exam.", "As with negation words, polarity shifters change the polarity of a statement.", "This can happen to both positive and negative statements.", "In (3) the positive polarity of scholarship is shifted by denied, resulting in a negative polarity for the phrase.", "Conversely, the overall polarity of (4) is positive despite the negative polarity of pain.", "Polarity shifting is also caused by other content word classes, such as nouns (e.g.", "downfall) and adjectives (e.g.", "devoid).", "However, this work focusses on verbs, due to their importance as minimal semantic units, far-reaching scopes and potential basis for nominal shifter lexicons (see §2.2.).", "Knowledge of polarity shifting is important for a variety of tasks, especially sentiment analysis (Wiegand et al., 2010; Liu, 2012; Wilson et al., 2005) , as well as relation extraction (Sanchez-Graillet and Poesio, 2007) and textual entailment recognition (Harabagiu et al., 2006) .", "The majority of research into polarity shifting for sentiment analysis has focussed on negation words (Wiegand et al., 2010; Schouten and Frasincar, 2016; Pak and Paroubek, 2010) .", "Negation words (e.g.", "not, no, never) are mostly function words, of which only a small number exists, so exhaustive coverage is comparatively simple.", "Content word classes, such as verbs, are considerably more difficult to cover comprehensively due to their sheer number.", "For example, WordNet (Miller et al., 1990) contains over 10k verbal lemmas.", "Most verbs are also far less frequent than common negation words, making individual verbal shifters seem less important.", "However, overall, verbal shifter lemmas occur 2.6 times as often as negation words (see §4.).", "Most existing resources on negation and polarity shifting cover few to no instances of verbal shifters (see §2.3.).", "To remedy this, we introduce a complete lexicon of verbal shifters with annotations of polarity shifters and their shifting scope for each word sense.", "Our contributions are as follows: (i) A complete lexicon of verbal polarity shifters, covering all verbs found in WordNet 3.1.", "(ii) A fine grained annotation, labelling every sense of a verb separately.", "(iii) Annotations for shifter scope, indicating which parts of a sentence are affected by the shifting.", "The entire dataset is publicly 
available.", "1 Background In this section we will provide a formal definition of polarity shifters ( §2.1.", "), motivate our focus on verbal shifters ( §2.2.)", "and discuss related work ( §2.3.).", "Polarity Shifters The notion of valence or polarity shifting was brought to broad awareness in the research community by the work of Polanyi and Zaenen (2006) .", "Those authors drew attention to the fact that the basic valence of individual lexical items may be shifted in context due to (a) the presence of certain other lexical items, (b) the genre type and discourse structure of the text and (c) cultural factors.", "In subsequent research, the term shifter has since mostly been applied to the case of lexical items that influence polarity.", "Further, the notion of shifting is most prototypically used for situations where a discrete polarity switch occurs between the classes positive, negative and neutral.", "However, for other authors, including Polanyi and Zaenen (2006) , intensification (e.g.", "very disappointing) and downtoning (e.g.", "somewhat disappointing) of polar intensity also falls within the scope of shifting.", "We partially follow this view in that we consider downtoning to be shifting, as it moves the polarity of a word in the opposite direction, i.e.", "making a positive expression less positive (e.g.", "hardly satisfying) and a negative one less negative (e.g.", "slightly problematic).", "We do not consider intensifiers as shifters, as they support the already existing polarity.", "In most research, shifters are commonly illustrated and enumerated rather than formally defined.", "Polanyi and Zaenen (2006) for instance list negation words, intensifiers, modals and presuppositional items as lexical contextual polarity shifters.", "Setting aside downtoners for now, the common denominator of shifting is negation.", "Negation marks contexts in which a situation that the speaker expected fails to occur or hold.", "When this situation is part of a binary opposition (dead -alive), one can firmly conclude that the complementary state of affairs holds (not dead ⇒ alive).", "In cases where the negation affects a scalar notion, which is common in evaluative contexts, the understanding that arises depends on which kinds of scalar inferences and default assumptions are made in the context (Paradis and Willners, 2006) .", "Thus, not good denies the applicability of an evaluation in the region of good or better, but leaves open just how far in the direction of badness the actual interpretation lies: \"It wasn't good\" may be continued with \"but it was ok\" to yield a neutral or mildly positive evaluation or with \"in fact, it was terrible\" to yield a strongly negative one.", "2 While downtoners (e.g.", "somewhat) applied to scalar predicates such as good do not directly express contradiction, they do give rise to negative entailments and inferences.", "Moreover, the structure of scales intrinsically provides shifting.", "Thus, while something being good allows it to be even more positive (\"The movie was good.", "In fact, it was excellent.", "\"), something being somewhat good bounds its positiveness and opens up more negative meanings (\"The performance was somewhat good, but overall rather disappointing\").", "Considering these properties of scales, one can see shifting at work even in the case of downtoning.", "Verbal Shifters While the inclusion of shifting and scalar semantics in semantic representations is not limited to lexical items of particular parts-of-speech -we also find 
shifter adjectives (e.g.", "devoid) and adverbs (e.g.", "barely) -we limit our work to verbal shifters for several reasons.", "As shown by the work of Schneider et al.", "(2016) , verbs, together with nouns, are the most important minimal semantic units in text and thus are prime candidates for being tackled first.", "Verbs are usually the main syntactic predicates of clauses and sentences and thus verbal shifters can be expected to project far-reaching scopes.", "Most nominal shifters (e.g.", "failure, loss), on the other hand, have morphologically related verbs (e.g.", "fail, lose) and we expect that this connection can be exploited to spread shifter classification from verbs to nouns in the future.", "Related to this, the grammar of verbs, for instance with respect to the diversity of scope types, is more complex than that of nouns and so we expect it to be easier to project from verbs to nouns rather than in the opposite direction.", "Related Work Existing lexicons and corpora that cover polarity shifting focus almost exclusively on negation words.", "The most complex negation lexicon for sentiment analysis (Wilson et al., 2005) includes a mere 12 verbal shifters.", "In contrast, our resource covers over 1200 verbal shifter lemmas.", "Corpora used as training data for negation processing, such as the Sentiment Treebank (Socher et al., 2013) or the BioScope corpus (Szarvas et al., 2008) , are fairly small datasets, so only the most frequent negation words appear.", "The BioScope corpus, for example, contains only 6 verbal shifters (Morante, 2010) .", "Schulder et al.", "(2017) show that state-of-the-art systems trained on such data do not reliably detect polarity shifting and should profit from explicit knowledge of verbal shifters.", "The only work to date that covers a larger number of verbal shifters is Schulder et al.", "(2017) , who annotate a sample of the English verbs found in WordNet for whether they exhibit polarity shifting.", "They start by manually annotating an initial 2000 verbs.", "These verbs are used to train an SVM classifier using linguistic features and common language resources.", "The classifier is then run on the remaining WordNet verbs to bootstrap a list of additional likely shifters.", "This list is then checked by a human annotator to detect false positives.", "Combining the initial annotation and the result of the bootstrapping process, they create a list of 3043 verbs.", "While the lexicon by Schulder et al.", "(2017) is an important step towards full coverage of verbal polarity shifters, there are several aspects that we seek to improve upon.", "First of all, their lexicon covers less than a third of the verbs found in WordNet, likely missing a number of verbal shifters.", "Schulder et al.", "(2017) argue that their bootstrap process should cover the majority of shifters, however, this would mean that only 9% of all verbs are shifters.", "3 Their initial annotation of 2000 randomly selected verbs puts the shifter ratio at 15% instead.", "Another issue with their lexicon is that it only labels lemma forms, but does not differentiate between word senses.", "Many verbs do not actually exhibit shifting in all of their senses, so this information will be important for contextual classification.", "Lastly, they forgo the question of shifter scope, i.e.", "which argument of a verb can be affected by its polarity shift.", "Data We treat this annotation effort as a binary labelling task where a word can either cause polarites to shift or not.", "However, instead of 
assigning a single label to an entire verb lemma, as Schulder et al.", "(2017) did, we label individual word senses.", "We outline the rationale for this in §3.1.", "In addition we explicitly specify the syntactic scope of the shifting.", "This is motivated and explained in §3.2.", "§3.3.", "describes the annotation process.", "§3.4.", "describes the data format of our main lexicon.", "Based on this main lexicon we also derive two auxiliary lexicons in §3.5., providing complete labelled lists of all WordNet verb lemmas and all WordNet verb synsets respectively.", "Word Senses Many words that shift polarities only do so for some of their word senses.", "For example, mark down acts as a shifter in (5) , where it has the sense of \"reducing the value of something\", but the sense of \"writing something down to have a record of it\" in (6) causes no shifting.", "In our work we found that among shifter lemmas with multiple word senses, only 23% caused shifting in each of their senses.", "An annotation on the basis of individual word senses is therefore required.", "To differentiate the senses of a verb, we use its synset affiliations found in WordNet.", "Words within the same synset share a shifter label.", "Shifter scope, on the other hand, can differ among words of the same synset (see §3.2.).", "The annotation introduced in §3.3.", "is therefore applied to individual lemma-sense pairs to capture the best of both worlds.", "Shifter Scope A verbal shifter usually only affects the parts of a sentence that are syntactically governed by the verb through its valency.", "However, not every argument of a verbal shifter is subject to polarity shifting.", "Which argument is affected by polarity shifting depends on the verb in question.", "In (7) , surrender shifts only the polarity of its subject, but does not affect the object.", "Conversely, defeat only shifts its object in (8).", "The polarity of the subject of defeat does not play a role in this, as can be seen in (9).", "The given scopes assume that verb phrases are in their active form.", "In passive phrases, subject and object roles are inverted.", "To avoid this issue, sentence structure normalization should be performed before computing shifter scope.", "Synsets in WordNet only capture the semantic similarity of words, but almost no syntactic properties (Ruppenhofer and Brandes, 2015) .", "The shifter scope of a verb depends on its syntactic arguments, which can differ between verbs of the same synset.", "For example, discard and dispose share the sense \"throw or cast away\", but while discard shifts its direct object (10), dispose requires a prepositional object (11).", "For this reason we annotate lemma-synset pairs individually, instead of assigning scope labels to an entire synset.", "We also consider cases where a verbal shifter has more than one potential scope for the same lemma-sense pair.", "For example, infringe can shift its direct object or various prepositional objects, as seen in (12) -(14) .", "Therefore, infringe receives the scope labels dobj, pobj on and pobj upon.", "A verbal shifter will only ever shift the polarity of one of its scopes.", "Which scope is affected by the shifting depends on the given sentence.", "Annotation The entire dataset was labelled by an expert annotator with experience in linguistics and annotation work.", "To measure inter-annotator agreement, a second annotator re-annotated 400 word senses for their shifter label.", "They achieved an agreement of κ = 0.73, indicating substantial agreement 
(Landis and Koch, 1977) .", "The annotation progressed as follows: Given a complete list of WordNet verb lemmas, the annotator would inspect one lemma at a time.", "For this lemma, all senses were looked up.", "For each such lemma-sense pair, the annotator decided whether it is a shifter or not.", "Decisions were based on the sense definition of the synset and whether sentences using this sense of the lemma cause shifting.", "If a word sense was labelled as a shifter, it was subsequently also annotated for its potential shifter scopes.", "In cases where label conflicts between different lemma-sense pairs of the same sense were encountered, these labels were reconsidered.", "This introduced an additional robustness to the annotation as it let the annotator revisit challenging cases from a new perspective.", "The resulting list of lemma-sense pairs provides more fine-grained information than either an annotation for only word lemmas or only synsets could (see §3.1.", "and §3.2.).", "Main Lexicon File Format We provide our main lexicon as a comma-separated value (csv) file in which each line represents a specific lemma-sense-scope triple of a verbal shifter.", "Each line follows the format \"LEMMA,SYNSET,SCOPE\".", "The fields are defined as follows: LEMMA: The lemma form of the verb.", "SYNSET: The numeric identifier of the synset, commonly referred to as offset or database location.", "It consists of 8 digits, including leading zeroes (e.g.", "00334568).", "SCOPE: The scope of the shifting.", "Given as subj for subject position, dobj for direct object position and comp for clausal complements.", "Prepositional object positions are given as pobj * , where * is replaced by the preposition in question, e.g.", "pobj from for objects with the preposition \"from\" or pobj of for the preposition \"of\".", "When a lemma has multiple word senses, a separate entry is provided for each lemma-sense pair.", "When a lemma-sense pair has multiple potential shifting scopes, a separate entry is provided for each scope.", "Any combinations not provided are considered not to exhibit shifting.", "Take, for example, the set of entries for \"blow out\": (15) blow out,00436247,subj blow out,02767855,dobj It tells us that blow out in the sense 00436247 (\"melt, break, or become otherwise unusable\") is a shifter that affects its subject.", "The sense 02767855 (\"put out, as of fires, flames, or lights\") also exhibits shifting, but this time affects the direct object.", "It is, however, not a shifter for sense 02766970 (\"erupt in an uncontrolled manner\").", "For an example of multiple scopes for the same word sense, consider cramp: Its sense 00237139 (\"prevent the progress or free movement of\") can shift the polarity of either its direct object (e.g.", "\"it cramped his progress\") or that of a prepositional object with the preposition \"in\" (e.g.", "\"he was cramped in his progress\").", "The three other senses of cramp given by WordNet are not considered shifters.", "Auxiliary Lexicons Our main lexicon is labelled at the lemma-sense pair level to provide the most fine-grained level of information possible.", "It can, however, easily be applied to more coarse-grained applications.", "As a convenience, we provide lemma- and synset-level auxiliary lexicons that list all WordNet lemmas and all WordNet synsets, respectively, accompanied with their shifter label.", "A lemma is labelled as a shifter if at least one of its senses is considered a shifter in our main lexicon.", "Similarly, synsets are labelled as shifters
if at least one of its lemma-realizations is a shifter.", "Statistics In Table 1 we present the ratio of shifters among the verbs contained in WordNet.", "While only about 10% of verbs are shifters, this still results in 1220 lemmas and 924 synsets, more than covered in any other resource (see §2.3.).", "49% of verbs in WordNet are polysemous, i.e.", "they have multiple meanings.", "Among verbal shifters, this ratio is considerably higher, reaching 73%.", "Of these, only 23% are shifters in all of their word senses.", "To get an idea of how common verbal shifters are in actual use, we computed lemma frequencies over the Amazon Product Review Data corpus (Jindal and Liu, 2008) , which comprises over 5.8 million reviews.", "We found this corpus suitable due to its size, sentiment-related content and use in related tasks (Schulder et al., 2017) .", "We observe 1163 different verbal shifter lemmas with an overall total of 34 million occurrences.", "Correcting for non-shifter senses of shifter lemmas, we still estimate 13 million occurrences, accounting for 5% of all verb occurrences in the corpus.", "To compare, the 15 negation words found in the valence shifter lexicon by Wilson et al.", "(2005) occur 13 million times as well.", "While the frequency of individual negation (function) words is unsurprisingly higher, the total number of verbal shifter occurrences highlights that verbal shifters are just as frequent and should not be ignored.", "Statistics on the distribution of shifter scopes can be found in Table 2 .", "74% of verbal shifters have a direct object scope and 10% a prepositional object scope.", "Among these, \"from\" is the most common preposition at 51%, followed by \"of\" with 22%.", "19% shift the polarity of their subject and only 1.5% shift that of a clausal complement.", "This distribution shows that shifting cannot be trivially assumed to always affect the direct object and that explicit knowledge of shifter scopes will be useful for judging the polarity of a phrase.", "Conclusion We introduced a lexicon of verbal polarity shifters that covers the entire verb vocabulary of WordNet.", "Our annotation labels each individual word sense of a verb, providing more fine-grained information than annotations on the lemma-level would.", "In addition, we also label the syntactic scopes of each verbal shifter that can be affected by the shifting.", "This is a clear improvement over the list of verbal shifters provided by Schulder et al.", "(2017) , which only provides labels at the lemma-level rather than for individual word senses and gives no information regarding shifting scope.", "It also only has human expert annotation for 30% of the verb vocabulary of WordNet, as opposed to our full coverage.", "We hope this resource will help improve fine-grained sentiment analysis systems by providing explicit information on where polarities may shift in a sentence.", "We also hope our work will encourage the creation of similar polarity shifter lexicons for nouns and adjectives.", "As they are more numerous than verbs (WordNet contains 20k adjectival and 110k nominal lemmas), creating such resources will come with its own challenges, especially in the case of nouns." ] }
{ "paper_header_number": [ "1.", "2.", "2.1.", "2.2.", "2.3.", "3.", "3.1.", "3.2.", "3.3.", "3.4.", "3.5.", "4.", "5." ], "paper_header_content": [ "Introduction", "Background", "Polarity Shifters", "Verbal Shifters", "Related Work", "Data", "Word Senses", "Shifter Scope", "Annotation", "Main Lexicon File Format", "Auxiliary Lexicons", "Statistics", "Conclusion" ] }
GEM-SciDuet-train-64#paper-1137#slide-6
Annotation Workflow
Marc Schulder Saarland University
Marc Schulder Saarland University
[]
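The main lexicon format described in the paper content above (one LEMMA,SYNSET,SCOPE triple per csv line) is straightforward to consume programmatically. Below is a minimal Python sketch of a loader; the in-memory sample reuses the "blow out" entries quoted in the paper, while the function name and the string-based input are illustrative assumptions, not part of the released resource.

import csv
import io
from collections import defaultdict

# Two entries taken from the paper's "blow out" example (15).
SAMPLE = "blow out,00436247,subj\nblow out,02767855,dobj\n"

def load_shifter_lexicon(lines):
    # Map (lemma, synset) -> set of shiftable scope labels, e.g.
    # 'subj', 'dobj', 'comp' or 'pobj <preposition>'. Lemma-sense
    # pairs that never occur in the file are not shifters.
    lexicon = defaultdict(set)
    for lemma, synset, scope in csv.reader(lines):
        lexicon[(lemma, synset)].add(scope)
    return dict(lexicon)

lex = load_shifter_lexicon(io.StringIO(SAMPLE))
assert lex[("blow out", "00436247")] == {"subj"}
assert ("blow out", "02766970") not in lex  # this sense does not shift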
GEM-SciDuet-train-64#paper-1137#slide-7
1137
Introducing a Lexicon of Verbal Polarity Shifters for English
The sentiment polarity of a phrase does not only depend on the polarities of its words, but also on how these are affected by their context. Negation words (e.g. not, no, never) can change the polarity of a phrase. Similarly, verbs and other content words can also act as polarity shifters (e.g. fail, deny, alleviate). While individually more sparse, they are far more numerous. Among verbs alone, there are more than 1200 shifters. However, sentiment analysis systems barely consider polarity shifters other than negation words. A major reason for this is the scarcity of lexicons and corpora that provide information on them. We introduce a lexicon of verbal polarity shifters that covers the entirety of verbs found in WordNet. We provide a fine-grained annotation of individual word senses, as well as information for each verbal shifter on the syntactic scopes that it can affect.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190 ], "paper_content_text": [ "Introduction Polarity shifters are content words that exhibit semantic properties similar to negation.", "For example, the negated statement in (1) can also be achieved by the verbal shifter fail instead of the negation not, as shown in (2).", "(1) Peter did not pass the exam.", "(2) Peter failed shifter to pass the exam.", "As with negation words, polarity shifters change the polarity of a statement.", "This can happen to both positive and negative statements.", "In (3) the positive polarity of scholarship is shifted by denied, resulting in a negative polarity for the phrase.", "Conversely, the overall polarity of (4) is positive despite the negative polarity of pain.", "Polarity shifting is also caused by other content word classes, such as nouns (e.g.", "downfall) and adjectives (e.g.", "devoid).", "However, this work focusses on verbs, due to their importance as minimal semantic units, far-reaching scopes and potential basis for nominal shifter lexicons (see §2.2.).", "Knowledge of polarity shifting is important for a variety of tasks, especially sentiment analysis (Wiegand et al., 2010; Liu, 2012; Wilson et al., 2005) , as well as relation extraction (Sanchez-Graillet and Poesio, 2007) and textual entailment recognition (Harabagiu et al., 2006) .", "The majority of research into polarity shifting for sentiment analysis has focussed on negation words (Wiegand et al., 2010; Schouten and Frasincar, 2016; Pak and Paroubek, 2010) .", "Negation words (e.g.", "not, no, never) are mostly function words, of which only a small number exists, so exhaustive coverage is comparatively simple.", "Content word classes, such as verbs, are considerably more difficult to cover comprehensively due to their sheer number.", "For example, WordNet (Miller et al., 1990) contains over 10k verbal lemmas.", "Most verbs are also far less frequent than common negation words, making individual verbal shifters seem less important.", "However, overall, verbal shifter lemmas occur 2.6 times as often as negation words (see §4.).", "Most existing resources on negation and polarity shifting cover few to no instances of verbal shifters (see §2.3.).", "To remedy this, we introduce a complete lexicon of verbal shifters with annotations of polarity shifters and their shifting scope for each word sense.", "Our contributions are as follows: (i) A complete lexicon of verbal polarity shifters, covering all verbs found in WordNet 3.1.", "(ii) A fine grained annotation, labelling every sense of a verb separately.", "(iii) Annotations for shifter scope, indicating which parts of a sentence are affected by the shifting.", "The entire dataset is publicly 
available.", "Background In this section we will provide a formal definition of polarity shifters ( §2.1.", "), motivate our focus on verbal shifters ( §2.2.)", "and discuss related work ( §2.3.).", "Polarity Shifters The notion of valence or polarity shifting was brought to broad awareness in the research community by the work of Polanyi and Zaenen (2006) .", "Those authors drew attention to the fact that the basic valence of individual lexical items may be shifted in context due to (a) the presence of certain other lexical items, (b) the genre type and discourse structure of the text and (c) cultural factors.", "In subsequent research, the term shifter has since mostly been applied to the case of lexical items that influence polarity.", "Further, the notion of shifting is most prototypically used for situations where a discrete polarity switch occurs between the classes positive, negative and neutral.", "However, for other authors, including Polanyi and Zaenen (2006) , intensification (e.g.", "very disappointing) and downtoning (e.g.", "somewhat disappointing) of polar intensity also falls within the scope of shifting.", "We partially follow this view in that we consider downtoning to be shifting, as it moves the polarity of a word in the opposite direction, i.e.", "making a positive expression less positive (e.g.", "hardly satisfying) and a negative one less negative (e.g.", "slightly problematic).", "We do not consider intensifiers as shifters, as they support the already existing polarity.", "In most research, shifters are commonly illustrated and enumerated rather than formally defined.", "Polanyi and Zaenen (2006) for instance list negation words, intensifiers, modals and presuppositional items as lexical contextual polarity shifters.", "Setting aside downtoners for now, the common denominator of shifting is negation.", "Negation marks contexts in which a situation that the speaker expected fails to occur or hold.", "When this situation is part of a binary opposition (dead - alive), one can firmly conclude that the complementary state of affairs holds (not dead ⇒ alive).", "In cases where the negation affects a scalar notion, which is common in evaluative contexts, the understanding that arises depends on which kinds of scalar inferences and default assumptions are made in the context (Paradis and Willners, 2006) .", "Thus, not good denies the applicability of an evaluation in the region of good or better, but leaves open just how far in the direction of badness the actual interpretation lies: \"It wasn't good\" may be continued with \"but it was ok\" to yield a neutral or mildly positive evaluation or with \"in fact, it was terrible\" to yield a strongly negative one.", "While downtoners (e.g.", "somewhat) applied to scalar predicates such as good do not directly express contradiction, they do give rise to negative entailments and inferences.", "Moreover, the structure of scales intrinsically provides shifting.", "Thus, while something being good allows it to be even more positive (\"The movie was good.", "In fact, it was excellent.", "\"), something being somewhat good bounds its positiveness and opens up more negative meanings (\"The performance was somewhat good, but overall rather disappointing\").", "Considering these properties of scales, one can see shifting at work even in the case of downtoning.", "Verbal Shifters While the inclusion of shifting and scalar semantics in semantic representations is not limited to lexical items of particular parts-of-speech - we also find
shifter adjectives (e.g.", "devoid) and adverbs (e.g.", "barely) - we limit our work to verbal shifters for several reasons.", "As shown by the work of Schneider et al.", "(2016) , verbs, together with nouns, are the most important minimal semantic units in text and thus are prime candidates for being tackled first.", "Verbs are usually the main syntactic predicates of clauses and sentences and thus verbal shifters can be expected to project far-reaching scopes.", "Most nominal shifters (e.g.", "failure, loss), on the other hand, have morphologically related verbs (e.g.", "fail, lose) and we expect that this connection can be exploited to spread shifter classification from verbs to nouns in the future.", "Related to this, the grammar of verbs, for instance with respect to the diversity of scope types, is more complex than that of nouns and so we expect it to be easier to project from verbs to nouns rather than in the opposite direction.", "Related Work Existing lexicons and corpora that cover polarity shifting focus almost exclusively on negation words.", "The most complex negation lexicon for sentiment analysis (Wilson et al., 2005) includes a mere 12 verbal shifters.", "In contrast, our resource covers over 1200 verbal shifter lemmas.", "Corpora used as training data for negation processing, such as the Sentiment Treebank (Socher et al., 2013) or the BioScope corpus (Szarvas et al., 2008) , are fairly small datasets, so only the most frequent negation words appear.", "The BioScope corpus, for example, contains only 6 verbal shifters (Morante, 2010) .", "Schulder et al.", "(2017) show that state-of-the-art systems trained on such data do not reliably detect polarity shifting and should profit from explicit knowledge of verbal shifters.", "The only work to date that covers a larger number of verbal shifters is Schulder et al.", "(2017) , who annotate a sample of the English verbs found in WordNet for whether they exhibit polarity shifting.", "They start by manually annotating an initial 2000 verbs.", "These verbs are used to train an SVM classifier using linguistic features and common language resources.", "The classifier is then run on the remaining WordNet verbs to bootstrap a list of additional likely shifters.", "This list is then checked by a human annotator to detect false positives.", "Combining the initial annotation and the result of the bootstrapping process, they create a list of 3043 verbs.", "While the lexicon by Schulder et al.", "(2017) is an important step towards full coverage of verbal polarity shifters, there are several aspects that we seek to improve upon.", "First of all, their lexicon covers less than a third of the verbs found in WordNet, likely missing a number of verbal shifters.", "Schulder et al.", "(2017) argue that their bootstrap process should cover the majority of shifters; however, this would mean that only 9% of all verbs are shifters.", "Their initial annotation of 2000 randomly selected verbs puts the shifter ratio at 15% instead.", "Another issue with their lexicon is that it only labels lemma forms, but does not differentiate between word senses.", "Many verbs do not actually exhibit shifting in all of their senses, so this information will be important for contextual classification.", "Lastly, they forgo the question of shifter scope, i.e.", "which argument of a verb can be affected by its polarity shift.", "Data We treat this annotation effort as a binary labelling task where a word can either cause polarities to shift or not.", "However, instead of
assigning a single label to an entire verb lemma, as Schulder et al.", "(2017) did, we label individual word senses.", "We outline the rationale for this in §3.1.", "In addition we explicitly specify the syntactic scope of the shifting.", "This is motivated and explained in §3.2.", "§3.3.", "describes the annotation process.", "§3.4.", "describes the data format of our main lexicon.", "Based on this main lexicon we also derive two auxiliary lexicons in §3.5., providing complete labelled lists of all WordNet verb lemmas and all WordNet verb synsets respectively.", "Word Senses Many words that shift polarities only do so for some of their word senses.", "For example, mark down acts as a shifter in (5) , where it has the sense of \"reducing the value of something\", but the sense of \"writing something down to have a record of it\" in (6) causes no shifting.", "In our work we found that among shifter lemmas with multiple word senses, only 23% caused shifting in each of their senses.", "An annotation on the basis of individual word senses is therefore required.", "To differentiate the senses of a verb, we use its synset affiliations found in WordNet.", "Words within the same synset share a shifter label.", "Shifter scope, on the other hand, can differ among words of the same synset (see §3.2.).", "The annotation introduced in §3.3.", "is therefore applied to individual lemma-sense pairs to capture the best of both worlds.", "Shifter Scope A verbal shifter usually only affects the parts of a sentence that are syntactically governed by the verb through its valency.", "However, not every argument of a verbal shifter is subject to polarity shifting.", "Which argument is affected by polarity shifting depends on the verb in question.", "In (7) , surrender shifts only the polarity of its subject, but does not affect the object.", "Conversely, defeat only shifts its object in (8).", "The polarity of the subject of defeat does not play a role in this, as can be seen in (9).", "The given scopes assume that verb phrases are in their active form.", "In passive phrases, subject and object roles are inverted.", "To avoid this issue, sentence structure normalization should be performed before computing shifter scope.", "Synsets in WordNet only capture the semantic similarity of words, but almost no syntactic properties (Ruppenhofer and Brandes, 2015) .", "The shifter scope of a verb depends on its syntactic arguments, which can differ between verbs of the same synset.", "For example, discard and dispose share the sense \"throw or cast away\", but while discard shifts its direct object (10), dispose requires a prepositional object (11).", "For this reason we annotate lemma-synset pairs individually, instead of assigning scope labels to an entire synset.", "We also consider cases where a verbal shifter has more than one potential scope for the same lemma-sense pair.", "For example, infringe can shift its direct object or various prepositional objects, as seen in (12) -(14) .", "Therefore, infringe receives the scope labels dobj, pobj on and pobj upon.", "A verbal shifter will only ever shift the polarity of one of its scopes.", "Which scope is affected by the shifting depends on the given sentence.", "Annotation The entire dataset was labelled by an expert annotator with experience in linguistics and annotation work.", "To measure inter-annotator agreement, a second annotator re-annotated 400 word senses for their shifter label.", "They achieved an agreement of κ = 0.73, indicating substantial agreement 
(Landis and Koch, 1977) .", "The annotation progressed as follows: Given a complete list of WordNet verb lemmas, the annotator would inspect one lemma at a time.", "For this lemma, all senses were looked up.", "For each such lemma-sense pair, the annotator decided whether it is a shifter or not.", "Decisions were based on the sense definition of the synset and whether sentences using this sense of the lemma cause shifting.", "If a word sense was labelled as a shifter, it was subsequently also annotated for its potential shifter scopes.", "In cases where label conflicts between different lemma-sense pairs of the same sense were encountered, these labels were reconsidered.", "This introduced an additional robustness to the annotation as it let the annotator revisit challenging cases from a new perspective.", "The resulting list of lemma-sense pairs provides more fine-grained information than either an annotation for only word lemmas or only synsets could (see §3.1.", "and §3.2.).", "Main Lexicon File Format We provide our main lexicon as a comma-separated value (csv) file in which each line represents a specific lemma-sense-scope triple of a verbal shifter.", "Each line follows the format \"LEMMA,SYNSET,SCOPE\".", "The fields are defined as follows: LEMMA: The lemma form of the verb.", "SYNSET: The numeric identifier of the synset, commonly referred to as offset or database location.", "It consists of 8 digits, including leading zeroes (e.g.", "00334568).", "SCOPE: The scope of the shifting.", "Given as subj for subject position, dobj for direct object position and comp for clausal complements.", "Prepositional object positions are given as pobj * , where * is replaced by the preposition in question, e.g.", "pobj from for objects with the preposition \"from\" or pobj of for the preposition \"of\".", "When a lemma has multiple word senses, a separate entry is provided for each lemma-sense pair.", "When a lemma-sense pair has multiple potential shifting scopes, a separate entry is provided for each scope.", "Any combinations not provided are considered not to exhibit shifting.", "Take, for example, the set of entries for \"blow out\": (15) blow out,00436247,subj blow out,02767855,dobj It tells us that blow out in the sense 00436247 (\"melt, break, or become otherwise unusable\") is a shifter that affects its subject.", "The sense 02767855 (\"put out, as of fires, flames, or lights\") also exhibits shifting, but this time affects the direct object.", "It is, however, not a shifter for sense 02766970 (\"erupt in an uncontrolled manner\").", "For an example of multiple scopes for the same word sense, consider cramp: Its sense 00237139 (\"prevent the progress or free movement of\") can shift the polarity of either its direct object (e.g.", "\"it cramped his progress\") or that of a prepositional object with the preposition \"in\" (e.g.", "\"he was cramped in his progress\").", "The three other senses of cramp given by WordNet are not considered shifters.", "Auxiliary Lexicons Our main lexicon is labelled at the lemma-sense pair level to provide the most fine-grained level of information possible.", "It can, however, easily be applied to more coarse-grained applications.", "As a convenience, we provide lemma- and synset-level auxiliary lexicons that list all WordNet lemmas and all WordNet synsets, respectively, accompanied with their shifter label.", "A lemma is labelled as a shifter if at least one of its senses is considered a shifter in our main lexicon.", "Similarly, synsets are labelled as shifters
if at least one of its lemma-realizations is a shifter.", "Statistics In Table 1 we present the ratio of shifters among the verbs contained in WordNet.", "While only about 10% of verbs are shifters, this still results in 1220 lemmas and 924 synsets, more than covered in any other resource (see §2.3.).", "49% of verbs in WordNet are polysemous, i.e.", "they have multiple meanings.", "Among verbal shifters, this ratio is considerably higher, reaching 73%.", "Of these, only 23% are shifters in all of their word senses.", "To get an idea of how common verbal shifters are in actual use, we computed lemma frequencies over the Amazon Product Review Data corpus (Jindal and Liu, 2008) , which comprises over 5.8 million reviews.", "We found this corpus suitable due to its size, sentiment-related content and use in related tasks (Schulder et al., 2017) .", "We observe 1163 different verbal shifter lemmas with an overall total of 34 million occurrences.", "Correcting for non-shifter senses of shifter lemmas, we still estimate 13 million occurrences, accounting for 5% of all verb occurrences in the corpus.", "To compare, the 15 negation words found in the valence shifter lexicon by Wilson et al.", "(2005) occur 13 million times as well.", "While the frequency of individual negation (function) words is unsurprisingly higher, the total number of verbal shifter occurrences highlights that verbal shifters are just as frequent and should not be ignored.", "Statistics on the distribution of shifter scopes can be found in Table 2 .", "74% of verbal shifters have a direct object scope and 10% a prepositional object scope.", "Among these, \"from\" is the most common preposition at 51%, followed by \"of\" with 22%.", "19% shift the polarity of their subject and only 1.5% shift that of a clausal complement.", "This distribution shows that shifting cannot be trivially assumed to always affect the direct object and that explicit knowledge of shifter scopes will be useful for judging the polarity of a phrase.", "Conclusion We introduced a lexicon of verbal polarity shifters that covers the entire verb vocabulary of WordNet.", "Our annotation labels each individual word sense of a verb, providing more fine-grained information than annotations on the lemma-level would.", "In addition, we also label the syntactic scopes of each verbal shifter that can be affected by the shifting.", "This is a clear improvement over the list of verbal shifters provided by Schulder et al.", "(2017) , which only provides labels at the lemma-level rather than for individual word senses and gives no information regarding shifting scope.", "It also only has human expert annotation for 30% of the verb vocabulary of WordNet, as opposed to our full coverage.", "We hope this resource will help improve fine-grained sentiment analysis systems by providing explicit information on where polarities may shift in a sentence.", "We also hope our work will encourage the creation of similar polarity shifter lexicons for nouns and adjectives.", "As they are more numerous than verbs (WordNet contains 20k adjectival and 110k nominal lemmas), creating such resources will come with its own challenges, especially in the case of nouns." ] }
{ "paper_header_number": [ "1.", "2.", "2.1.", "2.2.", "2.3.", "3.", "3.1.", "3.2.", "3.3.", "3.4.", "3.5.", "4.", "5." ], "paper_header_content": [ "Introduction", "Background", "Polarity Shifters", "Verbal Shifters", "Related Work", "Data", "Word Senses", "Shifter Scope", "Annotation", "Main Lexicon File Format", "Auxiliary Lexicons", "Statistics", "Conclusion" ] }
GEM-SciDuet-train-64#paper-1137#slide-7
Annotators
Experience in linguistics and annotation work 2nd annotator labelled 400 word senses Both annotators are authors of this paper. Marc Schulder Saarland University
Experience in linguistics and annotation work 2nd annotator labelled 400 word senses Both annotators are authors of this paper. Marc Schulder Saarland University
[]
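The auxiliary lexicons described above are simple projections of the main lexicon: a lemma counts as a shifter if at least one of its senses does, and a synset counts as a shifter if at least one of its lemma-realizations does. A hypothetical Python sketch of that projection, using a tiny hand-rolled lexicon and toy inventories in place of the full WordNet lemma and synset lists:

def derive_auxiliary_lexicons(lexicon, all_lemmas, all_synsets):
    # Project the sense-level lexicon to lemma and synset level; an
    # item is labelled True (shifter) if any of its entries shifts.
    shifter_lemmas = {lemma for lemma, _ in lexicon}
    shifter_synsets = {synset for _, synset in lexicon}
    lemma_labels = {l: l in shifter_lemmas for l in all_lemmas}
    synset_labels = {s: s in shifter_synsets for s in all_synsets}
    return lemma_labels, synset_labels

# The two "blow out" entries from the loader sketch above:
lex = {("blow out", "00436247"): {"subj"},
       ("blow out", "02767855"): {"dobj"}}
lemmas, synsets = derive_auxiliary_lexicons(
    lex, all_lemmas={"blow out", "sing"}, all_synsets={"00436247", "02766970"})
assert lemmas == {"blow out": True, "sing": False}
assert synsets == {"00436247": True, "02766970": False}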
GEM-SciDuet-train-64#paper-1137#slide-8
1137
GEM-SciDuet-train-64#paper-1137#slide-8
Lexicon Example
Blow out Synset 00436247 SUBJ Shifter melt, break, or become otherwise unusable Blow out Synset 02767855 DOBJ Shifter put out, as of fires, flames, or lights erupt in an uncontrolled manner Marc Schulder Saarland University
Blow out Synset 00436247 SUBJ Shifter melt, break, or become otherwise unusable Blow out Synset 02767855 DOBJ Shifter put out, as of fires, flames, or lights erupt in an uncontrolled manner Marc Schulder Saarland University
[]
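As the paper notes, a verbal shifter only ever shifts one of its scopes in a given sentence, so applying the lexicon amounts to checking which of the verb's arguments fall under its scope labels. A minimal sketch under two stated assumptions: the clause is in active voice (passives invert subject and object and should be normalized first, per the paper), and the caller has already mapped the verb's arguments to the lexicon's own scope labels:

def shift_targets(scopes, arguments):
    # scopes: the shifter's scope labels from the lexicon, e.g.
    # {'dobj'} for 'defeat'. arguments: role -> argument phrase of
    # the verb in an active-voice clause. Returns only those
    # arguments whose polarity the shifter can affect.
    return {role: phrase
            for role, phrase in arguments.items() if role in scopes}

# 'defeat' shifts its object but not its subject (cf. examples (8)
# and (9) in the paper); the phrases below are made up for illustration.
args = {"subj": "the alliance", "dobj": "the corrupt regime"}
assert shift_targets({"dobj"}, args) == {"dobj": "the corrupt regime"}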
GEM-SciDuet-train-64#paper-1137#slide-9
1137
available.", "1 Background In this section we will provide a formal definition of polarity shifters ( §2.1.", "), motivate our focus on verbal shifters ( §2.2.)", "and discuss related work ( §2.3.).", "Polarity Shifters The notion of valence or polarity shifting was brought to broad awareness in the research community by the work of Polanyi and Zaenen (2006) .", "Those authors drew attention to the fact that the basic valence of individual lexical items may be shifted in context due to (a) the presence of certain other lexical items, (b) the genre type and discourse structure of the text and (c) cultural factors.", "In subsequent research, the term shifter has since mostly been applied to the case of lexical items that influence polarity.", "Further, the notion of shifting is most prototypically used for situations where a discrete polarity switch occurs between the classes positive, negative and neutral.", "However, for other authors, including Polanyi and Zaenen (2006) , intensification (e.g.", "very disappointing) and downtoning (e.g.", "somewhat disappointing) of polar intensity also falls within the scope of shifting.", "We partially follow this view in that we consider downtoning to be shifting, as it moves the polarity of a word in the opposite direction, i.e.", "making a positive expression less positive (e.g.", "hardly satisfying) and a negative one less negative (e.g.", "slightly problematic).", "We do not consider intensifiers as shifters, as they support the already existing polarity.", "In most research, shifters are commonly illustrated and enumerated rather than formally defined.", "Polanyi and Zaenen (2006) for instance list negation words, intensifiers, modals and presuppositional items as lexical contextual polarity shifters.", "Setting aside downtoners for now, the common denominator of shifting is negation.", "Negation marks contexts in which a situation that the speaker expected fails to occur or hold.", "When this situation is part of a binary opposition (dead -alive), one can firmly conclude that the complementary state of affairs holds (not dead ⇒ alive).", "In cases where the negation affects a scalar notion, which is common in evaluative contexts, the understanding that arises depends on which kinds of scalar inferences and default assumptions are made in the context (Paradis and Willners, 2006) .", "Thus, not good denies the applicability of an evaluation in the region of good or better, but leaves open just how far in the direction of badness the actual interpretation lies: \"It wasn't good\" may be continued with \"but it was ok\" to yield a neutral or mildly positive evaluation or with \"in fact, it was terrible\" to yield a strongly negative one.", "2 While downtoners (e.g.", "somewhat) applied to scalar predicates such as good do not directly express contradiction, they do give rise to negative entailments and inferences.", "Moreover, the structure of scales intrinsically provides shifting.", "Thus, while something being good allows it to be even more positive (\"The movie was good.", "In fact, it was excellent.", "\"), something being somewhat good bounds its positiveness and opens up more negative meanings (\"The performance was somewhat good, but overall rather disappointing\").", "Considering these properties of scales, one can see shifting at work even in the case of downtoning.", "Verbal Shifters While the inclusion of shifting and scalar semantics in semantic representations is not limited to lexical items of particular parts-of-speech -we also find 
shifter adjectives (e.g. devoid) and adverbs (e.g. barely) - we limit our work to verbal shifters for several reasons.", "As shown by the work of Schneider et al. (2016), verbs, together with nouns, are the most important minimal semantic units in text and are thus prime candidates for being tackled first.", "Verbs are usually the main syntactic predicates of clauses and sentences, and thus verbal shifters can be expected to project far-reaching scopes.", "Most nominal shifters (e.g. failure, loss), on the other hand, have morphologically related verbs (e.g. fail, lose) and we expect that this connection can be exploited to spread shifter classification from verbs to nouns in the future.", "Related to this, the grammar of verbs, for instance with respect to the diversity of scope types, is more complex than that of nouns, so we expect it to be easier to project from verbs to nouns than in the opposite direction.", "Related Work Existing lexicons and corpora that cover polarity shifting focus almost exclusively on negation words.", "The most complex negation lexicon for sentiment analysis (Wilson et al., 2005) includes a mere 12 verbal shifters.", "In contrast, our resource covers over 1200 verbal shifter lemmas.", "Corpora used as training data for negation processing, such as the Sentiment Treebank (Socher et al., 2013) or the BioScope corpus (Szarvas et al., 2008), are fairly small datasets, so only the most frequent negation words appear.", "The BioScope corpus, for example, contains only 6 verbal shifters (Morante, 2010).", "Schulder et al. (2017) show that state-of-the-art systems trained on such data do not reliably detect polarity shifting and should profit from explicit knowledge of verbal shifters.", "The only work to date that covers a larger number of verbal shifters is Schulder et al. (2017), who annotate a sample of the English verbs found in WordNet for whether they exhibit polarity shifting.", "They start by manually annotating an initial 2000 verbs.", "These verbs are used to train an SVM classifier using linguistic features and common language resources.", "The classifier is then run on the remaining WordNet verbs to bootstrap a list of additional likely shifters.", "This list is then checked by a human annotator to detect false positives.", "Combining the initial annotation and the result of the bootstrapping process, they create a list of 3043 verbs.", "While the lexicon by Schulder et al. (2017) is an important step towards full coverage of verbal polarity shifters, there are several aspects that we seek to improve upon.", "First of all, their lexicon covers less than a third of the verbs found in WordNet, likely missing a number of verbal shifters.", "Schulder et al. (2017) argue that their bootstrap process should cover the majority of shifters; however, this would mean that only 9% of all verbs are shifters.", "Their initial annotation of 2000 randomly selected verbs puts the shifter ratio at 15% instead.", "Another issue with their lexicon is that it only labels lemma forms, but does not differentiate between word senses.", "Many verbs do not actually exhibit shifting in all of their senses, so this information will be important for contextual classification.", "Lastly, they forgo the question of shifter scope, i.e. which argument of a verb can be affected by its polarity shift.", "Data We treat this annotation effort as a binary labelling task where a word can either cause polarities to shift or not.", "However, instead of
assigning a single label to an entire verb lemma, as Schulder et al. (2017) did, we label individual word senses.", "We outline the rationale for this in §3.1.", "In addition, we explicitly specify the syntactic scope of the shifting.", "This is motivated and explained in §3.2.", "§3.3. describes the annotation process.", "§3.4. describes the data format of our main lexicon.", "Based on this main lexicon we also derive two auxiliary lexicons in §3.5., providing complete labelled lists of all WordNet verb lemmas and all WordNet verb synsets, respectively.", "Word Senses Many words that shift polarities only do so for some of their word senses.", "For example, mark down acts as a shifter in (5), where it has the sense of \"reducing the value of something\", but the sense of \"writing something down to have a record of it\" in (6) causes no shifting.", "In our work we found that among shifter lemmas with multiple word senses, only 23% caused shifting in each of their senses.", "An annotation on the basis of individual word senses is therefore required.", "To differentiate the senses of a verb, we use its synset affiliations found in WordNet.", "Words within the same synset share a shifter label.", "Shifter scope, on the other hand, can differ among words of the same synset (see §3.2.).", "The annotation introduced in §3.3. is therefore applied to individual lemma-sense pairs to capture the best of both worlds.", "Shifter Scope A verbal shifter usually only affects the parts of a sentence that are syntactically governed by the verb through its valency.", "However, not every argument of a verbal shifter is subject to polarity shifting.", "Which argument is affected by polarity shifting depends on the verb in question.", "In (7), surrender shifts only the polarity of its subject, but does not affect the object.", "Conversely, defeat only shifts its object in (8).", "The polarity of the subject of defeat does not play a role in this, as can be seen in (9).", "The given scopes assume that verb phrases are in their active form.", "In passive phrases, subject and object roles are inverted.", "To avoid this issue, sentence structure normalization should be performed before computing shifter scope.", "Synsets in WordNet only capture the semantic similarity of words, but almost no syntactic properties (Ruppenhofer and Brandes, 2015).", "The shifter scope of a verb depends on its syntactic arguments, which can differ between verbs of the same synset.", "For example, discard and dispose share the sense \"throw or cast away\", but while discard shifts its direct object (10), dispose requires a prepositional object (11).", "For this reason we annotate lemma-synset pairs individually, instead of assigning scope labels to an entire synset.", "We also consider cases where a verbal shifter has more than one potential scope for the same lemma-sense pair.", "For example, infringe can shift its direct object or various prepositional objects, as seen in (12)-(14).", "Therefore, infringe receives the scope labels dobj, pobj on and pobj upon.", "A verbal shifter will only ever shift the polarity of one of its scopes.", "Which scope is affected by the shifting depends on the given sentence.", "Annotation The entire dataset was labelled by an expert annotator with experience in linguistics and annotation work.", "To measure inter-annotator agreement, a second annotator re-annotated 400 word senses for their shifter label.", "They achieved an agreement of κ = 0.73, indicating substantial agreement (Landis and Koch, 1977).",
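Since every entry in the lexicon is keyed by a WordNet verb sense, it can help to see how those sense identifiers are obtained. A minimal NLTK sketch (purely illustrative; NLTK is not necessarily the authors' tooling, and it requires a one-time nltk.download('wordnet')) lists the verb senses of a lemma together with the 8-digit zero-padded offsets that the file format below uses as synset identifiers. Note that WordNet supplies only the sense inventory here; the shifter and scope labels are the lexicon's own annotation.

```python
from nltk.corpus import wordnet as wn

def verb_senses(lemma):
    """Yield (offset, definition, synset members) for each verb sense of a lemma."""
    # Multiword lemmas are stored with underscores in WordNet.
    for synset in wn.synsets(lemma.replace(" ", "_"), pos=wn.VERB):
        # 8-digit zero-padded offset, matching the lexicon's SYNSET field.
        yield f"{synset.offset():08d}", synset.definition(), synset.lemma_names()

for offset, gloss, members in verb_senses("blow out"):
    print(offset, "|", gloss, "|", ", ".join(members))
```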
"The annotation progressed as follows: Given a complete list of WordNet verb lemmas, the annotator would inspect one lemma at a time.", "For this lemma, all senses were looked up.", "For each such lemma-sense pair, the annotator decided whether it is a shifter or not.", "Decisions were based on the sense definition of the synset and on whether sentences using this sense of the lemma cause shifting.", "If a word sense was labelled as a shifter, it was subsequently also annotated for its potential shifter scopes.", "In cases where label conflicts between different lemma-sense pairs of the same sense were encountered, these labels were reconsidered.", "This introduced additional robustness to the annotation, as it let the annotator revisit challenging cases from a new perspective.", "The resulting list of lemma-sense pairs provides more fine-grained information than either an annotation of only word lemmas or only synsets could (see §3.1. and §3.2.).", "Main Lexicon File Format We provide our main lexicon as a comma-separated value (CSV) file in which each line represents a specific lemma-sense-scope triple of a verbal shifter.", "Each line follows the format \"LEMMA,SYNSET,SCOPE\".", "The fields are defined as follows: LEMMA: the lemma form of the verb.", "SYNSET: the numeric identifier of the synset, commonly referred to as its offset or database location.", "It consists of 8 digits, including leading zeroes (e.g. 00334568).", "SCOPE: the scope of the shifting, given as subj for subject position, dobj for direct object position and comp for clausal complements.", "Prepositional object positions are given as pobj *, where * is replaced by the preposition in question, e.g. pobj from for objects with the preposition \"from\" or pobj of for the preposition \"of\".", "When a lemma has multiple word senses, a separate entry is provided for each lemma-sense pair.", "When a lemma-sense pair has multiple potential shifting scopes, a separate entry is provided for each scope.", "Any combinations not provided are considered not to exhibit shifting.", "Take, for example, the set of entries for \"blow out\": (15) blow out,00436247,subj and blow out,02767855,dobj.", "This tells us that blow out in the sense 00436247 (\"melt, break, or become otherwise unusable\") is a shifter that affects its subject.", "The sense 02767855 (\"put out, as of fires, flames, or lights\") also exhibits shifting, but this time affects the direct object.", "It is, however, not a shifter for sense 02766970 (\"erupt in an uncontrolled manner\").", "For an example of multiple scopes for the same word sense, consider cramp: its sense 00237139 (\"prevent the progress or free movement of\") can shift the polarity of either its direct object (e.g. \"it cramped his progress\") or that of a prepositional object with the preposition \"in\" (e.g. \"he was cramped in his progress\").", "The three other senses of cramp given by WordNet are not considered shifters.", "Auxiliary Lexicons Our main lexicon is labelled at the lemma-sense pair level to provide the most fine-grained level of information possible.", "It can, however, easily be applied in more coarse-grained applications.", "As a convenience, we provide lemma- and synset-level auxiliary lexicons that list all WordNet lemmas and all WordNet synsets, respectively, accompanied by their shifter label.", "A lemma is labelled as a shifter if at least one of its senses is considered a shifter in our main lexicon.", "Similarly, a synset is labelled as a shifter if at least one of its lemma-realizations is a shifter.",
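Given that format, reading the lexicon and deriving the lemma-level auxiliary view takes only a few lines. The sketch below is a hypothetical reader: the file name verbal_shifters.csv is a placeholder, and the expected printed values assume the blow out entries shown in (15).

```python
import csv
from collections import defaultdict

# entries[(lemma, synset)] -> set of scope labels, e.g. {"dobj", "pobj in"}
entries = defaultdict(set)
with open("verbal_shifters.csv", newline="", encoding="utf-8") as f:
    for lemma, synset, scope in csv.reader(f):
        entries[(lemma, synset)].add(scope)

# Lemma-level auxiliary view: a lemma is a shifter if any of its senses is.
shifter_lemmas = {lemma for lemma, _ in entries}

def is_shifter(lemma, synset=None):
    """Sense-level lookup when a synset offset is given, lemma-level otherwise."""
    return lemma in shifter_lemmas if synset is None else (lemma, synset) in entries

print(sorted(entries[("blow out", "00436247")]))   # ['subj']
print(is_shifter("blow out"))                      # True (some shifter sense exists)
print(is_shifter("blow out", "02766970"))          # False (non-shifter sense)
```

Keying the store on (lemma, synset) pairs mirrors the annotation unit of the lexicon, while the derived set gives the lemma-level auxiliary lexicon essentially for free.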
"Statistics In Table 1 we present the ratio of shifters among the verbs contained in WordNet.", "While only about 10% of verbs are shifters, this still results in 1220 lemmas and 924 synsets, more than are covered in any other resource (see §2.3.).", "49% of verbs in WordNet are polysemous, i.e. they have multiple meanings.", "Among verbal shifters, this ratio is considerably higher, reaching 73%.", "Of these, only 23% are shifters in all of their word senses.", "To get an idea of how common verbal shifters are in actual use, we computed lemma frequencies over the Amazon Product Review Data corpus (Jindal and Liu, 2008), which comprises over 5.8 million reviews.", "We found this corpus suitable due to its size, sentiment-related content and use in related tasks (Schulder et al., 2017).", "We observe 1163 different verbal shifter lemmas with an overall total of 34 million occurrences.", "Correcting for non-shifter senses of shifter lemmas, we still estimate 13 million occurrences, accounting for 5% of all verb occurrences in the corpus.", "For comparison, the 15 negation words found in the valence shifter lexicon by Wilson et al. (2005) occur 13 million times as well.", "While the frequency of individual negation (function) words is unsurprisingly higher, the total number of verbal shifter occurrences highlights that verbal shifters are just as frequent and should not be ignored.", "Statistics on the distribution of shifter scopes can be found in Table 2.", "74% of verbal shifters have a direct object scope and 10% a prepositional object scope.", "Among the latter, \"from\" is the most common preposition at 51%, followed by \"of\" with 22%.", "19% shift the polarity of their subject and only 1.5% shift that of a clausal complement.", "This distribution shows that shifting cannot be trivially assumed to always affect the direct object and that explicit knowledge of shifter scopes will be useful for judging the polarity of a phrase.", "Conclusion We introduced a lexicon of verbal polarity shifters that covers the entire verb vocabulary of WordNet.", "Our annotation labels each individual word sense of a verb, providing more fine-grained information than annotations on the lemma level would.", "In addition, we also label the syntactic scopes of each verbal shifter that can be affected by the shifting.", "This is a clear improvement over the list of verbal shifters provided by Schulder et al. (2017), which only provides labels at the lemma level rather than for individual word senses and gives no information regarding shifting scope.", "It also only has human expert annotation for 30% of the verb vocabulary of WordNet, as opposed to our full coverage.", "We hope this resource will help improve fine-grained sentiment analysis systems by providing explicit information on where polarities may shift in a sentence.", "We also hope our work will encourage the creation of similar polarity shifter lexicons for nouns and adjectives.", "As they are more numerous than verbs (WordNet contains 20k adjectival and 110k nominal lemmas), creating such resources will come with its own challenges, especially in the case of nouns." ] }
{ "paper_header_number": [ "1.", "2.", "2.1.", "2.2.", "2.3.", "3.", "3.1.", "3.2.", "3.3.", "3.4.", "3.5.", "4.", "5." ], "paper_header_content": [ "Introduction", "Background", "Polarity Shifters", "Verbal Shifters", "Related Work", "Data", "Word Senses", "Shifter Scope", "Annotation", "Main Lexicon File Format", "Auxiliary Lexicons", "Statistics", "Conclusion" ] }
GEM-SciDuet-train-64#paper-1137#slide-9
Conclusion
We introduced a lexicon of English verbal shifters: covers all verbs in WordNet; annotations for each word sense.
We introduced a lexicon of English verbal shifters: covers all verbs in WordNet; annotations for each word sense.
[]
GEM-SciDuet-train-65#paper-1141#slide-0
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on pattern structures construction on augmented syntactic parse trees. Since an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes into account discourse information to make clustering results independent of how information is distributed between sentences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101 ], "paper_content_text": [ "Introduction and related works The document clustering problem was widely investigated in many applications of text mining.", "One of the most important aspects of the text clustering problem is a structural representation of texts.", "A common approach to the text representation is a vector space model (Salton et al., 1975) , where the collection or corpus of documents is represented as a term-document matrix.", "The main drawback of this model is its inability to reflect the importance of a word with respect to a document and a corpus.", "To tackle this issue the weighted scheme based on tf-idf score has been proposed.", "Also, a term-document matrix built on a large texts collection may be sparse and have a high dimensionality.", "To reduce feature space, PCA, truncated SVD (Latent Semantic Analysis), random projection and other methods have been proposed.", "To handle synonyms as similar terms the general Vector Space Model (Wong et al., 1985; Tsatsaronis and Panagiotopoulou, 2009 ), topic-based vector model (Becker and Kuropka, 2003) and enhanced topic-based vector space model (Polyvyanyy and Kuropka, 2007) were introduced.", "The most common ways to clustering term-document matrix are hierarchical clustering, k-means and also bisecting k-means.", "Graph models are also used for text representation.", "Document Index Graph (DIG) was proposed by Hammouda (2004) .", "Zamir and Etzioni (1998) use suffix tree for representing web snippets, where words are used instead of characters.", "A more sophisticated model based on n-grams was introduced in Schenker et al.", "(2007) .", "In this paper, we consider a particular application of document clustering, it is a representation of web search results that could improve navigation through relevant documents.", "Clustering snippets on salient phrases is described in (Zamir and Etzioni, 1999; Zeng et al., 2004) .", "But the most promising approach for document clustering is a conceptual clustering, because it allows to obtain overlapping clusters and to organize them into a hierarchical structure as well (Cole et al., 2003; Koester, 2006; Messai et al., 2008; Carpineto and Romano, 1996) .", "We present an approach to selecting most significant clusters based on a pattern structure (Ganter and Kuznetsov, 2001 ).", "An approach of extended representation of syntactic trees with discourse relations between them was introduced in (Galitsky et al., 2013) .", "Leveraging discourse information allows to combine news articles not only by keyword similarity but by broader topicality and writing styles as well.", "The paper is organized as follows.", "Section 2 introduces a parse thicket and its simplified representation.", "In section 3 we consider approach to clustering web snippets and discuss efficiency issues.", "The illustrative example is presented in section 4.", "Finally, we conclude the paper and discuss some research perspectives.", "2 Clustering based on pattern structure Parse Thickets Parse thicket (Galitsky et al., 2013) is defined as a set of parse trees for each sentence augmented with a number of 
arcs, reflecting inter-sentence relations.", "In the present work we use parse thickets based on a limited set of relations described in (Galitsky et al., 2013): coreferences (Lee et al., 2012), rhetorical structure relations (Mann and Thompson, 1992) and communicative actions (Searle, 1969).", "Pattern Structure with Parse Thickets simplification To apply parse thickets to text clustering tasks we use pattern structures (Ganter and Kuznetsov, 2001), defined as a triple $(G, (D, \sqcap), \delta)$, where $G$ is a set of objects, $(D, \sqcap)$ is a complete meet-semilattice of descriptions and $\delta: G \to D$ is a mapping taking an object to its description.", "The Galois connection between sets of objects and their descriptions is defined as follows: $A^{\diamond} := \bigsqcap_{g \in A} \delta(g)$ for $A \subseteq G$, and $d^{\diamond} := \{g \in G \mid d \sqsubseteq \delta(g)\}$ for $d \in D$.", "A pair $\langle A, d \rangle$ for which $A^{\diamond} = d$ and $d^{\diamond} = A$ is called a pattern concept.", "In our case, $A$ is a set of news items and $d$ is their shared content.", "We use the AddIntent algorithm (van der Merwe et al., 2004) to construct the pattern structure.", "At each step, it takes the parse thicket (or chunks) of one input web snippet and plugs it into the pattern structure.", "A pattern structure has several drawbacks.", "Firstly, the size of the structure can grow exponentially in the size of the input data.", "Moreover, construction of a pattern structure can be computationally intensive.", "To address the performance issues, we reduce the set of all intersections between the members of our training set (maximal common sub-parse thickets).", "Reduced pattern structure A pattern structure constructed from a collection of short texts usually has a huge number of concepts.", "To reduce the computational costs and improve the interpretability of pattern concepts, we introduce several metrics, described below.", "Average and Maximal Pattern Score The average and maximal pattern score indices are meant to assess how meaningful the common description of the texts in a concept is.", "The more the text fragments differ from each other, the smaller their shared content is.", "Thus, the meaningfulness criterion of a group of texts is $Score_{max}\langle A, d \rangle := \max_{chunk \in d} Score(chunk)$ and $Score_{avg}\langle A, d \rangle := \frac{1}{|d|} \sum_{chunk \in d} Score(chunk)$, where the score function $Score(chunk)$ estimates chunks on the basis of their part-of-speech composition.", "Average and Minimal Pattern Score Loss Average and minimal pattern score loss describe how much of the information contained in a text is lost in the description with respect to the source texts.", "Average pattern score loss expresses the average loss of shared content over all texts in a concept, while minimal pattern score loss represents the minimal loss of content among all texts included in a concept: $ScoreLoss_{min}\langle A, d \rangle := \min_{g \in A} Score_{max}\langle g, d_g \rangle$ and $ScoreLoss_{avg}\langle A, d \rangle := \frac{1}{|d|} \sum_{g \in A} Score_{max}\langle g, d_g \rangle$.", "We propose to use a reduced pattern structure.", "There are two options in our approach.", "The first is the construction of a lower semilattice.", "This is similar to the iceberg concept lattice approach (Stumme et al., 2002).", "The second option is the construction of concepts which are sufficiently different from each other.", "Thus, for arbitrary sets of texts $A_1$ and $A_2$ with corresponding descriptions $d_1$ and $d_2$, and a candidate pattern concept $\langle A_1 \cup A_2, d_1 \cap d_2 \rangle$, the criterion has the following form: $Score_{max}\langle A_1 \cup A_2, d_1 \cap d_2 \rangle \geq \theta$; $Score_{*}\langle A_1 \cup A_2, d_1 \cap d_2 \rangle \geq \mu_1 \min\{Score_{*}\langle A_1, d_1 \rangle, Score_{*}\langle A_2, d_2 \rangle\}$; $Score_{*}\langle A_1 \cup A_2, d_1 \cap d_2 \rangle \leq \mu_2 \max\{Score_{*}\langle A_1, d_1 \rangle, Score_{*}\langle A_2, d_2 \rangle\}$, where $Score_{*}$ stands for one of the score indices introduced above.", "The first constraint provides the condition for the construction of concepts with meaningful content, while the two other constraints ensure that we do not use concepts with similar content.",
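As a toy illustration of the two derivation operators and of Score_max (this is not the authors' implementation: each parse thicket is flattened to a plain set of chunk strings, so the meet of descriptions becomes set intersection, and the part-of-speech-based chunk score is replaced by a simple word count over invented snippets):

```python
# Toy pattern structure: a description is a set of chunks, meet = intersection.
delta = {                                # object -> description (invented data)
    "news1": {"ebola epidemic", "nanoparticle treatment", "west africa"},
    "news2": {"ebola epidemic", "nanoparticle treatment"},
    "news3": {"ebola epidemic", "vaccine trial"},
}

def common_description(A):               # derivation: A -> shared content of A
    return set.intersection(*(delta[g] for g in A))

def extent(d):                           # derivation: d -> all texts covering d
    return {g for g in delta if d <= delta[g]}

def score(chunk):                        # placeholder for the POS-based score
    return float(len(chunk.split()))

A = {"news1", "news2"}
d = common_description(A)
print(sorted(d))                         # shared chunks of news1 and news2
print(extent(d) == A)                    # True, so <A, d> is a pattern concept
print(max(score(c) for c in d))          # Score_max<A, d>
```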
"Experiments In this section we consider the proposed clustering method on two examples.", "The first one corresponds to the case when clusters are overlapping and distinguishable, the second one to the case of non-overlapping clusters.", "User Study In some cases it is quite difficult to identify disjoint classes for a text collection.", "To confirm this, we conducted experiments similar to the experiment scheme described in (Zeng et al., 2004).", "We took web snippets obtained by querying the Bing search engine API and asked a group of four assessors to label ground truth for them.", "We performed news queries related to the world's most pressing news (for example, \"fighting Ebola with nanoparticles\", \"turning brown eyes blue\", \"F1 winners\", \"read facial expressions through webcam\", \"2015 ACM awards winners\") to make labeling of the data easier for the assessors.", "In most cases, according to the assessors, it was difficult to determine partitions, while overlapping clusters naturally stood out.", "As a result, in the case of non-overlapping clusters we usually got either a small number of large classes or a fairly large number of classes consisting of 1-2 snippets.", "Moreover, for the same set of snippets we obtained quite different partitions.", "We used the Adjusted Mutual Information score to estimate the pairwise agreement of the non-overlapping clusterings identified by the assessors.", "To demonstrate the failure of the conventional clustering approach, we consider 12 short texts on the news query \"The Ebola epidemic\" (the texts are available at https://github.com/anonymously1/CNS2015/blob/master/NewsSet1).", "The assessors identified quite different non-overlapping clusters.", "The pairwise Adjusted Mutual Information score was in the range of 0.03 to 0.51.", "Next, we compared the partitions to the results of the following clustering methods: k-means clustering based on vectors obtained by truncated SVD (retaining at least 80% of the information), hierarchical agglomerative clustering (HAC, complete and average linkage) of the term-document matrix with Manhattan distance and with cosine similarity, and HAC (both linkages) of the tf-idf matrix with the Euclidean metric.", "In other words, we turned an unsupervised learning problem into a supervised one.", "The accuracy score for the different clustering methods is represented in Figure 1.", "Curves correspond to the different partitions that were identified by people.", "As mentioned earlier, we obtained inconsistent \"true\" labelings.", "[Figure 1: Classification accuracy of clustering results against the \"true\" clusterings (example 1). The four lines are the different news labelings made by people; the y-axis value at a fixed x-value is the classification accuracy of a clustering method under each of the four labelings.]", "Thereby the accuracy of clustering differs from the labeling made by the evaluators.", "This approach does not allow us to determine the best partition, because a partition itself is not natural for the given news set.", "For example, consider the clusters obtained by HAC based on cosine similarity (a trade-off between high accuracy and its low variation): 1st cluster: 1, 2, 7, 9; 2nd cluster: 3, 11, 12; 3rd cluster: 4, 8; 4th cluster: 5, 6; 5th cluster: 10.", "Nearly identical news items 4, 8, 12 and 9, 10 end up in different clusters.", "News items 10 and 11 should simultaneously belong to several clusters (the 1st and 5th, and the 2nd and 3rd, respectively).",
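The baseline comparison just described can be reproduced in outline with scikit-learn. The corpus and assessor labels below are tiny stand-ins, not the actual 12 Ebola snippets; on scikit-learn versions before 1.2, pass affinity= instead of metric= to AgglomerativeClustering.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_mutual_info_score

snippets = [                      # stand-in for the news snippets
    "ebola vaccine trial starts", "new ebola vaccine tested",
    "nanoparticles fight ebola", "nanoparticle treatment for ebola",
    "f1 season winners announced", "f1 champion crowned",
]
true_labels = [0, 0, 1, 1, 2, 2]  # one (hypothetical) assessor's partition

X = TfidfVectorizer().fit_transform(snippets)

# k-means on a truncated-SVD projection of the term-document matrix
Z = TruncatedSVD(n_components=2).fit_transform(X)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)

# HAC with average linkage and cosine distance
hac = AgglomerativeClustering(n_clusters=3, metric="cosine",
                              linkage="average").fit_predict(X.toarray())

for name, pred in (("k-means + SVD", km), ("HAC cosine/average", hac)):
    print(name, adjusted_mutual_info_score(true_labels, pred))
```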
"Examples of pattern structures clustering To construct a hierarchy of overlapping clusters by the proposed method, we use the following constraints: $\theta = 0.25$, $\mu_1 = 0.1$ and $\mu_2 = 0.9$.", "The value of $\theta$ limits the depth of the pattern structure (the maximal number of texts in a cluster); put differently, the higher $\theta$, the closer the general intent of the clusters should be.", "$\mu_1$ and $\mu_2$ determine the degree of dissimilarity of the clusters on different levels of the lattice (the clusters are prepared by adding a new document to the current one).", "We consider the proposed clustering method on two examples.", "The first one was described above; it corresponds to the case of overlapping clusters, while the second one is the case when clusters are non-overlapping and distinguishable.", "The texts of the second example are available online as well.", "Three clusters are naturally identified in these texts.", "The distribution of clusters by volume is shown in Table 1.", "We got 107 and 29 clusters for the first and the second example, respectively.", "Table 1: Number of clusters by volume (texts per cluster).
Texts per cluster | Example 1 | Example 2
1 | 12 | 11
2 | 34 | 15
3 | 33 | 3
4 | 20 | 0
5 | 7 | 0
6 | 1 | 0", "In fact, this method is an agglomerative hierarchical clustering with overlapping clusters.", "The hierarchical structure of clusters allows browsing texts with similar content by layers.", "The cluster structure is represented in Figure 2.", "The top of the structure corresponds to the meaningless cluster that consists of all texts.", "The upper layer consists of clusters with large volume.", "[Figure 2: The cluster structure (example 2); (a) pattern structure without reduction, (b) reduced pattern structure.]", "The node at the top corresponds to the \"dummy\" cluster; high-level nodes correspond to the big clusters with quite general content, while the clusters at lower levels correspond to more specific news.", "Clustering based on pattern structures provides well-interpretable groups.", "The upper level of the hierarchy (the most representative clusters for example 1) consists of the clusters presented in Table 2.", "We also consider smaller clusters and select those for which adding any object (text) dramatically reduces $Score_{max}$: {1, 2, 3, 7, 9} and {5, 6}.", "For the other nested clusters, a significant decrease of $Score_{max}$ occurred exactly with the expansion of single clusters.", "For the second example we obtained three clusters, which corresponds to the \"true\" labeling.", "Our experiments show that pattern structure clustering allows us to identify easily interpretable groups of texts and significantly improves text browsing.", "Conclusion In this paper, we presented an approach that addresses the problem of short text clustering.", "Our study shows the failure of traditional clustering methods, such as k-means and HAC.", "We propose to use parse thickets, which retain the structure of sentences, instead of the term-document matrix, and to build reduced pattern structures to obtain overlapping groups of texts.", "Experimental results demonstrate a considerable improvement of browsing and navigation through a set of texts for users.", "The introduced indices Score and ScoreLoss both improve computational efficiency and tackle the problem of redundant clusters.", "An important direction for future work is to take synonymy into account and to compare the proposed method to a similar approach that uses keywords instead of parse thickets." ] }
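Finally, with the constants used in these experiments, the reduction criterion amounts to a small acceptance test applied when two concepts are about to be merged. The sketch below reuses the toy set-based descriptions and word-count score from the earlier fragment (so the numeric scale is arbitrary) and instantiates Score_* as Score_max, one of the aggregates the criterion allows:

```python
THETA, MU1, MU2 = 0.25, 0.1, 0.9        # constraints used in the experiments

def score(chunk):                        # same word-count placeholder as above
    return float(len(chunk.split()))

def score_max(d):
    return max((score(c) for c in d), default=0.0)

def accept_merge(d1, d2):
    """Admit the candidate concept <A1 u A2, d1 n d2> only if its shared
    content is meaningful (theta) and distinct enough from both parents."""
    d = d1 & d2
    s, s1, s2 = score_max(d), score_max(d1), score_max(d2)
    return s >= THETA and s >= MU1 * min(s1, s2) and s <= MU2 * max(s1, s2)

d1 = {"ebola epidemic", "new nanoparticle treatment approach"}
d2 = {"ebola epidemic", "large vaccine trial in guinea"}
print(accept_merge(d1, d2))  # True: the shared chunk passes the theta test
                             # yet stays less specific than either parent
```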
{ "paper_header_number": [ "1", "3", "4", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction and related works", "Reduced pattern structure", "Experiments", "User Study", "Examples of pattern structures clustering", "Conclusion" ] }
GEM-SciDuet-train-65#paper-1141#slide-0
Main Clustering Aspects
Text preprocessing and representation
Text preprocessing and representation
[]
GEM-SciDuet-train-65#paper-1141#slide-1
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on pattern structures construction on augmented syntactic parse trees. Since an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes into account discourse information to make clustering results independent of how information is distributed between sentences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101 ], "paper_content_text": [ "Introduction and related works The document clustering problem was widely investigated in many applications of text mining.", "One of the most important aspects of the text clustering problem is a structural representation of texts.", "A common approach to the text representation is a vector space model (Salton et al., 1975) , where the collection or corpus of documents is represented as a term-document matrix.", "The main drawback of this model is its inability to reflect the importance of a word with respect to a document and a corpus.", "To tackle this issue the weighted scheme based on tf-idf score has been proposed.", "Also, a term-document matrix built on a large texts collection may be sparse and have a high dimensionality.", "To reduce feature space, PCA, truncated SVD (Latent Semantic Analysis), random projection and other methods have been proposed.", "To handle synonyms as similar terms the general Vector Space Model (Wong et al., 1985; Tsatsaronis and Panagiotopoulou, 2009 ), topic-based vector model (Becker and Kuropka, 2003) and enhanced topic-based vector space model (Polyvyanyy and Kuropka, 2007) were introduced.", "The most common ways to clustering term-document matrix are hierarchical clustering, k-means and also bisecting k-means.", "Graph models are also used for text representation.", "Document Index Graph (DIG) was proposed by Hammouda (2004) .", "Zamir and Etzioni (1998) use suffix tree for representing web snippets, where words are used instead of characters.", "A more sophisticated model based on n-grams was introduced in Schenker et al.", "(2007) .", "In this paper, we consider a particular application of document clustering, it is a representation of web search results that could improve navigation through relevant documents.", "Clustering snippets on salient phrases is described in (Zamir and Etzioni, 1999; Zeng et al., 2004) .", "But the most promising approach for document clustering is a conceptual clustering, because it allows to obtain overlapping clusters and to organize them into a hierarchical structure as well (Cole et al., 2003; Koester, 2006; Messai et al., 2008; Carpineto and Romano, 1996) .", "We present an approach to selecting most significant clusters based on a pattern structure (Ganter and Kuznetsov, 2001 ).", "An approach of extended representation of syntactic trees with discourse relations between them was introduced in (Galitsky et al., 2013) .", "Leveraging discourse information allows to combine news articles not only by keyword similarity but by broader topicality and writing styles as well.", "The paper is organized as follows.", "Section 2 introduces a parse thicket and its simplified representation.", "In section 3 we consider approach to clustering web snippets and discuss efficiency issues.", "The illustrative example is presented in section 4.", "Finally, we conclude the paper and discuss some research perspectives.", "2 Clustering based on pattern structure Parse Thickets Parse thicket (Galitsky et al., 2013) is defined as a set of parse trees for each sentence augmented with a number of 
arcs, reflecting inter-sentence relations.", "In the present work we use parse thickets based on a limited set of relations described in (Galitsky et al., 2013): coreferences (Lee et al., 2012), rhetorical structure relations (Mann and Thompson, 1992) and communicative actions (Searle, 1969).", "Pattern Structure with Parse Thickets simplification To apply parse thickets to text clustering tasks we use pattern structures (Ganter and Kuznetsov, 2001), defined as a triple $(G, (D, \sqcap), \delta)$, where $G$ is a set of objects, $(D, \sqcap)$ is a complete meet-semilattice of descriptions and $\delta: G \to D$ is a mapping taking an object to its description.", "The Galois connection between sets of objects and their descriptions is defined as follows: $A^{\diamond} := \bigsqcap_{g \in A} \delta(g)$ for $A \subseteq G$, and $d^{\diamond} := \{g \in G \mid d \sqsubseteq \delta(g)\}$ for $d \in D$.", "A pair $\langle A, d \rangle$ for which $A^{\diamond} = d$ and $d^{\diamond} = A$ is called a pattern concept.", "In our case, $A$ is a set of news items and $d$ is their shared content.", "We use the AddIntent algorithm (van der Merwe et al., 2004) to construct the pattern structure.", "At each step, it takes the parse thicket (or chunks) of one input web snippet and plugs it into the pattern structure.", "A pattern structure has several drawbacks.", "Firstly, the size of the structure can grow exponentially in the size of the input data.", "Moreover, construction of a pattern structure can be computationally intensive.", "To address the performance issues, we reduce the set of all intersections between the members of our training set (maximal common sub-parse thickets).", "Reduced pattern structure A pattern structure constructed from a collection of short texts usually has a huge number of concepts.", "To reduce the computational costs and improve the interpretability of pattern concepts, we introduce several metrics, described below.", "Average and Maximal Pattern Score The average and maximal pattern score indices are meant to assess how meaningful the common description of the texts in a concept is.", "The more the text fragments differ from each other, the smaller their shared content is.", "Thus, the meaningfulness criterion of a group of texts is $Score_{max}\langle A, d \rangle := \max_{chunk \in d} Score(chunk)$ and $Score_{avg}\langle A, d \rangle := \frac{1}{|d|} \sum_{chunk \in d} Score(chunk)$, where the score function $Score(chunk)$ estimates chunks on the basis of their part-of-speech composition.", "Average and Minimal Pattern Score Loss Average and minimal pattern score loss describe how much of the information contained in a text is lost in the description with respect to the source texts.", "Average pattern score loss expresses the average loss of shared content over all texts in a concept, while minimal pattern score loss represents the minimal loss of content among all texts included in a concept: $ScoreLoss_{min}\langle A, d \rangle := \min_{g \in A} Score_{max}\langle g, d_g \rangle$ and $ScoreLoss_{avg}\langle A, d \rangle := \frac{1}{|d|} \sum_{g \in A} Score_{max}\langle g, d_g \rangle$.", "We propose to use a reduced pattern structure.", "There are two options in our approach.", "The first is the construction of a lower semilattice.", "This is similar to the iceberg concept lattice approach (Stumme et al., 2002).", "The second option is the construction of concepts which are sufficiently different from each other.", "Thus, for arbitrary sets of texts $A_1$ and $A_2$ with corresponding descriptions $d_1$ and $d_2$, and a candidate pattern concept $\langle A_1 \cup A_2, d_1 \cap d_2 \rangle$, the criterion has the following form: $Score_{max}\langle A_1 \cup A_2, d_1 \cap d_2 \rangle \geq \theta$; $Score_{*}\langle A_1 \cup A_2, d_1 \cap d_2 \rangle \geq \mu_1 \min\{Score_{*}\langle A_1, d_1 \rangle, Score_{*}\langle A_2, d_2 \rangle\}$; $Score_{*}\langle A_1 \cup A_2, d_1 \cap d_2 \rangle \leq \mu_2 \max\{Score_{*}\langle A_1, d_1 \rangle, Score_{*}\langle A_2, d_2 \rangle\}$, where $Score_{*}$ stands for one of the score indices introduced above.", "The first constraint provides the condition for the construction of concepts with meaningful content, while the two other constraints ensure that we do not use concepts with similar content.",
"Experiments In this section we consider the proposed clustering method on two examples.", "The first one corresponds to the case when clusters are overlapping and distinguishable, the second one to the case of non-overlapping clusters.", "User Study In some cases it is quite difficult to identify disjoint classes for a text collection.", "To confirm this, we conducted experiments similar to the experiment scheme described in (Zeng et al., 2004).", "We took web snippets obtained by querying the Bing search engine API and asked a group of four assessors to label ground truth for them.", "We performed news queries related to the world's most pressing news (for example, \"fighting Ebola with nanoparticles\", \"turning brown eyes blue\", \"F1 winners\", \"read facial expressions through webcam\", \"2015 ACM awards winners\") to make labeling of the data easier for the assessors.", "In most cases, according to the assessors, it was difficult to determine partitions, while overlapping clusters naturally stood out.", "As a result, in the case of non-overlapping clusters we usually got either a small number of large classes or a fairly large number of classes consisting of 1-2 snippets.", "Moreover, for the same set of snippets we obtained quite different partitions.", "We used the Adjusted Mutual Information score to estimate the pairwise agreement of the non-overlapping clusterings identified by the assessors.", "To demonstrate the failure of the conventional clustering approach, we consider 12 short texts on the news query \"The Ebola epidemic\" (the texts are available at https://github.com/anonymously1/CNS2015/blob/master/NewsSet1).", "The assessors identified quite different non-overlapping clusters.", "The pairwise Adjusted Mutual Information score was in the range of 0.03 to 0.51.", "Next, we compared the partitions to the results of the following clustering methods: k-means clustering based on vectors obtained by truncated SVD (retaining at least 80% of the information), hierarchical agglomerative clustering (HAC, complete and average linkage) of the term-document matrix with Manhattan distance and with cosine similarity, and HAC (both linkages) of the tf-idf matrix with the Euclidean metric.", "In other words, we turned an unsupervised learning problem into a supervised one.", "The accuracy score for the different clustering methods is represented in Figure 1.", "Curves correspond to the different partitions that were identified by people.", "As mentioned earlier, we obtained inconsistent \"true\" labelings.", "[Figure 1: Classification accuracy of clustering results against the \"true\" clusterings (example 1). The four lines are the different news labelings made by people; the y-axis value at a fixed x-value is the classification accuracy of a clustering method under each of the four labelings.]", "Thereby the accuracy of clustering differs from the labeling made by the evaluators.", "This approach does not allow us to determine the best partition, because a partition itself is not natural for the given news set.", "For example, consider the clusters obtained by HAC based on cosine similarity (a trade-off between high accuracy and its low variation): 1st cluster: 1, 2, 7, 9; 2nd cluster: 3, 11, 12; 3rd cluster: 4, 8; 4th cluster: 5, 6; 5th cluster: 10.", "Nearly identical news items 4, 8, 12 and 9, 10 end up in different clusters.", "News items 10 and 11 should simultaneously belong to several clusters (the 1st and 5th, and the 2nd and 3rd, respectively).",
"Examples of pattern structures clustering To construct a hierarchy of overlapping clusters by the proposed method, we use the following constraints: $\theta = 0.25$, $\mu_1 = 0.1$ and $\mu_2 = 0.9$.", "The value of $\theta$ limits the depth of the pattern structure (the maximal number of texts in a cluster); put differently, the higher $\theta$, the closer the general intent of the clusters should be.", "$\mu_1$ and $\mu_2$ determine the degree of dissimilarity of the clusters on different levels of the lattice (the clusters are prepared by adding a new document to the current one).", "We consider the proposed clustering method on two examples.", "The first one was described above; it corresponds to the case of overlapping clusters, while the second one is the case when clusters are non-overlapping and distinguishable.", "The texts of the second example are available online as well.", "Three clusters are naturally identified in these texts.", "The distribution of clusters by volume is shown in Table 1.", "We got 107 and 29 clusters for the first and the second example, respectively.", "Table 1: Number of clusters by volume (texts per cluster).
Texts per cluster | Example 1 | Example 2
1 | 12 | 11
2 | 34 | 15
3 | 33 | 3
4 | 20 | 0
5 | 7 | 0
6 | 1 | 0", "In fact, this method is an agglomerative hierarchical clustering with overlapping clusters.", "The hierarchical structure of clusters allows browsing texts with similar content by layers.", "The cluster structure is represented in Figure 2.", "The top of the structure corresponds to the meaningless cluster that consists of all texts.", "The upper layer consists of clusters with large volume.", "[Figure 2: The cluster structure (example 2); (a) pattern structure without reduction, (b) reduced pattern structure.]", "The node at the top corresponds to the \"dummy\" cluster; high-level nodes correspond to the big clusters with quite general content, while the clusters at lower levels correspond to more specific news.", "Clustering based on pattern structures provides well-interpretable groups.", "The upper level of the hierarchy (the most representative clusters for example 1) consists of the clusters presented in Table 2.", "We also consider smaller clusters and select those for which adding any object (text) dramatically reduces $Score_{max}$: {1, 2, 3, 7, 9} and {5, 6}.", "For the other nested clusters, a significant decrease of $Score_{max}$ occurred exactly with the expansion of single clusters.", "For the second example we obtained three clusters, which corresponds to the \"true\" labeling.", "Our experiments show that pattern structure clustering allows us to identify easily interpretable groups of texts and significantly improves text browsing.", "Conclusion In this paper, we presented an approach that addresses the problem of short text clustering.", "Our study shows the failure of traditional clustering methods, such as k-means and HAC.", "We propose to use parse thickets, which retain the structure of sentences, instead of the term-document matrix, and to build reduced pattern structures to obtain overlapping groups of texts.", "Experimental results demonstrate a considerable improvement of browsing and navigation through a set of texts for users.", "The introduced indices Score and ScoreLoss both improve computational efficiency and tackle the problem of redundant clusters.", "An important direction for future work is to take synonymy into account and to compare the proposed method to a similar approach that uses keywords instead of parse thickets." ] }
{ "paper_header_number": [ "1", "3", "4", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction and related works", "Reduced pattern structure", "Experiments", "User Study", "Examples of pattern structures clustering", "Conclusion" ] }
GEM-SciDuet-train-65#paper-1141#slide-1
Text Representation Models
Term-document matrix
Topic-based vector model (Becker and Kuropka, 2003)
Enhanced topic-based VSM (Polyvyanyy and Kuropka, 2007)
Document Index Graph (Hammouda and Kamel, 2004)
Suffix Tree (Zamir and Etzioni, 1998)
N-Grams (Schenker et al., 2007)
Parse Thickets (Galitsky, 2013)
Term-document matrix
Topic-based vector model (Becker and Kuropka, 2003)
Enhanced topic-based VSM (Polyvyanyy and Kuropka, 2007)
Document Index Graph (Hammouda and Kamel, 2004)
Suffix Tree (Zamir and Etzioni, 1998)
N-Grams (Schenker et al., 2007)
Parse Thickets (Galitsky, 2013)
[]
GEM-SciDuet-train-65#paper-1141#slide-2
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on pattern structures construction on augmented syntactic parse trees. Since an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes into account discourse information to make clustering results independent of how information is distributed between sentences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101 ], "paper_content_text": [ "Introduction and related works The document clustering problem was widely investigated in many applications of text mining.", "One of the most important aspects of the text clustering problem is a structural representation of texts.", "A common approach to the text representation is a vector space model (Salton et al., 1975) , where the collection or corpus of documents is represented as a term-document matrix.", "The main drawback of this model is its inability to reflect the importance of a word with respect to a document and a corpus.", "To tackle this issue the weighted scheme based on tf-idf score has been proposed.", "Also, a term-document matrix built on a large texts collection may be sparse and have a high dimensionality.", "To reduce feature space, PCA, truncated SVD (Latent Semantic Analysis), random projection and other methods have been proposed.", "To handle synonyms as similar terms the general Vector Space Model (Wong et al., 1985; Tsatsaronis and Panagiotopoulou, 2009 ), topic-based vector model (Becker and Kuropka, 2003) and enhanced topic-based vector space model (Polyvyanyy and Kuropka, 2007) were introduced.", "The most common ways to clustering term-document matrix are hierarchical clustering, k-means and also bisecting k-means.", "Graph models are also used for text representation.", "Document Index Graph (DIG) was proposed by Hammouda (2004) .", "Zamir and Etzioni (1998) use suffix tree for representing web snippets, where words are used instead of characters.", "A more sophisticated model based on n-grams was introduced in Schenker et al.", "(2007) .", "In this paper, we consider a particular application of document clustering, it is a representation of web search results that could improve navigation through relevant documents.", "Clustering snippets on salient phrases is described in (Zamir and Etzioni, 1999; Zeng et al., 2004) .", "But the most promising approach for document clustering is a conceptual clustering, because it allows to obtain overlapping clusters and to organize them into a hierarchical structure as well (Cole et al., 2003; Koester, 2006; Messai et al., 2008; Carpineto and Romano, 1996) .", "We present an approach to selecting most significant clusters based on a pattern structure (Ganter and Kuznetsov, 2001 ).", "An approach of extended representation of syntactic trees with discourse relations between them was introduced in (Galitsky et al., 2013) .", "Leveraging discourse information allows to combine news articles not only by keyword similarity but by broader topicality and writing styles as well.", "The paper is organized as follows.", "Section 2 introduces a parse thicket and its simplified representation.", "In section 3 we consider approach to clustering web snippets and discuss efficiency issues.", "The illustrative example is presented in section 4.", "Finally, we conclude the paper and discuss some research perspectives.", "2 Clustering based on pattern structure Parse Thickets Parse thicket (Galitsky et al., 2013) is defined as a set of parse trees for each sentence augmented with a number of 
arcs, reflecting inter-sentence relations.", "In the present work we use parse thickets based on a limited set of relations described in (Galitsky et al., 2013): coreferences (Lee et al., 2012), rhetorical structure relations (Mann and Thompson, 1992) and communicative actions (Searle, 1969).", "Pattern Structure with Parse Thickets simplification To apply parse thickets to text clustering tasks we use pattern structures (Ganter and Kuznetsov, 2001), defined as a triple $(G, (D, \sqcap), \delta)$, where $G$ is a set of objects, $(D, \sqcap)$ is a complete meet-semilattice of descriptions and $\delta: G \to D$ is a mapping taking an object to its description.", "The Galois connection between sets of objects and their descriptions is defined as follows: $A^{\diamond} := \bigsqcap_{g \in A} \delta(g)$ for $A \subseteq G$, and $d^{\diamond} := \{g \in G \mid d \sqsubseteq \delta(g)\}$ for $d \in D$.", "A pair $\langle A, d \rangle$ for which $A^{\diamond} = d$ and $d^{\diamond} = A$ is called a pattern concept.", "In our case, $A$ is a set of news items and $d$ is their shared content.", "We use the AddIntent algorithm (van der Merwe et al., 2004) to construct the pattern structure.", "At each step, it takes the parse thicket (or chunks) of one input web snippet and plugs it into the pattern structure.", "A pattern structure has several drawbacks.", "Firstly, the size of the structure can grow exponentially in the size of the input data.", "Moreover, construction of a pattern structure can be computationally intensive.", "To address the performance issues, we reduce the set of all intersections between the members of our training set (maximal common sub-parse thickets).", "Reduced pattern structure A pattern structure constructed from a collection of short texts usually has a huge number of concepts.", "To reduce the computational costs and improve the interpretability of pattern concepts, we introduce several metrics, described below.", "Average and Maximal Pattern Score The average and maximal pattern score indices are meant to assess how meaningful the common description of the texts in a concept is.", "The more the text fragments differ from each other, the smaller their shared content is.", "Thus, the meaningfulness criterion of a group of texts is $Score_{max}\langle A, d \rangle := \max_{chunk \in d} Score(chunk)$ and $Score_{avg}\langle A, d \rangle := \frac{1}{|d|} \sum_{chunk \in d} Score(chunk)$, where the score function $Score(chunk)$ estimates chunks on the basis of their part-of-speech composition.", "Average and Minimal Pattern Score Loss Average and minimal pattern score loss describe how much of the information contained in a text is lost in the description with respect to the source texts.", "Average pattern score loss expresses the average loss of shared content over all texts in a concept, while minimal pattern score loss represents the minimal loss of content among all texts included in a concept: $ScoreLoss_{min}\langle A, d \rangle := \min_{g \in A} Score_{max}\langle g, d_g \rangle$ and $ScoreLoss_{avg}\langle A, d \rangle := \frac{1}{|d|} \sum_{g \in A} Score_{max}\langle g, d_g \rangle$.", "We propose to use a reduced pattern structure.", "There are two options in our approach.", "The first is the construction of a lower semilattice.", "This is similar to the iceberg concept lattice approach (Stumme et al., 2002).", "The second option is the construction of concepts which are sufficiently different from each other.", "Thus, for arbitrary sets of texts $A_1$ and $A_2$ with corresponding descriptions $d_1$ and $d_2$, and a candidate pattern concept $\langle A_1 \cup A_2, d_1 \cap d_2 \rangle$, the criterion has the following form: $Score_{max}\langle A_1 \cup A_2, d_1 \cap d_2 \rangle \geq \theta$; $Score_{*}\langle A_1 \cup A_2, d_1 \cap d_2 \rangle \geq \mu_1 \min\{Score_{*}\langle A_1, d_1 \rangle, Score_{*}\langle A_2, d_2 \rangle\}$; $Score_{*}\langle A_1 \cup A_2, d_1 \cap d_2 \rangle \leq \mu_2 \max\{Score_{*}\langle A_1, d_1 \rangle, Score_{*}\langle A_2, d_2 \rangle\}$, where $Score_{*}$ stands for one of the score indices introduced above.", "The first constraint provides the condition for the construction of concepts with meaningful content, while the two other constraints ensure that we do not use concepts with similar content.",
"Experiments In this section we evaluate the proposed clustering method on two examples.", "The first one corresponds to the case where the clusters are overlapping and distinguishable; the second one is the case of non-overlapping clusters.",
"User Study In some cases it is quite difficult to identify disjoint classes for a text collection.", "To confirm this, we conducted experiments similar to the experimental scheme described in (Zeng et al., 2004).", "We took web snippets obtained by querying the Bing search engine API and asked a group of four assessors to label the ground truth for them.", "We issued news queries related to the world's most pressing news (for example, \"fighting Ebola with nanoparticles\", \"turning brown eyes blue\", \"F1 winners\", \"read facial expressions through webcam\", \"2015 ACM awards winners\") to make labeling the data easier for the assessors.",
"In most cases, according to the assessors, it was difficult to determine partitions, while overlapping clusters stood out naturally.", "As a result, in the case of non-overlapping clusters we usually obtained either a small number of large classes or a fairly large number of classes consisting of 1-2 snippets.", "Moreover, for the same set of snippets we obtained quite different partitions.", "We used the Adjusted Mutual Information (AMI) score to estimate the pairwise agreement of the non-overlapping clusters identified by the assessors (see the code sketch below).",
"To demonstrate the failure of the conventional clustering approach we consider 12 short texts on the news query \"The Ebola epidemic\".", "The texts are available at link 1.", "The assessors identified quite different non-overlapping clusters.", "The pairwise Adjusted Mutual Information score ranged from 0.03 to 0.51.",
"Next, we compared the assessors' partitions to the results of the following clustering methods: k-means clustering on vectors obtained by truncated SVD (retaining at least 80% of the information); hierarchical agglomerative clustering (HAC), with complete and average linkage, of the term-document matrix with Manhattan distance and with cosine similarity; and hierarchical agglomerative clustering (both linkages) of the tf-idf matrix with the Euclidean metric.", "In other words, we turned an unsupervised learning problem into a supervised one.", "The accuracy scores of the different clustering methods are presented in Figure 1.", "The curves correspond to the different partitions identified by the assessors.", "As mentioned earlier, we obtained inconsistent \"true\" labelings. 1 https://github.com/anonymously1/CNS2015/blob/master/NewsSet1",
"Figure 1: Classification accuracy of clustering results against the \"true\" clustering (example 1); the four lines are the different news labelings made by the assessors, and the y-axis value for a fixed x-value is the classification accuracy of a clustering method for each of the four labelings.",
"Thereby the accuracy of a clustering differs depending on which evaluator's labeling it is compared to.", "This approach does not allow determining the best partition, because a partition itself is not natural for the given news set.", "For example, consider the clusters obtained by HAC based on cosine similarity (a trade-off between high accuracy and its low variation): 1-st cluster: 1, 2, 7, 9; 2-nd cluster: 3, 11, 12; 3-rd cluster: 4, 8; 4-th cluster: 5, 6; 5-th cluster: 10.", "The nearly identical news items 4, 8, 12 and 9, 10 end up in different clusters.", "News items 10 and 11 should simultaneously be in several clusters (the 1-st, 5-th and the 2-nd, 3-rd, respectively).",
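A sketch of the assessor-agreement computation described above, assuming scikit-learn is available; the label vectors are invented stand-ins for the assessors' partitions of the 12 Ebola snippets, one integer cluster label per snippet.

```python
# Pairwise Adjusted Mutual Information between hypothetical assessor partitions.
from itertools import combinations
from sklearn.metrics import adjusted_mutual_info_score

assessor_partitions = [
    [0, 0, 1, 2, 3, 3, 0, 2, 0, 4, 1, 1],  # made-up labeling by assessor 0
    [0, 0, 1, 1, 2, 2, 0, 1, 0, 4, 3, 3],  # made-up labeling by assessor 1
    [0, 1, 1, 2, 3, 3, 0, 2, 1, 4, 2, 2],  # made-up labeling by assessor 2
]
for (i, a), (j, b) in combinations(enumerate(assessor_partitions), 2):
    print(f"assessors {i} vs {j}: AMI = {adjusted_mutual_info_score(a, b):.2f}")
```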
"Examples of pattern structures clustering To construct the hierarchy of overlapping clusters by the proposed method, we use the following constraints: $\theta = 0.25$, $\mu_1 = 0.1$ and $\mu_2 = 0.9$.", "The value of $\theta$ limits the depth of the pattern structure (the maximal number of texts in a cluster); put differently, the higher $\theta$, the closer the general intent of the clusters should be.", "$\mu_1$ and $\mu_2$ determine the degree of dissimilarity of the clusters on different levels of the lattice (the clusters are obtained by adding a new document to the current one).",
"We consider the proposed clustering method on two examples.", "The first one was described above and corresponds to the case of overlapping clusters; the second one is the case where the clusters are non-overlapping and distinguishable.", "The texts of the second example are available at link 2.", "Three clusters are naturally identified in these texts.", "The cluster distribution depending on volume is shown in Table 1.", "We got 107 and 29 clusters for the first and the second example, respectively.",
"Table 1: Number of clusters by cluster size.
Texts in cluster | Clusters (Example 1) | Clusters (Example 2)
1 | 12 | 11
2 | 34 | 15
3 | 33 | 3
4 | 20 | 0
5 | 7 | 0
6 | 1 | 0",
"In fact, this method is an agglomerative hierarchical clustering with overlapping clusters (see the sketch below).", "The hierarchical structure of clusters enables browsing texts with similar content layer by layer.", "The cluster structure is shown in Figure 2.", "The top of the structure corresponds to a meaningless cluster that consists of all texts.", "The upper layer consists of clusters with large volume.",
"Figure 2: The cluster structure (example 2), shown (a) for the pattern structure without reduction and (b) for the reduced pattern structure; the node on the top corresponds to the \"dummy\" cluster, high-level nodes correspond to big clusters with quite general content, while the clusters at lower levels correspond to more specific news.",
"Clustering based on pattern structures provides well-interpretable groups.", "The upper level of the hierarchy (the most representative clusters for example 1) consists of the clusters presented in Table 2.", "We also consider smaller clusters and select those for which adding any object (text) dramatically reduces $Score_{max}$: {1, 2, 3, 7, 9} and {5, 6}.", "For the other nested clusters a significant decrease of $Score_{max}$ occurred exactly upon expansion of singleton clusters.", "For the second example we obtained 3 clusters, which corresponds to the \"true\" labeling.", "Our experiments show that pattern structure clustering identifies easily interpretable groups of texts and significantly improves text browsing.",
"Conclusion In this paper, we presented an approach that addresses the problem of short text clustering.", "Our study shows the failure of traditional clustering methods, such as k-means and HAC.", "We propose to use parse thickets, which retain the structure of sentences, instead of the term-document matrix, and to build reduced pattern structures to obtain overlapping groups of texts.", "Experimental results demonstrate a considerable improvement of browsing and navigation through a set of texts for users.", "The introduced indices Score and ScoreLoss both improve computational efficiency and tackle the problem of redundant clusters.", "An important direction for future work is to take synonymy into account and to compare the proposed method to a similar approach that uses keywords instead of parse thickets." ] }
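The agglomerative construction with overlapping clusters mentioned above can be sketched as a brute-force, level-by-level intersection of descriptions. The toy loop below reuses meet(), score_max() and admissible() from the earlier sketch; the paper itself builds the lattice incrementally with AddIntent, which this sketch does not attempt to reproduce.

```python
def build_reduced_concepts(delta, score_fn, **params):
    """Bottom-up construction of a reduced set of pattern concepts.

    delta: dict news_id -> description (a set of chunks); params are the
    theta/mu1/mu2 thresholds forwarded to admissible().
    """
    # level 0: one concept per text
    concepts = [(frozenset([g]), d) for g, d in delta.items()]
    frontier = list(concepts)
    while frontier:
        new_level = []
        for i, (A1, d1) in enumerate(frontier):
            for A2, d2 in frontier[i + 1:]:
                if not admissible(d1, d2, score_fn, **params):
                    continue
                candidate = (A1 | A2, meet(d1, d2))
                if candidate not in concepts and candidate not in new_level:
                    new_level.append(candidate)
        concepts.extend(new_level)
        frontier = new_level
    return concepts
```

Each returned pair (A, d) is a candidate pattern concept: a group of news ids together with their shared chunks. A single text may appear in several groups, which yields the overlapping hierarchy of the kind shown in Figure 2.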
{ "paper_header_number": [ "1", "3", "4", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction and related works", "Reduced pattern structure", "Experiments", "User Study", "Examples of pattern structures clustering", "Conclusion" ] }
GEM-SciDuet-train-65#paper-1141#slide-2
Parse Thickets basic characteristics
Preserving the linguistic structure of a text paragraph; constructing a parse tree for each sentence within the paragraph; adding inter-sentence relations between parse tree nodes
Preserving the linguistic structure of a text paragraph; constructing a parse tree for each sentence within the paragraph; adding inter-sentence relations between parse tree nodes
[]
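As a rough illustration of the slide above, a parse thicket can be represented as per-sentence parse trees plus labeled arcs between nodes of different trees. The class below is a hypothetical container sketch, not the authors' implementation; the tree type is deliberately left open.

```python
from dataclasses import dataclass, field
from typing import Any, List, Tuple

@dataclass
class ParseThicket:
    """Per-sentence parse trees plus labeled inter-sentence arcs."""
    trees: List[Any]                                   # one parse tree per sentence
    arcs: List[Tuple[Any, Any, str]] = field(default_factory=list)

    def add_arc(self, src_node: Any, dst_node: Any, relation: str) -> None:
        # relation is e.g. "coreference", "rst:elaboration",
        # or a communicative-action label such as "agree"
        self.arcs.append((src_node, dst_node, relation))
```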
GEM-SciDuet-train-65#paper-1141#slide-3
1141
GEM-SciDuet-train-65#paper-1141#slide-3
Parse Thickets types of discourse relations
Rhetoric Structure Theory (RST) (Mann and Thompson, 1992); Communicative Actions (Searle, 1969)
Rhetoric Structure Theory (RST) (Mann and Thompson, 1992); Communicative Actions (Searle, 1969)
[]
GEM-SciDuet-train-65#paper-1141#slide-4
1141
GEM-SciDuet-train-65#paper-1141#slide-4
Coreferences example
[Figure: example of coreference arcs linking mentions across sentences of a news text; only the fragments "NUCLEAR" and "RESEARCH" are legible in the slide image]
[Figure: example of coreference arcs linking mentions across sentences of a news text; only the fragments "NUCLEAR" and "RESEARCH" are legible in the slide image]
[]
GEM-SciDuet-train-65#paper-1141#slide-5
1141
GEM-SciDuet-train-65#paper-1141#slide-5
Relations based on Rhetoric Structure Theory
RST characterizes the structure of a text in terms of relations that hold between its parts. RST describes relations between clauses that might not be syntactically linked. RST helps to discover text patterns such as the nucleus/satellite structure, with relations such as evidence, justify, antithesis, concession, and so on.
RST characterizes the structure of a text in terms of relations that hold between its parts. RST describes relations between clauses that might not be syntactically linked. RST helps to discover text patterns such as the nucleus/satellite structure, with relations such as evidence, justify, antithesis, concession, and so on.
[]
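To illustrate the nucleus/satellite structure named on this slide, an RST relation instance can be encoded as a labeled pair of text spans; the class and the example spans below are invented for illustration only.

```python
from dataclasses import dataclass

RST_RELATIONS = {"evidence", "justify", "antithesis", "concession"}  # from the slide

@dataclass
class RSTRelation:
    relation: str   # one of RST_RELATIONS
    nucleus: str    # the more central span
    satellite: str  # the supporting span

example = RSTRelation(
    relation="evidence",
    nucleus="The epidemic is slowing down",
    satellite="as weekly case counts have fallen for a month",
)
```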
GEM-SciDuet-train-65#paper-1141#slide-6
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on pattern structures construction on augmented syntactic parse trees. Since an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes into account discourse information to make clustering results independent of how information is distributed between sentences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101 ], "paper_content_text": [ "Introduction and related works The document clustering problem was widely investigated in many applications of text mining.", "One of the most important aspects of the text clustering problem is a structural representation of texts.", "A common approach to the text representation is a vector space model (Salton et al., 1975) , where the collection or corpus of documents is represented as a term-document matrix.", "The main drawback of this model is its inability to reflect the importance of a word with respect to a document and a corpus.", "To tackle this issue the weighted scheme based on tf-idf score has been proposed.", "Also, a term-document matrix built on a large texts collection may be sparse and have a high dimensionality.", "To reduce feature space, PCA, truncated SVD (Latent Semantic Analysis), random projection and other methods have been proposed.", "To handle synonyms as similar terms the general Vector Space Model (Wong et al., 1985; Tsatsaronis and Panagiotopoulou, 2009 ), topic-based vector model (Becker and Kuropka, 2003) and enhanced topic-based vector space model (Polyvyanyy and Kuropka, 2007) were introduced.", "The most common ways to clustering term-document matrix are hierarchical clustering, k-means and also bisecting k-means.", "Graph models are also used for text representation.", "Document Index Graph (DIG) was proposed by Hammouda (2004) .", "Zamir and Etzioni (1998) use suffix tree for representing web snippets, where words are used instead of characters.", "A more sophisticated model based on n-grams was introduced in Schenker et al.", "(2007) .", "In this paper, we consider a particular application of document clustering, it is a representation of web search results that could improve navigation through relevant documents.", "Clustering snippets on salient phrases is described in (Zamir and Etzioni, 1999; Zeng et al., 2004) .", "But the most promising approach for document clustering is a conceptual clustering, because it allows to obtain overlapping clusters and to organize them into a hierarchical structure as well (Cole et al., 2003; Koester, 2006; Messai et al., 2008; Carpineto and Romano, 1996) .", "We present an approach to selecting most significant clusters based on a pattern structure (Ganter and Kuznetsov, 2001 ).", "An approach of extended representation of syntactic trees with discourse relations between them was introduced in (Galitsky et al., 2013) .", "Leveraging discourse information allows to combine news articles not only by keyword similarity but by broader topicality and writing styles as well.", "The paper is organized as follows.", "Section 2 introduces a parse thicket and its simplified representation.", "In section 3 we consider approach to clustering web snippets and discuss efficiency issues.", "The illustrative example is presented in section 4.", "Finally, we conclude the paper and discuss some research perspectives.", "2 Clustering based on pattern structure Parse Thickets Parse thicket (Galitsky et al., 2013) is defined as a set of parse trees for each sentence augmented with a number of 
"In the present work we use parse thickets based on a limited set of relations described in (Galitsky et al., 2013): coreferences (Lee et al., 2012), rhetorical structure relations (Mann and Thompson, 1992) and communicative actions (Searle, 1969).", "Pattern Structure with Parse Thickets simplification To apply parse thickets to text clustering tasks we use pattern structures (Ganter and Kuznetsov, 2001), defined as a triple $(G, (D, \sqcap), \delta)$, where $G$ is a set of objects, $(D, \sqcap)$ is a complete meet-semilattice of descriptions and $\delta : G \to D$ is a mapping taking an object to its description.", "The Galois connection between sets of objects and their descriptions is defined as follows: $A^{\diamond} := \sqcap_{g \in A}\, \delta(g)$ for $A \subseteq G$, and $d^{\diamond} := \{g \in G \mid d \sqsubseteq \delta(g)\}$ for $d \in D$. A pair $\langle A, d \rangle$ for which $A^{\diamond} = d$ and $d^{\diamond} = A$ is called a pattern concept.", "In our case, $A$ is a set of news items and $d$ is their shared content.", "We use the AddIntent algorithm (van der Merwe et al., 2004) to construct the pattern structure.", "At each step, it takes the parse thicket (or chunks) of one input web snippet and plugs it into the pattern structure.", "A pattern structure has several drawbacks.", "Firstly, the size of the structure can grow exponentially in the input data.", "Moreover, the construction of a pattern structure can be computationally intensive.", "To address these performance issues, we reduce the set of all intersections between the members of our training set (maximal common sub-parse-thickets).", "Reduced pattern structure A pattern structure constructed from a collection of short texts usually has a huge number of concepts.", "To reduce the computational cost and improve the interpretability of pattern concepts, we introduce several metrics, described below.", "Average and Maximal Pattern Score The average and maximal pattern score indices are meant to assess how meaningful the common description of the texts in a concept is.", "The more the text fragments differ from each other, the smaller their shared content is.", "Thus, the meaningfulness criterion of a group of texts is $Score_{max}\langle A, d \rangle := \max_{chunk \in d} Score(chunk)$ and $Score_{avg}\langle A, d \rangle := \frac{1}{|d|} \sum_{chunk \in d} Score(chunk)$, where the score function $Score(chunk)$ estimates chunks on the basis of their part-of-speech composition.", "Average and Minimal Pattern Score Loss The average and minimal pattern score loss describe how much of the information contained in the texts is lost in the description with respect to the source texts.", "The average pattern score loss expresses the average loss of shared content over all texts in a concept, while the minimal pattern score loss represents the minimal loss of content among all texts included in a concept: $ScoreLoss_{min}\langle A, d \rangle := \min_{g \in A} Score_{max}\langle g, d_g \rangle$ and $ScoreLoss_{avg}\langle A, d \rangle := \frac{1}{|A|} \sum_{g \in A} Score_{max}\langle g, d_g \rangle$.", "We propose to use a reduced pattern structure.", "There are two options in our approach.", "The first one is the construction of a lower semilattice.", "This is similar to the iceberg concept lattice approach (Stumme et al., 2002).", "The second option is the construction of concepts which are sufficiently different from each other.", "Thus, for arbitrary sets of texts $A_1$ and $A_2$ with corresponding descriptions $d_1$ and $d_2$, the criterion for the candidate pattern concept $\langle A_1 \cup A_2, d_1 \sqcap d_2 \rangle$ has the following form: $Score_{max}\langle A_1 \cup A_2, d_1 \sqcap d_2 \rangle \geq \theta$; $Score_{*}\langle A_1 \cup A_2, d_1 \sqcap d_2 \rangle \geq \mu_1 \min\{Score_{*}\langle A_1, d_1 \rangle, Score_{*}\langle A_2, d_2 \rangle\}$; $Score_{*}\langle A_1 \cup A_2, d_1 \sqcap d_2 \rangle \leq \mu_2 \max\{Score_{*}\langle A_1, d_1 \rangle, Score_{*}\langle A_2, d_2 \rangle\}$.", "The first constraint provides the condition for the construction of concepts with meaningful content, while the two other constraints ensure that we do not produce concepts with similar content.",
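To make the machinery above concrete, the following minimal Python sketch implements the two derivation operators, toy scores, and the θ/μ acceptance test. It rests on simplifying assumptions that are not the authors' implementation: a description is approximated by a frozenset of chunk strings, the semilattice meet is plain set intersection, Score(chunk) merely counts informative POS tags, and Score_* is instantiated with Score_avg; all names (delta, is_acceptable, ...) are illustrative.

```python
# Sketch of pattern-structure operators over chunk-set descriptions.
# Assumption: a description is a frozenset of "POS-stem ..." chunk strings
# and the meet of two descriptions is set intersection.
from functools import reduce

INFORMATIVE_POS = {"NN", "NNS", "NNP", "JJ", "VB", "VBZ", "VBG", "VBN"}

def score(chunk):
    """Toy Score(chunk): number of tokens carrying an informative POS tag."""
    return sum(tok.split("-")[0] in INFORMATIVE_POS for tok in chunk.split())

def score_max(d):
    return max((score(c) for c in d), default=0)

def score_avg(d):
    return sum(score(c) for c in d) / len(d) if d else 0.0

def derive_objects(A, delta):
    """A -> common description: meet (here intersection) of delta(g), g in A (A nonempty)."""
    return reduce(frozenset.intersection, (delta[g] for g in A))

def derive_description(d, delta):
    """d -> all objects whose description subsumes d (here: is a superset of d)."""
    return {g for g, dg in delta.items() if d <= dg}

def is_pattern_concept(A, d, delta):
    return derive_objects(A, delta) == d and derive_description(d, delta) == set(A)

def is_acceptable(d1, d2, theta=0.25, mu1=0.1, mu2=0.9):
    """Acceptance test for the candidate <A1 u A2, d1 meet d2>, with Score_* = Score_avg."""
    d = d1 & d2
    s, s1, s2 = score_avg(d), score_avg(d1), score_avg(d2)
    return (score_max(d) >= theta              # shared content is meaningful
            and s >= mu1 * min(s1, s2)         # keeps enough of the parents' content
            and s <= mu2 * max(s1, s2))        # but differs enough from each parent
```

A candidate cluster obtained by merging two existing concepts would then be retained only when is_acceptable returns True, which is what keeps the reduced structure small.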
"Experiments In this section we evaluate the proposed clustering method on two examples.", "The first one corresponds to the case when clusters are overlapping and distinguishable; the second one is the case of non-overlapping clusters.", "User Study In some cases it is quite difficult to identify disjoint classes for a text collection.", "To confirm this, we conducted experiments similar to the experimental scheme described in (Zeng et al., 2004).", "We took web snippets obtained by querying the Bing search engine API and asked a group of four assessors to label the ground truth for them.", "We performed news queries related to the world's most pressing news (for example, \"fighting Ebola with nanoparticles\", \"turning brown eyes blue\", \"F1 winners\", \"read facial expressions through webcam\", \"2015 ACM awards winners\") to make labeling of the data easier for the assessors.", "In most cases, according to the assessors, it was difficult to determine a partition, while overlapping clusters stood out naturally.", "As a result, in the case of non-overlapping clusters we usually obtained either a small number of large classes or a fairly large number of classes consisting of one or two snippets.", "Moreover, for the same set of snippets we obtained quite different partitions.", "We used the Adjusted Mutual Information score to estimate the pairwise agreement between the non-overlapping clusterings identified by the assessors.", "To demonstrate the failure of the conventional clustering approach, we consider 12 short texts on the news query \"The Ebola epidemic\".", "The texts are available at https://github.com/anonymously1/CNS2015/blob/master/NewsSet1 (link 1).", "The assessors identified quite different non-overlapping clusters.", "The pairwise Adjusted Mutual Information score was in the range of 0.03 to 0.51.", "Next, we compared the partitions to the results of the following clustering methods: k-means clustering based on vectors obtained by truncated SVD (retaining at least 80% of the information); hierarchical agglomerative clustering (HAC, complete and average linkage) of the term-document matrix with Manhattan distance and cosine similarity; and hierarchical agglomerative clustering (both linkages) of the tf-idf matrix with the Euclidean metric.", "In other words, we turned an unsupervised learning problem into a supervised one.", "The accuracy scores of the different clustering methods are presented in Figure 1.", "The curves correspond to the different partitions identified by the assessors.", "As mentioned earlier, we obtained inconsistent \"true\" labelings.", "Figure 1: Classification accuracy of the clustering results against the \"true\" clustering (example 1); the four lines are different news labelings made by people, and for a fixed x-value the y-axis gives the classification accuracy of a clustering method under each of the four labelings.", "Thereby the accuracy of clustering differs across the labelings made by the evaluators.", "This approach does not allow one to determine the best partition, because a partition itself is not natural for the given news set.", "For example, consider the clusters obtained by HAC based on cosine similarity (a trade-off between high accuracy and low variance): 1st cluster: 1, 2, 7, 9; 2nd cluster: 3, 11, 12; 3rd cluster: 4, 8; 4th cluster: 5, 6; 5th cluster: 10.", "The nearly identical news items 4, 8, 12 and 9, 10 end up in different clusters.", "News items 10 and 11 should belong simultaneously to several clusters (the 1st, 5th and the 2nd, 3rd, respectively).",
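As a concrete illustration, the baseline pipeline from the user study can be sketched with standard scikit-learn components. The snippets and the "true" labels below are placeholders (not the 12 Ebola texts), and the rank-selection step is one plausible reading of "retaining at least 80% of the information".

```python
# Sketch of the baseline: tf-idf -> truncated SVD -> k-means, scored against
# one assessor's labeling with Adjusted Mutual Information.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_mutual_info_score

snippets = [                                   # placeholder snippets, not the paper's data
    "ebola vaccine trial shows promise",
    "ebola outbreak spreads in west africa",
    "nanoparticles used to fight ebola",
    "f1 season winners announced",
]
assessor_labels = [0, 0, 0, 1]                 # placeholder "true" labeling

X = TfidfVectorizer().fit_transform(snippets)

# Choose the smallest rank that retains at least 80% of the variance.
probe = TruncatedSVD(n_components=min(X.shape) - 1, random_state=0).fit(X)
rank = int(np.searchsorted(np.cumsum(probe.explained_variance_ratio_), 0.80)) + 1
rank = min(rank, min(X.shape) - 1)
X_red = TruncatedSVD(n_components=rank, random_state=0).fit_transform(X)

pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_red)
print("AMI:", adjusted_mutual_info_score(assessor_labels, pred))
```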
"Examples of pattern structures clustering To construct a hierarchy of overlapping clusters with the proposed method, we use the following constraints: $\theta = 0.25$, $\mu_1 = 0.1$ and $\mu_2 = 0.9$.", "The value of $\theta$ limits the depth of the pattern structure (the maximal number of texts in a cluster); put differently, the higher $\theta$, the closer the general intents of the clusters should be.", "$\mu_1$ and $\mu_2$ determine the degree of dissimilarity of the clusters on different levels of the lattice (the clusters are produced by adding a new document to the current one).", "We evaluate the proposed clustering method on two examples.", "The first one was described above and corresponds to the case of overlapping clusters; the second one is the case when clusters are non-overlapping and distinguishable.", "The texts of the second example are available via link 2.", "Three clusters are naturally identified in these texts.", "The cluster distribution by volume is shown in Table 1.", "We obtained 107 and 29 clusters for the first and the second example, respectively.", "Table 1: Number of clusters of a given size (texts per cluster / clusters in Example 1 / clusters in Example 2): 1 / 12 / 11; 2 / 34 / 15; 3 / 33 / 3; 4 / 20 / 0; 5 / 7 / 0; 6 / 1 / 0.", "In fact, this method is an agglomerative hierarchical clustering with overlapping clusters.", "The hierarchical structure of the clusters supports browsing texts with similar content layer by layer.", "The cluster structure is shown in Figure 2.", "The top of the structure corresponds to a meaningless cluster that consists of all texts.", "The upper layer consists of clusters with a large volume.", "Figure 2: The cluster structure (example 2): (a) the pattern structure without reduction; (b) the reduced pattern structure. The node at the top corresponds to the \"dummy\" cluster; high-level nodes correspond to big clusters with quite general content, while clusters at lower levels correspond to more specific news.", "Clustering based on pattern structures provides well-interpretable groups.", "The upper level of the hierarchy (the most representative clusters for example 1) consists of the clusters presented in Table 2.", "We also consider smaller clusters and select those for which adding any object (text) dramatically reduces $Score_{max}$: {1, 2, 3, 7, 9} and {5, 6}.", "For the other nested clusters, a significant decrease of $Score_{max}$ occurred exactly upon the expansion of single clusters.", "For the second example we obtained 3 clusters, which corresponds to the \"true\" labeling.", "Our experiments show that pattern structure clustering identifies easily interpretable groups of texts and significantly improves text browsing.", "Conclusion In this paper, we presented an approach that addresses the problem of short text clustering.", "Our study shows the failure of traditional clustering methods, such as k-means and HAC.", "We propose to use parse thickets, which retain the structure of sentences, instead of the term-document matrix, and to build reduced pattern structures to obtain overlapping groups of texts.", "Experimental results demonstrate a considerable improvement in browsing and navigating the text set for users.", "The introduced indices Score and ScoreLoss both improve computational efficiency and tackle the problem of redundant clusters.", "An important direction for future work is to take synonymy into account and to compare the proposed method to similar approaches that use keywords instead of parse thickets." ] }
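The selection of representative small clusters described above, those for which adding any further text sharply reduces the maximal pattern score, can be phrased as a simple filter over the constructed concepts. The sketch below is an illustrative reading, not the authors' code: the relative drop threshold max_drop, the dictionary layout, and the injected score_max function (e.g., the one from the earlier sketch) are all assumptions.

```python
# Illustrative filter: keep concepts <extent, d> whose extent cannot be
# extended by any remaining text without Score_max dropping sharply.
# `concepts`: frozenset(extent) -> description; `delta`: text -> description.
def stable_concepts(concepts, delta, score_max, max_drop=0.5):
    stable = []
    for extent, d in concepts.items():
        base = score_max(d)
        if base == 0:
            continue                                    # nothing meaningful shared
        drops_sharply = all(
            score_max(d & delta[g]) <= (1.0 - max_drop) * base
            for g in set(delta) - set(extent)           # every possible extension
        )
        if drops_sharply:
            stable.append((extent, d))
    return stable
```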
{ "paper_header_number": [ "1", "3", "4", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction and related works", "Reduced pattern structure", "Experiments", "User Study", "Examples of pattern structures clustering", "Conclusion" ] }
GEM-SciDuet-train-65#paper-1141#slide-6
Parse Thickets: an example
Iran refuses to accept the UN proposal to end the dispute over work on nuclear weapons UN nuclear watchdog passes a resolution condemning Iran for developing a second uranium enrichment site in secret, A recent IAEA report presented diagrams that suggested Iran was secretly working on nuclear weapons, Iran envoy says its nuclear development is for peaceful purpose, and the material evidence against it has been fabricated by the US UN passes a resolution condemning the work of Iran on nuclear weapons, in spite of Iran claims that its nuclear research is for peaceful purpose, Envoy of Iran to IAEA proceeds with the dispute over its nuclear program and develops an enrichment site in secret, Iran confirms that the evidence of its nuclear weapons program is fabricated by the US and proceeds with the second uranium enrichment site
Iran refuses to accept the UN proposal to end the dispute over work on nuclear weapons UN nuclear watchdog passes a resolution condemning Iran for developing a second uranium enrichment site in secret, A recent IAEA report presented diagrams that suggested Iran was secretly working on nuclear weapons, Iran envoy says its nuclear development is for peaceful purpose, and the material evidence against it has been fabricated by the US UN passes a resolution condemning the work of Iran on nuclear weapons, in spite of Iran claims that its nuclear research is for peaceful purpose, Envoy of Iran to IAEA proceeds with the dispute over its nuclear program and develops an enrichment site in secret, Iran confirms that the evidence of its nuclear weapons program is fabricated by the US and proceeds with the second uranium enrichment site
[]
GEM-SciDuet-train-65#paper-1141#slide-7
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents, and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on the construction of pattern structures on augmented syntactic parse trees. Since such an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes discourse information into account to make the clustering results independent of how information is distributed between sentences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101 ], "paper_content_text": [ "Introduction and related works The document clustering problem was widely investigated in many applications of text mining.", "One of the most important aspects of the text clustering problem is a structural representation of texts.", "A common approach to the text representation is a vector space model (Salton et al., 1975) , where the collection or corpus of documents is represented as a term-document matrix.", "The main drawback of this model is its inability to reflect the importance of a word with respect to a document and a corpus.", "To tackle this issue the weighted scheme based on tf-idf score has been proposed.", "Also, a term-document matrix built on a large texts collection may be sparse and have a high dimensionality.", "To reduce feature space, PCA, truncated SVD (Latent Semantic Analysis), random projection and other methods have been proposed.", "To handle synonyms as similar terms the general Vector Space Model (Wong et al., 1985; Tsatsaronis and Panagiotopoulou, 2009 ), topic-based vector model (Becker and Kuropka, 2003) and enhanced topic-based vector space model (Polyvyanyy and Kuropka, 2007) were introduced.", "The most common ways to clustering term-document matrix are hierarchical clustering, k-means and also bisecting k-means.", "Graph models are also used for text representation.", "Document Index Graph (DIG) was proposed by Hammouda (2004) .", "Zamir and Etzioni (1998) use suffix tree for representing web snippets, where words are used instead of characters.", "A more sophisticated model based on n-grams was introduced in Schenker et al.", "(2007) .", "In this paper, we consider a particular application of document clustering, it is a representation of web search results that could improve navigation through relevant documents.", "Clustering snippets on salient phrases is described in (Zamir and Etzioni, 1999; Zeng et al., 2004) .", "But the most promising approach for document clustering is a conceptual clustering, because it allows to obtain overlapping clusters and to organize them into a hierarchical structure as well (Cole et al., 2003; Koester, 2006; Messai et al., 2008; Carpineto and Romano, 1996) .", "We present an approach to selecting most significant clusters based on a pattern structure (Ganter and Kuznetsov, 2001 ).", "An approach of extended representation of syntactic trees with discourse relations between them was introduced in (Galitsky et al., 2013) .", "Leveraging discourse information allows to combine news articles not only by keyword similarity but by broader topicality and writing styles as well.", "The paper is organized as follows.", "Section 2 introduces a parse thicket and its simplified representation.", "In section 3 we consider approach to clustering web snippets and discuss efficiency issues.", "The illustrative example is presented in section 4.", "Finally, we conclude the paper and discuss some research perspectives.", "2 Clustering based on pattern structure Parse Thickets Parse thicket (Galitsky et al., 2013) is defined as a set of parse trees for each sentence augmented with a number of 
arcs, reflecting inter-sentence relations.", "In present work we use parse thickets based on limited set of relations described in (Galitsky et al., 2013) : coreferences (Lee et al., 2012) , Rhetoric structure relations (Mann and Thompson, 1992) and Communicative Actions (Searle, 1969) .", "Pattern Structure with Parse Thickets simplification To apply parse thickets to text clustering tasks we use pattern structures (Ganter and Kuznetsov, 2001 ) that is defined as a triple (G, (D, ) , δ), where G is a set of objects, (D, ) is a complete meet-semilattice of descriptions and δ : G → D is a mapping an object to a description.", "The Galois connection between set of objects and their descriptions is also defined as follows: A := g ∈ A δ (g) d := {g ∈ G|d δ (g)} for A ⊆ G, for d ∈ D A pair A, d for which A = d and d = A is called a pattern concept.", "In our case, A is the set of news, d is their shared content.", "We use AddIntent algorithm (van der Merwe et al., 2004) to construct pattern structure.", "On each step, it takes the parse thicket (or chunks) of a web snippet of the input and plugs it into the pattern structure.", "A pattern structure has several drawbacks.", "Firstly, the size of the structure could grow exponentially on the input data.", "More than that, construction of a pattern structure could be computationally intensive.", "To address the performance issues, we reduce the set of all intersections between the members of our training set (maximal common sub-parse thickets).", "Reduced pattern structure Pattern structure constructed from a collection of short texts usually has a huge number of concepts.", "To reduce the computational costs and improve the interpretability of pattern concepts we introduce several metrics, that are described below.", "Average and Maximal Pattern Score The average and maximal pattern score indices are meant to assess how meaningful the common description of texts in the concept is.", "The higher the difference of text fragments from each other, the lower their shared content is.", "Thus, meaningfulness criterion of the group of texts is Score max A, d := max chunk∈d Score (chunk) Score avg A, d := 1 |d| chunk∈d Score (chunk) The score function Score (chunk) estimates chunks on the basis of parts of speech composition.", "Average and Minimal Pattern Score loss Average and minimal pattern score loss describe how much information contained in text is lost in the description with respect to the source texts.", "Average pattern score loss expresses the average loss of shared content for all texts in a concept, while minimal pattern score loss represents a minimal loss of content among all texts included in a concept.", "ScoreLoss min A, d := min g∈A Score max g, d g ScoreLoss avg A, d := 1 |d| g∈A Score max g, d g We propose to use a reduced pattern structure.", "There are two options in our approach.", "The first one -construction of lower semilattice.", "This is similar to iceberg concept lattice approach (Stumme et al., 2002) .", "The second option -construction of concepts which are different from each other.", "Thus, for arbitrary sets of texts A 1 and A 2 , corresponding descriptions d 1 and d 2 and candidate for a pattern concept A 1 ∪ A 2 , d 1 ∩ d 2 criterion has the following form Score max A 1 ∪ A 2 , d 1 ∩ d 2 ≥ θ Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≥ µ 1 min {Score * A 1 , d 1 , Score * A 2 , d 2 } Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≤ µ 2 max {Score * A 1 , d 1 , Score * A 2 , d 2 } The first constraint provides the condition for the construction of 
concepts with meaningful content, while two other constrains ensure that we do not use concepts with similar content.", "Experiments In this section we consider the proposed clustering method on 2 examples.", "The first one corresponds to the case when clusters are overlapping and distinguishable, the second one is the case of non-overlapping clusters.", "User Study In some cases it is quite difficult to identify disjoint classes for a text collection.", "To confirm this, we conducted experiments similar to the experiment scheme described in (Zeng et al., 2004) .", "We took web snippets obtained by querying the Bing search engine API and asked a group of four assessors to label ground truth for them.", "We performed news queries related to world's most pressing news (for example, \"fighting Ebola with nanoparticles\", \"turning brown eyes blue\", \"F1 winners\", \"read facial expressions through webcam\", \"2015 ACM awards winners\") to make labeling of data easier for the assessors.", "In most cases, according to the assessors, it was difficult to determine partitions, while overlapping clusters naturally stood out.", "As a result, in the case of non-overlapping clusters we usually got a small number of large classes or a sufficiently large number of classes consisting of 1-2 snippets.", "More than that, for the same set of snippets we obtained quite different partitions.", "We used the Adjusted Mutual Information score to estimate pairwise agreement of nonoverlapping clusters, which were identified by the people.", "To demonstrate the failure of the conventional clustering approach we consider 12 short texts on news query \"The Ebola epidemic\".", "Tests are available by link 1 .", "Assessors identify quite different nonoverlapping clusters.", "The pairwise Adjusted Mutual Information score was in the range of 0,03 to 0,51.", "Next, we compared partitions to clustering results of the following clustering methods: k-means clustering based on vectors obtained by truncated SVD (retaining at least 80% of the information), hierarchical agglomerative clustering (HAC), complete and average linkage of the term-document matrix with Manhattan distance and cosine similarity, hierarchical agglomerative clustering (both linkage) of tf-idf matrix with Euclidean metric.", "In other words, we turned an unsupervised learning problem into the supervised one.", "The accuracy score for different clustering methods is represented in Figure 1 .", "Curves correspond to the different partitions that have been identified by people.", "As it was mentioned earlier, we obtain incon-1 https://github.com/anonymously1/ CNS2015/blob/master/NewsSet1 Figure 1 : Classification accuracy of clustering results and \"true\" clustering (example 1).", "Four lines are different news labeling made by people.", "The y-axis values for fixed x-value correspond to classification accuracy of a clustering method for each of the four labeling sistent \"true\" labeling.", "Thereby the accuracy of clustering differs from labeling made by evaluators.", "This approach doesn't allow to determine the best partition, because a partition itself is not natural for the given news set.", "For example, consider clusters obtained by HAC based on cosine similarity (trade-off between high accuracy and its low variation): 1-st cluster: 1,2,7,9; 2-nd cluster: 3,11,12; 3-rd cluster: 4,8; 4-th cluster: 5,6; 5-th cluster: 10.", "Almost the same news 4, 8, 12 and 9, 10 are in the different clusters.", "News 10, 11 should be simultaneously in several 
clusters (1-st, 5-th and 2-nd,3-rd respectively).", "Examples of pattern structures clustering To construct hierarchy of overlapping clusters by the proposed methods, we use the following constraints: θ = 0, 25, µ 1 = 0, 1 and µ 2 = 0, 9.", "The value of θ limits the depth of the pattern structure (the maximal number of texts in a cluster), put differently, the higher θ, the closer should be the general intent of clusters.", "µ 1 and µ 2 determine the degree of dissimilarity of the clusters on different levels of the lattice (the clusters are prepared by adding a new document to the current one).", "We consider the proposed clustering method on 2 examples.", "The first one was described above, it corresponds to the case of overlapping clusters, the second one is the case when clusters are nonoverlapping and distinguishable.", "Texts of the sec-ond example are available by link 2 .", "Three clusters are naturally identified in this texts.", "The cluster distribution depending on volume are shown in Table 1 .", "We got 107 and 29 clusters for the first and the second example respectively.", "Text number Clusters number Example 1 Example 2 1 12 11 2 34 15 3 33 3 4 20 0 5 7 0 6 1 0 In fact, this method is an agglomerative hierarchical clustering with overlapping clusters.", "Hierarchical structure of clusters provides browsing of texts with similar content by layers.", "The cluster structure is represented on Figure 2 .", "The top of the structure corresponds to meaningless clusters that consist of all texts.", "Upper layer consists of clusters with large volume.", "(a) pattern structure without reduction (b) reduced pattern structure Figure 2 : The cluster structure (example 2).", "The node on the top corresponds to the \"dummy\" cluster, high level nodes correspond to the big clusters with quite general content, while the clusters at lower levels correspond to more specific news.", "Clustering based on pattern structures provides well interpretable groups.", "The upper level of hierarchy (the most representative clusters for example 1) consists of the clusters presented in Table 2 We also consider smaller clusters and select those for which adding of any object (text) dramatically reduces the M axScore {1, 2, 3, 7, 9} and {5, 6}.", "For other nested clusters significant decrease of M axScore occurred exactly with the an expansion of single clusters.", "For the second example we obtained 3 clusters that corresponds to \"true\" labeling.", "Our experiments show that pattern structure clustering allows to identify easily interpretable groups of texts and significantly improves text browsing.", "Conclusion In this paper, we presented an approach that addressed the problem of short text clustering.", "Our study shows a failure of the traditional clustering methods, such as k-means and HAC.", "We propose to use parse thickets that retain the structure of sentences instead of the term-document matrix and to build the reduced pattern structures to obtain overlapping groups of texts.", "Experimental results demonstrate considerable improvement of browsing and navigation through texts set for users.", "Introduced indices Score and ScoreLoss both improve computing efficiency and tackle the problem of redundant clusters.", "An important direction for future work is to take into account synonymy and to compare the proposed method to similar approach that use key words instead of parse thickets." ] }
{ "paper_header_number": [ "1", "3", "4", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction and related works", "Reduced pattern structure", "Experiments", "User Study", "Examples of pattern structures clustering", "Conclusion" ] }
GEM-SciDuet-train-65#paper-1141#slide-7
Parse Thickets: discourse relations
Iran confirms that the evidence of its nuclear weapons program is fabricated by the US and proceeds with the second uranium enrichment site Iran envoy says its nuclear development is for peaceful purpose, and the material evidence against it has been fabricated by the US UN nuclear watchdog passes a resolution condemning Iran for developing a second Uranium enrichment site in secret, A recent IAEA report presented diagrams that suggested Iran was secretly working on nuclear weapons, UN passes a resolution condemning the work of Iran on nuclear weapons, in spite of Iran claims that its nuclear research is for peaceful purpose, Envoy of Iran to IAEA proceeds with the dispute over its nuclear program and develops an enrichment site in secret
Iran confirms that the evidence of its nuclear weapons program is fabricated by the US and proceeds with the second uranium enrichment site Iran envoy says its nuclear development is for peaceful purpose, and the material evidence against it has been fabricated by the US UN nuclear watchdog passes a resolution condemning Iran for developing a second Uranium enrichment site in secret, A recent IAEA report presented diagrams that suggested Iran was secretly working on nuclear weapons, UN passes a resolution condemning the work of Iran on nuclear weapons, in spite of Iran claims that its nuclear research is for peaceful purpose, Envoy of Iran to IAEA proceeds with the dispute over its nuclear program and develops an enrichment site in secret
[]
GEM-SciDuet-train-65#paper-1141#slide-8
1141
GEM-SciDuet-train-65#paper-1141#slide-8
Clustering of Parse Thickets: the main idea
Similarity of parse thickets based on sub-trees matching nodes with part of speech and stem of a word
Similarity of parse thickets based on sub-trees matching nodes with part of speech and stem of a word
[]
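The slide above reduces parse-thicket similarity to matching sub-trees whose nodes agree on part of speech and word stem. A minimal version of that node- and chunk-level generalization, producing the "*" wildcard notation used in the chunk lists of a later slide, might look as follows; the "POS-stem" token format and the alignment-by-position simplification are assumptions, since real matching aligns tree nodes rather than flat token positions.

```python
# Minimal node/chunk generalization for "POS-stem" tokens: two nodes
# generalize to their shared POS, keeping the stem only when it coincides
# and replacing it by "*" otherwise.
def generalize_token(a, b):
    pos_a, stem_a = a.split("-", 1)
    pos_b, stem_b = b.split("-", 1)
    if pos_a != pos_b:
        return None                       # different POS: no generalization
    return f"{pos_a}-{stem_a if stem_a == stem_b else '*'}"

def generalize_chunk(chunk_a, chunk_b):
    pairs = zip(chunk_a.split(), chunk_b.split())     # positional alignment
    out = [generalize_token(a, b) for a, b in pairs]
    return " ".join(t for t in out if t) if any(out) else None

print(generalize_chunk("JJ-nuclear NNS-weapons", "JJ-nuclear NNS-research"))
# -> "JJ-nuclear NNS-*"
```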
GEM-SciDuet-train-65#paper-1141#slide-9
1141
GEM-SciDuet-train-65#paper-1141#slide-9
Clustering of paragraphs: generalisation of syntactic trees
[NN-work IN-* IN-on JJ-nuclear NNS-weapons], [DT-the NN-dispute IN-over JJ-nuclear NNS-*], [VBZ-passes DT-a NN-resolution], [VBG-developing DT-* NN-enrichment NN-site IN-in NN-secret], [DT-* JJ-second NN-uranium NN-enrichment NN-site], [VBZ-is IN-for JJ-peaceful NN-purpose], [VBN-* VBN-fabricated IN-by DT-the NNP-us]
[NN-work IN-* IN-on JJ-nuclear NNS-weapons], [DT-the NN-dispute IN-over JJ-nuclear NNS-*], [VBZ-passes DT-a NN-resolution], [VBG-developing DT-* NN-enrichment NN-site IN-in NN-secret], [DT-* JJ-second NN-uranium NN-enrichment NN-site], [VBZ-is IN-for JJ-peaceful NN-purpose], [VBN-* VBN-fabricated IN-by DT-the NNP-us]
[]
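The user study in the record above compares assessor labelings against flat-clustering baselines: k-means on a truncated-SVD projection of the term-document matrix and several HAC variants, with agreement measured by Adjusted Mutual Information. A minimal scikit-learn sketch of that baseline pipeline follows; the mini-corpus, the assessor labels, and the cluster count are hypothetical placeholders, not the paper's data or code.

```python
# Hedged sketch of the baseline pipeline from the user study:
# tf-idf -> truncated SVD keeping >= 80% of the variance -> k-means,
# scored against one assessor's labeling with Adjusted Mutual Information.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import adjusted_mutual_info_score

snippets = [  # hypothetical mini-corpus of web snippets
    "fighting Ebola with nanoparticles",
    "Ebola epidemic spreads in West Africa",
    "F1 winners of the season announced",
    "Hamilton takes another F1 victory",
]
assessor_labels = [0, 0, 1, 1]  # one assessor's hypothetical ground truth

X = TfidfVectorizer().fit_transform(snippets)

# Probe the spectrum, then keep the smallest number of components
# whose cumulative explained variance reaches 80%.
probe = TruncatedSVD(n_components=min(X.shape) - 1, random_state=0).fit(X)
cum = np.cumsum(probe.explained_variance_ratio_)
k = min(int(np.searchsorted(cum, 0.80)) + 1, probe.n_components)
X_red = TruncatedSVD(n_components=k, random_state=0).fit_transform(X)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_red)
print(adjusted_mutual_info_score(assessor_labels, labels))
```

Looping the same scoring over HAC variants (for instance sklearn's AgglomerativeClustering with different linkages and metrics) and over each of the four assessor labelings would reproduce the kind of per-labeling accuracy comparison shown in Figure 1.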
GEM-SciDuet-train-65#paper-1141#slide-10
1141
GEM-SciDuet-train-65#paper-1141#slide-10
Clustering of paragraphs: generalisation of parse trees
[NN-Iran VBG-developing DT-* NN-enrichment NN-site IN-in NN-secret], [NN-generalization-<UN/nuclear watchdog> * VB-pass NN-resolution], [NN-generalization-<Iran/envoy of Iran> Communicative action DT-the NN-dispute IN-over JJ-nuclear NNS-*], [Communicative action NN-work IN-of NN-Iran IN-on JJ-nuclear NNS-weapons], [NN-generalization-<Iran/envoy to UN> Communicative action NN-Iran NN-nuclear NN-* VBZ-is IN-for JJ-peaceful NN-purpose], [Communicative action NN-generalization-<work/develop> IN-of NN-Iran IN-on JJ-nuclear NNS-weapons], [NN-evidence IN-against NN-Iran NN-nuclear VBN-fabricated IN-by DT-the NNP-US], [NN-Iran JJ-nuclear NN-weapon NN-* RST-evidence VBN-fabricated IN-by DT-the NNP-US], condemn-proceed [enrichment site] <leads to> suggest-condemn [work Iran nuclear weapon]
[NN-Iran VBG-developing DT-* NN-enrichment NN-site IN-in NN-secret], [NN-generalization-<UN/nuclear watchdog> * VB-pass NN-resolution], [NN-generalization-<Iran/envoy of Iran> Communicative action DT-the NN-dispute IN-over JJ-nuclear NNS-*], [Communicative action NN-work IN-of NN-Iran IN-on JJ-nuclear NNS-weapons], [NN-generalization-<Iran/envoy to UN> Communicative action NN-Iran NN-nuclear NN-* VBZ-is IN-for JJ-peaceful NN-purpose], [Communicative action NN-generalization-<work/develop> IN-of NN-Iran IN-on JJ-nuclear NNS-weapons], [NN-evidence IN-against NN-Iran NN-nuclear VBN-fabricated IN-by DT-the NNP-US], [NN-Iran JJ-nuclear NN-weapon NN-* RST-evidence VBN-fabricated IN-by DT-the NNP-US], condemn-proceed [enrichment site] <leads to> suggest-condemn [work Iran nuclear weapon]
[]
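The reduced pattern structure described in the paper keeps a candidate concept ⟨A1 ∪ A2, d1 ⊓ d2⟩ only if its maximal chunk score reaches θ and its score lies between µ1 times the smaller and µ2 times the larger of the parents' scores. The sketch below restates that admissibility test; the set-of-chunks description model and the POS-based chunk scorer are simplifying assumptions, since the paper only says that Score(chunk) is based on part-of-speech composition.

```python
# Hedged sketch of the admissibility test for a merged pattern concept.
# A description is modeled here as a frozenset of POS-tagged chunks;
# the scorer that counts content-word tags is an assumption.
CONTENT_TAGS = {"NN", "NNS", "NNP", "VB", "VBZ", "VBG", "VBN", "JJ"}

def chunk_score(chunk):
    """Score a chunk such as 'JJ-nuclear NNS-weapons' by its POS composition."""
    return float(sum(tok.split("-")[0] in CONTENT_TAGS for tok in chunk.split()))

def score_max(d):
    return max((chunk_score(c) for c in d), default=0.0)

def score_avg(d):
    return sum(chunk_score(c) for c in d) / len(d) if d else 0.0

def admissible(d1, d2, theta=0.25, mu1=0.1, mu2=0.9, score=score_avg):
    """The three constraints on the merged description d = d1 & d2."""
    d = d1 & d2
    if score_max(d) < theta:                 # Score_max(d) >= theta
        return False
    s, s1, s2 = score(d), score(d1), score(d2)
    # mu1 * min(Score*(d1), Score*(d2)) <= Score*(d) <= mu2 * max(...)
    return mu1 * min(s1, s2) <= s <= mu2 * max(s1, s2)
```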
GEM-SciDuet-train-65#paper-1141#slide-11
1141
GEM-SciDuet-train-65#paper-1141#slide-11
Clustering of Parse Thickets: what do we want
Adequately represent groups of texts with overlapping content. Get text clusters with different refinement. Goal: a (multi-level) hierarchical structure. Solution: construction of pattern structures on parse thickets.
Adequately represent groups of texts with overlapping content. Get text clusters with different refinement. Goal: a (multi-level) hierarchical structure. Solution: construction of pattern structures on parse thickets.
[]
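Since the paper characterizes its method as agglomerative hierarchical clustering with overlapping clusters, the construction can be pictured as a naive levelwise loop: start from singleton concepts, intersect descriptions pairwise, and keep only admissible candidates. The paper itself uses the AddIntent algorithm; this brute-force sketch, reusing the hypothetical `admissible` test above, is only an illustration.

```python
# Brute-force levelwise construction of a (reduced) pattern structure.
from itertools import combinations

def build_reduced_structure(delta, admissible):
    """delta maps a text id to its description (a frozenset of chunks).
    Returns {extent (frozenset of text ids): shared description}."""
    concepts = {frozenset([g]): d for g, d in delta.items()}
    while True:
        new = {}
        for (a1, d1), (a2, d2) in combinations(list(concepts.items()), 2):
            a = a1 | a2
            if a in concepts or a in new or not admissible(d1, d2):
                continue
            new[a] = d1 & d2               # shared content of the merged group
        if not new:                        # fixpoint: nothing admissible left
            break
        concepts.update(new)
    return concepts
```

Sorting the resulting extents by size yields the layered, overlapping browsing structure reported in the experiments (for instance, 107 and 29 clusters for the two examples).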
GEM-SciDuet-train-65#paper-1141#slide-12
1141
GEM-SciDuet-train-65#paper-1141#slide-12
Clustering of Parse Thickets: the mathematical foundation
A triple (G, (D, ⊓), δ), where G is a set of objects, (D, ⊓) is a complete meet-semilattice of descriptions and δ: G → D is a mapping from an object to a description. A pair (A, d) for which A□ = d and d□ = A, where (·)□ are the Galois connections, defined as follows: A□ := ⊓_{g∈A} δ(g) for A ⊆ G; d□ := {g ∈ G | d ⊑ δ(g)} for d ∈ D
A triple (G, (D, ⊓), δ), where G is a set of objects, (D, ⊓) is a complete meet-semilattice of descriptions and δ: G → D is a mapping from an object to a description. A pair (A, d) for which A□ = d and d□ = A, where (·)□ are the Galois connections, defined as follows: A□ := ⊓_{g∈A} δ(g) for A ⊆ G; d□ := {g ∈ G | d ⊑ δ(g)} for d ∈ D
[]
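With the set-of-chunks description model used in these sketches, the two derivation operators from the slide above reduce to set intersection and subset tests. A toy illustration; the mini-collection and the names are hypothetical:

```python
# Toy version of the two Galois/derivation operators from the slide:
# with set descriptions, the meet ⊓ is intersection and ⊑ is the subset test.
def box_intent(A, delta):
    """A□ := ⊓_{g in A} δ(g): the shared description of the objects in A."""
    return frozenset.intersection(*(delta[g] for g in A))

def box_extent(d, delta):
    """d□ := {g in G | d ⊑ δ(g)}: all objects whose description subsumes d."""
    return frozenset(g for g in delta if d <= delta[g])

delta = {  # hypothetical mini-collection: text id -> description
    "t1": frozenset({"JJ-nuclear NNS-weapons", "NN-enrichment NN-site"}),
    "t2": frozenset({"JJ-nuclear NNS-weapons", "VB-pass NN-resolution"}),
    "t3": frozenset({"VB-pass NN-resolution"}),
}
A = frozenset({"t1", "t2"})
d = box_intent(A, delta)            # frozenset({'JJ-nuclear NNS-weapons'})
assert box_extent(d, delta) == A    # (A, d) closes on itself: a pattern concept
```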
GEM-SciDuet-train-65#paper-1141#slide-13
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on pattern structures construction on augmented syntactic parse trees. Since an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes into account discourse information to make clustering results independent of how information is distributed between sentences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101 ], "paper_content_text": [ "Introduction and related works The document clustering problem was widely investigated in many applications of text mining.", "One of the most important aspects of the text clustering problem is a structural representation of texts.", "A common approach to the text representation is a vector space model (Salton et al., 1975) , where the collection or corpus of documents is represented as a term-document matrix.", "The main drawback of this model is its inability to reflect the importance of a word with respect to a document and a corpus.", "To tackle this issue the weighted scheme based on tf-idf score has been proposed.", "Also, a term-document matrix built on a large texts collection may be sparse and have a high dimensionality.", "To reduce feature space, PCA, truncated SVD (Latent Semantic Analysis), random projection and other methods have been proposed.", "To handle synonyms as similar terms the general Vector Space Model (Wong et al., 1985; Tsatsaronis and Panagiotopoulou, 2009 ), topic-based vector model (Becker and Kuropka, 2003) and enhanced topic-based vector space model (Polyvyanyy and Kuropka, 2007) were introduced.", "The most common ways to clustering term-document matrix are hierarchical clustering, k-means and also bisecting k-means.", "Graph models are also used for text representation.", "Document Index Graph (DIG) was proposed by Hammouda (2004) .", "Zamir and Etzioni (1998) use suffix tree for representing web snippets, where words are used instead of characters.", "A more sophisticated model based on n-grams was introduced in Schenker et al.", "(2007) .", "In this paper, we consider a particular application of document clustering, it is a representation of web search results that could improve navigation through relevant documents.", "Clustering snippets on salient phrases is described in (Zamir and Etzioni, 1999; Zeng et al., 2004) .", "But the most promising approach for document clustering is a conceptual clustering, because it allows to obtain overlapping clusters and to organize them into a hierarchical structure as well (Cole et al., 2003; Koester, 2006; Messai et al., 2008; Carpineto and Romano, 1996) .", "We present an approach to selecting most significant clusters based on a pattern structure (Ganter and Kuznetsov, 2001 ).", "An approach of extended representation of syntactic trees with discourse relations between them was introduced in (Galitsky et al., 2013) .", "Leveraging discourse information allows to combine news articles not only by keyword similarity but by broader topicality and writing styles as well.", "The paper is organized as follows.", "Section 2 introduces a parse thicket and its simplified representation.", "In section 3 we consider approach to clustering web snippets and discuss efficiency issues.", "The illustrative example is presented in section 4.", "Finally, we conclude the paper and discuss some research perspectives.", "2 Clustering based on pattern structure Parse Thickets Parse thicket (Galitsky et al., 2013) is defined as a set of parse trees for each sentence augmented with a number of 
arcs, reflecting inter-sentence relations.", "In present work we use parse thickets based on limited set of relations described in (Galitsky et al., 2013) : coreferences (Lee et al., 2012) , Rhetoric structure relations (Mann and Thompson, 1992) and Communicative Actions (Searle, 1969) .", "Pattern Structure with Parse Thickets simplification To apply parse thickets to text clustering tasks we use pattern structures (Ganter and Kuznetsov, 2001 ) that is defined as a triple (G, (D, ) , δ), where G is a set of objects, (D, ) is a complete meet-semilattice of descriptions and δ : G → D is a mapping an object to a description.", "The Galois connection between set of objects and their descriptions is also defined as follows: A := g ∈ A δ (g) d := {g ∈ G|d δ (g)} for A ⊆ G, for d ∈ D A pair A, d for which A = d and d = A is called a pattern concept.", "In our case, A is the set of news, d is their shared content.", "We use AddIntent algorithm (van der Merwe et al., 2004) to construct pattern structure.", "On each step, it takes the parse thicket (or chunks) of a web snippet of the input and plugs it into the pattern structure.", "A pattern structure has several drawbacks.", "Firstly, the size of the structure could grow exponentially on the input data.", "More than that, construction of a pattern structure could be computationally intensive.", "To address the performance issues, we reduce the set of all intersections between the members of our training set (maximal common sub-parse thickets).", "Reduced pattern structure Pattern structure constructed from a collection of short texts usually has a huge number of concepts.", "To reduce the computational costs and improve the interpretability of pattern concepts we introduce several metrics, that are described below.", "Average and Maximal Pattern Score The average and maximal pattern score indices are meant to assess how meaningful the common description of texts in the concept is.", "The higher the difference of text fragments from each other, the lower their shared content is.", "Thus, meaningfulness criterion of the group of texts is Score max A, d := max chunk∈d Score (chunk) Score avg A, d := 1 |d| chunk∈d Score (chunk) The score function Score (chunk) estimates chunks on the basis of parts of speech composition.", "Average and Minimal Pattern Score loss Average and minimal pattern score loss describe how much information contained in text is lost in the description with respect to the source texts.", "Average pattern score loss expresses the average loss of shared content for all texts in a concept, while minimal pattern score loss represents a minimal loss of content among all texts included in a concept.", "ScoreLoss min A, d := min g∈A Score max g, d g ScoreLoss avg A, d := 1 |d| g∈A Score max g, d g We propose to use a reduced pattern structure.", "There are two options in our approach.", "The first one -construction of lower semilattice.", "This is similar to iceberg concept lattice approach (Stumme et al., 2002) .", "The second option -construction of concepts which are different from each other.", "Thus, for arbitrary sets of texts A 1 and A 2 , corresponding descriptions d 1 and d 2 and candidate for a pattern concept A 1 ∪ A 2 , d 1 ∩ d 2 criterion has the following form Score max A 1 ∪ A 2 , d 1 ∩ d 2 ≥ θ Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≥ µ 1 min {Score * A 1 , d 1 , Score * A 2 , d 2 } Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≤ µ 2 max {Score * A 1 , d 1 , Score * A 2 , d 2 } The first constraint provides the condition for the construction of 
concepts with meaningful content, while two other constrains ensure that we do not use concepts with similar content.", "Experiments In this section we consider the proposed clustering method on 2 examples.", "The first one corresponds to the case when clusters are overlapping and distinguishable, the second one is the case of non-overlapping clusters.", "User Study In some cases it is quite difficult to identify disjoint classes for a text collection.", "To confirm this, we conducted experiments similar to the experiment scheme described in (Zeng et al., 2004) .", "We took web snippets obtained by querying the Bing search engine API and asked a group of four assessors to label ground truth for them.", "We performed news queries related to world's most pressing news (for example, \"fighting Ebola with nanoparticles\", \"turning brown eyes blue\", \"F1 winners\", \"read facial expressions through webcam\", \"2015 ACM awards winners\") to make labeling of data easier for the assessors.", "In most cases, according to the assessors, it was difficult to determine partitions, while overlapping clusters naturally stood out.", "As a result, in the case of non-overlapping clusters we usually got a small number of large classes or a sufficiently large number of classes consisting of 1-2 snippets.", "More than that, for the same set of snippets we obtained quite different partitions.", "We used the Adjusted Mutual Information score to estimate pairwise agreement of nonoverlapping clusters, which were identified by the people.", "To demonstrate the failure of the conventional clustering approach we consider 12 short texts on news query \"The Ebola epidemic\".", "Tests are available by link 1 .", "Assessors identify quite different nonoverlapping clusters.", "The pairwise Adjusted Mutual Information score was in the range of 0,03 to 0,51.", "Next, we compared partitions to clustering results of the following clustering methods: k-means clustering based on vectors obtained by truncated SVD (retaining at least 80% of the information), hierarchical agglomerative clustering (HAC), complete and average linkage of the term-document matrix with Manhattan distance and cosine similarity, hierarchical agglomerative clustering (both linkage) of tf-idf matrix with Euclidean metric.", "In other words, we turned an unsupervised learning problem into the supervised one.", "The accuracy score for different clustering methods is represented in Figure 1 .", "Curves correspond to the different partitions that have been identified by people.", "As it was mentioned earlier, we obtain incon-1 https://github.com/anonymously1/ CNS2015/blob/master/NewsSet1 Figure 1 : Classification accuracy of clustering results and \"true\" clustering (example 1).", "Four lines are different news labeling made by people.", "The y-axis values for fixed x-value correspond to classification accuracy of a clustering method for each of the four labeling sistent \"true\" labeling.", "Thereby the accuracy of clustering differs from labeling made by evaluators.", "This approach doesn't allow to determine the best partition, because a partition itself is not natural for the given news set.", "For example, consider clusters obtained by HAC based on cosine similarity (trade-off between high accuracy and its low variation): 1-st cluster: 1,2,7,9; 2-nd cluster: 3,11,12; 3-rd cluster: 4,8; 4-th cluster: 5,6; 5-th cluster: 10.", "Almost the same news 4, 8, 12 and 9, 10 are in the different clusters.", "News 10, 11 should be simultaneously in several 
"Examples of pattern structures clustering To construct a hierarchy of overlapping clusters by the proposed method, we use the following constraints: $\theta = 0.25$, $\mu_1 = 0.1$ and $\mu_2 = 0.9$.", "The value of $\theta$ limits the depth of the pattern structure (the maximal number of texts in a cluster); put differently, the higher $\theta$ is, the closer the general intents of the clusters should be.", "$\mu_1$ and $\mu_2$ determine the degree of dissimilarity of the clusters on different levels of the lattice (the clusters are produced by adding a new document to the current one).", "We consider the proposed clustering method on two examples.", "The first one was described above; it corresponds to the case of overlapping clusters, while the second one is the case when clusters are non-overlapping and distinguishable.", "The texts of the second example are available via link 2.", "Three clusters are naturally identified in these texts.", "The cluster distribution by volume is shown in Table 1.", "We got 107 and 29 clusters for the first and the second example, respectively.", "[Table 1: Number of clusters by cluster size (texts per cluster). Size 1: 12 clusters in example 1 and 11 in example 2; size 2: 34 and 15; size 3: 33 and 3; size 4: 20 and 0; size 5: 7 and 0; size 6: 1 and 0.]", "In fact, this method is an agglomerative hierarchical clustering with overlapping clusters.", "The hierarchical structure of the clusters supports browsing texts with similar content by layers.", "The cluster structure is represented in Figure 2.", "The top of the structure corresponds to a meaningless cluster that consists of all texts.", "The upper layer consists of clusters with large volume.", "[Figure 2: The cluster structure (example 2): (a) the pattern structure without reduction; (b) the reduced pattern structure.]", "The node on the top corresponds to the \"dummy\" cluster; high-level nodes correspond to the big clusters with quite general content, while the clusters at lower levels correspond to more specific news.", "Clustering based on pattern structures provides well-interpretable groups.", "The upper level of the hierarchy (the most representative clusters for example 1) consists of the clusters presented in Table 2.", "We also consider smaller clusters and select those for which adding any object (text) dramatically reduces $Score_{max}$: {1, 2, 3, 7, 9} and {5, 6}.", "For the other nested clusters, a significant decrease of $Score_{max}$ occurred exactly with the expansion of single clusters.", "For the second example we obtained 3 clusters, which correspond to the \"true\" labeling.", "Our experiments show that pattern structure clustering allows one to identify easily interpretable groups of texts and significantly improves text browsing.", "Conclusion In this paper, we presented an approach that addresses the problem of short text clustering.", "Our study shows a failure of the traditional clustering methods, such as k-means and HAC.", "We propose to use parse thickets, which retain the structure of sentences, instead of the term-document matrix, and to build reduced pattern structures to obtain overlapping groups of texts.", "Experimental results demonstrate a considerable improvement of browsing and navigation through a set of texts for users.", "The introduced indices Score and ScoreLoss both improve computational efficiency and tackle the problem of redundant clusters.", "An important direction for future work is to take synonymy into account and to compare the proposed method to a similar approach that uses keywords instead of parse thickets." ] }
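Putting the pieces together, the overlapping agglomerative construction can be sketched as a naive layer-by-layer loop that reuses the illustrative Concept and accept_merge from the earlier sketch; the paper itself builds the structure with AddIntent, so this is a simplification, with each text represented by its chunk set.

def cluster_texts(chunk_sets, score, **criterion):
    # layer 0: one singleton concept per text, its intent being the text's own chunks
    layer = [Concept(frozenset([i]), frozenset(d)) for i, d in enumerate(chunk_sets)]
    lattice = list(layer)
    for _ in range(len(chunk_sets)):  # extent sizes are bounded by the corpus size
        merged, seen = [], set()
        for i in range(len(layer)):
            for j in range(i + 1, len(layer)):
                c = accept_merge(layer[i], layer[j], score, **criterion)
                if c is not None and c.extent not in seen:
                    seen.add(c.extent)
                    merged.append(c)
        if not merged:
            break
        lattice.extend(merged)
        layer = merged
    return lattice  # overlapping clusters, ordered roughly bottom-up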
{ "paper_header_number": [ "1", "3", "4", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction and related works", "Reduced pattern structure", "Experiments", "User Study", "Examples of pattern structures clustering", "Conclusion" ] }
GEM-SciDuet-train-65#paper-1141#slide-13
Pattern Structures on Parse Thickets
an original paragraph of text → an object g; a parse thicket constructed from the paragraph, a set of its maximal generalized sub-trees → a description d; a pattern concept ⟨A, d⟩ → a cluster. Drawback: the exponential growth of the number of clusters when increasing the number of texts (objects)
an original paragraph of text → an object g; a parse thicket constructed from the paragraph, a set of its maximal generalized sub-trees → a description d; a pattern concept ⟨A, d⟩ → a cluster. Drawback: the exponential growth of the number of clusters when increasing the number of texts (objects)
[]
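The object/description mapping on this slide can be illustrated with a hypothetical sketch of the two derivation operators, where plain set intersection stands in for the semilattice meet on sub-parse thickets (the real meet computes maximal common sub-parse thickets).

def extent_to_intent(A, delta):
    # A -> meet of delta(g) over the objects g in A
    descs = [frozenset(delta[g]) for g in A]
    return frozenset.intersection(*descs) if descs else frozenset()

def intent_to_extent(d, delta):
    # d -> every object whose description subsumes d
    return frozenset(g for g, dg in delta.items() if d <= frozenset(dg))

def is_pattern_concept(A, d, delta):
    # <A, d> is a pattern concept iff the two operators map A and d to each other
    return extent_to_intent(A, delta) == d and intent_to_extent(d, delta) == frozenset(A)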
GEM-SciDuet-train-65#paper-1141#slide-14
1141
News clustering approach based on discourse text structure
GEM-SciDuet-train-65#paper-1141#slide-14
Reduced pattern structures: meaningfulness estimates of a pattern concept
Maximum score among all sub-trees in the cluster: $Score_{max}\langle A, d \rangle := \max_{chunk \in d} Score(chunk)$. Average score of the sub-trees in the cluster: $Score_{avg}\langle A, d \rangle := \frac{1}{|d|} \sum_{chunk \in d} Score(chunk)$, where $Score(chunk) = \sum_{node \in chunk} w_{node}$
Maximum score among all sub-trees in the cluster: $Score_{max}\langle A, d \rangle := \max_{chunk \in d} Score(chunk)$. Average score of the sub-trees in the cluster: $Score_{avg}\langle A, d \rangle := \frac{1}{|d|} \sum_{chunk \in d} Score(chunk)$, where $Score(chunk) = \sum_{node \in chunk} w_{node}$
[]
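The paper does not list the part-of-speech weights w_node, so the sketch below uses placeholder values purely to illustrate the slide's formula Score(chunk) as a sum of node weights.

POS_WEIGHTS = {"NN": 1.0, "NNS": 1.0, "NNP": 1.0, "VB": 0.8, "JJ": 0.5, "DT": 0.1}

def score_chunk(pos_tags):
    # pos_tags: the POS tags of the nodes of one generalized sub-tree
    return sum(POS_WEIGHTS.get(tag, 0.2) for tag in pos_tags)

# e.g. score_chunk(["NNP", "VB", "NN"]) == 2.8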
GEM-SciDuet-train-65#paper-1141#slide-15
1141
News clustering approach based on discourse text structure
GEM-SciDuet-train-65#paper-1141#slide-15
Reduced pattern structures: loss estimates of a cluster with respect to the original texts
Estimates the minimal loss of meaning of the cluster content w.r.t. the original texts in the cluster: $ScoreLoss_{min}\langle A, d \rangle := \min_{g \in A} Score_{max}\langle g, d_g \rangle$. Estimates the loss of meaning of the cluster content on average: $ScoreLoss_{avg}\langle A, d \rangle := \frac{1}{|d|} \sum_{g \in A} Score_{max}\langle g, d_g \rangle$
Estimates the minimal loss of meaning of the cluster content w.r.t. the original texts in the cluster: $ScoreLoss_{min}\langle A, d \rangle := \min_{g \in A} Score_{max}\langle g, d_g \rangle$. Estimates the loss of meaning of the cluster content on average: $ScoreLoss_{avg}\langle A, d \rangle := \frac{1}{|d|} \sum_{g \in A} Score_{max}\langle g, d_g \rangle$
[]
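A sketch of the two loss estimates, implementing the formulas exactly as printed; score_chunk is the placeholder scorer from the previous sketch, and own_descriptions collects d_g (a text's own chunk set) for each text g in A.

def score_loss_min(own_descriptions):
    # ScoreLoss_min<A, d> = min over g in A of Score_max<g, d_g>
    return min(max(map(score_chunk, d_g), default=0.0) for d_g in own_descriptions)

def score_loss_avg(own_descriptions, shared_intent):
    # ScoreLoss_avg<A, d>: summed over the texts in A, normalised by |d| as printed
    total = sum(max(map(score_chunk, d_g), default=0.0) for d_g in own_descriptions)
    return total / max(len(shared_intent), 1)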
GEM-SciDuet-train-65#paper-1141#slide-16
1141
News clustering approach based on discourse text structure
GEM-SciDuet-train-65#paper-1141#slide-16
Reduced pattern structures generalization
Controlling the loss of meaning w.r.t. the original texts: $ScoreLoss\langle A_1 \cup A_2, d_1 \cap d_2 \rangle$
Controlling the loss of meaning w.r.t. the original texts: $ScoreLoss\langle A_1 \cup A_2, d_1 \cap d_2 \rangle$
[]
GEM-SciDuet-train-65#paper-1141#slide-17
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on pattern structures construction on augmented syntactic parse trees. Since an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes into account discourse information to make clustering results independent of how information is distributed between sentences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101 ], "paper_content_text": [ "Introduction and related works The document clustering problem was widely investigated in many applications of text mining.", "One of the most important aspects of the text clustering problem is a structural representation of texts.", "A common approach to the text representation is a vector space model (Salton et al., 1975) , where the collection or corpus of documents is represented as a term-document matrix.", "The main drawback of this model is its inability to reflect the importance of a word with respect to a document and a corpus.", "To tackle this issue the weighted scheme based on tf-idf score has been proposed.", "Also, a term-document matrix built on a large texts collection may be sparse and have a high dimensionality.", "To reduce feature space, PCA, truncated SVD (Latent Semantic Analysis), random projection and other methods have been proposed.", "To handle synonyms as similar terms the general Vector Space Model (Wong et al., 1985; Tsatsaronis and Panagiotopoulou, 2009 ), topic-based vector model (Becker and Kuropka, 2003) and enhanced topic-based vector space model (Polyvyanyy and Kuropka, 2007) were introduced.", "The most common ways to clustering term-document matrix are hierarchical clustering, k-means and also bisecting k-means.", "Graph models are also used for text representation.", "Document Index Graph (DIG) was proposed by Hammouda (2004) .", "Zamir and Etzioni (1998) use suffix tree for representing web snippets, where words are used instead of characters.", "A more sophisticated model based on n-grams was introduced in Schenker et al.", "(2007) .", "In this paper, we consider a particular application of document clustering, it is a representation of web search results that could improve navigation through relevant documents.", "Clustering snippets on salient phrases is described in (Zamir and Etzioni, 1999; Zeng et al., 2004) .", "But the most promising approach for document clustering is a conceptual clustering, because it allows to obtain overlapping clusters and to organize them into a hierarchical structure as well (Cole et al., 2003; Koester, 2006; Messai et al., 2008; Carpineto and Romano, 1996) .", "We present an approach to selecting most significant clusters based on a pattern structure (Ganter and Kuznetsov, 2001 ).", "An approach of extended representation of syntactic trees with discourse relations between them was introduced in (Galitsky et al., 2013) .", "Leveraging discourse information allows to combine news articles not only by keyword similarity but by broader topicality and writing styles as well.", "The paper is organized as follows.", "Section 2 introduces a parse thicket and its simplified representation.", "In section 3 we consider approach to clustering web snippets and discuss efficiency issues.", "The illustrative example is presented in section 4.", "Finally, we conclude the paper and discuss some research perspectives.", "2 Clustering based on pattern structure Parse Thickets Parse thicket (Galitsky et al., 2013) is defined as a set of parse trees for each sentence augmented with a number of 
arcs, reflecting inter-sentence relations.", "In present work we use parse thickets based on limited set of relations described in (Galitsky et al., 2013) : coreferences (Lee et al., 2012) , Rhetoric structure relations (Mann and Thompson, 1992) and Communicative Actions (Searle, 1969) .", "Pattern Structure with Parse Thickets simplification To apply parse thickets to text clustering tasks we use pattern structures (Ganter and Kuznetsov, 2001 ) that is defined as a triple (G, (D, ) , δ), where G is a set of objects, (D, ) is a complete meet-semilattice of descriptions and δ : G → D is a mapping an object to a description.", "The Galois connection between set of objects and their descriptions is also defined as follows: A := g ∈ A δ (g) d := {g ∈ G|d δ (g)} for A ⊆ G, for d ∈ D A pair A, d for which A = d and d = A is called a pattern concept.", "In our case, A is the set of news, d is their shared content.", "We use AddIntent algorithm (van der Merwe et al., 2004) to construct pattern structure.", "On each step, it takes the parse thicket (or chunks) of a web snippet of the input and plugs it into the pattern structure.", "A pattern structure has several drawbacks.", "Firstly, the size of the structure could grow exponentially on the input data.", "More than that, construction of a pattern structure could be computationally intensive.", "To address the performance issues, we reduce the set of all intersections between the members of our training set (maximal common sub-parse thickets).", "Reduced pattern structure Pattern structure constructed from a collection of short texts usually has a huge number of concepts.", "To reduce the computational costs and improve the interpretability of pattern concepts we introduce several metrics, that are described below.", "Average and Maximal Pattern Score The average and maximal pattern score indices are meant to assess how meaningful the common description of texts in the concept is.", "The higher the difference of text fragments from each other, the lower their shared content is.", "Thus, meaningfulness criterion of the group of texts is Score max A, d := max chunk∈d Score (chunk) Score avg A, d := 1 |d| chunk∈d Score (chunk) The score function Score (chunk) estimates chunks on the basis of parts of speech composition.", "Average and Minimal Pattern Score loss Average and minimal pattern score loss describe how much information contained in text is lost in the description with respect to the source texts.", "Average pattern score loss expresses the average loss of shared content for all texts in a concept, while minimal pattern score loss represents a minimal loss of content among all texts included in a concept.", "ScoreLoss min A, d := min g∈A Score max g, d g ScoreLoss avg A, d := 1 |d| g∈A Score max g, d g We propose to use a reduced pattern structure.", "There are two options in our approach.", "The first one -construction of lower semilattice.", "This is similar to iceberg concept lattice approach (Stumme et al., 2002) .", "The second option -construction of concepts which are different from each other.", "Thus, for arbitrary sets of texts A 1 and A 2 , corresponding descriptions d 1 and d 2 and candidate for a pattern concept A 1 ∪ A 2 , d 1 ∩ d 2 criterion has the following form Score max A 1 ∪ A 2 , d 1 ∩ d 2 ≥ θ Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≥ µ 1 min {Score * A 1 , d 1 , Score * A 2 , d 2 } Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≤ µ 2 max {Score * A 1 , d 1 , Score * A 2 , d 2 } The first constraint provides the condition for the construction of 
"Experiments: In this section we consider the proposed clustering method on two examples.", "The first corresponds to the case when clusters are overlapping and distinguishable; the second is the case of non-overlapping clusters.", "User Study: In some cases it is quite difficult to identify disjoint classes for a text collection.", "To confirm this, we conducted experiments similar to the experimental scheme described in (Zeng et al., 2004).", "We took web snippets obtained by querying the Bing search engine API and asked a group of four assessors to label ground truth for them.", "We performed news queries related to the world's most pressing news (for example, \"fighting Ebola with nanoparticles\", \"turning brown eyes blue\", \"F1 winners\", \"read facial expressions through webcam\", \"2015 ACM awards winners\") to make labeling of the data easier for the assessors.", "In most cases, according to the assessors, it was difficult to determine partitions, while overlapping clusters naturally stood out.", "As a result, in the case of non-overlapping clusters we usually got either a small number of large classes or a fairly large number of classes consisting of 1-2 snippets.", "Moreover, for the same set of snippets we obtained quite different partitions.", "We used the Adjusted Mutual Information score to estimate the pairwise agreement of the non-overlapping clusters identified by the assessors.", "To demonstrate the failure of the conventional clustering approach, we consider 12 short texts on the news query \"The Ebola epidemic\".", "The texts are available at https://github.com/anonymously1/CNS2015/blob/master/NewsSet1.", "The assessors identified quite different non-overlapping clusters.", "The pairwise Adjusted Mutual Information score was in the range of 0.03 to 0.51.", "Next, we compared the partitions to the results of the following clustering methods: k-means clustering based on vectors obtained by truncated SVD (retaining at least 80% of the information); hierarchical agglomerative clustering (HAC), complete and average linkage, of the term-document matrix with Manhattan distance and cosine similarity; and hierarchical agglomerative clustering (both linkages) of the tf-idf matrix with the Euclidean metric.", "In other words, we turned an unsupervised learning problem into a supervised one.", "The accuracy scores for the different clustering methods are shown in Figure 1.", "The curves correspond to the different partitions identified by the assessors.", "As mentioned earlier, we obtain inconsistent \"true\" labelings.", "(Figure 1 caption: Classification accuracy of clustering results and \"true\" clustering (example 1). The four lines are different news labelings made by the assessors. The y-axis values for a fixed x-value correspond to the classification accuracy of a clustering method for each of the four labelings.)", "Thereby the accuracy of a clustering differs across the labelings made by the evaluators.", "This approach does not allow us to determine the best partition, because a partition itself is not natural for the given news set.", "For example, consider the clusters obtained by HAC based on cosine similarity (a trade-off between high accuracy and its low variation): 1st cluster: 1, 2, 7, 9; 2nd cluster: 3, 11, 12; 3rd cluster: 4, 8; 4th cluster: 5, 6; 5th cluster: 10.", "Nearly identical news items 4, 8, 12 and 9, 10 end up in different clusters.", "News items 10 and 11 should simultaneously be in several clusters (1st, 5th and 2nd, 3rd respectively).",
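The baseline comparison in the user study can be sketched with scikit-learn. The snippet below is an illustration on invented stand-in data (`snippets` and `assessor_labels` are placeholders, not the study's 12 Bing snippets), not the study's actual pipeline; the API calls themselves (TfidfVectorizer, TruncatedSVD, KMeans, AgglomerativeClustering, adjusted_mutual_info_score) are the standard ones, and AgglomerativeClustering's `metric` keyword requires scikit-learn >= 1.2 (older releases call it `affinity`).

```python
# Sketch of the user-study baseline: pairwise assessor agreement via Adjusted
# Mutual Information, plus k-means on a truncated-SVD projection and HAC with
# cosine similarity, each scored against the assessors' labelings.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_mutual_info_score

snippets = [  # invented stand-in snippets
    "Ebola vaccine trial shows promise",
    "Nanoparticles used to fight the Ebola virus",
    "WHO reports new Ebola cases in West Africa",
    "Ebola outbreak response criticized by aid groups",
    "New treatment targets Ebola with nanoparticles",
    "Aid workers expand the Ebola response",
]
assessor_labels = {  # hypothetical ground-truth labelings by two assessors
    "a1": [0, 1, 2, 2, 1, 2],
    "a2": [0, 0, 1, 2, 0, 2],
}

X = TfidfVectorizer().fit_transform(snippets)
X_svd = TruncatedSVD(n_components=4, random_state=0).fit_transform(X)  # low-rank projection

# Pairwise agreement between assessors (the paper reports AMI in 0.03-0.51).
for (n1, l1), (n2, l2) in combinations(assessor_labels.items(), 2):
    print(n1, n2, adjusted_mutual_info_score(l1, l2))

# Conventional clusterings, scored against each assessor's labeling.
pred_km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_svd)
pred_hac = AgglomerativeClustering(n_clusters=3, metric="cosine",
                                   linkage="average").fit_predict(X_svd)
for name, labels in assessor_labels.items():
    print(name, "k-means:", adjusted_mutual_info_score(labels, pred_km),
          "HAC:", adjusted_mutual_info_score(labels, pred_hac))
```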
"Examples of pattern structures clustering: To construct a hierarchy of overlapping clusters by the proposed method, we use the following constraints: θ = 0.25, µ1 = 0.1 and µ2 = 0.9.", "The value of θ limits the depth of the pattern structure (the maximal number of texts in a cluster); put differently, the higher θ, the closer the general intents of the clusters should be.", "µ1 and µ2 determine the degree of dissimilarity of the clusters on different levels of the lattice (the clusters are prepared by adding a new document to the current one).", "We consider the proposed clustering method on two examples.", "The first was described above and corresponds to the case of overlapping clusters; the second is the case when clusters are non-overlapping and distinguishable.", "The texts of the second example are available at link 2.", "Three clusters are naturally identified in these texts.", "The cluster distribution by cluster volume is shown in Table 1.", "We got 107 and 29 clusters for the first and the second example respectively.", "(Table 1, number of clusters by the number of texts in a cluster — 1 text: 12 clusters in example 1 / 11 in example 2; 2 texts: 34 / 15; 3 texts: 33 / 3; 4 texts: 20 / 0; 5 texts: 7 / 0; 6 texts: 1 / 0.)", "In fact, this method is an agglomerative hierarchical clustering with overlapping clusters.", "The hierarchical structure of the clusters allows browsing texts with similar content by layers.", "The cluster structure is shown in Figure 2.", "The top of the structure corresponds to a meaningless cluster that consists of all texts.", "The upper layer consists of clusters with large volume.", "(Figure 2 caption: The cluster structure (example 2); (a) pattern structure without reduction, (b) reduced pattern structure. The node at the top corresponds to the \"dummy\" cluster; high-level nodes correspond to the big clusters with quite general content, while the clusters at lower levels correspond to more specific news.)", "Clustering based on pattern structures provides well-interpretable groups.", "The upper level of the hierarchy (the most representative clusters for example 1) consists of the clusters presented in Table 2.", "We also consider smaller clusters and select those for which adding any object (text) dramatically reduces $Score_{max}$: {1, 2, 3, 7, 9} and {5, 6}.", "For the other nested clusters, a significant decrease of $Score_{max}$ occurred exactly with the expansion of single clusters.", "For the second example we obtained 3 clusters, which corresponds to the \"true\" labeling.", "Our experiments show that pattern structure clustering allows us to identify easily interpretable groups of texts and significantly improves text browsing.", "Conclusion: In this paper, we presented an approach that addresses the problem of short text clustering.", "Our study shows the failure of traditional clustering methods such as k-means and HAC.", "We propose to use parse thickets, which retain the structure of sentences, instead of the term-document matrix, and to build reduced pattern structures to obtain overlapping groups of texts.", "Experimental results demonstrate considerable improvement of browsing and navigation through a set of texts for users.", "The introduced indices Score and ScoreLoss both improve computational efficiency and tackle the problem of redundant clusters.", "An important direction for future work is to take synonymy into account and to compare the proposed method to a similar approach that uses keywords instead of parse thickets." ] }
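The layer-by-layer growth of overlapping clusters under the θ/µ1/µ2 criterion can be sketched as follows. This is a simplified sketch, not the authors' AddIntent-based implementation; it assumes a `describe(doc) -> frozenset of chunks` helper and the `accept_candidate` test from the earlier sketch.

```python
# Simplified sketch of agglomerative growth of overlapping clusters:
# a cluster is extended with a new document only while the candidate
# <A1 u A2, d1 n d2> passes the theta/mu1/mu2 criterion.
def grow_clusters(docs, describe, accept):
    """docs: list of texts; describe(doc) -> frozenset of chunks;
    accept(d1, d2) -> bool, e.g. accept_candidate from the earlier sketch."""
    layer = [((i,), describe(doc)) for i, doc in enumerate(docs)]  # singletons
    layers = [layer]
    while True:
        nxt, seen = [], set()
        for members, d in layer:
            for j, doc in enumerate(docs):
                if j in members or not accept(d, describe(doc)):
                    continue
                key = (tuple(sorted(set(members) | {j})), d & describe(doc))
                if key not in seen:
                    seen.add(key)
                    nxt.append(key)
        if not nxt:
            return layers  # layers[k] holds overlapping clusters of k+1 texts
        layers.append(nxt)
        layer = nxt
```

Because a text may pass the criterion for several clusters on the same layer, the result is an overlapping hierarchy; with θ = 0.25, µ1 = 0.1 and µ2 = 0.9 this matches the parameter setting used in the examples above.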
{ "paper_header_number": [ "1", "3", "4", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction and related works", "Reduced pattern structure", "Experiments", "User Study", "Examples of pattern structures clustering", "Conclusion" ] }
GEM-SciDuet-train-65#paper-1141#slide-17
Reduced pattern structures clusters distinguishability
Controlling the loss of meaning w.r.t. the nearest more meaningful neighbors in the cluster hierarchy. Controlling the distinguishability w.r.t. the nearest neighbors in the hierarchy of clusters.
Controlling the loss of meaning w.r.t. the nearest more meaningful neighbors in the cluster hierarchy. Controlling the distinguishability w.r.t. the nearest neighbors in the hierarchy of clusters.
[]
GEM-SciDuet-train-65#paper-1141#slide-18
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents, and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on the construction of pattern structures over augmented syntactic parse trees. Since such an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes discourse information into account to make the clustering results independent of how information is distributed between sentences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101 ], "paper_content_text": [ "Introduction and related works The document clustering problem was widely investigated in many applications of text mining.", "One of the most important aspects of the text clustering problem is a structural representation of texts.", "A common approach to the text representation is a vector space model (Salton et al., 1975) , where the collection or corpus of documents is represented as a term-document matrix.", "The main drawback of this model is its inability to reflect the importance of a word with respect to a document and a corpus.", "To tackle this issue the weighted scheme based on tf-idf score has been proposed.", "Also, a term-document matrix built on a large texts collection may be sparse and have a high dimensionality.", "To reduce feature space, PCA, truncated SVD (Latent Semantic Analysis), random projection and other methods have been proposed.", "To handle synonyms as similar terms the general Vector Space Model (Wong et al., 1985; Tsatsaronis and Panagiotopoulou, 2009 ), topic-based vector model (Becker and Kuropka, 2003) and enhanced topic-based vector space model (Polyvyanyy and Kuropka, 2007) were introduced.", "The most common ways to clustering term-document matrix are hierarchical clustering, k-means and also bisecting k-means.", "Graph models are also used for text representation.", "Document Index Graph (DIG) was proposed by Hammouda (2004) .", "Zamir and Etzioni (1998) use suffix tree for representing web snippets, where words are used instead of characters.", "A more sophisticated model based on n-grams was introduced in Schenker et al.", "(2007) .", "In this paper, we consider a particular application of document clustering, it is a representation of web search results that could improve navigation through relevant documents.", "Clustering snippets on salient phrases is described in (Zamir and Etzioni, 1999; Zeng et al., 2004) .", "But the most promising approach for document clustering is a conceptual clustering, because it allows to obtain overlapping clusters and to organize them into a hierarchical structure as well (Cole et al., 2003; Koester, 2006; Messai et al., 2008; Carpineto and Romano, 1996) .", "We present an approach to selecting most significant clusters based on a pattern structure (Ganter and Kuznetsov, 2001 ).", "An approach of extended representation of syntactic trees with discourse relations between them was introduced in (Galitsky et al., 2013) .", "Leveraging discourse information allows to combine news articles not only by keyword similarity but by broader topicality and writing styles as well.", "The paper is organized as follows.", "Section 2 introduces a parse thicket and its simplified representation.", "In section 3 we consider approach to clustering web snippets and discuss efficiency issues.", "The illustrative example is presented in section 4.", "Finally, we conclude the paper and discuss some research perspectives.", "2 Clustering based on pattern structure Parse Thickets Parse thicket (Galitsky et al., 2013) is defined as a set of parse trees for each sentence augmented with a number of 
arcs, reflecting inter-sentence relations.", "In present work we use parse thickets based on limited set of relations described in (Galitsky et al., 2013) : coreferences (Lee et al., 2012) , Rhetoric structure relations (Mann and Thompson, 1992) and Communicative Actions (Searle, 1969) .", "Pattern Structure with Parse Thickets simplification To apply parse thickets to text clustering tasks we use pattern structures (Ganter and Kuznetsov, 2001 ) that is defined as a triple (G, (D, ) , δ), where G is a set of objects, (D, ) is a complete meet-semilattice of descriptions and δ : G → D is a mapping an object to a description.", "The Galois connection between set of objects and their descriptions is also defined as follows: A := g ∈ A δ (g) d := {g ∈ G|d δ (g)} for A ⊆ G, for d ∈ D A pair A, d for which A = d and d = A is called a pattern concept.", "In our case, A is the set of news, d is their shared content.", "We use AddIntent algorithm (van der Merwe et al., 2004) to construct pattern structure.", "On each step, it takes the parse thicket (or chunks) of a web snippet of the input and plugs it into the pattern structure.", "A pattern structure has several drawbacks.", "Firstly, the size of the structure could grow exponentially on the input data.", "More than that, construction of a pattern structure could be computationally intensive.", "To address the performance issues, we reduce the set of all intersections between the members of our training set (maximal common sub-parse thickets).", "Reduced pattern structure Pattern structure constructed from a collection of short texts usually has a huge number of concepts.", "To reduce the computational costs and improve the interpretability of pattern concepts we introduce several metrics, that are described below.", "Average and Maximal Pattern Score The average and maximal pattern score indices are meant to assess how meaningful the common description of texts in the concept is.", "The higher the difference of text fragments from each other, the lower their shared content is.", "Thus, meaningfulness criterion of the group of texts is Score max A, d := max chunk∈d Score (chunk) Score avg A, d := 1 |d| chunk∈d Score (chunk) The score function Score (chunk) estimates chunks on the basis of parts of speech composition.", "Average and Minimal Pattern Score loss Average and minimal pattern score loss describe how much information contained in text is lost in the description with respect to the source texts.", "Average pattern score loss expresses the average loss of shared content for all texts in a concept, while minimal pattern score loss represents a minimal loss of content among all texts included in a concept.", "ScoreLoss min A, d := min g∈A Score max g, d g ScoreLoss avg A, d := 1 |d| g∈A Score max g, d g We propose to use a reduced pattern structure.", "There are two options in our approach.", "The first one -construction of lower semilattice.", "This is similar to iceberg concept lattice approach (Stumme et al., 2002) .", "The second option -construction of concepts which are different from each other.", "Thus, for arbitrary sets of texts A 1 and A 2 , corresponding descriptions d 1 and d 2 and candidate for a pattern concept A 1 ∪ A 2 , d 1 ∩ d 2 criterion has the following form Score max A 1 ∪ A 2 , d 1 ∩ d 2 ≥ θ Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≥ µ 1 min {Score * A 1 , d 1 , Score * A 2 , d 2 } Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≤ µ 2 max {Score * A 1 , d 1 , Score * A 2 , d 2 } The first constraint provides the condition for the construction of 
concepts with meaningful content, while two other constrains ensure that we do not use concepts with similar content.", "Experiments In this section we consider the proposed clustering method on 2 examples.", "The first one corresponds to the case when clusters are overlapping and distinguishable, the second one is the case of non-overlapping clusters.", "User Study In some cases it is quite difficult to identify disjoint classes for a text collection.", "To confirm this, we conducted experiments similar to the experiment scheme described in (Zeng et al., 2004) .", "We took web snippets obtained by querying the Bing search engine API and asked a group of four assessors to label ground truth for them.", "We performed news queries related to world's most pressing news (for example, \"fighting Ebola with nanoparticles\", \"turning brown eyes blue\", \"F1 winners\", \"read facial expressions through webcam\", \"2015 ACM awards winners\") to make labeling of data easier for the assessors.", "In most cases, according to the assessors, it was difficult to determine partitions, while overlapping clusters naturally stood out.", "As a result, in the case of non-overlapping clusters we usually got a small number of large classes or a sufficiently large number of classes consisting of 1-2 snippets.", "More than that, for the same set of snippets we obtained quite different partitions.", "We used the Adjusted Mutual Information score to estimate pairwise agreement of nonoverlapping clusters, which were identified by the people.", "To demonstrate the failure of the conventional clustering approach we consider 12 short texts on news query \"The Ebola epidemic\".", "Tests are available by link 1 .", "Assessors identify quite different nonoverlapping clusters.", "The pairwise Adjusted Mutual Information score was in the range of 0,03 to 0,51.", "Next, we compared partitions to clustering results of the following clustering methods: k-means clustering based on vectors obtained by truncated SVD (retaining at least 80% of the information), hierarchical agglomerative clustering (HAC), complete and average linkage of the term-document matrix with Manhattan distance and cosine similarity, hierarchical agglomerative clustering (both linkage) of tf-idf matrix with Euclidean metric.", "In other words, we turned an unsupervised learning problem into the supervised one.", "The accuracy score for different clustering methods is represented in Figure 1 .", "Curves correspond to the different partitions that have been identified by people.", "As it was mentioned earlier, we obtain incon-1 https://github.com/anonymously1/ CNS2015/blob/master/NewsSet1 Figure 1 : Classification accuracy of clustering results and \"true\" clustering (example 1).", "Four lines are different news labeling made by people.", "The y-axis values for fixed x-value correspond to classification accuracy of a clustering method for each of the four labeling sistent \"true\" labeling.", "Thereby the accuracy of clustering differs from labeling made by evaluators.", "This approach doesn't allow to determine the best partition, because a partition itself is not natural for the given news set.", "For example, consider clusters obtained by HAC based on cosine similarity (trade-off between high accuracy and its low variation): 1-st cluster: 1,2,7,9; 2-nd cluster: 3,11,12; 3-rd cluster: 4,8; 4-th cluster: 5,6; 5-th cluster: 10.", "Almost the same news 4, 8, 12 and 9, 10 are in the different clusters.", "News 10, 11 should be simultaneously in several 
clusters (1-st, 5-th and 2-nd,3-rd respectively).", "Examples of pattern structures clustering To construct hierarchy of overlapping clusters by the proposed methods, we use the following constraints: θ = 0, 25, µ 1 = 0, 1 and µ 2 = 0, 9.", "The value of θ limits the depth of the pattern structure (the maximal number of texts in a cluster), put differently, the higher θ, the closer should be the general intent of clusters.", "µ 1 and µ 2 determine the degree of dissimilarity of the clusters on different levels of the lattice (the clusters are prepared by adding a new document to the current one).", "We consider the proposed clustering method on 2 examples.", "The first one was described above, it corresponds to the case of overlapping clusters, the second one is the case when clusters are nonoverlapping and distinguishable.", "Texts of the sec-ond example are available by link 2 .", "Three clusters are naturally identified in this texts.", "The cluster distribution depending on volume are shown in Table 1 .", "We got 107 and 29 clusters for the first and the second example respectively.", "Text number Clusters number Example 1 Example 2 1 12 11 2 34 15 3 33 3 4 20 0 5 7 0 6 1 0 In fact, this method is an agglomerative hierarchical clustering with overlapping clusters.", "Hierarchical structure of clusters provides browsing of texts with similar content by layers.", "The cluster structure is represented on Figure 2 .", "The top of the structure corresponds to meaningless clusters that consist of all texts.", "Upper layer consists of clusters with large volume.", "(a) pattern structure without reduction (b) reduced pattern structure Figure 2 : The cluster structure (example 2).", "The node on the top corresponds to the \"dummy\" cluster, high level nodes correspond to the big clusters with quite general content, while the clusters at lower levels correspond to more specific news.", "Clustering based on pattern structures provides well interpretable groups.", "The upper level of hierarchy (the most representative clusters for example 1) consists of the clusters presented in Table 2 We also consider smaller clusters and select those for which adding of any object (text) dramatically reduces the M axScore {1, 2, 3, 7, 9} and {5, 6}.", "For other nested clusters significant decrease of M axScore occurred exactly with the an expansion of single clusters.", "For the second example we obtained 3 clusters that corresponds to \"true\" labeling.", "Our experiments show that pattern structure clustering allows to identify easily interpretable groups of texts and significantly improves text browsing.", "Conclusion In this paper, we presented an approach that addressed the problem of short text clustering.", "Our study shows a failure of the traditional clustering methods, such as k-means and HAC.", "We propose to use parse thickets that retain the structure of sentences instead of the term-document matrix and to build the reduced pattern structures to obtain overlapping groups of texts.", "Experimental results demonstrate considerable improvement of browsing and navigation through texts set for users.", "Introduced indices Score and ScoreLoss both improve computing efficiency and tackle the problem of redundant clusters.", "An important direction for future work is to take into account synonymy and to compare the proposed method to similar approach that use key words instead of parse thickets." ] }
{ "paper_header_number": [ "1", "3", "4", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction and related works", "Reduced pattern structure", "Experiments", "User Study", "Examples of pattern structures clustering", "Conclusion" ] }
GEM-SciDuet-train-65#paper-1141#slide-18
Reduced pattern structures constraints
ScoreLoss constraints on candidate concepts ⟨A1 ∪ A2, d1 ∩ d2⟩: pattern structure without reduction vs. the reduced pattern structure (with µ1 and µ2).
ScoreLoss constraints on candidate concepts ⟨A1 ∪ A2, d1 ∩ d2⟩: pattern structure without reduction vs. the reduced pattern structure (with µ1 and µ2).
[]
GEM-SciDuet-train-65#paper-1141#slide-19
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents, and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on the construction of pattern structures over augmented syntactic parse trees. Since such an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes discourse information into account to make the clustering results independent of how information is distributed between sentences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101 ], "paper_content_text": [ "Introduction and related works The document clustering problem was widely investigated in many applications of text mining.", "One of the most important aspects of the text clustering problem is a structural representation of texts.", "A common approach to the text representation is a vector space model (Salton et al., 1975) , where the collection or corpus of documents is represented as a term-document matrix.", "The main drawback of this model is its inability to reflect the importance of a word with respect to a document and a corpus.", "To tackle this issue the weighted scheme based on tf-idf score has been proposed.", "Also, a term-document matrix built on a large texts collection may be sparse and have a high dimensionality.", "To reduce feature space, PCA, truncated SVD (Latent Semantic Analysis), random projection and other methods have been proposed.", "To handle synonyms as similar terms the general Vector Space Model (Wong et al., 1985; Tsatsaronis and Panagiotopoulou, 2009 ), topic-based vector model (Becker and Kuropka, 2003) and enhanced topic-based vector space model (Polyvyanyy and Kuropka, 2007) were introduced.", "The most common ways to clustering term-document matrix are hierarchical clustering, k-means and also bisecting k-means.", "Graph models are also used for text representation.", "Document Index Graph (DIG) was proposed by Hammouda (2004) .", "Zamir and Etzioni (1998) use suffix tree for representing web snippets, where words are used instead of characters.", "A more sophisticated model based on n-grams was introduced in Schenker et al.", "(2007) .", "In this paper, we consider a particular application of document clustering, it is a representation of web search results that could improve navigation through relevant documents.", "Clustering snippets on salient phrases is described in (Zamir and Etzioni, 1999; Zeng et al., 2004) .", "But the most promising approach for document clustering is a conceptual clustering, because it allows to obtain overlapping clusters and to organize them into a hierarchical structure as well (Cole et al., 2003; Koester, 2006; Messai et al., 2008; Carpineto and Romano, 1996) .", "We present an approach to selecting most significant clusters based on a pattern structure (Ganter and Kuznetsov, 2001 ).", "An approach of extended representation of syntactic trees with discourse relations between them was introduced in (Galitsky et al., 2013) .", "Leveraging discourse information allows to combine news articles not only by keyword similarity but by broader topicality and writing styles as well.", "The paper is organized as follows.", "Section 2 introduces a parse thicket and its simplified representation.", "In section 3 we consider approach to clustering web snippets and discuss efficiency issues.", "The illustrative example is presented in section 4.", "Finally, we conclude the paper and discuss some research perspectives.", "2 Clustering based on pattern structure Parse Thickets Parse thicket (Galitsky et al., 2013) is defined as a set of parse trees for each sentence augmented with a number of 
arcs, reflecting inter-sentence relations.", "In present work we use parse thickets based on limited set of relations described in (Galitsky et al., 2013) : coreferences (Lee et al., 2012) , Rhetoric structure relations (Mann and Thompson, 1992) and Communicative Actions (Searle, 1969) .", "Pattern Structure with Parse Thickets simplification To apply parse thickets to text clustering tasks we use pattern structures (Ganter and Kuznetsov, 2001 ) that is defined as a triple (G, (D, ) , δ), where G is a set of objects, (D, ) is a complete meet-semilattice of descriptions and δ : G → D is a mapping an object to a description.", "The Galois connection between set of objects and their descriptions is also defined as follows: A := g ∈ A δ (g) d := {g ∈ G|d δ (g)} for A ⊆ G, for d ∈ D A pair A, d for which A = d and d = A is called a pattern concept.", "In our case, A is the set of news, d is their shared content.", "We use AddIntent algorithm (van der Merwe et al., 2004) to construct pattern structure.", "On each step, it takes the parse thicket (or chunks) of a web snippet of the input and plugs it into the pattern structure.", "A pattern structure has several drawbacks.", "Firstly, the size of the structure could grow exponentially on the input data.", "More than that, construction of a pattern structure could be computationally intensive.", "To address the performance issues, we reduce the set of all intersections between the members of our training set (maximal common sub-parse thickets).", "Reduced pattern structure Pattern structure constructed from a collection of short texts usually has a huge number of concepts.", "To reduce the computational costs and improve the interpretability of pattern concepts we introduce several metrics, that are described below.", "Average and Maximal Pattern Score The average and maximal pattern score indices are meant to assess how meaningful the common description of texts in the concept is.", "The higher the difference of text fragments from each other, the lower their shared content is.", "Thus, meaningfulness criterion of the group of texts is Score max A, d := max chunk∈d Score (chunk) Score avg A, d := 1 |d| chunk∈d Score (chunk) The score function Score (chunk) estimates chunks on the basis of parts of speech composition.", "Average and Minimal Pattern Score loss Average and minimal pattern score loss describe how much information contained in text is lost in the description with respect to the source texts.", "Average pattern score loss expresses the average loss of shared content for all texts in a concept, while minimal pattern score loss represents a minimal loss of content among all texts included in a concept.", "ScoreLoss min A, d := min g∈A Score max g, d g ScoreLoss avg A, d := 1 |d| g∈A Score max g, d g We propose to use a reduced pattern structure.", "There are two options in our approach.", "The first one -construction of lower semilattice.", "This is similar to iceberg concept lattice approach (Stumme et al., 2002) .", "The second option -construction of concepts which are different from each other.", "Thus, for arbitrary sets of texts A 1 and A 2 , corresponding descriptions d 1 and d 2 and candidate for a pattern concept A 1 ∪ A 2 , d 1 ∩ d 2 criterion has the following form Score max A 1 ∪ A 2 , d 1 ∩ d 2 ≥ θ Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≥ µ 1 min {Score * A 1 , d 1 , Score * A 2 , d 2 } Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≤ µ 2 max {Score * A 1 , d 1 , Score * A 2 , d 2 } The first constraint provides the condition for the construction of 
concepts with meaningful content, while two other constrains ensure that we do not use concepts with similar content.", "Experiments In this section we consider the proposed clustering method on 2 examples.", "The first one corresponds to the case when clusters are overlapping and distinguishable, the second one is the case of non-overlapping clusters.", "User Study In some cases it is quite difficult to identify disjoint classes for a text collection.", "To confirm this, we conducted experiments similar to the experiment scheme described in (Zeng et al., 2004) .", "We took web snippets obtained by querying the Bing search engine API and asked a group of four assessors to label ground truth for them.", "We performed news queries related to world's most pressing news (for example, \"fighting Ebola with nanoparticles\", \"turning brown eyes blue\", \"F1 winners\", \"read facial expressions through webcam\", \"2015 ACM awards winners\") to make labeling of data easier for the assessors.", "In most cases, according to the assessors, it was difficult to determine partitions, while overlapping clusters naturally stood out.", "As a result, in the case of non-overlapping clusters we usually got a small number of large classes or a sufficiently large number of classes consisting of 1-2 snippets.", "More than that, for the same set of snippets we obtained quite different partitions.", "We used the Adjusted Mutual Information score to estimate pairwise agreement of nonoverlapping clusters, which were identified by the people.", "To demonstrate the failure of the conventional clustering approach we consider 12 short texts on news query \"The Ebola epidemic\".", "Tests are available by link 1 .", "Assessors identify quite different nonoverlapping clusters.", "The pairwise Adjusted Mutual Information score was in the range of 0,03 to 0,51.", "Next, we compared partitions to clustering results of the following clustering methods: k-means clustering based on vectors obtained by truncated SVD (retaining at least 80% of the information), hierarchical agglomerative clustering (HAC), complete and average linkage of the term-document matrix with Manhattan distance and cosine similarity, hierarchical agglomerative clustering (both linkage) of tf-idf matrix with Euclidean metric.", "In other words, we turned an unsupervised learning problem into the supervised one.", "The accuracy score for different clustering methods is represented in Figure 1 .", "Curves correspond to the different partitions that have been identified by people.", "As it was mentioned earlier, we obtain incon-1 https://github.com/anonymously1/ CNS2015/blob/master/NewsSet1 Figure 1 : Classification accuracy of clustering results and \"true\" clustering (example 1).", "Four lines are different news labeling made by people.", "The y-axis values for fixed x-value correspond to classification accuracy of a clustering method for each of the four labeling sistent \"true\" labeling.", "Thereby the accuracy of clustering differs from labeling made by evaluators.", "This approach doesn't allow to determine the best partition, because a partition itself is not natural for the given news set.", "For example, consider clusters obtained by HAC based on cosine similarity (trade-off between high accuracy and its low variation): 1-st cluster: 1,2,7,9; 2-nd cluster: 3,11,12; 3-rd cluster: 4,8; 4-th cluster: 5,6; 5-th cluster: 10.", "Almost the same news 4, 8, 12 and 9, 10 are in the different clusters.", "News 10, 11 should be simultaneously in several 
clusters (1-st, 5-th and 2-nd,3-rd respectively).", "Examples of pattern structures clustering To construct hierarchy of overlapping clusters by the proposed methods, we use the following constraints: θ = 0, 25, µ 1 = 0, 1 and µ 2 = 0, 9.", "The value of θ limits the depth of the pattern structure (the maximal number of texts in a cluster), put differently, the higher θ, the closer should be the general intent of clusters.", "µ 1 and µ 2 determine the degree of dissimilarity of the clusters on different levels of the lattice (the clusters are prepared by adding a new document to the current one).", "We consider the proposed clustering method on 2 examples.", "The first one was described above, it corresponds to the case of overlapping clusters, the second one is the case when clusters are nonoverlapping and distinguishable.", "Texts of the sec-ond example are available by link 2 .", "Three clusters are naturally identified in this texts.", "The cluster distribution depending on volume are shown in Table 1 .", "We got 107 and 29 clusters for the first and the second example respectively.", "Text number Clusters number Example 1 Example 2 1 12 11 2 34 15 3 33 3 4 20 0 5 7 0 6 1 0 In fact, this method is an agglomerative hierarchical clustering with overlapping clusters.", "Hierarchical structure of clusters provides browsing of texts with similar content by layers.", "The cluster structure is represented on Figure 2 .", "The top of the structure corresponds to meaningless clusters that consist of all texts.", "Upper layer consists of clusters with large volume.", "(a) pattern structure without reduction (b) reduced pattern structure Figure 2 : The cluster structure (example 2).", "The node on the top corresponds to the \"dummy\" cluster, high level nodes correspond to the big clusters with quite general content, while the clusters at lower levels correspond to more specific news.", "Clustering based on pattern structures provides well interpretable groups.", "The upper level of hierarchy (the most representative clusters for example 1) consists of the clusters presented in Table 2 We also consider smaller clusters and select those for which adding of any object (text) dramatically reduces the M axScore {1, 2, 3, 7, 9} and {5, 6}.", "For other nested clusters significant decrease of M axScore occurred exactly with the an expansion of single clusters.", "For the second example we obtained 3 clusters that corresponds to \"true\" labeling.", "Our experiments show that pattern structure clustering allows to identify easily interpretable groups of texts and significantly improves text browsing.", "Conclusion In this paper, we presented an approach that addressed the problem of short text clustering.", "Our study shows a failure of the traditional clustering methods, such as k-means and HAC.", "We propose to use parse thickets that retain the structure of sentences instead of the term-document matrix and to build the reduced pattern structures to obtain overlapping groups of texts.", "Experimental results demonstrate considerable improvement of browsing and navigation through texts set for users.", "Introduced indices Score and ScoreLoss both improve computing efficiency and tackle the problem of redundant clusters.", "An important direction for future work is to take into account synonymy and to compare the proposed method to similar approach that use key words instead of parse thickets." ] }
{ "paper_header_number": [ "1", "3", "4", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction and related works", "Reduced pattern structure", "Experiments", "User Study", "Examples of pattern structures clustering", "Conclusion" ] }
GEM-SciDuet-train-65#paper-1141#slide-19
Implementation
The Apache OpenNLP library (for the most common NLP tasks); the Bing search API (to obtain news snippets); pattern structure builder: the authors' modified version of the AddIntent algorithm (van der Merwe et al., 2004).
The Apache OpenNLP library (for the most common NLP tasks); the Bing search API (to obtain news snippets); pattern structure builder: the authors' modified version of the AddIntent algorithm (van der Merwe et al., 2004).
[]
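Connecting the Implementation slide above to the earlier metrics, here is a toy incremental insertion step in the spirit of — but far simpler than — the modified AddIntent the slide mentions; `add_document` and its arguments are illustrative names, and keeping only intersections whose Score_max clears θ echoes the reduction strategy described in the paper.

```python
# Toy incremental builder (NOT the real AddIntent algorithm, which maintains
# a full lattice): each new description is intersected with existing intents,
# and only meaningful intersections are kept as pattern concepts.
def add_document(concepts, doc_id, description, score_max, theta=0.25):
    """concepts: list of (extent: frozenset of doc ids, intent: frozenset of chunks)."""
    new = [(frozenset({doc_id}), description)]
    for extent, intent in concepts:
        shared = intent & description
        if shared and score_max(shared) >= theta:  # drop weak or empty intersections
            new.append((extent | {doc_id}, shared))
    concepts.extend(new)
    return concepts
```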
GEM-SciDuet-train-65#paper-1141#slide-20
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on pattern structures construction on augmented syntactic parse trees. Since an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes into account discourse information to make clustering results independent of how information is distributed between sentences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101 ], "paper_content_text": [ "Introduction and related works The document clustering problem was widely investigated in many applications of text mining.", "One of the most important aspects of the text clustering problem is a structural representation of texts.", "A common approach to the text representation is a vector space model (Salton et al., 1975) , where the collection or corpus of documents is represented as a term-document matrix.", "The main drawback of this model is its inability to reflect the importance of a word with respect to a document and a corpus.", "To tackle this issue the weighted scheme based on tf-idf score has been proposed.", "Also, a term-document matrix built on a large texts collection may be sparse and have a high dimensionality.", "To reduce feature space, PCA, truncated SVD (Latent Semantic Analysis), random projection and other methods have been proposed.", "To handle synonyms as similar terms the general Vector Space Model (Wong et al., 1985; Tsatsaronis and Panagiotopoulou, 2009 ), topic-based vector model (Becker and Kuropka, 2003) and enhanced topic-based vector space model (Polyvyanyy and Kuropka, 2007) were introduced.", "The most common ways to clustering term-document matrix are hierarchical clustering, k-means and also bisecting k-means.", "Graph models are also used for text representation.", "Document Index Graph (DIG) was proposed by Hammouda (2004) .", "Zamir and Etzioni (1998) use suffix tree for representing web snippets, where words are used instead of characters.", "A more sophisticated model based on n-grams was introduced in Schenker et al.", "(2007) .", "In this paper, we consider a particular application of document clustering, it is a representation of web search results that could improve navigation through relevant documents.", "Clustering snippets on salient phrases is described in (Zamir and Etzioni, 1999; Zeng et al., 2004) .", "But the most promising approach for document clustering is a conceptual clustering, because it allows to obtain overlapping clusters and to organize them into a hierarchical structure as well (Cole et al., 2003; Koester, 2006; Messai et al., 2008; Carpineto and Romano, 1996) .", "We present an approach to selecting most significant clusters based on a pattern structure (Ganter and Kuznetsov, 2001 ).", "An approach of extended representation of syntactic trees with discourse relations between them was introduced in (Galitsky et al., 2013) .", "Leveraging discourse information allows to combine news articles not only by keyword similarity but by broader topicality and writing styles as well.", "The paper is organized as follows.", "Section 2 introduces a parse thicket and its simplified representation.", "In section 3 we consider approach to clustering web snippets and discuss efficiency issues.", "The illustrative example is presented in section 4.", "Finally, we conclude the paper and discuss some research perspectives.", "2 Clustering based on pattern structure Parse Thickets Parse thicket (Galitsky et al., 2013) is defined as a set of parse trees for each sentence augmented with a number of 
arcs, reflecting inter-sentence relations.", "In present work we use parse thickets based on limited set of relations described in (Galitsky et al., 2013) : coreferences (Lee et al., 2012) , Rhetoric structure relations (Mann and Thompson, 1992) and Communicative Actions (Searle, 1969) .", "Pattern Structure with Parse Thickets simplification To apply parse thickets to text clustering tasks we use pattern structures (Ganter and Kuznetsov, 2001 ) that is defined as a triple (G, (D, ) , δ), where G is a set of objects, (D, ) is a complete meet-semilattice of descriptions and δ : G → D is a mapping an object to a description.", "The Galois connection between set of objects and their descriptions is also defined as follows: A := g ∈ A δ (g) d := {g ∈ G|d δ (g)} for A ⊆ G, for d ∈ D A pair A, d for which A = d and d = A is called a pattern concept.", "In our case, A is the set of news, d is their shared content.", "We use AddIntent algorithm (van der Merwe et al., 2004) to construct pattern structure.", "On each step, it takes the parse thicket (or chunks) of a web snippet of the input and plugs it into the pattern structure.", "A pattern structure has several drawbacks.", "Firstly, the size of the structure could grow exponentially on the input data.", "More than that, construction of a pattern structure could be computationally intensive.", "To address the performance issues, we reduce the set of all intersections between the members of our training set (maximal common sub-parse thickets).", "Reduced pattern structure Pattern structure constructed from a collection of short texts usually has a huge number of concepts.", "To reduce the computational costs and improve the interpretability of pattern concepts we introduce several metrics, that are described below.", "Average and Maximal Pattern Score The average and maximal pattern score indices are meant to assess how meaningful the common description of texts in the concept is.", "The higher the difference of text fragments from each other, the lower their shared content is.", "Thus, meaningfulness criterion of the group of texts is Score max A, d := max chunk∈d Score (chunk) Score avg A, d := 1 |d| chunk∈d Score (chunk) The score function Score (chunk) estimates chunks on the basis of parts of speech composition.", "Average and Minimal Pattern Score loss Average and minimal pattern score loss describe how much information contained in text is lost in the description with respect to the source texts.", "Average pattern score loss expresses the average loss of shared content for all texts in a concept, while minimal pattern score loss represents a minimal loss of content among all texts included in a concept.", "ScoreLoss min A, d := min g∈A Score max g, d g ScoreLoss avg A, d := 1 |d| g∈A Score max g, d g We propose to use a reduced pattern structure.", "There are two options in our approach.", "The first one -construction of lower semilattice.", "This is similar to iceberg concept lattice approach (Stumme et al., 2002) .", "The second option -construction of concepts which are different from each other.", "Thus, for arbitrary sets of texts A 1 and A 2 , corresponding descriptions d 1 and d 2 and candidate for a pattern concept A 1 ∪ A 2 , d 1 ∩ d 2 criterion has the following form Score max A 1 ∪ A 2 , d 1 ∩ d 2 ≥ θ Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≥ µ 1 min {Score * A 1 , d 1 , Score * A 2 , d 2 } Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≤ µ 2 max {Score * A 1 , d 1 , Score * A 2 , d 2 } The first constraint provides the condition for the construction of 
concepts with meaningful content, while two other constrains ensure that we do not use concepts with similar content.", "Experiments In this section we consider the proposed clustering method on 2 examples.", "The first one corresponds to the case when clusters are overlapping and distinguishable, the second one is the case of non-overlapping clusters.", "User Study In some cases it is quite difficult to identify disjoint classes for a text collection.", "To confirm this, we conducted experiments similar to the experiment scheme described in (Zeng et al., 2004) .", "We took web snippets obtained by querying the Bing search engine API and asked a group of four assessors to label ground truth for them.", "We performed news queries related to world's most pressing news (for example, \"fighting Ebola with nanoparticles\", \"turning brown eyes blue\", \"F1 winners\", \"read facial expressions through webcam\", \"2015 ACM awards winners\") to make labeling of data easier for the assessors.", "In most cases, according to the assessors, it was difficult to determine partitions, while overlapping clusters naturally stood out.", "As a result, in the case of non-overlapping clusters we usually got a small number of large classes or a sufficiently large number of classes consisting of 1-2 snippets.", "More than that, for the same set of snippets we obtained quite different partitions.", "We used the Adjusted Mutual Information score to estimate pairwise agreement of nonoverlapping clusters, which were identified by the people.", "To demonstrate the failure of the conventional clustering approach we consider 12 short texts on news query \"The Ebola epidemic\".", "Tests are available by link 1 .", "Assessors identify quite different nonoverlapping clusters.", "The pairwise Adjusted Mutual Information score was in the range of 0,03 to 0,51.", "Next, we compared partitions to clustering results of the following clustering methods: k-means clustering based on vectors obtained by truncated SVD (retaining at least 80% of the information), hierarchical agglomerative clustering (HAC), complete and average linkage of the term-document matrix with Manhattan distance and cosine similarity, hierarchical agglomerative clustering (both linkage) of tf-idf matrix with Euclidean metric.", "In other words, we turned an unsupervised learning problem into the supervised one.", "The accuracy score for different clustering methods is represented in Figure 1 .", "Curves correspond to the different partitions that have been identified by people.", "As it was mentioned earlier, we obtain incon-1 https://github.com/anonymously1/ CNS2015/blob/master/NewsSet1 Figure 1 : Classification accuracy of clustering results and \"true\" clustering (example 1).", "Four lines are different news labeling made by people.", "The y-axis values for fixed x-value correspond to classification accuracy of a clustering method for each of the four labeling sistent \"true\" labeling.", "Thereby the accuracy of clustering differs from labeling made by evaluators.", "This approach doesn't allow to determine the best partition, because a partition itself is not natural for the given news set.", "For example, consider clusters obtained by HAC based on cosine similarity (trade-off between high accuracy and its low variation): 1-st cluster: 1,2,7,9; 2-nd cluster: 3,11,12; 3-rd cluster: 4,8; 4-th cluster: 5,6; 5-th cluster: 10.", "Almost the same news 4, 8, 12 and 9, 10 are in the different clusters.", "News 10, 11 should be simultaneously in several 
clusters (1-st, 5-th and 2-nd,3-rd respectively).", "Examples of pattern structures clustering To construct hierarchy of overlapping clusters by the proposed methods, we use the following constraints: θ = 0, 25, µ 1 = 0, 1 and µ 2 = 0, 9.", "The value of θ limits the depth of the pattern structure (the maximal number of texts in a cluster), put differently, the higher θ, the closer should be the general intent of clusters.", "µ 1 and µ 2 determine the degree of dissimilarity of the clusters on different levels of the lattice (the clusters are prepared by adding a new document to the current one).", "We consider the proposed clustering method on 2 examples.", "The first one was described above, it corresponds to the case of overlapping clusters, the second one is the case when clusters are nonoverlapping and distinguishable.", "Texts of the sec-ond example are available by link 2 .", "Three clusters are naturally identified in this texts.", "The cluster distribution depending on volume are shown in Table 1 .", "We got 107 and 29 clusters for the first and the second example respectively.", "Text number Clusters number Example 1 Example 2 1 12 11 2 34 15 3 33 3 4 20 0 5 7 0 6 1 0 In fact, this method is an agglomerative hierarchical clustering with overlapping clusters.", "Hierarchical structure of clusters provides browsing of texts with similar content by layers.", "The cluster structure is represented on Figure 2 .", "The top of the structure corresponds to meaningless clusters that consist of all texts.", "Upper layer consists of clusters with large volume.", "(a) pattern structure without reduction (b) reduced pattern structure Figure 2 : The cluster structure (example 2).", "The node on the top corresponds to the \"dummy\" cluster, high level nodes correspond to the big clusters with quite general content, while the clusters at lower levels correspond to more specific news.", "Clustering based on pattern structures provides well interpretable groups.", "The upper level of hierarchy (the most representative clusters for example 1) consists of the clusters presented in Table 2 We also consider smaller clusters and select those for which adding of any object (text) dramatically reduces the M axScore {1, 2, 3, 7, 9} and {5, 6}.", "For other nested clusters significant decrease of M axScore occurred exactly with the an expansion of single clusters.", "For the second example we obtained 3 clusters that corresponds to \"true\" labeling.", "Our experiments show that pattern structure clustering allows to identify easily interpretable groups of texts and significantly improves text browsing.", "Conclusion In this paper, we presented an approach that addressed the problem of short text clustering.", "Our study shows a failure of the traditional clustering methods, such as k-means and HAC.", "We propose to use parse thickets that retain the structure of sentences instead of the term-document matrix and to build the reduced pattern structures to obtain overlapping groups of texts.", "Experimental results demonstrate considerable improvement of browsing and navigation through texts set for users.", "Introduced indices Score and ScoreLoss both improve computing efficiency and tackle the problem of redundant clusters.", "An important direction for future work is to take into account synonymy and to compare the proposed method to similar approach that use key words instead of parse thickets." ] }
{ "paper_header_number": [ "1", "3", "4", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction and related works", "Reduced pattern structure", "Experiments", "User Study", "Examples of pattern structures clustering", "Conclusion" ] }
GEM-SciDuet-train-65#paper-1141#slide-20
News Clustering motivation
A long list of search results; many groups of pages with similar content.
A long list of search results; many groups of pages with similar content.
[]
GEM-SciDuet-train-65#paper-1141#slide-21
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents, and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on the construction of pattern structures over augmented syntactic parse trees. Since such an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes discourse information into account to make the clustering results independent of how information is distributed between sentences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101 ], "paper_content_text": [ "Introduction and related works The document clustering problem was widely investigated in many applications of text mining.", "One of the most important aspects of the text clustering problem is a structural representation of texts.", "A common approach to the text representation is a vector space model (Salton et al., 1975) , where the collection or corpus of documents is represented as a term-document matrix.", "The main drawback of this model is its inability to reflect the importance of a word with respect to a document and a corpus.", "To tackle this issue the weighted scheme based on tf-idf score has been proposed.", "Also, a term-document matrix built on a large texts collection may be sparse and have a high dimensionality.", "To reduce feature space, PCA, truncated SVD (Latent Semantic Analysis), random projection and other methods have been proposed.", "To handle synonyms as similar terms the general Vector Space Model (Wong et al., 1985; Tsatsaronis and Panagiotopoulou, 2009 ), topic-based vector model (Becker and Kuropka, 2003) and enhanced topic-based vector space model (Polyvyanyy and Kuropka, 2007) were introduced.", "The most common ways to clustering term-document matrix are hierarchical clustering, k-means and also bisecting k-means.", "Graph models are also used for text representation.", "Document Index Graph (DIG) was proposed by Hammouda (2004) .", "Zamir and Etzioni (1998) use suffix tree for representing web snippets, where words are used instead of characters.", "A more sophisticated model based on n-grams was introduced in Schenker et al.", "(2007) .", "In this paper, we consider a particular application of document clustering, it is a representation of web search results that could improve navigation through relevant documents.", "Clustering snippets on salient phrases is described in (Zamir and Etzioni, 1999; Zeng et al., 2004) .", "But the most promising approach for document clustering is a conceptual clustering, because it allows to obtain overlapping clusters and to organize them into a hierarchical structure as well (Cole et al., 2003; Koester, 2006; Messai et al., 2008; Carpineto and Romano, 1996) .", "We present an approach to selecting most significant clusters based on a pattern structure (Ganter and Kuznetsov, 2001 ).", "An approach of extended representation of syntactic trees with discourse relations between them was introduced in (Galitsky et al., 2013) .", "Leveraging discourse information allows to combine news articles not only by keyword similarity but by broader topicality and writing styles as well.", "The paper is organized as follows.", "Section 2 introduces a parse thicket and its simplified representation.", "In section 3 we consider approach to clustering web snippets and discuss efficiency issues.", "The illustrative example is presented in section 4.", "Finally, we conclude the paper and discuss some research perspectives.", "2 Clustering based on pattern structure Parse Thickets Parse thicket (Galitsky et al., 2013) is defined as a set of parse trees for each sentence augmented with a number of 
arcs reflecting inter-sentence relations.", "In the present work we use parse thickets based on a limited set of relations described in (Galitsky et al., 2013): coreferences (Lee et al., 2012), rhetorical structure relations (Mann and Thompson, 1992) and communicative actions (Searle, 1969).", "Pattern Structure with Parse Thickets simplification To apply parse thickets to text clustering tasks we use pattern structures (Ganter and Kuznetsov, 2001).", "A pattern structure is defined as a triple $(G, (D, \sqcap), \delta)$, where $G$ is a set of objects, $(D, \sqcap)$ is a complete meet-semilattice of descriptions, and $\delta : G \to D$ is a mapping that assigns a description to each object.", "The Galois connection between sets of objects and their descriptions is defined as follows: $A^{\diamond} := \sqcap_{g \in A}\, \delta(g)$ for $A \subseteq G$, and $d^{\diamond} := \{g \in G \mid d \sqsubseteq \delta(g)\}$ for $d \in D$.", "A pair $\langle A, d \rangle$ for which $A^{\diamond} = d$ and $d^{\diamond} = A$ is called a pattern concept.", "In our case, $A$ is a set of news items and $d$ is their shared content.", "We use the AddIntent algorithm (van der Merwe et al., 2004) to construct the pattern structure.", "At each step, it takes the parse thicket (or its chunks) of one input web snippet and plugs it into the pattern structure.", "A pattern structure has several drawbacks.", "First, the size of the structure can grow exponentially in the input data.", "Moreover, the construction of a pattern structure can be computationally intensive.", "To address these performance issues, we reduce the set of all intersections between the members of our training set (maximal common sub-parse thickets).", "Reduced pattern structure A pattern structure constructed from a collection of short texts usually has a huge number of concepts.", "To reduce the computational costs and improve the interpretability of pattern concepts, we introduce several metrics, which are described below.", "Average and Maximal Pattern Score The average and maximal pattern score indices are meant to assess how meaningful the common description of the texts in a concept is.", "The more the text fragments differ from each other, the smaller their shared content is.", "Thus, the meaningfulness criterion for a group of texts is $\mathrm{Score}_{max}\langle A, d \rangle := \max_{chunk \in d} \mathrm{Score}(chunk)$ and $\mathrm{Score}_{avg}\langle A, d \rangle := \frac{1}{|d|} \sum_{chunk \in d} \mathrm{Score}(chunk)$, where the score function $\mathrm{Score}(chunk)$ rates chunks on the basis of their part-of-speech composition.", "Average and Minimal Pattern Score Loss The average and minimal pattern score loss describe how much of the information contained in a text is lost in the description with respect to the source texts.", "The average pattern score loss expresses the average loss of shared content over all texts in a concept, while the minimal pattern score loss represents the minimal loss of content among all texts included in a concept: $\mathrm{ScoreLoss}_{min}\langle A, d \rangle := \min_{g \in A} \mathrm{Score}_{max}\langle g, \delta(g) \setminus d \rangle$ and $\mathrm{ScoreLoss}_{avg}\langle A, d \rangle := \frac{1}{|A|} \sum_{g \in A} \mathrm{Score}_{max}\langle g, \delta(g) \setminus d \rangle$, where $\delta(g) \setminus d$ is the part of the description of $g$ not shared with $d$.", "We propose to use a reduced pattern structure.", "There are two options in our approach.", "The first one is the construction of a lower semilattice.", "This is similar to the iceberg concept lattice approach (Stumme et al., 2002).", "The second one is the construction of concepts that differ sufficiently from each other.", "Thus, for arbitrary sets of texts $A_1$ and $A_2$, corresponding descriptions $d_1$ and $d_2$, and a candidate pattern concept $\langle A_1 \cup A_2, d_1 \sqcap d_2 \rangle$, the criterion has the following form: $\mathrm{Score}_{max}\langle A_1 \cup A_2, d_1 \sqcap d_2 \rangle \ge \theta$; $\mathrm{Score}_{*}\langle A_1 \cup A_2, d_1 \sqcap d_2 \rangle \ge \mu_1 \min\{\mathrm{Score}_{*}\langle A_1, d_1 \rangle, \mathrm{Score}_{*}\langle A_2, d_2 \rangle\}$; $\mathrm{Score}_{*}\langle A_1 \cup A_2, d_1 \sqcap d_2 \rangle \le \mu_2 \max\{\mathrm{Score}_{*}\langle A_1, d_1 \rangle, \mathrm{Score}_{*}\langle A_2, d_2 \rangle\}$, where $\mathrm{Score}_{*}$ denotes either $\mathrm{Score}_{max}$ or $\mathrm{Score}_{avg}$.", "The first constraint provides the condition for the construction of 
concepts with meaningful content, while the two other constraints ensure that we do not use concepts with similar content.", "Experiments In this section we consider the proposed clustering method on two examples.", "The first one corresponds to the case when clusters are overlapping and distinguishable; the second one is the case of non-overlapping clusters.", "User Study In some cases it is quite difficult to identify disjoint classes for a text collection.", "To confirm this, we conducted experiments similar to the experimental scheme described in (Zeng et al., 2004).", "We took web snippets obtained by querying the Bing search engine API and asked a group of four assessors to label ground truth for them.", "We performed news queries related to the world's most pressing news (for example, \"fighting Ebola with nanoparticles\", \"turning brown eyes blue\", \"F1 winners\", \"read facial expressions through webcam\", \"2015 ACM awards winners\") to make labeling the data easier for the assessors.", "In most cases, according to the assessors, it was difficult to determine partitions, while overlapping clusters naturally stood out.", "As a result, in the case of non-overlapping clusters we usually got a small number of large classes or a sufficiently large number of classes consisting of 1-2 snippets.", "Moreover, for the same set of snippets we obtained quite different partitions.", "We used the Adjusted Mutual Information score to estimate the pairwise agreement of the non-overlapping clusters identified by the assessors.", "To demonstrate the failure of the conventional clustering approach, we consider 12 short texts on the news query \"The Ebola epidemic\".", "The texts are available at https://github.com/anonymously1/CNS2015/blob/master/NewsSet1.", "The assessors identified quite different non-overlapping clusters.", "The pairwise Adjusted Mutual Information score was in the range of 0.03 to 0.51.", "Next, we compared the partitions to the results of the following clustering methods: k-means clustering based on vectors obtained by truncated SVD (retaining at least 80% of the information); hierarchical agglomerative clustering (HAC, complete and average linkage) of the term-document matrix with Manhattan distance and cosine similarity; and hierarchical agglomerative clustering (both linkages) of the tf-idf matrix with the Euclidean metric.", "In other words, we turned an unsupervised learning problem into a supervised one.", "The accuracy scores for the different clustering methods are shown in Figure 1.", "The curves correspond to the different partitions identified by people.", "As mentioned earlier, we obtained inconsistent \"true\" labelings.", "Figure 1: Classification accuracy of clustering results against \"true\" clustering (example 1); the four lines are different news labelings made by people, and the y-axis value for a fixed x-value is the classification accuracy of a clustering method under each of the four labelings.", "Consequently, the measured accuracy of a clustering depends on which evaluator's labeling is taken as ground truth.", "This approach does not allow one to determine the best partition, because a partition itself is not natural for the given news set.", "For example, consider the clusters obtained by HAC based on cosine similarity (a trade-off between high accuracy and its low variation): first cluster: 1, 2, 7, 9; second cluster: 3, 11, 12; third cluster: 4, 8; fourth cluster: 5, 6; fifth cluster: 10.", "Nearly identical news items (4, 8, 12 and 9, 10) end up in different clusters.", "News items 10 and 11 should simultaneously belong to several 
clusters (the 1st and 5th, and the 2nd and 3rd, respectively).", "Examples of pattern structures clustering To construct a hierarchy of overlapping clusters by the proposed method, we use the following constraints: θ = 0.25, µ1 = 0.1 and µ2 = 0.9.", "The value of θ limits the depth of the pattern structure (the maximal number of texts in a cluster); put differently, the higher θ is, the closer the general intents of the clusters should be.", "µ1 and µ2 determine the degree of dissimilarity of the clusters on different levels of the lattice (the clusters are obtained by adding a new document to the current one).", "We consider the proposed clustering method on two examples.", "The first one was described above and corresponds to the case of overlapping clusters; the second one is the case where clusters are non-overlapping and distinguishable.", "The texts of the second example are available via link 2.", "Three clusters are naturally identified in these texts.", "The distribution of clusters by volume is shown in Table 1.", "We got 107 and 29 clusters for the first and the second example, respectively.", "Table 1. Number of clusters by cluster size (texts per cluster — Example 1 / Example 2): 1 — 12 / 11; 2 — 34 / 15; 3 — 33 / 3; 4 — 20 / 0; 5 — 7 / 0; 6 — 1 / 0.", "In fact, this method is an agglomerative hierarchical clustering with overlapping clusters.", "The hierarchical structure of clusters supports browsing texts with similar content layer by layer.", "The cluster structure is shown in Figure 2.", "The top of the structure corresponds to a meaningless cluster that consists of all texts.", "The upper layer consists of clusters with large volume.", "Figure 2: The cluster structure (example 2); (a) the pattern structure without reduction, (b) the reduced pattern structure.", "The node at the top corresponds to the \"dummy\" cluster; high-level nodes correspond to the big clusters with quite general content, while the clusters at lower levels correspond to more specific news.", "Clustering based on pattern structures yields well-interpretable groups.", "The upper level of the hierarchy (the most representative clusters for example 1) consists of the clusters presented in Table 2.", "We also consider smaller clusters and select those for which adding any object (text) dramatically reduces MaxScore: {1, 2, 3, 7, 9} and {5, 6}.", "For other nested clusters, a significant decrease of MaxScore occurred exactly upon the expansion of singleton clusters.", "For the second example we obtained three clusters, which correspond to the \"true\" labeling.", "Our experiments show that pattern structure clustering allows us to identify easily interpretable groups of texts and significantly improves text browsing.", "Conclusion In this paper, we presented an approach that addresses the problem of short text clustering.", "Our study shows the failure of traditional clustering methods, such as k-means and HAC.", "We propose to use parse thickets, which retain the structure of sentences, instead of the term-document matrix, and to build reduced pattern structures to obtain overlapping groups of texts.", "Experimental results demonstrate a considerable improvement of browsing and navigation through the text set for users.", "The introduced indices Score and ScoreLoss both improve computational efficiency and tackle the problem of redundant clusters.", "An important direction for future work is to take synonymy into account and to compare the proposed method to a similar approach that uses keywords instead of parse thickets." ] }
{ "paper_header_number": [ "1", "3", "4", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction and related works", "Reduced pattern structure", "Experiments", "User Study", "Examples of pattern structures clustering", "Conclusion" ] }
GEM-SciDuet-train-65#paper-1141#slide-21
User Study non overlapping partition
web snippets on the world's most pressing news: F1 winners, fighting Ebola with nanoparticles, 2015 ACM awards winners, read facial expressions through webcam, turning brown eyes blue; inconsistency of human-labeled partitions: low values of the pairwise Adjusted Mutual Information score between human-labeled partitions (0.03 <= MIadj <= 0.51)
web snippets on the world's most pressing news: F1 winners, fighting Ebola with nanoparticles, 2015 ACM awards winners, read facial expressions through webcam, turning brown eyes blue; inconsistency of human-labeled partitions: low values of the pairwise Adjusted Mutual Information score between human-labeled partitions (0.03 <= MIadj <= 0.51)
[]
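The user-study slide above reports pairwise agreement of assessor partitions via Adjusted Mutual Information. A short sketch of that computation, assuming each assessor's partition is encoded as one cluster label per snippet (the toy labels below are illustrative, not the study's actual annotations):

```python
from itertools import combinations
from sklearn.metrics import adjusted_mutual_info_score

# One cluster label per snippet (12 snippets), for each assessor.
partitions = {
    "assessor_1": [0, 0, 1, 2, 3, 3, 0, 2, 0, 4, 1, 1],
    "assessor_2": [0, 0, 1, 1, 2, 2, 0, 1, 0, 0, 1, 1],
    "assessor_3": [0, 1, 1, 2, 2, 2, 0, 2, 1, 3, 1, 1],
}

for (a, pa), (b, pb) in combinations(partitions.items(), 2):
    print(f"{a} vs {b}: AMI = {adjusted_mutual_info_score(pa, pb):.2f}")
```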
GEM-SciDuet-train-65#paper-1141#slide-22
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on pattern structures construction on augmented syntactic parse trees. Since an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes into account discourse information to make clustering results independent of how information is distributed between sentences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101 ], "paper_content_text": [ "Introduction and related works The document clustering problem was widely investigated in many applications of text mining.", "One of the most important aspects of the text clustering problem is a structural representation of texts.", "A common approach to the text representation is a vector space model (Salton et al., 1975) , where the collection or corpus of documents is represented as a term-document matrix.", "The main drawback of this model is its inability to reflect the importance of a word with respect to a document and a corpus.", "To tackle this issue the weighted scheme based on tf-idf score has been proposed.", "Also, a term-document matrix built on a large texts collection may be sparse and have a high dimensionality.", "To reduce feature space, PCA, truncated SVD (Latent Semantic Analysis), random projection and other methods have been proposed.", "To handle synonyms as similar terms the general Vector Space Model (Wong et al., 1985; Tsatsaronis and Panagiotopoulou, 2009 ), topic-based vector model (Becker and Kuropka, 2003) and enhanced topic-based vector space model (Polyvyanyy and Kuropka, 2007) were introduced.", "The most common ways to clustering term-document matrix are hierarchical clustering, k-means and also bisecting k-means.", "Graph models are also used for text representation.", "Document Index Graph (DIG) was proposed by Hammouda (2004) .", "Zamir and Etzioni (1998) use suffix tree for representing web snippets, where words are used instead of characters.", "A more sophisticated model based on n-grams was introduced in Schenker et al.", "(2007) .", "In this paper, we consider a particular application of document clustering, it is a representation of web search results that could improve navigation through relevant documents.", "Clustering snippets on salient phrases is described in (Zamir and Etzioni, 1999; Zeng et al., 2004) .", "But the most promising approach for document clustering is a conceptual clustering, because it allows to obtain overlapping clusters and to organize them into a hierarchical structure as well (Cole et al., 2003; Koester, 2006; Messai et al., 2008; Carpineto and Romano, 1996) .", "We present an approach to selecting most significant clusters based on a pattern structure (Ganter and Kuznetsov, 2001 ).", "An approach of extended representation of syntactic trees with discourse relations between them was introduced in (Galitsky et al., 2013) .", "Leveraging discourse information allows to combine news articles not only by keyword similarity but by broader topicality and writing styles as well.", "The paper is organized as follows.", "Section 2 introduces a parse thicket and its simplified representation.", "In section 3 we consider approach to clustering web snippets and discuss efficiency issues.", "The illustrative example is presented in section 4.", "Finally, we conclude the paper and discuss some research perspectives.", "2 Clustering based on pattern structure Parse Thickets Parse thicket (Galitsky et al., 2013) is defined as a set of parse trees for each sentence augmented with a number of 
arcs, reflecting inter-sentence relations.", "In present work we use parse thickets based on limited set of relations described in (Galitsky et al., 2013) : coreferences (Lee et al., 2012) , Rhetoric structure relations (Mann and Thompson, 1992) and Communicative Actions (Searle, 1969) .", "Pattern Structure with Parse Thickets simplification To apply parse thickets to text clustering tasks we use pattern structures (Ganter and Kuznetsov, 2001 ) that is defined as a triple (G, (D, ) , δ), where G is a set of objects, (D, ) is a complete meet-semilattice of descriptions and δ : G → D is a mapping an object to a description.", "The Galois connection between set of objects and their descriptions is also defined as follows: A := g ∈ A δ (g) d := {g ∈ G|d δ (g)} for A ⊆ G, for d ∈ D A pair A, d for which A = d and d = A is called a pattern concept.", "In our case, A is the set of news, d is their shared content.", "We use AddIntent algorithm (van der Merwe et al., 2004) to construct pattern structure.", "On each step, it takes the parse thicket (or chunks) of a web snippet of the input and plugs it into the pattern structure.", "A pattern structure has several drawbacks.", "Firstly, the size of the structure could grow exponentially on the input data.", "More than that, construction of a pattern structure could be computationally intensive.", "To address the performance issues, we reduce the set of all intersections between the members of our training set (maximal common sub-parse thickets).", "Reduced pattern structure Pattern structure constructed from a collection of short texts usually has a huge number of concepts.", "To reduce the computational costs and improve the interpretability of pattern concepts we introduce several metrics, that are described below.", "Average and Maximal Pattern Score The average and maximal pattern score indices are meant to assess how meaningful the common description of texts in the concept is.", "The higher the difference of text fragments from each other, the lower their shared content is.", "Thus, meaningfulness criterion of the group of texts is Score max A, d := max chunk∈d Score (chunk) Score avg A, d := 1 |d| chunk∈d Score (chunk) The score function Score (chunk) estimates chunks on the basis of parts of speech composition.", "Average and Minimal Pattern Score loss Average and minimal pattern score loss describe how much information contained in text is lost in the description with respect to the source texts.", "Average pattern score loss expresses the average loss of shared content for all texts in a concept, while minimal pattern score loss represents a minimal loss of content among all texts included in a concept.", "ScoreLoss min A, d := min g∈A Score max g, d g ScoreLoss avg A, d := 1 |d| g∈A Score max g, d g We propose to use a reduced pattern structure.", "There are two options in our approach.", "The first one -construction of lower semilattice.", "This is similar to iceberg concept lattice approach (Stumme et al., 2002) .", "The second option -construction of concepts which are different from each other.", "Thus, for arbitrary sets of texts A 1 and A 2 , corresponding descriptions d 1 and d 2 and candidate for a pattern concept A 1 ∪ A 2 , d 1 ∩ d 2 criterion has the following form Score max A 1 ∪ A 2 , d 1 ∩ d 2 ≥ θ Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≥ µ 1 min {Score * A 1 , d 1 , Score * A 2 , d 2 } Score * A 1 ∪ A 2 , d 1 ∩ d 2 ≤ µ 2 max {Score * A 1 , d 1 , Score * A 2 , d 2 } The first constraint provides the condition for the construction of 
concepts with meaningful content, while two other constrains ensure that we do not use concepts with similar content.", "Experiments In this section we consider the proposed clustering method on 2 examples.", "The first one corresponds to the case when clusters are overlapping and distinguishable, the second one is the case of non-overlapping clusters.", "User Study In some cases it is quite difficult to identify disjoint classes for a text collection.", "To confirm this, we conducted experiments similar to the experiment scheme described in (Zeng et al., 2004) .", "We took web snippets obtained by querying the Bing search engine API and asked a group of four assessors to label ground truth for them.", "We performed news queries related to world's most pressing news (for example, \"fighting Ebola with nanoparticles\", \"turning brown eyes blue\", \"F1 winners\", \"read facial expressions through webcam\", \"2015 ACM awards winners\") to make labeling of data easier for the assessors.", "In most cases, according to the assessors, it was difficult to determine partitions, while overlapping clusters naturally stood out.", "As a result, in the case of non-overlapping clusters we usually got a small number of large classes or a sufficiently large number of classes consisting of 1-2 snippets.", "More than that, for the same set of snippets we obtained quite different partitions.", "We used the Adjusted Mutual Information score to estimate pairwise agreement of nonoverlapping clusters, which were identified by the people.", "To demonstrate the failure of the conventional clustering approach we consider 12 short texts on news query \"The Ebola epidemic\".", "Tests are available by link 1 .", "Assessors identify quite different nonoverlapping clusters.", "The pairwise Adjusted Mutual Information score was in the range of 0,03 to 0,51.", "Next, we compared partitions to clustering results of the following clustering methods: k-means clustering based on vectors obtained by truncated SVD (retaining at least 80% of the information), hierarchical agglomerative clustering (HAC), complete and average linkage of the term-document matrix with Manhattan distance and cosine similarity, hierarchical agglomerative clustering (both linkage) of tf-idf matrix with Euclidean metric.", "In other words, we turned an unsupervised learning problem into the supervised one.", "The accuracy score for different clustering methods is represented in Figure 1 .", "Curves correspond to the different partitions that have been identified by people.", "As it was mentioned earlier, we obtain incon-1 https://github.com/anonymously1/ CNS2015/blob/master/NewsSet1 Figure 1 : Classification accuracy of clustering results and \"true\" clustering (example 1).", "Four lines are different news labeling made by people.", "The y-axis values for fixed x-value correspond to classification accuracy of a clustering method for each of the four labeling sistent \"true\" labeling.", "Thereby the accuracy of clustering differs from labeling made by evaluators.", "This approach doesn't allow to determine the best partition, because a partition itself is not natural for the given news set.", "For example, consider clusters obtained by HAC based on cosine similarity (trade-off between high accuracy and its low variation): 1-st cluster: 1,2,7,9; 2-nd cluster: 3,11,12; 3-rd cluster: 4,8; 4-th cluster: 5,6; 5-th cluster: 10.", "Almost the same news 4, 8, 12 and 9, 10 are in the different clusters.", "News 10, 11 should be simultaneously in several 
clusters (1-st, 5-th and 2-nd,3-rd respectively).", "Examples of pattern structures clustering To construct hierarchy of overlapping clusters by the proposed methods, we use the following constraints: θ = 0, 25, µ 1 = 0, 1 and µ 2 = 0, 9.", "The value of θ limits the depth of the pattern structure (the maximal number of texts in a cluster), put differently, the higher θ, the closer should be the general intent of clusters.", "µ 1 and µ 2 determine the degree of dissimilarity of the clusters on different levels of the lattice (the clusters are prepared by adding a new document to the current one).", "We consider the proposed clustering method on 2 examples.", "The first one was described above, it corresponds to the case of overlapping clusters, the second one is the case when clusters are nonoverlapping and distinguishable.", "Texts of the sec-ond example are available by link 2 .", "Three clusters are naturally identified in this texts.", "The cluster distribution depending on volume are shown in Table 1 .", "We got 107 and 29 clusters for the first and the second example respectively.", "Text number Clusters number Example 1 Example 2 1 12 11 2 34 15 3 33 3 4 20 0 5 7 0 6 1 0 In fact, this method is an agglomerative hierarchical clustering with overlapping clusters.", "Hierarchical structure of clusters provides browsing of texts with similar content by layers.", "The cluster structure is represented on Figure 2 .", "The top of the structure corresponds to meaningless clusters that consist of all texts.", "Upper layer consists of clusters with large volume.", "(a) pattern structure without reduction (b) reduced pattern structure Figure 2 : The cluster structure (example 2).", "The node on the top corresponds to the \"dummy\" cluster, high level nodes correspond to the big clusters with quite general content, while the clusters at lower levels correspond to more specific news.", "Clustering based on pattern structures provides well interpretable groups.", "The upper level of hierarchy (the most representative clusters for example 1) consists of the clusters presented in Table 2 We also consider smaller clusters and select those for which adding of any object (text) dramatically reduces the M axScore {1, 2, 3, 7, 9} and {5, 6}.", "For other nested clusters significant decrease of M axScore occurred exactly with the an expansion of single clusters.", "For the second example we obtained 3 clusters that corresponds to \"true\" labeling.", "Our experiments show that pattern structure clustering allows to identify easily interpretable groups of texts and significantly improves text browsing.", "Conclusion In this paper, we presented an approach that addressed the problem of short text clustering.", "Our study shows a failure of the traditional clustering methods, such as k-means and HAC.", "We propose to use parse thickets that retain the structure of sentences instead of the term-document matrix and to build the reduced pattern structures to obtain overlapping groups of texts.", "Experimental results demonstrate considerable improvement of browsing and navigation through texts set for users.", "Introduced indices Score and ScoreLoss both improve computing efficiency and tackle the problem of redundant clusters.", "An important direction for future work is to take into account synonymy and to compare the proposed method to similar approach that use key words instead of parse thickets." ] }
{ "paper_header_number": [ "1", "3", "4", "4.1", "4.2", "5" ], "paper_header_content": [ "Introduction and related works", "Reduced pattern structure", "Experiments", "User Study", "Examples of pattern structures clustering", "Conclusion" ] }
GEM-SciDuet-train-65#paper-1141#slide-22
Example The Ebola News Set
Per-text statistics table: Text ID | words | symbols | sentences | quoted speech | reported speech
Per-text statistics table: Text ID | words | symbols | sentences | quoted speech | reported speech
[]
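The slide above preserves only the header row of the per-text statistics table for the Ebola news set. A rough sketch of how such counts could be computed; the regular expressions are my own heuristics, not the authors' counting rules:

```python
import re

def text_stats(text_id, text):
    # Heuristic counts matching the slide's columns (reported speech omitted:
    # detecting it reliably needs a parser rather than a regex).
    return {
        "text_id": text_id,
        "words": len(re.findall(r"\w+", text)),
        "symbols": len(text),
        "sentences": len(re.findall(r"[.!?]+(?:\s|$)", text)),
        "quoted_speech": len(re.findall(r'"[^"]+"', text)),
    }

print(text_stats(1, 'Officials said "the epidemic is slowing". Aid arrived.'))
```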
GEM-SciDuet-train-65#paper-1141#slide-23
1141
News clustering approach based on discourse text structure
GEM-SciDuet-train-65#paper-1141#slide-23
Accuracy of non overlapping clustering methods
Accuracy of conventional clustering methods in the case of overlapping text groups: low (in most cases); greatly depends on which human-labeled partition is taken as ground truth. [Accuracy table per human-labeled partition with columns Method, Linkage, Distance; linkages: average, complete; distances: cityblock, cosine, euclidean]
Accuracy of conventional clustering methods in the case of overlapping text groups: low (in most cases); greatly depends on which human-labeled partition is taken as ground truth. [Accuracy table per human-labeled partition with columns Method, Linkage, Distance; linkages: average, complete; distances: cityblock, cosine, euclidean]
[]
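The baseline pipelines behind this accuracy comparison (k-means on truncated-SVD vectors, and HAC with various linkages and distances) can be sketched as follows. The snippet texts and cluster counts are placeholders; the 80%-variance threshold follows the paper, while everything else is a reasonable default rather than the authors' setup:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

texts = [
    "ebola epidemic spreads in west africa",
    "nanoparticles proposed to fight ebola",
    "f1 season winners announced",
    "acm names 2015 award winners",
]  # placeholder snippets

X = TfidfVectorizer().fit_transform(texts)

# k-means on truncated-SVD vectors retaining at least 80% of the variance
svd = TruncatedSVD(n_components=min(X.shape) - 1).fit(X)
k = int(np.searchsorted(np.cumsum(svd.explained_variance_ratio_), 0.8)) + 1
km_labels = KMeans(n_clusters=2, n_init=10).fit_predict(svd.transform(X)[:, :k])

# HAC (average linkage, cosine distance) on the dense term-document matrix
Z = linkage(X.toarray(), method="average", metric="cosine")
hac_labels = fcluster(Z, t=2, criterion="maxclust")
print(km_labels, hac_labels)
```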
GEM-SciDuet-train-65#paper-1141#slide-24
1141
News clustering approach based on discourse text structure
GEM-SciDuet-train-65#paper-1141#slide-24
Accuracy of non-overlapping clustering methods
Accuracy of conventional clustering methods for 4 human-labeled partitions
Accuracy of conventional clustering methods for 4 human-labeled partitions
[]
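The user study in this row's source paper compares k-means on truncated-SVD vectors and several HAC variants against four human labelings, scoring agreement with the Adjusted Mutual Information (AMI). Below is a minimal sketch of that comparison, assuming scikit-learn >= 1.2 (for the metric keyword of AgglomerativeClustering) and using toy stand-ins for the twelve "Ebola epidemic" snippets and the assessor labelings; it reports AMI throughout rather than the paper's label-aligned classification accuracy.

```python
# Toy reconstruction of the baseline comparison: human-vs-human agreement,
# then k-means on truncated-SVD vectors and HAC against each labeling.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_mutual_info_score
from sklearn.metrics.pairwise import cosine_distances

snippets = [                        # placeholders, not the original snippets
    "ebola vaccine trial begins", "nanoparticles target ebola virus",
    "who reports new ebola cases", "ebola aid workers return home",
    "experimental ebola drug approved", "ebola outbreak death toll rises",
]
labelings = [np.array([0, 0, 1, 2, 0, 1]),    # assessor 1 (placeholder)
             np.array([0, 1, 2, 2, 1, 2])]    # assessor 2 (placeholder)

# Pairwise agreement of the human partitions (the paper reports 0.03-0.51).
print(adjusted_mutual_info_score(labelings[0], labelings[1]))

X = TfidfVectorizer().fit_transform(snippets)

# k-means on truncated-SVD vectors retaining at least 80% of the variance.
probe = TruncatedSVD(n_components=min(X.shape) - 1, random_state=0).fit(X)
k = int(np.searchsorted(np.cumsum(probe.explained_variance_ratio_), 0.80)) + 1
k = min(max(k, 1), min(X.shape) - 1)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    TruncatedSVD(n_components=k, random_state=0).fit_transform(X))

# One HAC variant from the comparison: average linkage, cosine distance.
hac = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                              linkage="average").fit_predict(cosine_distances(X))

for truth in labelings:             # agreement of each method with each labeling
    print(adjusted_mutual_info_score(truth, km),
          adjusted_mutual_info_score(truth, hac))
```

Because the human partitions themselves disagree, no flat clustering can match all of them at once, which is the argument the paper makes for overlapping clusters.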
GEM-SciDuet-train-65#paper-1141#slide-25
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on pattern structures construction on augmented syntactic parse trees. Since an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes into account discourse information to make clustering results independent of how information is distributed between sentences.
GEM-SciDuet-train-65#paper-1141#slide-25
An example of pattern structures clustering: clusters with maximal score
reduced pattern structure with 1 and INNP-* NNP-sierra NNP-leone J,
reduced pattern structure with 1 and INNP-* NNP-sierra NNP-leone J,
[]
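The clusters with maximal score shown on this slide come from the reduced pattern structure built under the constraints θ = 0.25, µ1 = 0.1, µ2 = 0.9 quoted in the experiments above. A hedged sketch of the acceptance test for a candidate pattern concept (A1 ∪ A2, d1 ∩ d2) follows; here a description is simplified to a set of POS-tagged chunks intersected by exact equality (rather than maximal common sub-parse thickets), and chunk_score is a stand-in for the paper's part-of-speech-based Score, not its exact definition.

```python
# Acceptance test for merging two pattern concepts into (A1 ∪ A2, d1 ∩ d2).
THETA, MU1, MU2 = 0.25, 0.10, 0.90
CONTENT_TAGS = {"NN", "NNP", "NNS", "VB", "VBD", "JJ"}   # assumed tag set

def chunk_score(chunk):
    """Stand-in Score: share of content-bearing POS tags in the chunk."""
    tags = [tag for tag, _tok in chunk]
    return sum(tag in CONTENT_TAGS for tag in tags) / max(len(tags), 1)

def score_max(d):
    return max((chunk_score(c) for c in d), default=0.0)

def accept_merge(d1, d2, score=score_max):
    """Return (keep?, shared description) for the candidate concept."""
    shared = d1 & d2                          # d1 ∩ d2: the shared content
    s, s1, s2 = score(shared), score(d1), score(d2)
    keep = (score_max(shared) >= THETA        # shared content is meaningful
            and s >= MU1 * min(s1, s2)        # not negligible vs. the parents
            and s <= MU2 * max(s1, s2))       # not a near-copy of a parent
    return keep, shared

shared_chunk = (("NNP", "ebola"), ("IN", "in"), ("NNP", "liberia"))
d1 = {shared_chunk, (("VB", "fight"), ("NNS", "nanoparticles"))}
d2 = {shared_chunk, (("JJ", "new"), ("NNS", "cases"))}
print(accept_merge(d1, d2))                   # -> (True, {shared_chunk})
```

In the full method this test would gate each AddIntent-style insertion of a new snippet, so rejected merges are what keep the lattice small and the surviving clusters distinct.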
GEM-SciDuet-train-65#paper-1141#slide-26
1141
News clustering approach based on discourse text structure
A web search engine usually returns a long list of documents and it may be difficult for users to navigate through this collection and find the most relevant ones. We present an approach to post-retrieval snippet clustering based on pattern structures construction on augmented syntactic parse trees. Since an algorithm may be too slow for a typical collection of snippets, we propose a reduction method that allows us to construct a reduced pattern structure and make it scalable. Our algorithm takes into account discourse information to make clustering results independent of how information is distributed between sentences.
GEM-SciDuet-train-65#paper-1141#slide-26
Conclusion
Short text clustering problem; a failure of the traditional clustering methods; Parse Thickets as a text model; text similarity based on pattern structures; reduced pattern structures with constraints; Score and ScoreLoss to improve efficiency and to remove redundant clusters; improvement of browsing and navigation through text sets for users
Short text clustering problem; a failure of the traditional clustering methods; Parse Thickets as a text model; text similarity based on pattern structures; reduced pattern structures with constraints; Score and ScoreLoss to improve efficiency and to remove redundant clusters; improvement of browsing and navigation through text sets for users
[]
GEM-SciDuet-train-66#paper-1142#slide-0
1142
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords
With the advent of informal electronic communications such as social media, colloquial languages that were historically unwritten are being written for the first time in heavily code-switched environments. We present a method for inducing portions of translation lexicons through the use of expert knowledge in these settings where there are approximately zero resources available other than a language informant, potentially not even large amounts of monolingual data. We investigate inducing a Moroccan Darija-English translation lexicon via French loanwords bridging into English and find that a useful lexicon is induced for human-assisted translation and statistical machine translation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94 ], "paper_content_text": [ "Introduction With the explosive growth of informal electronic communications such as email, social media, web comments, etc., colloquial languages that were historically unwritten are starting to be written for the first time.", "For these languages, there are extremely limited (approximately zero) resources available, not even large amounts of monolingual text data or possibly not even small amounts of monolingual text data.", "Even when audio resources are available, difficulties arise when converting sound to text (Tratz et al., 2013; Robinson and Gadelii, 2003) .", "Moreover, the text data that can be obtained often has non-standard spellings and substantial code-switching with other traditionally written languages (Tratz et al., 2013) .", "In this paper we present a method for the acquisition of translation lexicons via loanwords and expert knowledge that requires zero resources of the borrowing language.", "Many historically unwritten languages borrow from highly resourced languages.", "Also, it is often feasible to locate a language expert to find out how sounds in these languages would be rendered if they were to be written as many of them are beginning to be written in social media, etc.", "We thus expect the general method to be applicable for multiple historically unwritten languages.", "In this paper we investigate inducing a Moroccan Darija-English translation lexicon via borrowed French words.", "Moroccan Darija is an historically unwritten dialect of Arabic spoken by millions but lacking in standardization and linguistic resources (Tratz et al., 2013) .", "Moroccan Darija is known to borrow many words from French, one of the most highly resourced languages in the world.", "By mapping Moroccan Darija-French borrowings to their donor French words, we can rapidly create lexical resources for portions of Moroccan Darija vocabulary for which no resources currently exist.", "For example, we could use one of many bilingual French-English dictionaries to bridge into English and create a Moroccan Darija-English translation lexicon that can be used to assist professional translation of Moroccan Darija into English and to assist with construction of Moroccan Darija-English Machine Translation (MT) systems.", "The rest of this paper is structured as follows.", "Section 2 summarizes related work; section 3 explains our method; section 4 discusses experimental results of applying our method to the case of building a Moroccan Darija-English translation lexicon; and section 5 concludes.", "Related Work Translation lexicons are a core resource used for multilingual processing of languages.", "Manual creation of translation lexicons by lexicographers is time-consuming and expensive.", "There are more than 7000 languages in the world, many of which are historically unwritten (Lewis et al., 2015) .", "For a relatively small number of these languages there are extensive resources available that have been manually created.", "It has been noted by others (Mann and Yarowsky, 2001; Schafer and Yarowsky, 2002) that languages are organized into families and that using cognates between 
sister languages can help rapidly create translation lexicons for lower-resourced languages.", "For example, the methods in (Mann and Yarowsky, 2001) are able to detect that English kilograms maps to Portuguese quilogramas via bridge Spanish kilogramos.", "This general idea has been worked on extensively in the context of cognates detection, with 'cognate' typically re-defined to include loanwords as well as true cognates.", "The methods use monolingual data at a minimum and many signals such as orthographic similarity, phonetic similarity, contextual similarity, temporal similarity, frequency similarity, burstiness similarity, and topic similarity (Bloodgood and Strauss, 2017; Irvine and Callison-Burch, 2013; Kondrak et al., 2003; Schafer and Yarowsky, 2002; Mann and Yarowsky, 2001) .", "Inducing translations via loanwords was specifically targeted in .", "While some of these methods don't require bilingual resources, with the possible exception of small bilingual seed dictionaries, they do at a minimum require monolingual text data in the languages to be modeled and sometimes have specific requirements on the monolingual text data such as having text coming from the same time period for each of the languages being modeled.", "For colloquial languages that were historically unwritten, but that are now starting to be written with the advent of social media and web comments, there are often extremely limited resources of any type available, not even large amounts of monolingual text data.", "Moreover, the written data that can be obtained often has non-standard spellings and code-switching with other traditionally written languages.", "Often the code-switching occurs within words whereby the base is borrowed and the affixes are not borrowed, analogous to the multi-language categories \"V\" and \"N\" from (Mericli and Bloodgood, 2012) .", "The data available for historically unwritten languages, and especially the lack thereof, is not suitable for previously developed cognates detection methods that operate as discussed above.", "In the next section we present a method for translation lexicon induction via loanwords that uses expert knowledge and requires zero resources from the borrowing language other than a language informant.", "Method Our method is to take word pronunciations from the donor language we are using and convert them to how they would be rendered in the borrowing language if they were to be borrowed.", "These are our candidate loanwords.", "There are three possible cases for a given generated candidate loanword string: true match string occurs in borrowing language and is a loanword from the donor language; false match string occurs in borrowing language by coincidence but it's not a loanword from the donor language; no match string does not occur in borrowing language.", "For the case of inducing a Moroccan Darija-English translation lexicon via French we start with a French-English bilingual dictionary and take all the French pronunciations in IPA (International Phonetic Alphabet) 1 and convert them to how they would be rendered in Arabic script.", "For this we created a multiple-step transliteration process: Step 1 Break pronunciation into syllables.", "Step 2 Convert each IPA syllable to a string in modified Buckwalter transliteration 2 , which supports a one-to-one mapping to Arabic script.", "Step 3 Convert each syllable's string in modified Buckwalter transliteration to Arabic script.", "Step 4 Merge the resulting Arabic script strings for each syllable to generate a 
candidate loanword string.", "(Footnote 1: https://en.wikipedia.org/wiki/International_Phonetic_Alphabet; footnote 2: the modified version of Buckwalter transliteration, https://en.wikipedia.org/wiki/Buckwalter_transliteration, replaces special characters such as < and > with alphanumeric characters so that the transliterations are safe for use with other standards such as XML (Extensible Markup Language).)", "For more information see (Habash, 2010).", "For syllabification, for many word pronunciations the syllables are already marked in the IPA by the '.' character; if syllables are not already marked in the IPA, we run a simple syllabifier to complete step 1.", "For step 2, we asked a language expert to give us a sequence of rules to convert a syllable's pronunciation to modified Buckwalter transliteration.", "This is itself a multi-step process (see the next paragraph for details).", "In step 3, we simply do the one-to-one conversion and obtain Arabic script for each syllable.", "In step 4, we merge the Arabic script for each syllable and get the generated candidate loanword string.", "The multi-step process that takes place in step 2 is: Step 2.1 Make minor vowel adjustments in certain contexts, e.g., when 'a' is between two consonants it is changed to 'A'.", "Step 2.2 Perform the bulk of the conversion by using a table of mappings from IPA characters to modified Buckwalter characters, such as 'a'→'a', 'k'→'k', 'y:'→'iy', etc., that were supplied by a language expert.", "Step 2.3 Perform miscellaneous modifications to finalize the modified Buckwalter strings, e.g., if a syllable ends in 'a', then append an 'A' to that syllable.", "The entire conversion process is illustrated in Figure 1 for the French word raconteur (caption: \"Example of the French-to-Arabic process for the French word raconteur; as discussed in the main text, step 2.1 does not apply to this example, so it is omitted from the diagram to conserve space; note that in the final step the word is shown in order of Unicode codepoints, and application software capable of processing Arabic will render it as a proper Arabic string in right-to-left order with proper character-joining adjustments\").", "At the top of the figure is the IPA from the French dictionary entry with syllables marked.", "At the next level, step 1 (syllabification) has been completed.", "Step 2.1 does not apply to any of the syllables in this word, since no minor vowel adjustments are applicable, so at the next level each syllable is shown after step 2.2 has been completed.", "The next level shows the syllables after step 2.3 has been completed.", "The level after that shows the result once step 3 has been completed, and at the end the strings are merged to form the candidate loanword.", "Experiments and Discussion In our experiments we extracted a French-English bilingual dictionary using the freely available English Wiktionary dump 20131101 downloaded from http://dumps.wikimedia.org/enwiktionary.", "From this dump we extracted all the French words, their pronunciations, and their English definitions.", "Using the process described in section 3 to convert each of the French pronunciations into Arabic script yielded 8277 unique loanword candidate strings.", "The data used for testing consists of a million lines of user comments crawled from the Moroccan news website http://www.hespress.com.", "The crawled user comments contain Moroccan Darija in heavily code-switched environments.", "While 
this makes for a challenging setting, it is a realistic representation of the types of environments in which historically unwritten languages are being written for the first time.", "The data we used is consistent with well-known code-switching among Arabic speakers, extending spoken discourse into formal writing (Bentahila and Davies, 1983; Redouane, 2005).", "The total number of tokens in our Hespress corpus is 18,781,041.", "We found that 1150 of our 8277 loanword candidates appear in our Hespress corpus.", "Moreover, more than a million (1,169,087) candidate instances appear in the corpus.", "Recall that a match could be a true match that really is a French loanword, or a false match that just happens to coincidentally have string equality with words in the borrowing language but is not a French loanword.", "False matches are particularly likely to occur for very short words.", "Accordingly, we filter out candidates that are less than four characters long.", "This leaves us with 838 candidates appearing in the corpus and 217,616 candidate instances in the corpus.", "To get an idea of what percentage of our matches are true matches versus false matches, we conducted an annotation exercise with two native Moroccan Darija speakers who also knew at least intermediate French.", "We pulled a random sample of 1185 candidate instances from our corpus and asked each annotator to mark each instance as either: A if the instance is originally from Arabic, F if the instance is originally from French, or U if they were not sure.", "The results are shown in Table 1 .", "A substantial number of French loanwords are found.", "Some examples of translations successfully induced by our method are omelette and bourgeoisie.", "We hypothesize that our method can help improve machine translation (MT) of historically unwritten dialects with nearly zero resources.", "To test this hypothesis, we ran an MT experiment as follows.", "First we selected a random set of sentences from the Hespress corpus that each contained at least one candidate instance, and had an MSA/Moroccan Darija/English trilingual translator translate them into English.", "In total, 273 sentences were translated.", "This served as our test set.", "We trained a baseline MT system using all GALE MSA-English parallel corpora available from the Linguistic Data Consortium (LDC) from 2007 to 2013.", "We trained the system using Moses 3.0 with default parameters.", "This baseline system achieves a BLEU score of 7.48 on our difficult test set of code-switched Moroccan Darija and MSA.", "We trained a second system using the parallel corpora with our induced Moroccan Darija-English translation lexicon appended to the end of the training data.", "This time the BLEU score increased to 8.11, a gain of 0.63 BLEU points.", "Conclusions With the explosive growth of informal textual electronic communications such as social media, web comments, etc., many colloquial everyday languages that were historically unwritten are now being written for the first time, often in heavily code-switched text with traditionally written languages.", "The new written versions of these languages pose significant challenges for multilingual processing technology due to Out-Of-Vocabulary (OOV) challenges.", "Yet it is relatively common that these historically unwritten languages borrow significant amounts of vocabulary from relatively well-resourced written languages.", "We presented a method for translation lexicon induction via loanwords for alleviating the OOV 
challenges in these settings, where the borrowing language has extremely limited resources available, in many cases not even the substantial amounts of monolingual data typically exploited by previous cognate and loanword detection methods to induce translation lexicons.", "This paper demonstrates induction of a Moroccan Darija-English translation lexicon via bridging French loanwords using the method; in MT experiments, the addition of the induced Moroccan Darija-English lexicon increased system performance by 0.63 BLEU points." ] }
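A hedged sketch of the four-step candidate generation just described: syllabify the IPA, map each syllable to modified Buckwalter, convert one-to-one to Arabic script, and merge. Only 'a'→'a', 'k'→'k' and 'y:'→'iy' are mapping examples actually given in the paper; every other table entry below is an assumed toy value chosen so that the raconteur example runs end to end, and the contextual vowel rules of steps 2.1/2.3 are folded into the table rather than implemented.

```python
# Toy end-to-end run of steps 1-4 for one IPA pronunciation.
IPA2BW = {"ʁ": "r", "a": "A", "k": "k", "ɔ̃": "wn", "t": "t", "œ": "w",
          "y:": "iy"}                          # step 2 mapping (toy subset)
BW2AR = {"r": "\u0631", "A": "\u0627", "k": "\u0643", "w": "\u0648",
         "n": "\u0646", "t": "\u062A", "i": "\u0650", "y": "\u064A"}  # step 3

def syllabify(ipa):
    return ipa.split(".")                      # step 1: '.' marks syllables

def syllable_to_bw(syl):
    out, i = "", 0
    while i < len(syl):                        # greedy longest match first
        for n in (2, 1):
            if syl[i:i + n] in IPA2BW:
                out, i = out + IPA2BW[syl[i:i + n]], i + n
                break
        else:
            i += 1                             # skip symbols outside the subset
    return out

def candidate(ipa):
    bw_syllables = [syllable_to_bw(s) for s in syllabify(ipa)]        # steps 1-2
    arabic = ["".join(BW2AR.get(c, "") for c in s) for s in bw_syllables]  # step 3
    return "".join(arabic)                     # step 4: merge the syllables

print(candidate("ʁa.kɔ̃.tœʁ"))                 # raconteur -> one candidate string
```

A generated string like this is then either a true match, a false match, or a no match against the borrowing-language corpus, which is what the matching experiment below measures.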
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Method", "Experiments and Discussion", "Conclusions" ] }
GEM-SciDuet-train-66#paper-1142#slide-0
Summary
With the explosive growth of informal electronic communications such as social media, web comments, text messaging, etc., historically unwritten languages are being written for the first time. For these languages, there are extremely limited resources such as translation lexicons available. We present a method for inducing portions of translation lexicons through the use of expert knowledge for these settings and quantify its effectiveness in experiments attempting to induce a Moroccan Darija-English translation lexicon via French loanwords.
With the explosive growth of informal electronic communications such as social media, web comments, text messaging, etc., historically unwritten languages are being written for the first time. For these languages, there are extremely limited resources such as translation lexicons available. We present a method for inducing portions of translation lexicons through the use of expert knowledge for these settings and quantify its effectiveness in experiments attempting to induce a Moroccan Darija-English translation lexicon via French loanwords.
[]
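Operationally, the experiments above reduce to three mechanical steps: look each generated candidate up in the crawled corpus, drop candidates shorter than four characters to curb coincidental matches, and append the surviving candidate-to-English pairs to the MT training bitext. A sketch under assumed file names and a naive whitespace token pattern (the paper does not specify its tokenization):

```python
# Filter generated candidates against the corpus, then extend the bitext.
import re
from collections import Counter

tokens = Counter(re.findall(r"\S+", open("hespress.txt", encoding="utf-8").read()))
lexicon = dict(line.rstrip("\n").split("\t")      # candidate \t English gloss
               for line in open("candidates.tsv", encoding="utf-8"))

kept = {w: g for w, g in lexicon.items() if len(w) >= 4 and tokens[w] > 0}
print(len(kept), sum(tokens[w] for w in kept))    # paper: 838 types, 217616 tokens

# Append induced entries to the parallel training data, one pair per line,
# mirroring "lexicon appended to the end of the training data" above.
with open("train.ar", "a", encoding="utf-8") as src, \
     open("train.en", "a", encoding="utf-8") as tgt:
    for w, g in kept.items():
        src.write(w + "\n")
        tgt.write(g + "\n")
```

Retraining the phrase-based system on the extended files is what produced the 7.48 → 8.11 BLEU change reported above; the lexicon entries act as tiny one-word parallel sentences.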
GEM-SciDuet-train-66#paper-1142#slide-1
1142
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords
With the advent of informal electronic communications such as social media, colloquial languages that were historically unwritten are being written for the first time in heavily code-switched environments. We present a method for inducing portions of translation lexicons through the use of expert knowledge in these settings where there are approximately zero resources available other than a language informant, potentially not even large amounts of monolingual data. We investigate inducing a Moroccan Darija-English translation lexicon via French loanwords bridging into English and find that a useful lexicon is induced for human-assisted translation and statistical machine translation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94 ], "paper_content_text": [ "Introduction With the explosive growth of informal electronic communications such as email, social media, web comments, etc., colloquial languages that were historically unwritten are starting to be written for the first time.", "For these languages, there are extremely limited (approximately zero) resources available, not even large amounts of monolingual text data or possibly not even small amounts of monolingual text data.", "Even when audio resources are available, difficulties arise when converting sound to text (Tratz et al., 2013; Robinson and Gadelii, 2003) .", "Moreover, the text data that can be obtained often has non-standard spellings and substantial code-switching with other traditionally written languages (Tratz et al., 2013) .", "In this paper we present a method for the acquisition of translation lexicons via loanwords and expert knowledge that requires zero resources of the borrowing language.", "Many historically unwritten languages borrow from highly resourced languages.", "Also, it is often feasible to locate a language expert to find out how sounds in these languages would be rendered if they were to be written as many of them are beginning to be written in social media, etc.", "We thus expect the general method to be applicable for multiple historically unwritten languages.", "In this paper we investigate inducing a Moroccan Darija-English translation lexicon via borrowed French words.", "Moroccan Darija is an historically unwritten dialect of Arabic spoken by millions but lacking in standardization and linguistic resources (Tratz et al., 2013) .", "Moroccan Darija is known to borrow many words from French, one of the most highly resourced languages in the world.", "By mapping Moroccan Darija-French borrowings to their donor French words, we can rapidly create lexical resources for portions of Moroccan Darija vocabulary for which no resources currently exist.", "For example, we could use one of many bilingual French-English dictionaries to bridge into English and create a Moroccan Darija-English translation lexicon that can be used to assist professional translation of Moroccan Darija into English and to assist with construction of Moroccan Darija-English Machine Translation (MT) systems.", "The rest of this paper is structured as follows.", "Section 2 summarizes related work; section 3 explains our method; section 4 discusses experimental results of applying our method to the case of building a Moroccan Darija-English translation lexicon; and section 5 concludes.", "Related Work Translation lexicons are a core resource used for multilingual processing of languages.", "Manual creation of translation lexicons by lexicographers is time-consuming and expensive.", "There are more than 7000 languages in the world, many of which are historically unwritten (Lewis et al., 2015) .", "For a relatively small number of these languages there are extensive resources available that have been manually created.", "It has been noted by others (Mann and Yarowsky, 2001; Schafer and Yarowsky, 2002) that languages are organized into families and that using cognates between 
sister languages can help rapidly create translation lexicons for lower-resourced languages.", "For example, the methods in (Mann and Yarowsky, 2001) are able to detect that English kilograms maps to Portuguese quilogramas via bridge Spanish kilogramos.", "This general idea has been worked on extensively in the context of cognates detection, with 'cognate' typically re-defined to include loanwords as well as true cognates.", "The methods use monolingual data at a minimum and many signals such as orthographic similarity, phonetic similarity, contextual similarity, temporal similarity, frequency similarity, burstiness similarity, and topic similarity (Bloodgood and Strauss, 2017; Irvine and Callison-Burch, 2013; Kondrak et al., 2003; Schafer and Yarowsky, 2002; Mann and Yarowsky, 2001) .", "Inducing translations via loanwords was specifically targeted in .", "While some of these methods don't require bilingual resources, with the possible exception of small bilingual seed dictionaries, they do at a minimum require monolingual text data in the languages to be modeled and sometimes have specific requirements on the monolingual text data such as having text coming from the same time period for each of the languages being modeled.", "For colloquial languages that were historically unwritten, but that are now starting to be written with the advent of social media and web comments, there are often extremely limited resources of any type available, not even large amounts of monolingual text data.", "Moreover, the written data that can be obtained often has non-standard spellings and code-switching with other traditionally written languages.", "Often the code-switching occurs within words whereby the base is borrowed and the affixes are not borrowed, analogous to the multi-language categories \"V\" and \"N\" from (Mericli and Bloodgood, 2012) .", "The data available for historically unwritten languages, and especially the lack thereof, is not suitable for previously developed cognates detection methods that operate as discussed above.", "In the next section we present a method for translation lexicon induction via loanwords that uses expert knowledge and requires zero resources from the borrowing language other than a language informant.", "Method Our method is to take word pronunciations from the donor language we are using and convert them to how they would be rendered in the borrowing language if they were to be borrowed.", "These are our candidate loanwords.", "There are three possible cases for a given generated candidate loanword string: true match string occurs in borrowing language and is a loanword from the donor language; false match string occurs in borrowing language by coincidence but it's not a loanword from the donor language; no match string does not occur in borrowing language.", "For the case of inducing a Moroccan Darija-English translation lexicon via French we start with a French-English bilingual dictionary and take all the French pronunciations in IPA (International Phonetic Alphabet) 1 and convert them to how they would be rendered in Arabic script.", "For this we created a multiple-step transliteration process: Step 1 Break pronunciation into syllables.", "Step 2 Convert each IPA syllable to a string in modified Buckwalter transliteration 2 , which supports a one-to-one mapping to Arabic script.", "Step 3 Convert each syllable's string in modified Buckwalter transliteration to Arabic script.", "Step 4 Merge the resulting Arabic script strings for each syllable to generate a 
candidate loanword string.", "1 https://en.wikipedia.org/wiki/ International_Phonetic_Alphabet 2 The modified version of Buckwalter transliteration, https://en.wikipedia.org/wiki/ Buckwalter_transliteration, replaces special characters such as < and > with alphanumeric characters so that the transliterations are safe for use with other standards such as XML (Extensible Markup Language).", "For more information see (Habash, 2010) .", "For syllabification, for many word pronunciations the syllables are already marked in the IPA by the '.'", "character; if syllables are not already marked in the IPA, we run a simple syllabifier to complete step 1.", "For step 2, we asked a language expert to give us a sequence of rules to convert a syllable's pronunciation to modified Buckwalter transliteration.", "This is itself a multi-step process (see next paragraph for details).", "In step 3, we simply do the one-to-one conversion and obtain Arabic script for each syllable.", "In step 4, we merge the Arabic script for each syllable and get the generated candidate loanword string.", "The multi-step process that takes place in step 2 of the process is: Step 2.1 Make minor vowel adjustments in certain contexts, e.g., when 'a' is between two consonants it is changed to 'A'.", "Step 2.2 Perform bulk of conversion by using table of mappings from IPA characters to modified Buckwalter characters such as 'a'→'a','k'→'k', 'y:'→'iy', etc.", "that were supplied by a language expert.", "Step 2.3 Perform miscellaneous modifications to finalize the modified Buckwalter strings, e.g., if a syllable ends in 'a', then append an 'A' to that syllable.", "The entire conversion process is illustrated in Figure 1 for the French word raconteur.", "At the top of the Figure is the IPA from the French dictionary entry with syllables marked.", "At the next level, step 1 (syllabification) has been completed.", "Step 2.1 doesn't apply to any of the syllables in this word since there are no minor vowel adjustments that are applicable for this word so at the next level each syllable is shown after step 2.2 has been completed.", "The next level shows the syllables after step 2.3 has been completed.", "The next level shows after step 3 has been completed and then at the end the strings are merged to form the candidate loanword.", "Experiments and Discussion In our experiments we extracted a French-English bilingual dictionary using the freely available English Wiktionary dump 20131101 downloaded from http://dumps.wikimedia.", "org/enwiktionary.", "From this dump we extracted all the French words, their pronunciations, Step 1 Step 2.2 Step 2.3 Step 3 Step 4 Figure 1 : Example of French to Arabic Process for the French word raconteur.", "As discussed in the main text, step 2.1 doesn't apply to this example so it is omitted from the diagram to conserve space.", "Note that in the final step the word is in order of Unicode codepoints.", "Then application software that is capable of processing Arabic will render that as a proper Arabic string in right-to-left order with proper character joining adjustments as and their English definitions.", "Using the process described in section 3 to convert each of the French pronunciations into Arabic script yielded 8277 unique loanword candidate strings.", "{ { { { { The data used for testing consists of a million lines of user comments crawled from the Moroccan news website http://www.hespress.", "com.", "The crawled user comments contain Moroccan Darija in heavily code-switched environments.", "While 
this makes for a challenging setting, it is a realistic representation of the types of environments in which historically unwritten languages are being written for the first time.", "The data we used is consistent with well-known code-switching among Arabic speakers, extending spoken discourse into formal writing (Bentahila and Davies, 1983; Redouane, 2005).", "The total number of tokens in our Hespress corpus is 18,781,041.", "We found that 1150 of our 8277 loanword candidates appear in our Hespress corpus.", "Moreover, more than a million (1169087) candidate instances appear in the corpus.", "Recall that a match could be a true match that really is a French loanword or a false match that just happens to coincidentally have string equality with words in the borrowing language, but is not a French loanword.", "False matches are particularly likely to occur for very short words.", "Accordingly, we filter out candidates that are of length less than four characters.", "This leaves us with 838 candidates appearing in the corpus and 217616 candidate instances in the corpus.", "To get an idea of what percentage of our matches are true matches versus false matches, we conducted an annotation exercise with two native Moroccan Darija speakers who also knew at least intermediate French.", "We pulled a random sample 3 of 1185 candidate instances from our corpus and asked each annotator to mark each instance as either: A if the instance is originally from Arabic, F if the instance is originally from French, or U if they were not sure.", "The results are shown in Table 1.", "There are a substantial number of French loanwords that are found.", "Some examples of translations successfully induced by our method are: omelette; and bourgeoisie.", "We hypothesize that our method can help improve machine translation (MT) of historically unwritten dialects with nearly zero resources.", "To test this hypothesis, we ran an MT experiment as follows.", "First we selected a random set of sentences from the Hespress corpus that each contained at least one candidate instance and had an MSA/Moroccan Darija/English trilingual translator translate them into English.", "In total, 273 sentences were translated.", "This served as our test set.", "We trained a baseline MT system using all GALE MSA-English parallel corpora available from the Linguistic Data Consortium (LDC) from 2007 to 2013.", "4 We trained the system using Moses 3.0 with default parameters.", "This baseline system achieves a BLEU score of 7.48 on our difficult test set of code-switched Moroccan Darija and MSA.", "We trained a second system using the parallel corpora with our induced Moroccan Darija-English translation lexicon appended to the end of the training data.", "This time the BLEU score increased to 8.11, a gain of .63 BLEU points.", "Conclusions With the explosive growth of informal textual electronic communications such as social media, web comments, etc., many colloquial everyday languages that were historically unwritten are now being written for the first time, often in heavily code-switched text with traditionally written languages.", "The new written versions of these languages pose significant challenges for multilingual processing technology due to Out-Of-Vocabulary (OOV) challenges.", "Yet it is relatively common that these historically unwritten languages borrow significant amounts of vocabulary from relatively well resourced written languages.", "We presented a method for translation lexicon induction via loanwords for alleviating the OOV
challenges in these settings where the borrowing language has extremely limited amounts of resources available, in many cases not even substantial amounts of monolingual data that is typically exploited by previous cognates and loanword detection methods to induce translation lexicons.", "This paper demonstrates induction of a Moroccan Darija-English translation lexicon via bridging French loanwords using the method, and in MT experiments the addition of the induced Moroccan Darija-English lexicon increased system performance by .63 BLEU points." ] }
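As a concrete illustration of the four-step conversion pipeline described in the paper content above, the following Python sketch wires the steps together. This is a minimal sketch under stated assumptions, not the authors' implementation: the IPA_TO_BUCKWALTER and BUCKWALTER_TO_ARABIC tables hold only a few stand-in entries (the real tables were supplied by a language expert), and the consonant inventory used for the step 2.1 rule is a simplified assumption.

IPA_TO_BUCKWALTER = {          # step 2.2 table (hypothetical subset)
    "a": "a", "k": "k", "t": "t", "n": "n", "r": "r", "y:": "iy",
}
BUCKWALTER_TO_ARABIC = {       # step 3 table: one-to-one (hypothetical subset)
    "a": "\u064E", "A": "\u0627", "k": "\u0643", "t": "\u062A",
    "n": "\u0646", "r": "\u0631", "i": "\u0650", "y": "\u064A",
}
CONSONANTS = set("kbtdnrsmlfg")  # crude consonant inventory (assumption)

def syllabify(ipa):
    # Step 1: syllables are usually pre-marked with '.' in the IPA entry;
    # a real system would fall back to a simple syllabifier otherwise.
    return ipa.split(".")

def to_buckwalter(syllable):
    # Step 2.1: example contextual rule -- 'a' between two consonants -> 'A'.
    chars = list(syllable)
    for i in range(1, len(chars) - 1):
        if chars[i] == "a" and chars[i - 1] in CONSONANTS and chars[i + 1] in CONSONANTS:
            chars[i] = "A"
    # Step 2.2: bulk conversion via the mapping table, trying two-character
    # IPA symbols (e.g. 'y:') before single characters.
    out, i = "", 0
    while i < len(chars):
        pair = "".join(chars[i:i + 2])
        if pair in IPA_TO_BUCKWALTER:
            out += IPA_TO_BUCKWALTER[pair]
            i += 2
        else:
            out += IPA_TO_BUCKWALTER.get(chars[i], chars[i])
            i += 1
    # Step 2.3: example final modification -- a syllable ending in 'a'
    # gets an 'A' appended.
    if out.endswith("a"):
        out += "A"
    return out

def candidate_loanword(ipa):
    # Steps 3 and 4: render each syllable's Buckwalter string to Arabic
    # script one-to-one, then merge the syllables into the candidate.
    rendered = []
    for syl in syllabify(ipa):
        bw = to_buckwalter(syl)
        rendered.append("".join(BUCKWALTER_TO_ARABIC.get(c, "") for c in bw))
    return "".join(rendered)

With fully populated tables, candidate_loanword applied to the syllable-marked IPA of raconteur would yield the Arabic-script candidate shown in Figure 1; with the toy tables above, only the covered characters are rendered.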
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Method", "Experiments and Discussion", "Conclusions" ] }
GEM-SciDuet-train-66#paper-1142#slide-1
Motivation
Translation lexicons are a core resource used for multilingual processing of languages. Manual creation of translation lexicons by lexicographers is time-consuming and expensive. There are more than seven thousand languages in the world, many of which are historically unwritten (Lewis et al., 2015). Many historically unwritten languages are being written for the first time with the explosive growth of informal electronic communications.
[]
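For the MT experiment, the induced translation lexicon is appended to the end of the parallel training data before retraining. A rough sketch of that data-preparation step is below; the file names are hypothetical, and the actual Moses 3.0 training is run separately with its standard pipeline.

def append_lexicon(src_path, tgt_path, lexicon):
    # Append each induced (Darija, English) entry as one extra "sentence"
    # pair at the end of the source and target sides of the parallel data.
    with open(src_path, "a", encoding="utf-8") as src, \
         open(tgt_path, "a", encoding="utf-8") as tgt:
        for darija, english in lexicon.items():
            src.write(darija + "\n")
            tgt.write(english + "\n")

# Usage (file names hypothetical):
# append_lexicon("train.msa-darija", "train.en", induced_lexicon)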
GEM-SciDuet-train-66#paper-1142#slide-2
1142
Past work
There has been a lot of work on automating translation lexicon induction, including (Bloodgood and Strauss, ACL 2017). The best methods for automatic translation lexicon induction involve using many sources of information such as word information and temporal information (Klementiev and Roth, 2006). The methods for automatic translation lexicon induction have various data requirements such as bilingual seed dictionaries and monolingual text coming from the same time period for each of the languages.
[]
GEM-SciDuet-train-66#paper-1142#slide-3
1142
Challenges
For historically unwritten languages that are just being written for the first time, there are often extremely limited resources of any type available, not even large amounts of monolingual text. The written data that can be obtained often has non-standard spellings and code-switching. The code-switching is sometimes within words whereby the base is borrowed and the affixes are not borrowed, analogous to the multi-language categories V and N from (Mericli and Bloodgood, 2012).
[]
GEM-SciDuet-train-66#paper-1142#slide-4
1142
Potential Solution
Many historically unwritten languages borrow parts of their lexicons from more highly resourced written languages. It is often possible to find a language informant that can provide guidance for how sounds would be rendered in a written script if words were to be written. Our proposed method makes use of these facts to acquire parts of a translation lexicon quickly.
[]
GEM-SciDuet-train-66#paper-1142#slide-5
1142
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94 ], "paper_content_text": [ "Introduction With the explosive growth of informal electronic communications such as email, social media, web comments, etc., colloquial languages that were historically unwritten are starting to be written for the first time.", "For these languages, there are extremely limited (approximately zero) resources available, not even large amounts of monolingual text data or possibly not even small amounts of monolingual text data.", "Even when audio resources are available, difficulties arise when converting sound to text (Tratz et al., 2013; Robinson and Gadelii, 2003) .", "Moreover, the text data that can be obtained often has non-standard spellings and substantial code-switching with other traditionally written languages (Tratz et al., 2013) .", "In this paper we present a method for the acquisition of translation lexicons via loanwords and expert knowledge that requires zero resources of the borrowing language.", "Many historically unwritten languages borrow from highly resourced languages.", "Also, it is often feasible to locate a language expert to find out how sounds in these languages would be rendered if they were to be written as many of them are beginning to be written in social media, etc.", "We thus expect the general method to be applicable for multiple historically unwritten languages.", "In this paper we investigate inducing a Moroccan Darija-English translation lexicon via borrowed French words.", "Moroccan Darija is an historically unwritten dialect of Arabic spoken by millions but lacking in standardization and linguistic resources (Tratz et al., 2013) .", "Moroccan Darija is known to borrow many words from French, one of the most highly resourced languages in the world.", "By mapping Moroccan Darija-French borrowings to their donor French words, we can rapidly create lexical resources for portions of Moroccan Darija vocabulary for which no resources currently exist.", "For example, we could use one of many bilingual French-English dictionaries to bridge into English and create a Moroccan Darija-English translation lexicon that can be used to assist professional translation of Moroccan Darija into English and to assist with construction of Moroccan Darija-English Machine Translation (MT) systems.", "The rest of this paper is structured as follows.", "Section 2 summarizes related work; section 3 explains our method; section 4 discusses experimental results of applying our method to the case of building a Moroccan Darija-English translation lexicon; and section 5 concludes.", "Related Work Translation lexicons are a core resource used for multilingual processing of languages.", "Manual creation of translation lexicons by lexicographers is time-consuming and expensive.", "There are more than 7000 languages in the world, many of which are historically unwritten (Lewis et al., 2015) .", "For a relatively small number of these languages there are extensive resources available that have been manually created.", "It has been noted by others (Mann and Yarowsky, 2001; Schafer and Yarowsky, 2002) that languages are organized into families and that using cognates between 
sister languages can help rapidly create translation lexicons for lower-resourced languages.", "For example, the methods in (Mann and Yarowsky, 2001) are able to detect that English kilograms maps to Portuguese quilogramas via bridge Spanish kilogramos.", "This general idea has been worked on extensively in the context of cognates detection, with 'cognate' typically re-defined to include loanwords as well as true cognates.", "The methods use monolingual data at a minimum and many signals such as orthographic similarity, phonetic similarity, contextual similarity, temporal similarity, frequency similarity, burstiness similarity, and topic similarity (Bloodgood and Strauss, 2017; Irvine and Callison-Burch, 2013; Kondrak et al., 2003; Schafer and Yarowsky, 2002; Mann and Yarowsky, 2001) .", "Inducing translations via loanwords was specifically targeted in .", "While some of these methods don't require bilingual resources, with the possible exception of small bilingual seed dictionaries, they do at a minimum require monolingual text data in the languages to be modeled and sometimes have specific requirements on the monolingual text data such as having text coming from the same time period for each of the languages being modeled.", "For colloquial languages that were historically unwritten, but that are now starting to be written with the advent of social media and web comments, there are often extremely limited resources of any type available, not even large amounts of monolingual text data.", "Moreover, the written data that can be obtained often has non-standard spellings and code-switching with other traditionally written languages.", "Often the code-switching occurs within words whereby the base is borrowed and the affixes are not borrowed, analogous to the multi-language categories \"V\" and \"N\" from (Mericli and Bloodgood, 2012) .", "The data available for historically unwritten languages, and especially the lack thereof, is not suitable for previously developed cognates detection methods that operate as discussed above.", "In the next section we present a method for translation lexicon induction via loanwords that uses expert knowledge and requires zero resources from the borrowing language other than a language informant.", "Method Our method is to take word pronunciations from the donor language we are using and convert them to how they would be rendered in the borrowing language if they were to be borrowed.", "These are our candidate loanwords.", "There are three possible cases for a given generated candidate loanword string: true match string occurs in borrowing language and is a loanword from the donor language; false match string occurs in borrowing language by coincidence but it's not a loanword from the donor language; no match string does not occur in borrowing language.", "For the case of inducing a Moroccan Darija-English translation lexicon via French we start with a French-English bilingual dictionary and take all the French pronunciations in IPA (International Phonetic Alphabet) 1 and convert them to how they would be rendered in Arabic script.", "For this we created a multiple-step transliteration process: Step 1 Break pronunciation into syllables.", "Step 2 Convert each IPA syllable to a string in modified Buckwalter transliteration 2 , which supports a one-to-one mapping to Arabic script.", "Step 3 Convert each syllable's string in modified Buckwalter transliteration to Arabic script.", "Step 4 Merge the resulting Arabic script strings for each syllable to generate a 
candidate loanword string.", "Footnote 1: https://en.wikipedia.org/wiki/International_Phonetic_Alphabet.", "Footnote 2: The modified version of Buckwalter transliteration, https://en.wikipedia.org/wiki/Buckwalter_transliteration, replaces special characters such as < and > with alphanumeric characters so that the transliterations are safe for use with other standards such as XML (Extensible Markup Language).", "For more information see (Habash, 2010).", "For syllabification, for many word pronunciations the syllables are already marked in the IPA by the '.' character; if syllables are not already marked in the IPA, we run a simple syllabifier to complete step 1.", "For step 2, we asked a language expert to give us a sequence of rules to convert a syllable's pronunciation to modified Buckwalter transliteration.", "This is itself a multi-step process (see next paragraph for details).", "In step 3, we simply do the one-to-one conversion and obtain Arabic script for each syllable.", "In step 4, we merge the Arabic script for each syllable and get the generated candidate loanword string.", "The multi-step process that takes place in step 2 is: Step 2.1 Make minor vowel adjustments in certain contexts, e.g., when 'a' is between two consonants it is changed to 'A'.", "Step 2.2 Perform the bulk of the conversion by using a table of mappings from IPA characters to modified Buckwalter characters, such as 'a'→'a', 'k'→'k', 'y:'→'iy', etc., that were supplied by a language expert.", "Step 2.3 Perform miscellaneous modifications to finalize the modified Buckwalter strings, e.g., if a syllable ends in 'a', then append an 'A' to that syllable.", "The entire conversion process is illustrated in Figure 1 for the French word raconteur.", "At the top of the Figure is the IPA from the French dictionary entry with syllables marked.", "At the next level, step 1 (syllabification) has been completed.", "Step 2.1 doesn't apply to any of the syllables in this word since there are no minor vowel adjustments that are applicable, so at the next level each syllable is shown after step 2.2 has been completed.", "The next level shows the syllables after step 2.3 has been completed.", "The next level shows the result after step 3 has been completed, and then at the end the strings are merged to form the candidate loanword.", "Experiments and Discussion In our experiments we extracted a French-English bilingual dictionary using the freely available English Wiktionary dump 20131101 downloaded from http://dumps.wikimedia.org/enwiktionary.", "From this dump we extracted all the French words, their pronunciations, and their English definitions.", "[Figure 1: Example of the French-to-Arabic process for the French word raconteur. As discussed in the main text, step 2.1 doesn't apply to this example, so it is omitted from the diagram to conserve space. Note that in the final step the word is in order of Unicode codepoints; application software capable of processing Arabic will render it as a proper Arabic string in right-to-left order with proper character joining adjustments.]", "Using the process described in section 3 to convert each of the French pronunciations into Arabic script yielded 8277 unique loanword candidate strings.", "The data used for testing consists of a million lines of user comments crawled from the Moroccan news website http://www.hespress.com.", "The crawled user comments contain Moroccan Darija in heavily code-switched environments.", "While
this makes for a challenging setting, it is a realistic representation of the types of environments in which historically unwritten languages are being written for the first time.", "The data we used is consistent with well-known code-switching among Arabic speakers, extending spoken discourse into formal writing (Bentahila and Davies, 1983; Redouane, 2005).", "The total number of tokens in our Hespress corpus is 18,781,041.", "We found that 1150 of our 8277 loanword candidates appear in our Hespress corpus.", "Moreover, more than a million (1169087) candidate instances appear in the corpus.", "Recall that a match could be a true match that really is a French loanword or a false match that just happens to coincidentally have string equality with words in the borrowing language, but is not a French loanword.", "False matches are particularly likely to occur for very short words.", "Accordingly, we filter out candidates that are of length less than four characters.", "This leaves us with 838 candidates appearing in the corpus and 217616 candidate instances in the corpus.", "To get an idea of what percentage of our matches are true matches versus false matches, we conducted an annotation exercise with two native Moroccan Darija speakers who also knew at least intermediate French.", "We pulled a random sample of 1185 candidate instances from our corpus and asked each annotator to mark each instance as either: A if the instance is originally from Arabic, F if the instance is originally from French, or U if they were not sure.", "The results are shown in Table 1.", "There are a substantial number of French loanwords that are found.", "Some examples of translations successfully induced by our method are: omelette; and bourgeoisie.", "We hypothesize that our method can help improve machine translation (MT) of historically unwritten dialects with nearly zero resources.", "To test this hypothesis, we ran an MT experiment as follows.", "First we selected a random set of sentences from the Hespress corpus that each contained at least one candidate instance and had an MSA/Moroccan Darija/English trilingual translator translate them into English.", "In total, 273 sentences were translated.", "This served as our test set.", "We trained a baseline MT system using all GALE MSA-English parallel corpora available from the Linguistic Data Consortium (LDC) from 2007 to 2013.", "We trained the system using Moses 3.0 with default parameters.", "This baseline system achieves a BLEU score of 7.48 on our difficult test set of code-switched Moroccan Darija and MSA.", "We trained a second system using the parallel corpora with our induced Moroccan Darija-English translation lexicon appended to the end of the training data.", "This time the BLEU score increased to 8.11, a gain of 0.63 BLEU points.", "Conclusions With the explosive growth of informal textual electronic communications such as social media, web comments, etc., many colloquial everyday languages that were historically unwritten are now being written for the first time, often in heavily code-switched text with traditionally written languages.", "The new written versions of these languages pose significant challenges for multilingual processing technology due to Out-Of-Vocabulary (OOV) challenges.", "Yet it is relatively common that these historically unwritten languages borrow significant amounts of vocabulary from relatively well-resourced written languages.", "We presented a method for translation lexicon induction via loanwords for alleviating the OOV
challenges in these settings where the borrowing language has extremely limited amounts of resources available, in many cases not even substantial amounts of monolingual data, which is typically exploited by previous cognate and loanword detection methods to induce translation lexicons.", "This paper demonstrates induction of a Moroccan Darija-English translation lexicon via bridging French loanwords using the method, and in MT experiments the addition of the induced Moroccan Darija-English lexicon increased system performance by 0.63 BLEU points." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Method", "Experiments and Discussion", "Conclusions" ] }
GEM-SciDuet-train-66#paper-1142#slide-5
Loanword Candidate Generation Method: high-level summary
Take word pronunciations from the donor language and convert them to how they would be rendered in the borrowing language if they were to be borrowed. These are our candidate loanwords.
Take word pronunciations from the donor language and convert them to how they would be rendered in the borrowing language if they were to be borrowed. These are our candidate loanwords.
[]
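The method text above specifies a concrete four-step pipeline: syllabify the IPA, convert each syllable to modified Buckwalter (sub-steps 2.1-2.3), convert Buckwalter to Arabic script, and merge. The following is a minimal Python sketch of that pipeline; all mapping tables are toy stand-ins built from the few examples quoted in the text ('a'→'a', 'k'→'k', 'y:'→'iy'), since the full expert-supplied rule set is not reproduced in the paper.

```python
# Minimal sketch of the four-step IPA -> Arabic-script conversion described
# above. Every table entry here is illustrative only; the real rules were
# supplied by a language expert and are not listed in the paper.

VOWELS = set("aeiou")

# Step 2.2-style table: IPA symbols to modified Buckwalter (toy subset).
IPA_TO_BUCKWALTER = {"a": "a", "A": "A", "k": "k", "t": "t",
                     "r": "r", "n": "n", "y:": "iy", "u": "uw"}

# Modified Buckwalter characters to Arabic script (toy subset of the
# one-to-one mapping).
BUCKWALTER_TO_ARABIC = {"a": "\u064e", "A": "\u0627", "k": "\u0643",
                        "t": "\u062a", "r": "\u0631", "n": "\u0646",
                        "i": "\u0650", "y": "\u064a", "u": "\u064f",
                        "w": "\u0648"}

def syllabify(ipa):
    """Step 1: syllables are usually already marked with '.' in the IPA."""
    return ipa.split(".")

def adjust_vowels(syl):
    """Step 2.1 (example rule): 'a' between two consonants becomes 'A'."""
    chars = list(syl)
    for j in range(1, len(chars) - 1):
        if chars[j] == "a" and chars[j - 1] not in VOWELS and chars[j + 1] not in VOWELS:
            chars[j] = "A"
    return "".join(chars)

def map_ipa_chars(syl):
    """Step 2.2: greedy longest-match lookup in the IPA -> Buckwalter table."""
    out, i = [], 0
    while i < len(syl):
        if syl[i:i + 2] in IPA_TO_BUCKWALTER:   # two-char symbols like 'y:'
            out.append(IPA_TO_BUCKWALTER[syl[i:i + 2]])
            i += 2
        else:
            out.append(IPA_TO_BUCKWALTER.get(syl[i], syl[i]))
            i += 1
    return "".join(out)

def syllable_to_buckwalter(syl):
    """Step 2: contextual tweaks, bulk table lookup, then final fixes."""
    bw = map_ipa_chars(adjust_vowels(syl))      # steps 2.1 and 2.2
    if bw.endswith("a"):                        # step 2.3 example rule
        bw += "A"
    return bw

def buckwalter_to_arabic(bw):
    """Step 3: one-to-one character conversion to Arabic script."""
    return "".join(BUCKWALTER_TO_ARABIC.get(c, c) for c in bw)

def ipa_to_candidate(ipa):
    """Steps 1-4: syllabify, convert each syllable, merge the results."""
    return "".join(buckwalter_to_arabic(syllable_to_buckwalter(s))
                   for s in syllabify(ipa))
```

The greedy two-character lookup in map_ipa_chars() is one simple way to honor multi-character IPA symbols such as 'y:'; the paper does not say how its rules are matched, so that detail is an assumption of this sketch.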
GEM-SciDuet-train-66#paper-1142#slide-6
1142
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords
GEM-SciDuet-train-66#paper-1142#slide-6
Loanword Candidate Possibilities
There are three possible cases for a given generated candidate loanword: true match: string occurs in the borrowing language and is a loanword from the donor language; false match: string occurs in the borrowing language by coincidence, but it's not a loanword from the donor language; no match: string does not occur in the borrowing language.
There are three possible cases for a given generated candidate loanword: true match: string occurs in the borrowing language and is a loanword from the donor language; false match: string occurs in the borrowing language by coincidence, but it's not a loanword from the donor language; no match: string does not occur in the borrowing language.
[]
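Of the three cases above, only "no match" can be decided mechanically, by checking whether a candidate string occurs in a borrowing-language corpus; the paper additionally drops candidates shorter than four characters because very short strings are especially likely to be false matches. The following is a hedged sketch of that screening step, assuming simple whitespace tokenization of the corpus; it is a plausible reading, not the authors' code.

```python
def screen_candidates(candidates, corpus_lines, min_len=4):
    """Keep candidates that occur in the corpus and are long enough.

    Candidates absent from the corpus are 'no match' and are discarded.
    Of the rest, strings shorter than `min_len` characters are dropped
    because very short strings are especially likely to be false
    (coincidental) matches; whether a surviving candidate is a true or
    false match still requires native-speaker annotation.
    """
    vocab = set()
    for line in corpus_lines:
        vocab.update(line.split())  # assumes whitespace tokenization
    return {c for c in candidates if len(c) >= min_len and c in vocab}
```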
GEM-SciDuet-train-66#paper-1142#slide-7
1142
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords
GEM-SciDuet-train-66#paper-1142#slide-7
Use Case: Moroccan Darija-English translation lexicon via French
Our use case is inducing a Moroccan Darija-English translation lexicon via French. We start with a French-English bilingual dictionary and take all the French pronunciations in IPA (International Phonetic Alphabet) and convert them to how they would be rendered in Arabic script via a multiple-step transliteration process.
Our use case is inducing a Moroccan Darija-English translation lexicon via French. We start with a French-English bilingual dictionary and take all the French pronunciations in IPA (International Phonetic Alphabet) and convert them to how they would be rendered in Arabic script via a multiple-step transliteration process.
[]
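Viewed as code, the use case above is a single loop over the bridge dictionary: convert each French pronunciation to an Arabic-script candidate and carry its English glosses along. The sketch below assumes a hypothetical `french_english` mapping (French word to IPA pronunciation plus English glosses, as might be extracted from a French-English dictionary) and reuses the `ipa_to_candidate()` helper sketched earlier; it is not the authors' implementation.

```python
from collections import defaultdict

def induce_candidate_lexicon(french_english):
    """Generate Arabic-script loanword candidates bridged to English glosses.

    `french_english` is a hypothetical dict mapping each French word to a
    tuple (ipa_pronunciation, set_of_english_glosses). Several French words
    may collide on the same candidate string, so glosses are accumulated.
    """
    candidates = defaultdict(set)
    for ipa, english_glosses in french_english.values():
        candidate = ipa_to_candidate(ipa)  # steps 1-4, sketched earlier
        candidates[candidate].update(english_glosses)
    return dict(candidates)
```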
GEM-SciDuet-train-66#paper-1142#slide-8
1142
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords
With the advent of informal electronic communications such as social media, colloquial languages that were historically unwritten are being written for the first time in heavily code-switched environments. We present a method for inducing portions of translation lexicons through the use of expert knowledge in these settings where there are approximately zero resources available other than a language informant, potentially not even large amounts of monolingual data. We investigate inducing a Moroccan Darija-English translation lexicon via French loanwords bridging into English and find that a useful lexicon is induced for humanassisted translation and statistical machine translation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94 ], "paper_content_text": [ "Introduction With the explosive growth of informal electronic communications such as email, social media, web comments, etc., colloquial languages that were historically unwritten are starting to be written for the first time.", "For these languages, there are extremely limited (approximately zero) resources available, not even large amounts of monolingual text data or possibly not even small amounts of monolingual text data.", "Even when audio resources are available, difficulties arise when converting sound to text (Tratz et al., 2013; Robinson and Gadelii, 2003) .", "Moreover, the text data that can be obtained often has non-standard spellings and substantial code-switching with other traditionally written languages (Tratz et al., 2013) .", "In this paper we present a method for the acquisition of translation lexicons via loanwords and expert knowledge that requires zero resources of the borrowing language.", "Many historically unwritten languages borrow from highly resourced languages.", "Also, it is often feasible to locate a language expert to find out how sounds in these languages would be rendered if they were to be written as many of them are beginning to be written in social media, etc.", "We thus expect the general method to be applicable for multiple historically unwritten languages.", "In this paper we investigate inducing a Moroccan Darija-English translation lexicon via borrowed French words.", "Moroccan Darija is an historically unwritten dialect of Arabic spoken by millions but lacking in standardization and linguistic resources (Tratz et al., 2013) .", "Moroccan Darija is known to borrow many words from French, one of the most highly resourced languages in the world.", "By mapping Moroccan Darija-French borrowings to their donor French words, we can rapidly create lexical resources for portions of Moroccan Darija vocabulary for which no resources currently exist.", "For example, we could use one of many bilingual French-English dictionaries to bridge into English and create a Moroccan Darija-English translation lexicon that can be used to assist professional translation of Moroccan Darija into English and to assist with construction of Moroccan Darija-English Machine Translation (MT) systems.", "The rest of this paper is structured as follows.", "Section 2 summarizes related work; section 3 explains our method; section 4 discusses experimental results of applying our method to the case of building a Moroccan Darija-English translation lexicon; and section 5 concludes.", "Related Work Translation lexicons are a core resource used for multilingual processing of languages.", "Manual creation of translation lexicons by lexicographers is time-consuming and expensive.", "There are more than 7000 languages in the world, many of which are historically unwritten (Lewis et al., 2015) .", "For a relatively small number of these languages there are extensive resources available that have been manually created.", "It has been noted by others (Mann and Yarowsky, 2001; Schafer and Yarowsky, 2002) that languages are organized into families and that using cognates between 
sister languages can help rapidly create translation lexicons for lower-resourced languages.", "For example, the methods in (Mann and Yarowsky, 2001) are able to detect that English kilograms maps to Portuguese quilogramas via bridge Spanish kilogramos.", "This general idea has been worked on extensively in the context of cognates detection, with 'cognate' typically re-defined to include loanwords as well as true cognates.", "The methods use monolingual data at a minimum and many signals such as orthographic similarity, phonetic similarity, contextual similarity, temporal similarity, frequency similarity, burstiness similarity, and topic similarity (Bloodgood and Strauss, 2017; Irvine and Callison-Burch, 2013; Kondrak et al., 2003; Schafer and Yarowsky, 2002; Mann and Yarowsky, 2001) .", "Inducing translations via loanwords was specifically targeted in .", "While some of these methods don't require bilingual resources, with the possible exception of small bilingual seed dictionaries, they do at a minimum require monolingual text data in the languages to be modeled and sometimes have specific requirements on the monolingual text data such as having text coming from the same time period for each of the languages being modeled.", "For colloquial languages that were historically unwritten, but that are now starting to be written with the advent of social media and web comments, there are often extremely limited resources of any type available, not even large amounts of monolingual text data.", "Moreover, the written data that can be obtained often has non-standard spellings and code-switching with other traditionally written languages.", "Often the code-switching occurs within words whereby the base is borrowed and the affixes are not borrowed, analogous to the multi-language categories \"V\" and \"N\" from (Mericli and Bloodgood, 2012) .", "The data available for historically unwritten languages, and especially the lack thereof, is not suitable for previously developed cognates detection methods that operate as discussed above.", "In the next section we present a method for translation lexicon induction via loanwords that uses expert knowledge and requires zero resources from the borrowing language other than a language informant.", "Method Our method is to take word pronunciations from the donor language we are using and convert them to how they would be rendered in the borrowing language if they were to be borrowed.", "These are our candidate loanwords.", "There are three possible cases for a given generated candidate loanword string: true match string occurs in borrowing language and is a loanword from the donor language; false match string occurs in borrowing language by coincidence but it's not a loanword from the donor language; no match string does not occur in borrowing language.", "For the case of inducing a Moroccan Darija-English translation lexicon via French we start with a French-English bilingual dictionary and take all the French pronunciations in IPA (International Phonetic Alphabet) 1 and convert them to how they would be rendered in Arabic script.", "For this we created a multiple-step transliteration process: Step 1 Break pronunciation into syllables.", "Step 2 Convert each IPA syllable to a string in modified Buckwalter transliteration 2 , which supports a one-to-one mapping to Arabic script.", "Step 3 Convert each syllable's string in modified Buckwalter transliteration to Arabic script.", "Step 4 Merge the resulting Arabic script strings for each syllable to generate a 
candidate loanword string.", "1 https://en.wikipedia.org/wiki/ International_Phonetic_Alphabet 2 The modified version of Buckwalter transliteration, https://en.wikipedia.org/wiki/ Buckwalter_transliteration, replaces special characters such as < and > with alphanumeric characters so that the transliterations are safe for use with other standards such as XML (Extensible Markup Language).", "For more information see (Habash, 2010) .", "For syllabification, for many word pronunciations the syllables are already marked in the IPA by the '.'", "character; if syllables are not already marked in the IPA, we run a simple syllabifier to complete step 1.", "For step 2, we asked a language expert to give us a sequence of rules to convert a syllable's pronunciation to modified Buckwalter transliteration.", "This is itself a multi-step process (see next paragraph for details).", "In step 3, we simply do the one-to-one conversion and obtain Arabic script for each syllable.", "In step 4, we merge the Arabic script for each syllable and get the generated candidate loanword string.", "The multi-step process that takes place in step 2 of the process is: Step 2.1 Make minor vowel adjustments in certain contexts, e.g., when 'a' is between two consonants it is changed to 'A'.", "Step 2.2 Perform bulk of conversion by using table of mappings from IPA characters to modified Buckwalter characters such as 'a'→'a','k'→'k', 'y:'→'iy', etc.", "that were supplied by a language expert.", "Step 2.3 Perform miscellaneous modifications to finalize the modified Buckwalter strings, e.g., if a syllable ends in 'a', then append an 'A' to that syllable.", "The entire conversion process is illustrated in Figure 1 for the French word raconteur.", "At the top of the Figure is the IPA from the French dictionary entry with syllables marked.", "At the next level, step 1 (syllabification) has been completed.", "Step 2.1 doesn't apply to any of the syllables in this word since there are no minor vowel adjustments that are applicable for this word so at the next level each syllable is shown after step 2.2 has been completed.", "The next level shows the syllables after step 2.3 has been completed.", "The next level shows after step 3 has been completed and then at the end the strings are merged to form the candidate loanword.", "Experiments and Discussion In our experiments we extracted a French-English bilingual dictionary using the freely available English Wiktionary dump 20131101 downloaded from http://dumps.wikimedia.", "org/enwiktionary.", "From this dump we extracted all the French words, their pronunciations, Step 1 Step 2.2 Step 2.3 Step 3 Step 4 Figure 1 : Example of French to Arabic Process for the French word raconteur.", "As discussed in the main text, step 2.1 doesn't apply to this example so it is omitted from the diagram to conserve space.", "Note that in the final step the word is in order of Unicode codepoints.", "Then application software that is capable of processing Arabic will render that as a proper Arabic string in right-to-left order with proper character joining adjustments as and their English definitions.", "Using the process described in section 3 to convert each of the French pronunciations into Arabic script yielded 8277 unique loanword candidate strings.", "{ { { { { The data used for testing consists of a million lines of user comments crawled from the Moroccan news website http://www.hespress.", "com.", "The crawled user comments contain Moroccan Darija in heavily code-switched environments.", "While 
this makes for a challenging setting, it is a realistic representation of the types of environments in which historically unwritten languages are being written for the first time.", "The data we used is consistent with well-known codeswitching among Arabic speakers, extending spoken discourse into formal writing (Bentahila and Davies, 1983; Redouane, 2005) .", "The total number of tokens in our Hespress corpus is 18,781,041.", "We found that 1150 of our 8277 loanword candidates appear in our Hespress corpus.", "Moreover, more than a million (1169087) date instances appear in the corpus.", "Recall that a match could be a true match that really is a French loanword or a false match that just happens to coincidentally have string equality with words in the borrowing language, but is not a French loanword.", "False matches are particularly likely to occur for very short words.", "Accordingly, we filter out candidates that are of length less than four characters.", "This leaves us with 838 candidates appearing in the corpus and 217616 candidate instances in the corpus.", "To get an idea of what percentage of our matches are true matches versus false matches, we conducted an annotation exercise with two native Moroccan Darija speakers who also knew at least intermediate French.", "We pulled a random sample 3 of 1185 candidate instances from our corpus and asked each annotator to mark each instance as either: A if the instance is originally from Arabic, F if the instance is originally from French, or U if they were not sure.", "The results are shown in Table 1 .", "There are a substantial number of French loanwords that are found.", "Some examples of translations successfully induced by our method are: omelette ; and bourgeoisie .", "We hypothesize that our method can help improve machine translation (MT) of historically unwritten dialects with nearly zero resources.", "To test this hypothesis, we ran an MT experiment as follows.", "First we selected a random set of sentences from the Hespress corpus that each contained at least one candidate instance and had an MSA/Moroccan Darija/English trilingual translator translate them into English.", "In total, 273 sentences were translated.", "This served as our test set.", "We trained a baseline MT system using all GALE MSA-English parallel corpora available from the Linguistic Data Consortium (LDC) from 2007 to 2013.", "4 We trained the system using Moses 3.0 with default parameters.", "This baseline system achieves BLEU score of 7.48 on our difficult test set of code-switched Moroccan Darija and MSA.", "We trained a second system using the parallel corpora with our induced Moroccan Darija-English translation lexicon appended to the end of the training data.", "This time the BLEU score increased to 8.11, a gain of .63 BLEU points.", "Conclusions With the explosive growth of informal textual electronic communications such as social media, web comments, etc., many colloquial everyday languages that were historically unwritten are now being written for the first time often in heavily code-switched text with traditionally written languages.", "The new written versions of these languages pose significant challenges for multilingual processing technology due to Out-Of-Vocabulary (OOV) challenges.", "Yet it is relatively common that these historically unwritten languages borrow significant amounts of vocabulary from relatively well resourced written languages.", "We presented a method for translation lexicon induction via loanwords for alleviating the OOV 
challenges in these settings where the borrowing language has extremely limited resources available, in many cases not even the substantial amounts of monolingual data that previous cognate and loanword detection methods typically exploit to induce translation lexicons.", "This paper demonstrates induction of a Moroccan Darija-English translation lexicon via bridging French loanwords using this method; in MT experiments, the addition of the induced Moroccan Darija-English lexicon increased system performance by 0.63 BLEU points." ] }
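To make the four-step conversion above concrete, here is a minimal Python sketch. It is not the authors' implementation: the IPA-to-Buckwalter and Buckwalter-to-Arabic tables are tiny illustrative stand-ins for the full tables supplied by the language expert, and the contextual sub-steps 2.1 and 2.3 are elided here (see the sketch after the next slide record).

```python
# Minimal sketch of the four-step IPA -> Arabic-script candidate generation.
# The mapping tables are tiny illustrative stand-ins; the real tables were
# supplied by a language expert.

IPA_TO_BW = {"y:": "iy", "a": "a", "k": "k", "r": "r", "t": "t", "u": "u"}  # step 2.2 (partial)
BW_TO_ARABIC = {"a": "\u064e", "A": "\u0627", "i": "\u0650", "y": "\u064a",
                "u": "\u064f", "k": "\u0643", "r": "\u0631", "t": "\u062a"}  # one-to-one

def syllabify(ipa):
    # Step 1: dictionary pronunciations usually mark syllables with '.';
    # otherwise a simple syllabifier would run here.
    return ipa.split(".")

def syllable_to_bw(syl):
    # Step 2 (condensed to sub-step 2.2): greedy longest-match over the table.
    out, i = "", 0
    keys = sorted(IPA_TO_BW, key=len, reverse=True)
    while i < len(syl):
        for k in keys:
            if syl.startswith(k, i):
                out, i = out + IPA_TO_BW[k], i + len(k)
                break
        else:
            i += 1  # skip symbols missing from this toy table
    return out

def candidate_loanword(ipa):
    # Step 3: one-to-one modified-Buckwalter -> Arabic conversion per syllable;
    # Step 4: merge the syllables into one candidate string.
    sylls = [syllable_to_bw(s) for s in syllabify(ipa)]
    return "".join("".join(BW_TO_ARABIC.get(c, "") for c in s) for s in sylls)

print(candidate_loanword("ra.ku"))  # toy input; real inputs are French IPA pronunciations
```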
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Method", "Experiments and Discussion", "Conclusions" ] }
GEM-SciDuet-train-66#paper-1142#slide-8
Multiple step Transliteration Process
Step 1 Break pronunciation into syllables. Step 2 Convert each IPA syllable to a string in modified Buckwalter transliteration, which is a commonly used transliteration scheme that supports a one-to-one mapping to Arabic script. Step 3 Convert each syllable's string in modified Buckwalter transliteration to Arabic script. Step 4 Merge the resulting Arabic script strings for each syllable to generate a candidate loanword string.
Step 1 Break pronunciation into syllables. Step 2 Convert each IPA syllable to a string in modified Buckwalter transliteration, which is a commonly used transliteration scheme that supports a one-to-one mapping to Arabic script. Step 3 Convert each syllable's string in modified Buckwalter transliteration to Arabic script. Step 4 Merge the resulting Arabic script strings for each syllable to generate a candidate loanword string.
[]
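Step 2 of the process above decomposes into the three sub-steps described in the paper's method section. The sketch below is illustrative only: the vowel set and mapping table are placeholders, and the actual contextual rules and full table came from the language expert.

```python
# Sketch of the three sub-steps inside step 2; vowel set and table are illustrative.
VOWELS = set("aeiou")
TABLE = {"y:": "iy", "a": "a", "k": "k"}  # examples from the text; the full table is expert-supplied

def step2_1(syl):
    # Minor vowel adjustments in certain contexts,
    # e.g. 'a' between two consonants is changed to 'A'.
    chars = list(syl)
    for i in range(1, len(chars) - 1):
        if chars[i] == "a" and chars[i - 1] not in VOWELS and chars[i + 1] not in VOWELS:
            chars[i] = "A"
    return "".join(chars)

def step2_2(syl):
    # Bulk conversion via greedy longest-match over the mapping table.
    out, i = "", 0
    keys = sorted(TABLE, key=len, reverse=True)
    while i < len(syl):
        for k in keys:
            if syl.startswith(k, i):
                out, i = out + TABLE[k], i + len(k)
                break
        else:
            out, i = out + syl[i], i + 1  # pass unmapped symbols through
    return out

def step2_3(bw):
    # Miscellaneous finalization, e.g. append 'A' to a syllable ending in 'a'.
    return bw + "A" if bw.endswith("a") else bw

def step2(syl):
    return step2_3(step2_2(step2_1(syl)))

print(step2("ka"))  # 'ka' -> 'ka' -> 'kaA' under these toy rules
```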
GEM-SciDuet-train-66#paper-1142#slide-9
1142
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords
With the advent of informal electronic communications such as social media, colloquial languages that were historically unwritten are being written for the first time in heavily code-switched environments. We present a method for inducing portions of translation lexicons through the use of expert knowledge in these settings where there are approximately zero resources available other than a language informant, potentially not even large amounts of monolingual data. We investigate inducing a Moroccan Darija-English translation lexicon via French loanwords bridging into English and find that a useful lexicon is induced for human-assisted translation and statistical machine translation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94 ], "paper_content_text": [ "Introduction With the explosive growth of informal electronic communications such as email, social media, web comments, etc., colloquial languages that were historically unwritten are starting to be written for the first time.", "For these languages, there are extremely limited (approximately zero) resources available, not even large amounts of monolingual text data or possibly not even small amounts of monolingual text data.", "Even when audio resources are available, difficulties arise when converting sound to text (Tratz et al., 2013; Robinson and Gadelii, 2003) .", "Moreover, the text data that can be obtained often has non-standard spellings and substantial code-switching with other traditionally written languages (Tratz et al., 2013) .", "In this paper we present a method for the acquisition of translation lexicons via loanwords and expert knowledge that requires zero resources of the borrowing language.", "Many historically unwritten languages borrow from highly resourced languages.", "Also, it is often feasible to locate a language expert to find out how sounds in these languages would be rendered if they were to be written as many of them are beginning to be written in social media, etc.", "We thus expect the general method to be applicable for multiple historically unwritten languages.", "In this paper we investigate inducing a Moroccan Darija-English translation lexicon via borrowed French words.", "Moroccan Darija is an historically unwritten dialect of Arabic spoken by millions but lacking in standardization and linguistic resources (Tratz et al., 2013) .", "Moroccan Darija is known to borrow many words from French, one of the most highly resourced languages in the world.", "By mapping Moroccan Darija-French borrowings to their donor French words, we can rapidly create lexical resources for portions of Moroccan Darija vocabulary for which no resources currently exist.", "For example, we could use one of many bilingual French-English dictionaries to bridge into English and create a Moroccan Darija-English translation lexicon that can be used to assist professional translation of Moroccan Darija into English and to assist with construction of Moroccan Darija-English Machine Translation (MT) systems.", "The rest of this paper is structured as follows.", "Section 2 summarizes related work; section 3 explains our method; section 4 discusses experimental results of applying our method to the case of building a Moroccan Darija-English translation lexicon; and section 5 concludes.", "Related Work Translation lexicons are a core resource used for multilingual processing of languages.", "Manual creation of translation lexicons by lexicographers is time-consuming and expensive.", "There are more than 7000 languages in the world, many of which are historically unwritten (Lewis et al., 2015) .", "For a relatively small number of these languages there are extensive resources available that have been manually created.", "It has been noted by others (Mann and Yarowsky, 2001; Schafer and Yarowsky, 2002) that languages are organized into families and that using cognates between 
sister languages can help rapidly create translation lexicons for lower-resourced languages.", "For example, the methods in (Mann and Yarowsky, 2001) are able to detect that English kilograms maps to Portuguese quilogramas via bridge Spanish kilogramos.", "This general idea has been worked on extensively in the context of cognates detection, with 'cognate' typically re-defined to include loanwords as well as true cognates.", "The methods use monolingual data at a minimum and many signals such as orthographic similarity, phonetic similarity, contextual similarity, temporal similarity, frequency similarity, burstiness similarity, and topic similarity (Bloodgood and Strauss, 2017; Irvine and Callison-Burch, 2013; Kondrak et al., 2003; Schafer and Yarowsky, 2002; Mann and Yarowsky, 2001) .", "Inducing translations via loanwords was specifically targeted in .", "While some of these methods don't require bilingual resources, with the possible exception of small bilingual seed dictionaries, they do at a minimum require monolingual text data in the languages to be modeled and sometimes have specific requirements on the monolingual text data such as having text coming from the same time period for each of the languages being modeled.", "For colloquial languages that were historically unwritten, but that are now starting to be written with the advent of social media and web comments, there are often extremely limited resources of any type available, not even large amounts of monolingual text data.", "Moreover, the written data that can be obtained often has non-standard spellings and code-switching with other traditionally written languages.", "Often the code-switching occurs within words whereby the base is borrowed and the affixes are not borrowed, analogous to the multi-language categories \"V\" and \"N\" from (Mericli and Bloodgood, 2012) .", "The data available for historically unwritten languages, and especially the lack thereof, is not suitable for previously developed cognates detection methods that operate as discussed above.", "In the next section we present a method for translation lexicon induction via loanwords that uses expert knowledge and requires zero resources from the borrowing language other than a language informant.", "Method Our method is to take word pronunciations from the donor language we are using and convert them to how they would be rendered in the borrowing language if they were to be borrowed.", "These are our candidate loanwords.", "There are three possible cases for a given generated candidate loanword string: true match string occurs in borrowing language and is a loanword from the donor language; false match string occurs in borrowing language by coincidence but it's not a loanword from the donor language; no match string does not occur in borrowing language.", "For the case of inducing a Moroccan Darija-English translation lexicon via French we start with a French-English bilingual dictionary and take all the French pronunciations in IPA (International Phonetic Alphabet) 1 and convert them to how they would be rendered in Arabic script.", "For this we created a multiple-step transliteration process: Step 1 Break pronunciation into syllables.", "Step 2 Convert each IPA syllable to a string in modified Buckwalter transliteration 2 , which supports a one-to-one mapping to Arabic script.", "Step 3 Convert each syllable's string in modified Buckwalter transliteration to Arabic script.", "Step 4 Merge the resulting Arabic script strings for each syllable to generate a 
candidate loanword string.", "1 https://en.wikipedia.org/wiki/ International_Phonetic_Alphabet 2 The modified version of Buckwalter transliteration, https://en.wikipedia.org/wiki/ Buckwalter_transliteration, replaces special characters such as < and > with alphanumeric characters so that the transliterations are safe for use with other standards such as XML (Extensible Markup Language).", "For more information see (Habash, 2010) .", "For syllabification, for many word pronunciations the syllables are already marked in the IPA by the '.'", "character; if syllables are not already marked in the IPA, we run a simple syllabifier to complete step 1.", "For step 2, we asked a language expert to give us a sequence of rules to convert a syllable's pronunciation to modified Buckwalter transliteration.", "This is itself a multi-step process (see next paragraph for details).", "In step 3, we simply do the one-to-one conversion and obtain Arabic script for each syllable.", "In step 4, we merge the Arabic script for each syllable and get the generated candidate loanword string.", "The multi-step process that takes place in step 2 of the process is: Step 2.1 Make minor vowel adjustments in certain contexts, e.g., when 'a' is between two consonants it is changed to 'A'.", "Step 2.2 Perform bulk of conversion by using table of mappings from IPA characters to modified Buckwalter characters such as 'a'→'a','k'→'k', 'y:'→'iy', etc.", "that were supplied by a language expert.", "Step 2.3 Perform miscellaneous modifications to finalize the modified Buckwalter strings, e.g., if a syllable ends in 'a', then append an 'A' to that syllable.", "The entire conversion process is illustrated in Figure 1 for the French word raconteur.", "At the top of the Figure is the IPA from the French dictionary entry with syllables marked.", "At the next level, step 1 (syllabification) has been completed.", "Step 2.1 doesn't apply to any of the syllables in this word since there are no minor vowel adjustments that are applicable for this word so at the next level each syllable is shown after step 2.2 has been completed.", "The next level shows the syllables after step 2.3 has been completed.", "The next level shows after step 3 has been completed and then at the end the strings are merged to form the candidate loanword.", "Experiments and Discussion In our experiments we extracted a French-English bilingual dictionary using the freely available English Wiktionary dump 20131101 downloaded from http://dumps.wikimedia.", "org/enwiktionary.", "From this dump we extracted all the French words, their pronunciations, Step 1 Step 2.2 Step 2.3 Step 3 Step 4 Figure 1 : Example of French to Arabic Process for the French word raconteur.", "As discussed in the main text, step 2.1 doesn't apply to this example so it is omitted from the diagram to conserve space.", "Note that in the final step the word is in order of Unicode codepoints.", "Then application software that is capable of processing Arabic will render that as a proper Arabic string in right-to-left order with proper character joining adjustments as and their English definitions.", "Using the process described in section 3 to convert each of the French pronunciations into Arabic script yielded 8277 unique loanword candidate strings.", "{ { { { { The data used for testing consists of a million lines of user comments crawled from the Moroccan news website http://www.hespress.", "com.", "The crawled user comments contain Moroccan Darija in heavily code-switched environments.", "While 
this makes for a challenging setting, it is a realistic representation of the types of environments in which historically unwritten languages are being written for the first time.", "The data we used is consistent with well-known code-switching among Arabic speakers, extending spoken discourse into formal writing (Bentahila and Davies, 1983; Redouane, 2005).", "The total number of tokens in our Hespress corpus is 18,781,041.", "We found that 1150 of our 8277 loanword candidates appear in our Hespress corpus.", "Moreover, more than a million (1169087) candidate instances appear in the corpus.", "Recall that a match could be a true match that really is a French loanword, or a false match that just happens to coincidentally have string equality with words in the borrowing language but is not a French loanword.", "False matches are particularly likely to occur for very short words.", "Accordingly, we filter out candidates that are of length less than four characters.", "This leaves us with 838 candidates appearing in the corpus and 217616 candidate instances in the corpus.", "To get an idea of what percentage of our matches are true matches versus false matches, we conducted an annotation exercise with two native Moroccan Darija speakers who also knew at least intermediate French.", "We pulled a random sample of 1185 candidate instances from our corpus and asked each annotator to mark each instance as either: A if the instance is originally from Arabic, F if the instance is originally from French, or U if they were not sure.", "The results are shown in Table 1.", "A substantial number of French loanwords are found.", "Some examples of translations successfully induced by our method are omelette and bourgeoisie.", "We hypothesize that our method can help improve machine translation (MT) of historically unwritten dialects with nearly zero resources.", "To test this hypothesis, we ran an MT experiment as follows.", "First we selected a random set of sentences from the Hespress corpus that each contained at least one candidate instance and had an MSA/Moroccan Darija/English trilingual translator translate them into English.", "In total, 273 sentences were translated.", "This served as our test set.", "We trained a baseline MT system using all GALE MSA-English parallel corpora available from the Linguistic Data Consortium (LDC) from 2007 to 2013.", "We trained the system using Moses 3.0 with default parameters.", "This baseline system achieves a BLEU score of 7.48 on our difficult test set of code-switched Moroccan Darija and MSA.", "We trained a second system using the parallel corpora with our induced Moroccan Darija-English translation lexicon appended to the end of the training data.", "This time the BLEU score increased to 8.11, a gain of 0.63 BLEU points.", "Conclusions With the explosive growth of informal textual electronic communications such as social media, web comments, etc., many colloquial everyday languages that were historically unwritten are now being written for the first time, often in heavily code-switched text with traditionally written languages.", "The new written versions of these languages pose significant challenges for multilingual processing technology due to Out-Of-Vocabulary (OOV) problems.", "Yet it is relatively common that these historically unwritten languages borrow significant amounts of vocabulary from relatively well-resourced written languages.", "We presented a method for translation lexicon induction via loanwords for alleviating the OOV
challenges in these settings where the borrowing language has extremely limited resources available, in many cases not even the substantial amounts of monolingual data that previous cognate and loanword detection methods typically exploit to induce translation lexicons.", "This paper demonstrates induction of a Moroccan Darija-English translation lexicon via bridging French loanwords using this method; in MT experiments, the addition of the induced Moroccan Darija-English lexicon increased system performance by 0.63 BLEU points." ] }
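The candidate matching and length filtering described in the experiments reduce the 8277 generated strings to 838 attested candidates and 217616 candidate instances. A rough sketch of that step, with hypothetical file names:

```python
from collections import Counter

# Hypothetical file names: one generated candidate per line, one comment per line.
candidates = set(open("candidates.txt", encoding="utf-8").read().split())

# Very short strings are prone to coincidental (false) matches, so drop
# candidates shorter than four characters.
candidates = {c for c in candidates if len(c) >= 4}

instance_counts = Counter()
with open("hespress_comments.txt", encoding="utf-8") as corpus:
    for line in corpus:
        for tok in line.split():
            if tok in candidates:
                instance_counts[tok] += 1

print(len(instance_counts))           # distinct attested candidates (838 in the paper)
print(sum(instance_counts.values()))  # candidate instances (217616 in the paper)
```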
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Method", "Experiments and Discussion", "Conclusions" ] }
GEM-SciDuet-train-66#paper-1142#slide-9
Step 2
Buckwalter characters such as 'a'→'a', 'k'→'k', 'y:'→'iy', etc. that were supplied by a language expert.
Buckwalter characters such as 'a'→'a', 'k'→'k', 'y:'→'iy', etc. that were supplied by a language expert.
[]
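The contrastive MT system in the experiments differs from the baseline only in that the induced lexicon is appended to the end of the parallel training data before training Moses 3.0 with default parameters. A sketch of that augmentation step, with hypothetical file names and a tab-separated lexicon format assumed:

```python
# Append the induced Darija-English lexicon to the end of the parallel
# training data; file names and the tab-separated format are assumptions.
with open("lexicon.darija-en.tsv", encoding="utf-8") as lex, \
     open("train.darija", "a", encoding="utf-8") as src, \
     open("train.en", "a", encoding="utf-8") as tgt:
    for line in lex:
        darija, english = line.rstrip("\n").split("\t")
        src.write(darija + "\n")
        tgt.write(english + "\n")
# The augmented files are then fed to the same Moses 3.0 training pipeline
# (default parameters) that produced the baseline system.
```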
GEM-SciDuet-train-66#paper-1142#slide-10
1142
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords
GEM-SciDuet-train-66#paper-1142#slide-10
Experimental Data Sources
We extracted a French-English bilingual dictionary using the freely available English Wiktionary dump 20131101 downloaded from http://dumps.wikimedia.org/enwiktionary. The data used for testing consists of a million lines of user comments crawled from the Moroccan news website http://www.hespress.com.
We extracted a French-English bilingual dictionary using the freely available English Wiktionary dump 20131101 downloaded from http://dumps.wikimedia.org/enwiktionary. The data used for testing consists of a million lines of user comments crawled from the Moroccan news website http://www.hespress.com.
[]
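The dictionary extraction pulls, for each French entry in the Wiktionary dump, the headword, its IPA pronunciation, and an English gloss. The paper does not give its extraction code, and real MediaWiki dump parsing is messier than this; the sketch below only illustrates the idea, assuming pronunciations appear between slashes inside the ==French== section and glosses on '# ' definition lines.

```python
import re

def extract_entries(pages):
    """Yield (word, ipa, gloss) triples from (title, wikitext) pairs.

    Purely illustrative: assumes the pronunciation appears between slashes
    inside the ==French== section and the gloss on the first '# ' line.
    """
    for title, text in pages:
        section = re.search(r"==\s*French\s*==(.*?)(?:\n==[^=]|\Z)", text, re.S)
        if not section:
            continue
        body = section.group(1)
        ipa = re.search(r"/([^/\n]+)/", body)
        gloss = re.search(r"^#\s*(.+)$", body, re.M)
        if ipa and gloss:
            yield title, ipa.group(1), gloss.group(1)
```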
GEM-SciDuet-train-66#paper-1142#slide-11
1142
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords
GEM-SciDuet-train-66#paper-1142#slide-11
Initial Statistics of our Data
Converting each of the French pronunciations from our dictionary into Arabic script yielded 8277 unique loanword candidates. The total number of tokens in our Hespress corpus is 18,781,041. We found that 1150 of our 8277 loanword candidates appear in our Hespress corpus. More than a million (1169087) loanword candidate instances appear in the corpus.
Converting each of the French pronunciations from our dictionary into Arabic script yielded 8277 unique loanword candidates. The total number of tokens in our Hespress corpus is 18,781,041. We found that 1150 of our 8277 loanword candidates appear in our Hespress corpus. More than a million (1169087) loanword candidate instances appear in the corpus.
[]
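The statistics on this slide are plain corpus counts over the unfiltered candidate set (the length-four filter is applied later). A sketch with hypothetical file names:

```python
# Token count, attested-candidate count, and instance count over the raw
# (unfiltered) candidate set; file names are hypothetical.
candidates = set(open("candidates.txt", encoding="utf-8").read().split())  # 8277 strings

total_tokens, instances, attested = 0, 0, set()
with open("hespress_comments.txt", encoding="utf-8") as corpus:
    for line in corpus:
        toks = line.split()
        total_tokens += len(toks)
        for t in toks:
            if t in candidates:
                attested.add(t)
                instances += 1

print(total_tokens)   # 18,781,041 in the paper
print(len(attested))  # 1150 candidates attested
print(instances)      # 1,169,087 candidate instances
```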
GEM-SciDuet-train-66#paper-1142#slide-12
1142
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords
challenges in these settings where the borrowing language has extremely limited amounts of resources available, in many cases not even substantial amounts of monolingual data that is typically exploited by previous cognates and loanword detection methods to induce translation lexicons.", "This paper demonstrates induction of a Moroccan Darija-English translation lexicon via bridging French loanwords using the method and in MT experiments, the addition of the induced Moroccan Darija-English lexicon increased system performance by .63 BLEU points." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Method", "Experiments and Discussion", "Conclusions" ] }
GEM-SciDuet-train-66#paper-1142#slide-12
Filtering out short words
False matches are particularly likely to occur for very short words. So we filter out candidates that are of length less than four characters. This leaves us with 838 candidates appearing in the corpus and 217616 candidate instances in the corpus.
False matches are particularly likely to occur for very short words. So we filter out candidates that are of length less than four characters. This leaves us with 838 candidates appearing in the corpus and 217616 candidate instances in the corpus.
[]
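The slide-12 row above describes a simple length-based filter over the generated loanword candidates. Below is a minimal sketch of that step, assuming the candidates are plain strings and the corpus is already tokenized; the names `filter_candidates`, `candidates`, and `corpus_tokens` are illustrative stand-ins, not code released with the paper.

```python
# Hypothetical sketch of the short-word filter from the slide-12 row:
# drop candidates shorter than four characters, then count how many
# surviving candidates (types) and candidate instances (tokens) occur
# in the corpus.
def filter_candidates(candidates, corpus_tokens, min_len=4):
    vocab = set(corpus_tokens)
    kept = {c for c in candidates if len(c) >= min_len and c in vocab}
    n_instances = sum(1 for tok in corpus_tokens if tok in kept)
    return kept, n_instances
```

On the paper's data this step reportedly reduces the 1150 matching candidates to 838, with 217616 candidate instances remaining in the Hespress corpus.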
GEM-SciDuet-train-66#paper-1142#slide-13
1142
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords
With the advent of informal electronic communications such as social media, colloquial languages that were historically unwritten are being written for the first time in heavily code-switched environments. We present a method for inducing portions of translation lexicons through the use of expert knowledge in these settings where there are approximately zero resources available other than a language informant, potentially not even large amounts of monolingual data. We investigate inducing a Moroccan Darija-English translation lexicon via French loanwords bridging into English and find that a useful lexicon is induced for human-assisted translation and statistical machine translation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94 ], "paper_content_text": [ "Introduction With the explosive growth of informal electronic communications such as email, social media, web comments, etc., colloquial languages that were historically unwritten are starting to be written for the first time.", "For these languages, there are extremely limited (approximately zero) resources available, not even large amounts of monolingual text data or possibly not even small amounts of monolingual text data.", "Even when audio resources are available, difficulties arise when converting sound to text (Tratz et al., 2013; Robinson and Gadelii, 2003) .", "Moreover, the text data that can be obtained often has non-standard spellings and substantial code-switching with other traditionally written languages (Tratz et al., 2013) .", "In this paper we present a method for the acquisition of translation lexicons via loanwords and expert knowledge that requires zero resources of the borrowing language.", "Many historically unwritten languages borrow from highly resourced languages.", "Also, it is often feasible to locate a language expert to find out how sounds in these languages would be rendered if they were to be written as many of them are beginning to be written in social media, etc.", "We thus expect the general method to be applicable for multiple historically unwritten languages.", "In this paper we investigate inducing a Moroccan Darija-English translation lexicon via borrowed French words.", "Moroccan Darija is an historically unwritten dialect of Arabic spoken by millions but lacking in standardization and linguistic resources (Tratz et al., 2013) .", "Moroccan Darija is known to borrow many words from French, one of the most highly resourced languages in the world.", "By mapping Moroccan Darija-French borrowings to their donor French words, we can rapidly create lexical resources for portions of Moroccan Darija vocabulary for which no resources currently exist.", "For example, we could use one of many bilingual French-English dictionaries to bridge into English and create a Moroccan Darija-English translation lexicon that can be used to assist professional translation of Moroccan Darija into English and to assist with construction of Moroccan Darija-English Machine Translation (MT) systems.", "The rest of this paper is structured as follows.", "Section 2 summarizes related work; section 3 explains our method; section 4 discusses experimental results of applying our method to the case of building a Moroccan Darija-English translation lexicon; and section 5 concludes.", "Related Work Translation lexicons are a core resource used for multilingual processing of languages.", "Manual creation of translation lexicons by lexicographers is time-consuming and expensive.", "There are more than 7000 languages in the world, many of which are historically unwritten (Lewis et al., 2015) .", "For a relatively small number of these languages there are extensive resources available that have been manually created.", "It has been noted by others (Mann and Yarowsky, 2001; Schafer and Yarowsky, 2002) that languages are organized into families and that using cognates between 
sister languages can help rapidly create translation lexicons for lower-resourced languages.", "For example, the methods in (Mann and Yarowsky, 2001) are able to detect that English kilograms maps to Portuguese quilogramas via bridge Spanish kilogramos.", "This general idea has been worked on extensively in the context of cognates detection, with 'cognate' typically re-defined to include loanwords as well as true cognates.", "The methods use monolingual data at a minimum and many signals such as orthographic similarity, phonetic similarity, contextual similarity, temporal similarity, frequency similarity, burstiness similarity, and topic similarity (Bloodgood and Strauss, 2017; Irvine and Callison-Burch, 2013; Kondrak et al., 2003; Schafer and Yarowsky, 2002; Mann and Yarowsky, 2001) .", "Inducing translations via loanwords was specifically targeted in .", "While some of these methods don't require bilingual resources, with the possible exception of small bilingual seed dictionaries, they do at a minimum require monolingual text data in the languages to be modeled and sometimes have specific requirements on the monolingual text data such as having text coming from the same time period for each of the languages being modeled.", "For colloquial languages that were historically unwritten, but that are now starting to be written with the advent of social media and web comments, there are often extremely limited resources of any type available, not even large amounts of monolingual text data.", "Moreover, the written data that can be obtained often has non-standard spellings and code-switching with other traditionally written languages.", "Often the code-switching occurs within words whereby the base is borrowed and the affixes are not borrowed, analogous to the multi-language categories \"V\" and \"N\" from (Mericli and Bloodgood, 2012) .", "The data available for historically unwritten languages, and especially the lack thereof, is not suitable for previously developed cognates detection methods that operate as discussed above.", "In the next section we present a method for translation lexicon induction via loanwords that uses expert knowledge and requires zero resources from the borrowing language other than a language informant.", "Method Our method is to take word pronunciations from the donor language we are using and convert them to how they would be rendered in the borrowing language if they were to be borrowed.", "These are our candidate loanwords.", "There are three possible cases for a given generated candidate loanword string: true match string occurs in borrowing language and is a loanword from the donor language; false match string occurs in borrowing language by coincidence but it's not a loanword from the donor language; no match string does not occur in borrowing language.", "For the case of inducing a Moroccan Darija-English translation lexicon via French we start with a French-English bilingual dictionary and take all the French pronunciations in IPA (International Phonetic Alphabet) 1 and convert them to how they would be rendered in Arabic script.", "For this we created a multiple-step transliteration process: Step 1 Break pronunciation into syllables.", "Step 2 Convert each IPA syllable to a string in modified Buckwalter transliteration 2 , which supports a one-to-one mapping to Arabic script.", "Step 3 Convert each syllable's string in modified Buckwalter transliteration to Arabic script.", "Step 4 Merge the resulting Arabic script strings for each syllable to generate a 
candidate loanword string.", "1 https://en.wikipedia.org/wiki/ International_Phonetic_Alphabet 2 The modified version of Buckwalter transliteration, https://en.wikipedia.org/wiki/ Buckwalter_transliteration, replaces special characters such as < and > with alphanumeric characters so that the transliterations are safe for use with other standards such as XML (Extensible Markup Language).", "For more information see (Habash, 2010) .", "For syllabification, for many word pronunciations the syllables are already marked in the IPA by the '.'", "character; if syllables are not already marked in the IPA, we run a simple syllabifier to complete step 1.", "For step 2, we asked a language expert to give us a sequence of rules to convert a syllable's pronunciation to modified Buckwalter transliteration.", "This is itself a multi-step process (see next paragraph for details).", "In step 3, we simply do the one-to-one conversion and obtain Arabic script for each syllable.", "In step 4, we merge the Arabic script for each syllable and get the generated candidate loanword string.", "The multi-step process that takes place in step 2 of the process is: Step 2.1 Make minor vowel adjustments in certain contexts, e.g., when 'a' is between two consonants it is changed to 'A'.", "Step 2.2 Perform bulk of conversion by using table of mappings from IPA characters to modified Buckwalter characters such as 'a'→'a','k'→'k', 'y:'→'iy', etc.", "that were supplied by a language expert.", "Step 2.3 Perform miscellaneous modifications to finalize the modified Buckwalter strings, e.g., if a syllable ends in 'a', then append an 'A' to that syllable.", "The entire conversion process is illustrated in Figure 1 for the French word raconteur.", "At the top of the Figure is the IPA from the French dictionary entry with syllables marked.", "At the next level, step 1 (syllabification) has been completed.", "Step 2.1 doesn't apply to any of the syllables in this word since there are no minor vowel adjustments that are applicable for this word so at the next level each syllable is shown after step 2.2 has been completed.", "The next level shows the syllables after step 2.3 has been completed.", "The next level shows after step 3 has been completed and then at the end the strings are merged to form the candidate loanword.", "Experiments and Discussion In our experiments we extracted a French-English bilingual dictionary using the freely available English Wiktionary dump 20131101 downloaded from http://dumps.wikimedia.", "org/enwiktionary.", "From this dump we extracted all the French words, their pronunciations, Step 1 Step 2.2 Step 2.3 Step 3 Step 4 Figure 1 : Example of French to Arabic Process for the French word raconteur.", "As discussed in the main text, step 2.1 doesn't apply to this example so it is omitted from the diagram to conserve space.", "Note that in the final step the word is in order of Unicode codepoints.", "Then application software that is capable of processing Arabic will render that as a proper Arabic string in right-to-left order with proper character joining adjustments as and their English definitions.", "Using the process described in section 3 to convert each of the French pronunciations into Arabic script yielded 8277 unique loanword candidate strings.", "{ { { { { The data used for testing consists of a million lines of user comments crawled from the Moroccan news website http://www.hespress.", "com.", "The crawled user comments contain Moroccan Darija in heavily code-switched environments.", "While 
this makes for a challenging setting, it is a realistic representation of the types of environments in which historically unwritten languages are being written for the first time.", "The data we used is consistent with well-known codeswitching among Arabic speakers, extending spoken discourse into formal writing (Bentahila and Davies, 1983; Redouane, 2005) .", "The total number of tokens in our Hespress corpus is 18,781,041.", "We found that 1150 of our 8277 loanword candidates appear in our Hespress corpus.", "Moreover, more than a million (1169087) date instances appear in the corpus.", "Recall that a match could be a true match that really is a French loanword or a false match that just happens to coincidentally have string equality with words in the borrowing language, but is not a French loanword.", "False matches are particularly likely to occur for very short words.", "Accordingly, we filter out candidates that are of length less than four characters.", "This leaves us with 838 candidates appearing in the corpus and 217616 candidate instances in the corpus.", "To get an idea of what percentage of our matches are true matches versus false matches, we conducted an annotation exercise with two native Moroccan Darija speakers who also knew at least intermediate French.", "We pulled a random sample 3 of 1185 candidate instances from our corpus and asked each annotator to mark each instance as either: A if the instance is originally from Arabic, F if the instance is originally from French, or U if they were not sure.", "The results are shown in Table 1 .", "There are a substantial number of French loanwords that are found.", "Some examples of translations successfully induced by our method are: omelette ; and bourgeoisie .", "We hypothesize that our method can help improve machine translation (MT) of historically unwritten dialects with nearly zero resources.", "To test this hypothesis, we ran an MT experiment as follows.", "First we selected a random set of sentences from the Hespress corpus that each contained at least one candidate instance and had an MSA/Moroccan Darija/English trilingual translator translate them into English.", "In total, 273 sentences were translated.", "This served as our test set.", "We trained a baseline MT system using all GALE MSA-English parallel corpora available from the Linguistic Data Consortium (LDC) from 2007 to 2013.", "4 We trained the system using Moses 3.0 with default parameters.", "This baseline system achieves BLEU score of 7.48 on our difficult test set of code-switched Moroccan Darija and MSA.", "We trained a second system using the parallel corpora with our induced Moroccan Darija-English translation lexicon appended to the end of the training data.", "This time the BLEU score increased to 8.11, a gain of .63 BLEU points.", "Conclusions With the explosive growth of informal textual electronic communications such as social media, web comments, etc., many colloquial everyday languages that were historically unwritten are now being written for the first time often in heavily code-switched text with traditionally written languages.", "The new written versions of these languages pose significant challenges for multilingual processing technology due to Out-Of-Vocabulary (OOV) challenges.", "Yet it is relatively common that these historically unwritten languages borrow significant amounts of vocabulary from relatively well resourced written languages.", "We presented a method for translation lexicon induction via loanwords for alleviating the OOV 
challenges in these settings where the borrowing language has extremely limited amounts of resources available, in many cases not even substantial amounts of monolingual data that is typically exploited by previous cognates and loanword detection methods to induce translation lexicons.", "This paper demonstrates induction of a Moroccan Darija-English translation lexicon via bridging French loanwords using the method and in MT experiments, the addition of the induced Moroccan Darija-English lexicon increased system performance by .63 BLEU points." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Method", "Experiments and Discussion", "Conclusions" ] }
GEM-SciDuet-train-66#paper-1142#slide-13
Percentage of True Matches versus False Matches
We conducted an annotation exercise with two native Moroccan Darija speakers who also knew at least intermediate French. We pulled a random sample of 1185 candidate instances from our corpus and asked each annotator to mark each instance as either: A if the instance is originally from Arabic, F if the instance is originally from French, or U if they were not sure.
We conducted an annotation exercise with two native Moroccan Darija speakers who also knew at least intermediate French. We pulled a random sample of 1185 candidate instances from our corpus and asked each annotator to mark each instance as either: A if the instance is originally from Arabic, F if the instance is originally from French, or U if they were not sure.
[]
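The slide-13 row above describes the A/F/U annotation exercise over 1185 sampled candidate instances. A small sketch of how per-annotator tallies might be computed follows; the label encoding and function name are assumptions, not the authors' code.

```python
from collections import Counter

# Tally one annotator's labels: 'A' = originally Arabic,
# 'F' = originally French (a true loanword match), 'U' = unsure.
def tally(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    # Share of sampled instances judged to be genuine French loanwords.
    french_rate = counts.get('F', 0) / total if total else 0.0
    return counts.get('A', 0), counts.get('F', 0), counts.get('U', 0), french_rate

# Example: tally(['F', 'A', 'U', 'F']) -> (1, 2, 1, 0.5)
```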
GEM-SciDuet-train-66#paper-1142#slide-14
1142
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords
With the advent of informal electronic communications such as social media, colloquial languages that were historically unwritten are being written for the first time in heavily code-switched environments. We present a method for inducing portions of translation lexicons through the use of expert knowledge in these settings where there are approximately zero resources available other than a language informant, potentially not even large amounts of monolingual data. We investigate inducing a Moroccan Darija-English translation lexicon via French loanwords bridging into English and find that a useful lexicon is induced for human-assisted translation and statistical machine translation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94 ], "paper_content_text": [ "Introduction With the explosive growth of informal electronic communications such as email, social media, web comments, etc., colloquial languages that were historically unwritten are starting to be written for the first time.", "For these languages, there are extremely limited (approximately zero) resources available, not even large amounts of monolingual text data or possibly not even small amounts of monolingual text data.", "Even when audio resources are available, difficulties arise when converting sound to text (Tratz et al., 2013; Robinson and Gadelii, 2003) .", "Moreover, the text data that can be obtained often has non-standard spellings and substantial code-switching with other traditionally written languages (Tratz et al., 2013) .", "In this paper we present a method for the acquisition of translation lexicons via loanwords and expert knowledge that requires zero resources of the borrowing language.", "Many historically unwritten languages borrow from highly resourced languages.", "Also, it is often feasible to locate a language expert to find out how sounds in these languages would be rendered if they were to be written as many of them are beginning to be written in social media, etc.", "We thus expect the general method to be applicable for multiple historically unwritten languages.", "In this paper we investigate inducing a Moroccan Darija-English translation lexicon via borrowed French words.", "Moroccan Darija is an historically unwritten dialect of Arabic spoken by millions but lacking in standardization and linguistic resources (Tratz et al., 2013) .", "Moroccan Darija is known to borrow many words from French, one of the most highly resourced languages in the world.", "By mapping Moroccan Darija-French borrowings to their donor French words, we can rapidly create lexical resources for portions of Moroccan Darija vocabulary for which no resources currently exist.", "For example, we could use one of many bilingual French-English dictionaries to bridge into English and create a Moroccan Darija-English translation lexicon that can be used to assist professional translation of Moroccan Darija into English and to assist with construction of Moroccan Darija-English Machine Translation (MT) systems.", "The rest of this paper is structured as follows.", "Section 2 summarizes related work; section 3 explains our method; section 4 discusses experimental results of applying our method to the case of building a Moroccan Darija-English translation lexicon; and section 5 concludes.", "Related Work Translation lexicons are a core resource used for multilingual processing of languages.", "Manual creation of translation lexicons by lexicographers is time-consuming and expensive.", "There are more than 7000 languages in the world, many of which are historically unwritten (Lewis et al., 2015) .", "For a relatively small number of these languages there are extensive resources available that have been manually created.", "It has been noted by others (Mann and Yarowsky, 2001; Schafer and Yarowsky, 2002) that languages are organized into families and that using cognates between 
sister languages can help rapidly create translation lexicons for lower-resourced languages.", "For example, the methods in (Mann and Yarowsky, 2001) are able to detect that English kilograms maps to Portuguese quilogramas via bridge Spanish kilogramos.", "This general idea has been worked on extensively in the context of cognates detection, with 'cognate' typically re-defined to include loanwords as well as true cognates.", "The methods use monolingual data at a minimum and many signals such as orthographic similarity, phonetic similarity, contextual similarity, temporal similarity, frequency similarity, burstiness similarity, and topic similarity (Bloodgood and Strauss, 2017; Irvine and Callison-Burch, 2013; Kondrak et al., 2003; Schafer and Yarowsky, 2002; Mann and Yarowsky, 2001) .", "Inducing translations via loanwords was specifically targeted in .", "While some of these methods don't require bilingual resources, with the possible exception of small bilingual seed dictionaries, they do at a minimum require monolingual text data in the languages to be modeled and sometimes have specific requirements on the monolingual text data such as having text coming from the same time period for each of the languages being modeled.", "For colloquial languages that were historically unwritten, but that are now starting to be written with the advent of social media and web comments, there are often extremely limited resources of any type available, not even large amounts of monolingual text data.", "Moreover, the written data that can be obtained often has non-standard spellings and code-switching with other traditionally written languages.", "Often the code-switching occurs within words whereby the base is borrowed and the affixes are not borrowed, analogous to the multi-language categories \"V\" and \"N\" from (Mericli and Bloodgood, 2012) .", "The data available for historically unwritten languages, and especially the lack thereof, is not suitable for previously developed cognates detection methods that operate as discussed above.", "In the next section we present a method for translation lexicon induction via loanwords that uses expert knowledge and requires zero resources from the borrowing language other than a language informant.", "Method Our method is to take word pronunciations from the donor language we are using and convert them to how they would be rendered in the borrowing language if they were to be borrowed.", "These are our candidate loanwords.", "There are three possible cases for a given generated candidate loanword string: true match string occurs in borrowing language and is a loanword from the donor language; false match string occurs in borrowing language by coincidence but it's not a loanword from the donor language; no match string does not occur in borrowing language.", "For the case of inducing a Moroccan Darija-English translation lexicon via French we start with a French-English bilingual dictionary and take all the French pronunciations in IPA (International Phonetic Alphabet) 1 and convert them to how they would be rendered in Arabic script.", "For this we created a multiple-step transliteration process: Step 1 Break pronunciation into syllables.", "Step 2 Convert each IPA syllable to a string in modified Buckwalter transliteration 2 , which supports a one-to-one mapping to Arabic script.", "Step 3 Convert each syllable's string in modified Buckwalter transliteration to Arabic script.", "Step 4 Merge the resulting Arabic script strings for each syllable to generate a 
candidate loanword string.", "1 https://en.wikipedia.org/wiki/ International_Phonetic_Alphabet 2 The modified version of Buckwalter transliteration, https://en.wikipedia.org/wiki/ Buckwalter_transliteration, replaces special characters such as < and > with alphanumeric characters so that the transliterations are safe for use with other standards such as XML (Extensible Markup Language).", "For more information see (Habash, 2010) .", "For syllabification, for many word pronunciations the syllables are already marked in the IPA by the '.'", "character; if syllables are not already marked in the IPA, we run a simple syllabifier to complete step 1.", "For step 2, we asked a language expert to give us a sequence of rules to convert a syllable's pronunciation to modified Buckwalter transliteration.", "This is itself a multi-step process (see next paragraph for details).", "In step 3, we simply do the one-to-one conversion and obtain Arabic script for each syllable.", "In step 4, we merge the Arabic script for each syllable and get the generated candidate loanword string.", "The multi-step process that takes place in step 2 of the process is: Step 2.1 Make minor vowel adjustments in certain contexts, e.g., when 'a' is between two consonants it is changed to 'A'.", "Step 2.2 Perform bulk of conversion by using table of mappings from IPA characters to modified Buckwalter characters such as 'a'→'a','k'→'k', 'y:'→'iy', etc.", "that were supplied by a language expert.", "Step 2.3 Perform miscellaneous modifications to finalize the modified Buckwalter strings, e.g., if a syllable ends in 'a', then append an 'A' to that syllable.", "The entire conversion process is illustrated in Figure 1 for the French word raconteur.", "At the top of the Figure is the IPA from the French dictionary entry with syllables marked.", "At the next level, step 1 (syllabification) has been completed.", "Step 2.1 doesn't apply to any of the syllables in this word since there are no minor vowel adjustments that are applicable for this word so at the next level each syllable is shown after step 2.2 has been completed.", "The next level shows the syllables after step 2.3 has been completed.", "The next level shows after step 3 has been completed and then at the end the strings are merged to form the candidate loanword.", "Experiments and Discussion In our experiments we extracted a French-English bilingual dictionary using the freely available English Wiktionary dump 20131101 downloaded from http://dumps.wikimedia.", "org/enwiktionary.", "From this dump we extracted all the French words, their pronunciations, Step 1 Step 2.2 Step 2.3 Step 3 Step 4 Figure 1 : Example of French to Arabic Process for the French word raconteur.", "As discussed in the main text, step 2.1 doesn't apply to this example so it is omitted from the diagram to conserve space.", "Note that in the final step the word is in order of Unicode codepoints.", "Then application software that is capable of processing Arabic will render that as a proper Arabic string in right-to-left order with proper character joining adjustments as and their English definitions.", "Using the process described in section 3 to convert each of the French pronunciations into Arabic script yielded 8277 unique loanword candidate strings.", "{ { { { { The data used for testing consists of a million lines of user comments crawled from the Moroccan news website http://www.hespress.", "com.", "The crawled user comments contain Moroccan Darija in heavily code-switched environments.", "While 
this makes for a challenging setting, it is a realistic representation of the types of environments in which historically unwritten languages are being written for the first time.", "The data we used is consistent with well-known codeswitching among Arabic speakers, extending spoken discourse into formal writing (Bentahila and Davies, 1983; Redouane, 2005) .", "The total number of tokens in our Hespress corpus is 18,781,041.", "We found that 1150 of our 8277 loanword candidates appear in our Hespress corpus.", "Moreover, more than a million (1169087) date instances appear in the corpus.", "Recall that a match could be a true match that really is a French loanword or a false match that just happens to coincidentally have string equality with words in the borrowing language, but is not a French loanword.", "False matches are particularly likely to occur for very short words.", "Accordingly, we filter out candidates that are of length less than four characters.", "This leaves us with 838 candidates appearing in the corpus and 217616 candidate instances in the corpus.", "To get an idea of what percentage of our matches are true matches versus false matches, we conducted an annotation exercise with two native Moroccan Darija speakers who also knew at least intermediate French.", "We pulled a random sample 3 of 1185 candidate instances from our corpus and asked each annotator to mark each instance as either: A if the instance is originally from Arabic, F if the instance is originally from French, or U if they were not sure.", "The results are shown in Table 1 .", "There are a substantial number of French loanwords that are found.", "Some examples of translations successfully induced by our method are: omelette ; and bourgeoisie .", "We hypothesize that our method can help improve machine translation (MT) of historically unwritten dialects with nearly zero resources.", "To test this hypothesis, we ran an MT experiment as follows.", "First we selected a random set of sentences from the Hespress corpus that each contained at least one candidate instance and had an MSA/Moroccan Darija/English trilingual translator translate them into English.", "In total, 273 sentences were translated.", "This served as our test set.", "We trained a baseline MT system using all GALE MSA-English parallel corpora available from the Linguistic Data Consortium (LDC) from 2007 to 2013.", "4 We trained the system using Moses 3.0 with default parameters.", "This baseline system achieves BLEU score of 7.48 on our difficult test set of code-switched Moroccan Darija and MSA.", "We trained a second system using the parallel corpora with our induced Moroccan Darija-English translation lexicon appended to the end of the training data.", "This time the BLEU score increased to 8.11, a gain of .63 BLEU points.", "Conclusions With the explosive growth of informal textual electronic communications such as social media, web comments, etc., many colloquial everyday languages that were historically unwritten are now being written for the first time often in heavily code-switched text with traditionally written languages.", "The new written versions of these languages pose significant challenges for multilingual processing technology due to Out-Of-Vocabulary (OOV) challenges.", "Yet it is relatively common that these historically unwritten languages borrow significant amounts of vocabulary from relatively well resourced written languages.", "We presented a method for translation lexicon induction via loanwords for alleviating the OOV 
challenges in these settings where the borrowing language has extremely limited amounts of resources available, in many cases not even substantial amounts of monolingual data that is typically exploited by previous cognates and loanword detection methods to induce translation lexicons.", "This paper demonstrates induction of a Moroccan Darija-English translation lexicon via bridging French loanwords using the method and in MT experiments, the addition of the induced Moroccan Darija-English lexicon increased system performance by .63 BLEU points." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Method", "Experiments and Discussion", "Conclusions" ] }
GEM-SciDuet-train-66#paper-1142#slide-14
Annotation Results
Annotator Arabic Unknown French Total Table: Number of word instances annotated.
Annotator Arabic Unknown French Total Table: Number of word instances annotated.
[]
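The slide-14 row above carries only the header of Table 1 (per-annotator counts of Arabic/Unknown/French judgments); the cell values are not reproduced in this row, so they are not reconstructed here. A natural companion check, sketched below under the assumption that both annotators labeled the same 1185 instances in the same order, is raw inter-annotator agreement; this check is illustrative and is not reported in the paper.

```python
# Hypothetical agreement check between the two annotators' label
# sequences over the same sampled instances.
def raw_agreement(labels_a, labels_b):
    if len(labels_a) != len(labels_b):
        raise ValueError("annotators must label the same instances")
    same = sum(1 for x, y in zip(labels_a, labels_b) if x == y)
    return same / len(labels_a)
```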
GEM-SciDuet-train-66#paper-1142#slide-15
1142
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords
With the advent of informal electronic communications such as social media, colloquial languages that were historically unwritten are being written for the first time in heavily code-switched environments. We present a method for inducing portions of translation lexicons through the use of expert knowledge in these settings where there are approximately zero resources available other than a language informant, potentially not even large amounts of monolingual data. We investigate inducing a Moroccan Darija-English translation lexicon via French loanwords bridging into English and find that a useful lexicon is induced for human-assisted translation and statistical machine translation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94 ], "paper_content_text": [ "Introduction With the explosive growth of informal electronic communications such as email, social media, web comments, etc., colloquial languages that were historically unwritten are starting to be written for the first time.", "For these languages, there are extremely limited (approximately zero) resources available, not even large amounts of monolingual text data or possibly not even small amounts of monolingual text data.", "Even when audio resources are available, difficulties arise when converting sound to text (Tratz et al., 2013; Robinson and Gadelii, 2003) .", "Moreover, the text data that can be obtained often has non-standard spellings and substantial code-switching with other traditionally written languages (Tratz et al., 2013) .", "In this paper we present a method for the acquisition of translation lexicons via loanwords and expert knowledge that requires zero resources of the borrowing language.", "Many historically unwritten languages borrow from highly resourced languages.", "Also, it is often feasible to locate a language expert to find out how sounds in these languages would be rendered if they were to be written as many of them are beginning to be written in social media, etc.", "We thus expect the general method to be applicable for multiple historically unwritten languages.", "In this paper we investigate inducing a Moroccan Darija-English translation lexicon via borrowed French words.", "Moroccan Darija is an historically unwritten dialect of Arabic spoken by millions but lacking in standardization and linguistic resources (Tratz et al., 2013) .", "Moroccan Darija is known to borrow many words from French, one of the most highly resourced languages in the world.", "By mapping Moroccan Darija-French borrowings to their donor French words, we can rapidly create lexical resources for portions of Moroccan Darija vocabulary for which no resources currently exist.", "For example, we could use one of many bilingual French-English dictionaries to bridge into English and create a Moroccan Darija-English translation lexicon that can be used to assist professional translation of Moroccan Darija into English and to assist with construction of Moroccan Darija-English Machine Translation (MT) systems.", "The rest of this paper is structured as follows.", "Section 2 summarizes related work; section 3 explains our method; section 4 discusses experimental results of applying our method to the case of building a Moroccan Darija-English translation lexicon; and section 5 concludes.", "Related Work Translation lexicons are a core resource used for multilingual processing of languages.", "Manual creation of translation lexicons by lexicographers is time-consuming and expensive.", "There are more than 7000 languages in the world, many of which are historically unwritten (Lewis et al., 2015) .", "For a relatively small number of these languages there are extensive resources available that have been manually created.", "It has been noted by others (Mann and Yarowsky, 2001; Schafer and Yarowsky, 2002) that languages are organized into families and that using cognates between 
sister languages can help rapidly create translation lexicons for lower-resourced languages.", "For example, the methods in (Mann and Yarowsky, 2001) are able to detect that English kilograms maps to Portuguese quilogramas via bridge Spanish kilogramos.", "This general idea has been worked on extensively in the context of cognates detection, with 'cognate' typically re-defined to include loanwords as well as true cognates.", "The methods use monolingual data at a minimum and many signals such as orthographic similarity, phonetic similarity, contextual similarity, temporal similarity, frequency similarity, burstiness similarity, and topic similarity (Bloodgood and Strauss, 2017; Irvine and Callison-Burch, 2013; Kondrak et al., 2003; Schafer and Yarowsky, 2002; Mann and Yarowsky, 2001) .", "Inducing translations via loanwords was specifically targeted in .", "While some of these methods don't require bilingual resources, with the possible exception of small bilingual seed dictionaries, they do at a minimum require monolingual text data in the languages to be modeled and sometimes have specific requirements on the monolingual text data such as having text coming from the same time period for each of the languages being modeled.", "For colloquial languages that were historically unwritten, but that are now starting to be written with the advent of social media and web comments, there are often extremely limited resources of any type available, not even large amounts of monolingual text data.", "Moreover, the written data that can be obtained often has non-standard spellings and code-switching with other traditionally written languages.", "Often the code-switching occurs within words whereby the base is borrowed and the affixes are not borrowed, analogous to the multi-language categories \"V\" and \"N\" from (Mericli and Bloodgood, 2012) .", "The data available for historically unwritten languages, and especially the lack thereof, is not suitable for previously developed cognates detection methods that operate as discussed above.", "In the next section we present a method for translation lexicon induction via loanwords that uses expert knowledge and requires zero resources from the borrowing language other than a language informant.", "Method Our method is to take word pronunciations from the donor language we are using and convert them to how they would be rendered in the borrowing language if they were to be borrowed.", "These are our candidate loanwords.", "There are three possible cases for a given generated candidate loanword string: true match string occurs in borrowing language and is a loanword from the donor language; false match string occurs in borrowing language by coincidence but it's not a loanword from the donor language; no match string does not occur in borrowing language.", "For the case of inducing a Moroccan Darija-English translation lexicon via French we start with a French-English bilingual dictionary and take all the French pronunciations in IPA (International Phonetic Alphabet) 1 and convert them to how they would be rendered in Arabic script.", "For this we created a multiple-step transliteration process: Step 1 Break pronunciation into syllables.", "Step 2 Convert each IPA syllable to a string in modified Buckwalter transliteration 2 , which supports a one-to-one mapping to Arabic script.", "Step 3 Convert each syllable's string in modified Buckwalter transliteration to Arabic script.", "Step 4 Merge the resulting Arabic script strings for each syllable to generate a 
candidate loanword string.", "1 https://en.wikipedia.org/wiki/ International_Phonetic_Alphabet 2 The modified version of Buckwalter transliteration, https://en.wikipedia.org/wiki/ Buckwalter_transliteration, replaces special characters such as < and > with alphanumeric characters so that the transliterations are safe for use with other standards such as XML (Extensible Markup Language).", "For more information see (Habash, 2010) .", "For syllabification, for many word pronunciations the syllables are already marked in the IPA by the '.'", "character; if syllables are not already marked in the IPA, we run a simple syllabifier to complete step 1.", "For step 2, we asked a language expert to give us a sequence of rules to convert a syllable's pronunciation to modified Buckwalter transliteration.", "This is itself a multi-step process (see next paragraph for details).", "In step 3, we simply do the one-to-one conversion and obtain Arabic script for each syllable.", "In step 4, we merge the Arabic script for each syllable and get the generated candidate loanword string.", "The multi-step process that takes place in step 2 of the process is: Step 2.1 Make minor vowel adjustments in certain contexts, e.g., when 'a' is between two consonants it is changed to 'A'.", "Step 2.2 Perform bulk of conversion by using table of mappings from IPA characters to modified Buckwalter characters such as 'a'→'a','k'→'k', 'y:'→'iy', etc.", "that were supplied by a language expert.", "Step 2.3 Perform miscellaneous modifications to finalize the modified Buckwalter strings, e.g., if a syllable ends in 'a', then append an 'A' to that syllable.", "The entire conversion process is illustrated in Figure 1 for the French word raconteur.", "At the top of the Figure is the IPA from the French dictionary entry with syllables marked.", "At the next level, step 1 (syllabification) has been completed.", "Step 2.1 doesn't apply to any of the syllables in this word since there are no minor vowel adjustments that are applicable for this word so at the next level each syllable is shown after step 2.2 has been completed.", "The next level shows the syllables after step 2.3 has been completed.", "The next level shows after step 3 has been completed and then at the end the strings are merged to form the candidate loanword.", "Experiments and Discussion In our experiments we extracted a French-English bilingual dictionary using the freely available English Wiktionary dump 20131101 downloaded from http://dumps.wikimedia.", "org/enwiktionary.", "From this dump we extracted all the French words, their pronunciations, Step 1 Step 2.2 Step 2.3 Step 3 Step 4 Figure 1 : Example of French to Arabic Process for the French word raconteur.", "As discussed in the main text, step 2.1 doesn't apply to this example so it is omitted from the diagram to conserve space.", "Note that in the final step the word is in order of Unicode codepoints.", "Then application software that is capable of processing Arabic will render that as a proper Arabic string in right-to-left order with proper character joining adjustments as and their English definitions.", "Using the process described in section 3 to convert each of the French pronunciations into Arabic script yielded 8277 unique loanword candidate strings.", "{ { { { { The data used for testing consists of a million lines of user comments crawled from the Moroccan news website http://www.hespress.", "com.", "The crawled user comments contain Moroccan Darija in heavily code-switched environments.", "While 
this makes for a challenging setting, it is a realistic representation of the types of environments in which historically unwritten languages are being written for the first time.", "The data we used is consistent with well-known codeswitching among Arabic speakers, extending spoken discourse into formal writing (Bentahila and Davies, 1983; Redouane, 2005) .", "The total number of tokens in our Hespress corpus is 18,781,041.", "We found that 1150 of our 8277 loanword candidates appear in our Hespress corpus.", "Moreover, more than a million (1169087) date instances appear in the corpus.", "Recall that a match could be a true match that really is a French loanword or a false match that just happens to coincidentally have string equality with words in the borrowing language, but is not a French loanword.", "False matches are particularly likely to occur for very short words.", "Accordingly, we filter out candidates that are of length less than four characters.", "This leaves us with 838 candidates appearing in the corpus and 217616 candidate instances in the corpus.", "To get an idea of what percentage of our matches are true matches versus false matches, we conducted an annotation exercise with two native Moroccan Darija speakers who also knew at least intermediate French.", "We pulled a random sample 3 of 1185 candidate instances from our corpus and asked each annotator to mark each instance as either: A if the instance is originally from Arabic, F if the instance is originally from French, or U if they were not sure.", "The results are shown in Table 1 .", "There are a substantial number of French loanwords that are found.", "Some examples of translations successfully induced by our method are: omelette ; and bourgeoisie .", "We hypothesize that our method can help improve machine translation (MT) of historically unwritten dialects with nearly zero resources.", "To test this hypothesis, we ran an MT experiment as follows.", "First we selected a random set of sentences from the Hespress corpus that each contained at least one candidate instance and had an MSA/Moroccan Darija/English trilingual translator translate them into English.", "In total, 273 sentences were translated.", "This served as our test set.", "We trained a baseline MT system using all GALE MSA-English parallel corpora available from the Linguistic Data Consortium (LDC) from 2007 to 2013.", "4 We trained the system using Moses 3.0 with default parameters.", "This baseline system achieves BLEU score of 7.48 on our difficult test set of code-switched Moroccan Darija and MSA.", "We trained a second system using the parallel corpora with our induced Moroccan Darija-English translation lexicon appended to the end of the training data.", "This time the BLEU score increased to 8.11, a gain of .63 BLEU points.", "Conclusions With the explosive growth of informal textual electronic communications such as social media, web comments, etc., many colloquial everyday languages that were historically unwritten are now being written for the first time often in heavily code-switched text with traditionally written languages.", "The new written versions of these languages pose significant challenges for multilingual processing technology due to Out-Of-Vocabulary (OOV) challenges.", "Yet it is relatively common that these historically unwritten languages borrow significant amounts of vocabulary from relatively well resourced written languages.", "We presented a method for translation lexicon induction via loanwords for alleviating the OOV 
challenges in these settings where the borrowing language has extremely limited amounts of resources available, in many cases not even substantial amounts of monolingual data that is typically exploited by previous cognates and loanword detection methods to induce translation lexicons.", "This paper demonstrates induction of a Moroccan Darija-English translation lexicon via bridging French loanwords using the method and in MT experiments, the addition of the induced Moroccan Darija-English lexicon increased system performance by .63 BLEU points." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Method", "Experiments and Discussion", "Conclusions" ] }
GEM-SciDuet-train-66#paper-1142#slide-15
Examples of Translations Found
omelette [Arabic script not rendered]; and bourgeoisie [Arabic script not rendered]
omelette [Arabic script not rendered]; and bourgeoisie [Arabic script not rendered]
[]
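The paper_content fields in the rows above repeatedly describe the core four-step conversion: syllabify the IPA pronunciation, map each syllable to modified Buckwalter transliteration, convert that to Arabic script, and merge. The sketch below is a compact illustration under stated assumptions: the mapping tables are tiny stand-ins for the expert-supplied rules, and the context-dependent adjustments of steps 2.1 and 2.3 are omitted.

```python
# Greedy longest-match replacement over a mapping table, so that
# multi-character IPA symbols (e.g. 'y:') are handled before single ones.
def map_greedy(s, table):
    keys = sorted(table, key=len, reverse=True)
    out, i = [], 0
    while i < len(s):
        for k in keys:
            if s.startswith(k, i):
                out.append(table[k])
                i += len(k)
                break
        else:                      # no rule matched; keep the character
            out.append(s[i])
            i += 1
    return ''.join(out)

# Illustrative stand-in tables -- NOT the expert-supplied rule set.
IPA_TO_BUCKWALTER = {'a': 'a', 'k': 'k', 'y:': 'iy'}
BUCKWALTER_TO_ARABIC = {'k': '\u0643', 'y': '\u064a', 'i': '\u0650', 'a': '\u064e'}

def candidate_loanword(ipa):
    syllables = ipa.split('.')                               # step 1: syllabify
    arabic = []
    for syl in syllables:
        bw = map_greedy(syl, IPA_TO_BUCKWALTER)              # step 2: IPA -> Buckwalter
        arabic.append(map_greedy(bw, BUCKWALTER_TO_ARABIC))  # step 3: -> Arabic script
    return ''.join(arabic)                                   # step 4: merge syllables
```

The resulting strings are the candidate loanwords that are then matched against the borrowing-language corpus, as in the filtering step sketched earlier.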
GEM-SciDuet-train-66#paper-1142#slide-16
1142
Acquisition of Translation Lexicons for Historically Unwritten Languages via Bridging Loanwords
With the advent of informal electronic communications such as social media, colloquial languages that were historically unwritten are being written for the first time in heavily code-switched environments. We present a method for inducing portions of translation lexicons through the use of expert knowledge in these settings where there are approximately zero resources available other than a language informant, potentially not even large amounts of monolingual data. We investigate inducing a Moroccan Darija-English translation lexicon via French loanwords bridging into English and find that a useful lexicon is induced for human-assisted translation and statistical machine translation.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94 ], "paper_content_text": [ "Introduction With the explosive growth of informal electronic communications such as email, social media, web comments, etc., colloquial languages that were historically unwritten are starting to be written for the first time.", "For these languages, there are extremely limited (approximately zero) resources available, not even large amounts of monolingual text data or possibly not even small amounts of monolingual text data.", "Even when audio resources are available, difficulties arise when converting sound to text (Tratz et al., 2013; Robinson and Gadelii, 2003) .", "Moreover, the text data that can be obtained often has non-standard spellings and substantial code-switching with other traditionally written languages (Tratz et al., 2013) .", "In this paper we present a method for the acquisition of translation lexicons via loanwords and expert knowledge that requires zero resources of the borrowing language.", "Many historically unwritten languages borrow from highly resourced languages.", "Also, it is often feasible to locate a language expert to find out how sounds in these languages would be rendered if they were to be written as many of them are beginning to be written in social media, etc.", "We thus expect the general method to be applicable for multiple historically unwritten languages.", "In this paper we investigate inducing a Moroccan Darija-English translation lexicon via borrowed French words.", "Moroccan Darija is an historically unwritten dialect of Arabic spoken by millions but lacking in standardization and linguistic resources (Tratz et al., 2013) .", "Moroccan Darija is known to borrow many words from French, one of the most highly resourced languages in the world.", "By mapping Moroccan Darija-French borrowings to their donor French words, we can rapidly create lexical resources for portions of Moroccan Darija vocabulary for which no resources currently exist.", "For example, we could use one of many bilingual French-English dictionaries to bridge into English and create a Moroccan Darija-English translation lexicon that can be used to assist professional translation of Moroccan Darija into English and to assist with construction of Moroccan Darija-English Machine Translation (MT) systems.", "The rest of this paper is structured as follows.", "Section 2 summarizes related work; section 3 explains our method; section 4 discusses experimental results of applying our method to the case of building a Moroccan Darija-English translation lexicon; and section 5 concludes.", "Related Work Translation lexicons are a core resource used for multilingual processing of languages.", "Manual creation of translation lexicons by lexicographers is time-consuming and expensive.", "There are more than 7000 languages in the world, many of which are historically unwritten (Lewis et al., 2015) .", "For a relatively small number of these languages there are extensive resources available that have been manually created.", "It has been noted by others (Mann and Yarowsky, 2001; Schafer and Yarowsky, 2002) that languages are organized into families and that using cognates between 
sister languages can help rapidly create translation lexicons for lower-resourced languages.", "For example, the methods in (Mann and Yarowsky, 2001) are able to detect that English kilograms maps to Portuguese quilogramas via bridge Spanish kilogramos.", "This general idea has been worked on extensively in the context of cognates detection, with 'cognate' typically re-defined to include loanwords as well as true cognates.", "The methods use monolingual data at a minimum and many signals such as orthographic similarity, phonetic similarity, contextual similarity, temporal similarity, frequency similarity, burstiness similarity, and topic similarity (Bloodgood and Strauss, 2017; Irvine and Callison-Burch, 2013; Kondrak et al., 2003; Schafer and Yarowsky, 2002; Mann and Yarowsky, 2001).", "Inducing translations via loanwords was specifically targeted in .", "While some of these methods don't require bilingual resources, with the possible exception of small bilingual seed dictionaries, they do at a minimum require monolingual text data in the languages to be modeled, and sometimes have specific requirements on the monolingual text data, such as having text coming from the same time period for each of the languages being modeled.", "For colloquial languages that were historically unwritten, but that are now starting to be written with the advent of social media and web comments, there are often extremely limited resources of any type available, not even large amounts of monolingual text data.", "Moreover, the written data that can be obtained often has non-standard spellings and code-switching with other traditionally written languages.", "Often the code-switching occurs within words, whereby the base is borrowed and the affixes are not borrowed, analogous to the multi-language categories \"V\" and \"N\" from (Mericli and Bloodgood, 2012).", "The data available for historically unwritten languages, and especially the lack thereof, is not suitable for previously developed cognates detection methods that operate as discussed above.", "In the next section we present a method for translation lexicon induction via loanwords that uses expert knowledge and requires zero resources from the borrowing language other than a language informant.", "Method Our method is to take word pronunciations from the donor language we are using and convert them to how they would be rendered in the borrowing language if they were to be borrowed.", "These are our candidate loanwords.", "There are three possible cases for a given generated candidate loanword string: true match (the string occurs in the borrowing language and is a loanword from the donor language); false match (the string occurs in the borrowing language by coincidence but is not a loanword from the donor language); no match (the string does not occur in the borrowing language).", "For the case of inducing a Moroccan Darija-English translation lexicon via French, we start with a French-English bilingual dictionary and take all the French pronunciations in IPA (International Phonetic Alphabet) and convert them to how they would be rendered in Arabic script.", "For this we created a multiple-step transliteration process: Step 1: Break the pronunciation into syllables.", "Step 2: Convert each IPA syllable to a string in modified Buckwalter transliteration, which supports a one-to-one mapping to Arabic script.", "Step 3: Convert each syllable's string in modified Buckwalter transliteration to Arabic script.", "Step 4: Merge the resulting Arabic script strings for each syllable to generate a 
candidate loanword string.", "(Footnote 1: https://en.wikipedia.org/wiki/International_Phonetic_Alphabet)", "(Footnote 2: The modified version of Buckwalter transliteration, https://en.wikipedia.org/wiki/Buckwalter_transliteration, replaces special characters such as < and > with alphanumeric characters so that the transliterations are safe for use with other standards such as XML (Extensible Markup Language). For more information see (Habash, 2010).)", "For syllabification, for many word pronunciations the syllables are already marked in the IPA by the '.' character; if syllables are not already marked in the IPA, we run a simple syllabifier to complete step 1.", "For step 2, we asked a language expert to give us a sequence of rules to convert a syllable's pronunciation to modified Buckwalter transliteration.", "This is itself a multi-step process (see next paragraph for details).", "In step 3, we simply do the one-to-one conversion and obtain Arabic script for each syllable.", "In step 4, we merge the Arabic script for each syllable and get the generated candidate loanword string.", "The multi-step process that takes place in step 2 of the process is: Step 2.1 Make minor vowel adjustments in certain contexts, e.g., when 'a' is between two consonants it is changed to 'A'.", "Step 2.2 Perform the bulk of the conversion by using a table of mappings from IPA characters to modified Buckwalter characters, such as 'a'→'a', 'k'→'k', 'y:'→'iy', etc., that were supplied by a language expert.", "Step 2.3 Perform miscellaneous modifications to finalize the modified Buckwalter strings, e.g., if a syllable ends in 'a', then append an 'A' to that syllable.", "The entire conversion process is illustrated in Figure 1 for the French word raconteur.", "At the top of the Figure is the IPA from the French dictionary entry with syllables marked.", "At the next level, step 1 (syllabification) has been completed.", "Step 2.1 doesn't apply to any of the syllables in this word since there are no minor vowel adjustments that are applicable for this word, so at the next level each syllable is shown after step 2.2 has been completed.", "The next level shows the syllables after step 2.3 has been completed.", "The next level shows the result after step 3 has been completed, and then at the end the strings are merged to form the candidate loanword.", "Experiments and Discussion In our experiments we extracted a French-English bilingual dictionary using the freely available English Wiktionary dump 20131101 downloaded from http://dumps.wikimedia.org/enwiktionary.", "From this dump we extracted all the French words, their pronunciations, and their English definitions.", "(Figure 1 caption: Example of French to Arabic Process for the French word raconteur. As discussed in the main text, step 2.1 doesn't apply to this example so it is omitted from the diagram to conserve space. Note that in the final step the word is in order of Unicode codepoints; application software that is capable of processing Arabic will render that as a proper Arabic string in right-to-left order with proper character joining adjustments.)", "Using the process described in section 3 to convert each of the French pronunciations into Arabic script yielded 8277 unique loanword candidate strings.", "The data used for testing consists of a million lines of user comments crawled from the Moroccan news website http://www.hespress.com.", "The crawled user comments contain Moroccan Darija in heavily code-switched environments.", "While 
this makes for a challenging setting, it is a realistic representation of the types of environments in which historically unwritten languages are being written for the first time.", "The data we used is consistent with well-known code-switching among Arabic speakers, extending spoken discourse into formal writing (Bentahila and Davies, 1983; Redouane, 2005).", "The total number of tokens in our Hespress corpus is 18,781,041.", "We found that 1150 of our 8277 loanword candidates appear in our Hespress corpus.", "Moreover, more than a million (1,169,087) candidate instances appear in the corpus.", "Recall that a match could be a true match that really is a French loanword, or a false match that just happens to coincidentally have string equality with words in the borrowing language but is not a French loanword.", "False matches are particularly likely to occur for very short words.", "Accordingly, we filter out candidates that are of length less than four characters.", "This leaves us with 838 candidates appearing in the corpus and 217,616 candidate instances in the corpus.", "To get an idea of what percentage of our matches are true matches versus false matches, we conducted an annotation exercise with two native Moroccan Darija speakers who also knew at least intermediate French.", "We pulled a random sample of 1185 candidate instances from our corpus and asked each annotator to mark each instance as either: A if the instance is originally from Arabic, F if the instance is originally from French, or U if they were not sure.", "The results are shown in Table 1.", "There are a substantial number of French loanwords that are found.", "Some examples of translations successfully induced by our method are omelette and bourgeoisie.", "We hypothesize that our method can help improve machine translation (MT) of historically unwritten dialects with nearly zero resources.", "To test this hypothesis, we ran an MT experiment as follows.", "First, we selected a random set of sentences from the Hespress corpus that each contained at least one candidate instance, and had an MSA/Moroccan Darija/English trilingual translator translate them into English.", "In total, 273 sentences were translated.", "This served as our test set.", "We trained a baseline MT system using all GALE MSA-English parallel corpora available from the Linguistic Data Consortium (LDC) from 2007 to 2013.", "We trained the system using Moses 3.0 with default parameters.", "This baseline system achieves a BLEU score of 7.48 on our difficult test set of code-switched Moroccan Darija and MSA.", "We trained a second system using the parallel corpora with our induced Moroccan Darija-English translation lexicon appended to the end of the training data.", "This time the BLEU score increased to 8.11, a gain of 0.63 BLEU points.", "Conclusions With the explosive growth of informal textual electronic communications such as social media, web comments, etc., many colloquial everyday languages that were historically unwritten are now being written for the first time, often in heavily code-switched text with traditionally written languages.", "The new written versions of these languages pose significant challenges for multilingual processing technology due to Out-Of-Vocabulary (OOV) challenges.", "Yet it is relatively common that these historically unwritten languages borrow significant amounts of vocabulary from relatively well resourced written languages.", "We presented a method for translation lexicon induction via loanwords for alleviating the OOV 
challenges in these settings where the borrowing language has extremely limited amounts of resources available, in many cases not even the substantial amounts of monolingual data that are typically exploited by previous cognates and loanword detection methods to induce translation lexicons.", "This paper demonstrates induction of a Moroccan Darija-English translation lexicon via bridging French loanwords using the method; in MT experiments, the addition of the induced Moroccan Darija-English lexicon increased system performance by 0.63 BLEU points." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Method", "Experiments and Discussion", "Conclusions" ] }
GEM-SciDuet-train-66#paper-1142#slide-16
Machine Translation Experiment
We selected a random set of sentences from the Hespress corpus that each contained at least one candidate instance. A Modern Standard Arabic/Moroccan Darija/English trilingual translator translated 273 of the sentences into English. These manually translated sentences served as our test set. We trained a baseline MT system using all GALE MSA-English parallel corpora available from the Linguistic Data Consortium from 2007 to 2013 using Moses 3.0 with default parameters. The baseline system achieves a BLEU score of 7.48 on our difficult test set of code-switched Moroccan Darija and MSA. We trained a second system with our induced translation lexicon appended to the end of the training data. The BLEU score increased to 8.11, a gain of 0.63 BLEU points.
We selected a random set of sentences from the Hespress corpus that each contained at least one candidate instance. A Modern Standard Arabic/Moroccan Darija/English trilingual translator translated 273 of the sentences into English. These manually translated sentences served as our test set. We trained a baseline MT system using all GALE MSA-English parallel corpora available from the Linguistic Data Consortium from 2007 to 2013 using Moses 3.0 with default parameters. The baseline system achieves a BLEU score of 7.48 on our difficult test set of code-switched Moroccan Darija and MSA. We trained a second system with our induced translation lexicon appended to the end of the training data. The BLEU score increased to 8.11, a gain of 0.63 BLEU points.
[]
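The MT gain reported on this slide comes from a very lightweight integration: the induced lexicon entries are simply appended to the end of the parallel training data before retraining the same Moses pipeline. Here is a minimal sketch of that data-preparation step; the file names and entry format are assumptions for illustration, not the actual experiment scripts.

```python
def append_lexicon_to_parallel_data(src_path, tgt_path, lexicon):
    """Append each induced entry as an extra 'sentence pair' to the corpus."""
    with open(src_path, "a", encoding="utf-8") as src_f, \
         open(tgt_path, "a", encoding="utf-8") as tgt_f:
        for darija_word, english_gloss in lexicon.items():
            src_f.write(darija_word + "\n")
            tgt_f.write(english_gloss + "\n")

# Hypothetical usage before rerunning the same Moses 3.0 training as the baseline:
# append_lexicon_to_parallel_data("train.ar", "train.en",
#                                 {"<ar:omelette>": "omelette"})
```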
GEM-SciDuet-train-66#paper-1142#slide-17
GEM-SciDuet-train-66#paper-1142#slide-17
Conclusions
With the explosive growth of informal textual electronic communications such as social media, web comments, etc., many colloquial everyday languages that were historically unwritten are now being written for the first time. The new written versions of these languages pose significant challenges for multilingual processing technology due to Out-Of-Vocabulary (OOV) challenges. Often these historically unwritten languages borrow significant amounts of vocabulary from relatively well resourced written languages. We presented a method for translation lexicon induction via loanwords. This paper demonstrates induction of a Moroccan Darija-English translation lexicon via bridging French loanwords using the approach.
With the explosive growth of informal textual electronic communications such as social media, web comments, etc., many colloquial everyday languages that were historically unwritten are now being written for the first time. The new written versions of these languages pose significant challenges for multilingual processing technology due to Out-Of-Vocabulary (OOV) challenges. Often these historically unwritten languages borrow significant amounts of vocabulary from relatively well resourced written languages. We presented a method for translation lexicon induction via loanwords. This paper demonstrates induction of a Moroccan Darija-English translation lexicon via bridging French loanwords using the approach.
[]
GEM-SciDuet-train-66#paper-1142#slide-18
GEM-SciDuet-train-66#paper-1142#slide-18
Future Work
Explore using the method for other languages. Examine whether adaptations can be made to increase the yield of the method.
Explore using the method for other languages. Examine whether adaptations can be made to increase the yield of the method.
[]
GEM-SciDuet-train-67#paper-1143#slide-0
1143
Neural Reranking Improves Subjective Quality of Machine Translation: NAIST at WAT2015
This year, the Nara Institute of Science and Technology (NAIST)'s submission to the 2015 Workshop on Asian Translation was based on syntax-based statistical machine translation, with the addition of a reranking component using neural attentional machine translation models. Experiments re-confirmed results from previous work stating that neural MT reranking provides a large gain in objective evaluation measures such as BLEU, and also confirmed for the first time that these results also carry over to manual evaluation. We further perform a detailed analysis of reasons for this increase, finding that the main contributions of the neural models lie in improvement of the grammatical correctness of the output, as opposed to improvements in lexical choice of content words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87 ], "paper_content_text": [ "Introduction Neural network models for machine translation (MT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) , while still in a nascent stage, have shown impressive results in a number of translation tasks.", "Specifically, a number of works have demonstrated gains in BLEU score (Papineni et al., 2002) over state-of-the-art non-neural systems, both when using the neural MT model standalone (Luong et al., 2015a; Jean et al., 2015; Luong et al., 2015b) , or to rerank the output of more traditional systems phrase-based MT systems (Sutskever et al., 2014) .", "However, despite these impressive results with regards to automatic measures of translation quality, there has been little examination of the effect that these gains have on the subjective impressions of human users.", "Because BLEU generally has some correlation with translation quality, 1 it is fair to hypothesize that these gains will carry over to gains in human evaluation, but empirical evidence for this hypothesis is still scarce.", "In this paper, we attempt to close this gap by examining the gains provided by using neural MT models to rerank the hypotheses a state-of-the-art non-neural MT system, both from the objective and subjective perspectives.", "Specifically, as part of the Nara Institute of Science and Technology (NAIST) submission to the Workshop on Asian Translation (WAT) 2015 (Nakazawa et al., 2015) , we generate reranked and non-reranked translation results in four language pairs (Section 2).", "Based on these translation results, we calculate scores according to automatic evaluation measures BLEU and RIBES (Isozaki et al., 2010) , and a manual evaluation that involves comparing hypotheses to a baseline system (Section 3).", "Next, we perform a detailed analysis of the cases in which subjective impressions improved or degraded due to neural MT reranking, and identify major areas in which neural reranking improves results, and areas in which reranking is less helpful (Section 4).", "Finally, as an auxiliary result, we also examine the effect that the size of the n-best list used in reranking has on the improvement of translation results (Section 5).", "Generation of Translation Results Baseline System All experiments are performed on WAT2015 translation task from Japanese (ja) to/from English (en) and Chinese (zh).", "As a baseline, we used the NAIST system for WAT 2014 (Neubig, 2014) , a state-of-the-art system that achieved the highest accuracy on all four tracks in the last year's eval-uation.", "2 The details of construction are described in Neubig (2014) , but we briefly outline it here for completeness.", "The system is based on the Travatar toolkit (Neubig, 2013) , using tree-to-string statistical MT (Graehl and Knight, 2004; Liu et al., 2006) , in which the source is first syntactically parsed, then subtrees of the input parse are converted into strings on the target side.", "This translation paradigm has proven effective for translation between syntactically distant language pairs such as those handled by the WAT tasks.", "In addition, following our findings in Neubig and Duh (2014) , to 
improve the accuracy of translation we use forest-based encoding of many parse candidates (Mi et al., 2008), and a supervised alignment technique for ja-en and en-ja (Riesa and Marcu, 2010).", "To train the systems, we used the ASPEC corpus provided by WAT.", "For the zh-ja and ja-zh systems, we used all of the data, amounting to 672k sentences.", "For the en-ja and ja-en systems, we used all 3M sentences for training the language models, and the first 2M sentences of the training data for training the translation models.", "For English, Japanese, and Chinese, tokenization was performed using the Stanford Parser (Klein and Manning, 2003), the KyTea toolkit (Neubig et al., 2011), and the Stanford Segmenter (Tseng et al., 2005) respectively.", "For parsing, we use the Egret parser, which implements the latent variable parsing model of (Petrov et al., 2006).", "For all systems, we trained a 6-gram language model smoothed with modified Kneser-Ney smoothing (Chen and Goodman, 1996) using KenLM (Heafield et al., 2013).", "To optimize the parameters of the log-linear model, we use standard minimum error rate training (MERT; Och, 2003) with BLEU as an objective.", "Neural MT Models As our neural MT model, we use the attentional model of Bahdanau et al. (2015).", "The model first encodes the source sentence f using bidirectional long short-term memory (LSTM; Hochreiter and Schmidhuber (1997)) recurrent networks.", "This results in an encoding vector $h_j$ for each word $f_j$ in $f$.", "The model then proceeds to generate the target translation $\\hat{e}$ one word at a time, at each time step calculating soft alignments $a_i$ that are used to generate a context vector $g_i$, which is referenced when generating the target word: $g_i = \\sum_{j=1}^{|f|} a_{i,j} h_j$ (1).", "Attentional models have a number of appealing properties, such as being theoretically able to encode variable length sequences without worrying about memory constraints imposed by the fixed-size vectors used in encoder-decoder models.", "These advantages are confirmed in empirical results, with attentional models performing markedly better on longer sequences (Bahdanau et al., 2015).", "To train the neural MT models, we used the implementation provided by the lamtram toolkit.", "The forward and reverse LSTM models each had 256 nodes, and word embeddings were also set to size 256.", "For ja-en and en-ja models we chose the first 500k sentences in the training corpus, and for ja-zh and zh-ja models we used all 672k sentences.", "Training was performed using stochastic gradient descent (SGD) with an initial learning rate of 0.1, which was halved every epoch in which the development likelihood decreased.", "For each language pair, we trained two models and ensembled the probabilities by linearly interpolating between the two probability distributions.", "These probabilities were used to rerank unique 1,000-best lists from the baseline model.", "To perform reranking, the log likelihood of the neural MT model was added as an additional feature to the standard baseline model features, and the weight of this feature was decided by running MERT on the dev set.", "Experimental Results First, we calculate overall numerical results for our systems with and without the neural MT reranking model.", "As automatic evaluation measures we use the standard BLEU (Papineni et al., 2002) and reordering-oriented RIBES (Isozaki et al., 2010) scores.", "Table 1: Overall BLEU, RIBES, and HUMAN scores for our baseline system and system with neural MT reranking. Bold indicates a 
significant improvement according to bootstrap resampling at p < 0.05 (Koehn, 2004) .", "manual evaluation, we use the WAT \"HUMAN\" evaluation score (Nakazawa et al., 2015) , which is essentially related to the number of wins over a baseline phrase-based system.", "In the case that the system beats the baseline on all sentences, the HUMAN score will be 100, and if it loses on all sentences the score will be -100.", "From the results in Table 1 , we can first see that adding the neural MT reranking resulted in a significant increase in the evaluation scores for all language pairs under consideration, except for the manual evaluation in ja-zh translation.", "7 It should be noted that these gains are achieved even though the original baseline was already quite strong (outperforming most other WAT2015 systems without a neural component).", "While neural MT reranking has been noted to improve traditional systems with respect to BLEU score in previous work (Sutskever et al., 2014) , to our knowledge this is the first work that notes that these gains also carry over convincingly to human evaluation scores.", "In the following section, we will examine the results in more detail and attempt to explain exactly what is causing this increase in translation quality.", "Analysis To perform a deeper analysis, we manually examined the first 200 sentences of the ja-en part of the official WAT2015 human evaluation set.", "Specifically, we (1) compared the baseline and reranked outputs, and decided whether one was better or if they were of the same quality and (2) in the case that one of the two was better, classified the example by the type of error that was fixed or caused by the reranking leading to this change in subjective impression.", "Specifically, when annotating the type of error, we used a simplified version of 7 The overall scores for ja-zh are lower than others, perhaps a result of word-order between Japanese and Chinese being more similar than Japanese and English, the parser for Japanese being weaker than that of the other languages, and less consistent evaluation scores for the Chinese output (Nakazawa et al., 2014 the error typology of Vilar et al.", "(2006) consisting of insertion, deletion, word conjugation, word substitution, and reordering, as well as subcategories of each of these categories (the number of sub-categories totalled approximately 40).", "If there was more than one change in the sentence, only the change that we subjectively felt had the largest effect on the translation quality was annotated.", "The number of improvements and degradations afforded by neural MT reranking is shown in Table 2.", "From this figure, we can see that overall, neural reranking caused an improvement in 117 sentences, and a degradation in 33 sentences, corroborating the fact that the reranking process is giving consistent improvements in accuracy.", "Further breaking down the changes, we can see that improvements in word reordering are by far the most prominent, slightly less than three times the number of improvements in the next most common category.", "This demonstrates that the neural MT model is successfully capturing the overall structure of the sentence, and effectively disambiguating reorderings that could not be appropriately scored in the baseline model.", "Next in Table 3 we show examples of the four most common sub-categories of errors that were fixed by the neural MT reranker, and note the total number of improvements and degradations of each.", "The first subcategory is related to the 
general reordering of phrases in the sentence.", "As there Table 3 : An example of more common varieties of improvements caused by the neural MT reranking.", "is a large amount of reordering involved in translating from Japanese to English, mistaken longdistance reordering is one of the more common causes for errors, and the neural MT model was effective at fixing these problems, resulting in 26 improvements and only 4 degradations.", "In the sentence shown in the example, the baseline system swaps the verb phrase and subject positions, making it difficult to tell that the list of conditions are what \"occurred,\" while the reranked system appropriately puts this list as the subject of \"occurred.\"", "The second subcategory includes insertions or deletions of auxiliary verbs, for which there were 15 improvements and not a single degradation.", "The reason why these errors occurred in the first place is that when a transitive verb, for example \"obtained,\" occurs on its own, it is often translated as \"X was obtained by Y,\" 8 but when it occurs as a relative clause decorating the noun X it will be translated as \"X obtained by Y,\" as shown in the example.", "The baseline system does not include any explicit features to make this distinction between whether a verb is part of a relative clause or not, and thus made a number of mistakes of this variety.", "However, it is evident that the neural MT model has learned to make this distinction, greatly reducing the number of these errors.", "The third subcategory is similar to the first, but explicitly involves the correct interpretation of co-ordinate structures.", "It is well known that syntactic parsers often make mistakes in their interpretation of coordinate structures (Kummerfeld et al., 2012) .", "Of course, the parser used in our syntaxbased MT system is no exception to this rule, and parse errors often cause coordinate phrases to be broken apart on the target side, as is the case in the example's \"local heating and ablation.\"", "The fact that the neural MT models were able to correct a large number of errors related to these structures suggests that they are able to successfully determine whether two phrases are coordinated or not, and keep them together on the target side.", "The final sub-category of the top four is related to verb conjugation agreement.", "Many of the examples related to verb conjugation, including the one shown in Table 3 , were related to when two singular nouns were connected by a conjunction.", "In this case, the local context provided by a standard n-gram language model is not enough to resolve the ambiguity, but the longer context handled by the neural MT model is able to resolve this easily.", "What is notable about these four categories is that they all are related to improving the correctness of the output from a grammatical point of view, as opposed to fixing mistakes in lexical choice or terminology.", "In fact, neural MT reranking had an overall negative effect on choice of terminology with only 2 improvements at the cost of 4 degradations.", "This was due to the fact that the neural MT model tended to prefer more com- mon words, mistaking \"radiant heat\" as \"radiation heat\" or \"slipring\" as \"ring.\"", "While these tendencies will be affected by many factors such as the size of the vocabulary or the number and size of hidden layers of the net, we feel it is safe to say that neural MT reranking can be expected to have a large positive effect on syntactic correctness of output, while results for 
lexical choice are less conclusive.", "Effect of n-best Size on Reranking In the previous sections, we confirmed the effectiveness of n-best list reranking using neural MT models.", "However, reranking using n-best lists (like other search methods for MT) is an approximate search method, and its effectiveness is limited by the size of the n-best list used.", "In order to quantify the effect of this inexact search, we performed experiments to examine the post-reranking automatic evaluation scores of the MT results for all n-best list sizes from 1 to 1000.", "Figure 1 shows the results of this examination, with the x-axis referring to the log-scaled number of hypotheses in the n-best list, and the y-axis referring to the quality of the translation, either with regards to model score (for the model including the neural MT likelihood as a feature) or BLEU score.", "From these results we can note several interesting points.", "(Footnote 9: The BLEU scores differ slightly from Table 1 due to differences in tokenization standards between these experiments and the official evaluation server.)", "First, we can see that the improvement in scores is very slightly sub-linear in the log number of hypotheses in the n-best list.", "In other words, every time we double the n-best list size we will see an improvement in accuracy that is slightly smaller than the last time we doubled the size.", "Second, we can note that in most cases this trend continues all the way up to our limit of 1000-best lists, indicating that gains are not saturating, and we can likely expect even more improvements from using larger lists, or perhaps directly performing decoding using neural models (Alkhouli et al., 2015) .", "The en-ja results, however, are an exception to this rule, with BLEU gains more or less saturating around the 50-best list point.", "Conclusion In this paper we described results applying neural MT reranking to a baseline syntax-based machine translation system in 4 languages.", "In particular, we performed an in-depth analysis of what kinds of translation errors were fixed by neural MT reranking.", "Based on this analysis, we found that the majority of the gains were related to improvements in the accuracy of transfer of correct grammatical structure to the target sentence, with the most prominent gains being related to errors regarding reordering of phrases, insertion/deletion of copulas, coordinate structures, and verb agreement.", "We also found that, within the neural MT reranking framework, accuracy gains scaled approximately log-linearly with the size of the n-best list, and in most cases were not saturated even after examining 1000 unique hypotheses." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Baseline System", "Neural MT Models", "Experimental Results", "Analysis", "Effect of n-best Size on Reranking", "Conclusion" ] }
GEM-SciDuet-train-67#paper-1143#slide-0
Relative Merits / Demerits
Neural Reranking Improves Subjective Quality of Machine Translation Inner workings well understood Better at translating low-frequency words Produce more fluent output Probabilistic model can score output of other systems!
Neural Reranking Improves Subjective Quality of Machine Translation Inner workings well understood Better at translating low-frequency words Produce more fluent output Probabilistic model can score output of other systems!
[]
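The attention formula in the paper_content above is flattened by PDF extraction; the intended computation is the context vector g_i = sum over j of a_{i,j} * h_j (Eq. 1 of the paper). A minimal NumPy sketch of that soft-attention step, assuming toy encoder states and pre-computed alignment weights — the names below are illustrative stand-ins, not the lamtram toolkit's actual API:

import numpy as np

def context_vector(alignments, encodings):
    # alignments: shape (|f|,), soft attention weights a_{i,j} for one target step i
    # encodings:  shape (|f|, d), bidirectional LSTM states h_j for the source words
    # Returns g_i = sum_j a_{i,j} * h_j, the context vector of Eq. (1).
    return alignments @ encodings

# Toy example: 3 source words, 4-dimensional encoder states.
rng = np.random.default_rng(0)
h = rng.random((3, 4))
scores = rng.random(3)                      # unnormalized attention scores
a = np.exp(scores) / np.exp(scores).sum()   # softmax, so the a_{i,j} sum to 1
g_i = context_vector(a, h)                  # shape (4,)

Because g_i is just a convex combination of the encoder states h_j, the decoder can condition on a variable-length source without compressing it into one fixed-size vector, which is the property the paper text credits for the markedly better results on longer sequences.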
GEM-SciDuet-train-67#paper-1143#slide-1
1143
Neural Reranking Improves Subjective Quality of Machine Translation: NAIST at WAT2015
This year, the Nara Institute of Science and Technology (NAIST)'s submission to the 2015 Workshop on Asian Translation was based on syntax-based statistical machine translation, with the addition of a reranking component using neural attentional machine translation models. Experiments re-confirmed results from previous work stating that neural MT reranking provides a large gain in objective evaluation measures such as BLEU, and also confirmed for the first time that these results also carry over to manual evaluation. We further perform a detailed analysis of reasons for this increase, finding that the main contributions of the neural models lie in improvement of the grammatical correctness of the output, as opposed to improvements in lexical choice of content words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87 ], "paper_content_text": [ "Introduction Neural network models for machine translation (MT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) , while still in a nascent stage, have shown impressive results in a number of translation tasks.", "Specifically, a number of works have demonstrated gains in BLEU score (Papineni et al., 2002) over state-of-the-art non-neural systems, both when using the neural MT model standalone (Luong et al., 2015a; Jean et al., 2015; Luong et al., 2015b) , or to rerank the output of more traditional systems phrase-based MT systems (Sutskever et al., 2014) .", "However, despite these impressive results with regards to automatic measures of translation quality, there has been little examination of the effect that these gains have on the subjective impressions of human users.", "Because BLEU generally has some correlation with translation quality, 1 it is fair to hypothesize that these gains will carry over to gains in human evaluation, but empirical evidence for this hypothesis is still scarce.", "In this paper, we attempt to close this gap by examining the gains provided by using neural MT models to rerank the hypotheses a state-of-the-art non-neural MT system, both from the objective and subjective perspectives.", "Specifically, as part of the Nara Institute of Science and Technology (NAIST) submission to the Workshop on Asian Translation (WAT) 2015 (Nakazawa et al., 2015) , we generate reranked and non-reranked translation results in four language pairs (Section 2).", "Based on these translation results, we calculate scores according to automatic evaluation measures BLEU and RIBES (Isozaki et al., 2010) , and a manual evaluation that involves comparing hypotheses to a baseline system (Section 3).", "Next, we perform a detailed analysis of the cases in which subjective impressions improved or degraded due to neural MT reranking, and identify major areas in which neural reranking improves results, and areas in which reranking is less helpful (Section 4).", "Finally, as an auxiliary result, we also examine the effect that the size of the n-best list used in reranking has on the improvement of translation results (Section 5).", "Generation of Translation Results Baseline System All experiments are performed on WAT2015 translation task from Japanese (ja) to/from English (en) and Chinese (zh).", "As a baseline, we used the NAIST system for WAT 2014 (Neubig, 2014) , a state-of-the-art system that achieved the highest accuracy on all four tracks in the last year's eval-uation.", "2 The details of construction are described in Neubig (2014) , but we briefly outline it here for completeness.", "The system is based on the Travatar toolkit (Neubig, 2013) , using tree-to-string statistical MT (Graehl and Knight, 2004; Liu et al., 2006) , in which the source is first syntactically parsed, then subtrees of the input parse are converted into strings on the target side.", "This translation paradigm has proven effective for translation between syntactically distant language pairs such as those handled by the WAT tasks.", "In addition, following our findings in Neubig and Duh (2014) , to 
improve the accuracy of translation we use forestbased encoding of many parse candidates (Mi et al., 2008) , and a supervised alignment technique for ja-en and en-ja (Riesa and Marcu, 2010) .", "To train the systems, we used the ASPEC corpus provided by WAT.", "For the zh-ja and ja-zh systems, we used all of the data, amounting to 672k sentences.", "For the en-ja and ja-en systems, we used all 3M sentences for training the language models, and the first 2M sentences of the training data for training the translation models.", "For English, Japanese, and Chinese, tokenization was performed using the Stanford Parser (Klein and Manning, 2003) , the KyTea toolkit (Neubig et al., 2011) , and the Stanford Segmenter (Tseng et al., 2005) respectively.", "For parsing, we use the Egret parser, 3 which implements the latent variable parsing model of (Petrov et al., 2006) .", "4 For all systems, we trained a 6-gram language model smoothed with modified Kneser-Ney smoothing (Chen and Goodman, 1996) using KenLM (Heafield et al., 2013) .", "To optimize the parameters of the log-linear model, we use standard minimum error rate training (MERT; Och (2003) ) with BLEU as an objective.", "Neural MT Models As our neural MT model, we use the attentional model of Bahdanau et al.", "(2015) .", "The model first encodes the source sentence f using bidirectional long short-term memory (LSTM; Hochreiter and Schmidhuber (1997) ) recurrent networks.", "This results in an encoding vector h j for each word f j in f .", "The model then proceeds to generate the target translationê one word at a time, at each time step calculating soft alignments a i that are used to generate a context vector g i , which is referenced when generating the target word g i = |f | ∑ j=1 a i,j h j .", "(1) Attentional models have a number of appealing properties, such as being theoretically able to encode variable length sequences without worrying about memory constraints imposed by the fixed-size vectors used in encoder-decoder models.", "These advantages are confirmed in empirical results, with attentional models performing markedly better on longer sequences (Bahdanau et al., 2015) .", "To train the neural MT models, we used the implementation provided by the lamtram toolkit.", "5 The forward and reverse LSTM models each had 256 nodes, and word embeddings were also set to size 256.", "For ja-en and en-ja models we chose the first 500k sentences in the training corpus, and for ja-zh and zh-ja models we used all 672k sentences.", "Training was performed using stochastic gradient descent (SGD) with an initial learning rate of 0.1, which was halved every epoch in which the development likelihood decreased.", "For each language pair, we trained two models and ensembled the probabilities by linearly interpolating between the two probability distributions.", "6 These probabilities were used to rerank unique 1,000-best lists from the baseline model.", "To perform reranking, the log likelihood of the neural MT model was added as an additional feature to the standard baseline model features, and the weight of this feature was decided by running MERT on the dev set.", "Experimental Results First, we calculate overall numerical results for our systems with and without the neural MT reranking model.", "As automatic evaluation we use the standard BLEU (Papineni et al., 2002) and reorderingoriented RIBES (Isozaki et al., 2010) Table 1 : Overall BLEU, RIBES, and HUMAN scores for our baseline system and system with neural MT reranking.", "Bold indicates a 
significant improvement according to bootstrap resampling at p < 0.05 (Koehn, 2004) .", "manual evaluation, we use the WAT \"HUMAN\" evaluation score (Nakazawa et al., 2015) , which is essentially related to the number of wins over a baseline phrase-based system.", "In the case that the system beats the baseline on all sentences, the HUMAN score will be 100, and if it loses on all sentences the score will be -100.", "From the results in Table 1 , we can first see that adding the neural MT reranking resulted in a significant increase in the evaluation scores for all language pairs under consideration, except for the manual evaluation in ja-zh translation.", "7 It should be noted that these gains are achieved even though the original baseline was already quite strong (outperforming most other WAT2015 systems without a neural component).", "While neural MT reranking has been noted to improve traditional systems with respect to BLEU score in previous work (Sutskever et al., 2014) , to our knowledge this is the first work that notes that these gains also carry over convincingly to human evaluation scores.", "In the following section, we will examine the results in more detail and attempt to explain exactly what is causing this increase in translation quality.", "Analysis To perform a deeper analysis, we manually examined the first 200 sentences of the ja-en part of the official WAT2015 human evaluation set.", "Specifically, we (1) compared the baseline and reranked outputs, and decided whether one was better or if they were of the same quality and (2) in the case that one of the two was better, classified the example by the type of error that was fixed or caused by the reranking leading to this change in subjective impression.", "Specifically, when annotating the type of error, we used a simplified version of 7 The overall scores for ja-zh are lower than others, perhaps a result of word-order between Japanese and Chinese being more similar than Japanese and English, the parser for Japanese being weaker than that of the other languages, and less consistent evaluation scores for the Chinese output (Nakazawa et al., 2014 the error typology of Vilar et al.", "(2006) consisting of insertion, deletion, word conjugation, word substitution, and reordering, as well as subcategories of each of these categories (the number of sub-categories totalled approximately 40).", "If there was more than one change in the sentence, only the change that we subjectively felt had the largest effect on the translation quality was annotated.", "The number of improvements and degradations afforded by neural MT reranking is shown in Table 2.", "From this figure, we can see that overall, neural reranking caused an improvement in 117 sentences, and a degradation in 33 sentences, corroborating the fact that the reranking process is giving consistent improvements in accuracy.", "Further breaking down the changes, we can see that improvements in word reordering are by far the most prominent, slightly less than three times the number of improvements in the next most common category.", "This demonstrates that the neural MT model is successfully capturing the overall structure of the sentence, and effectively disambiguating reorderings that could not be appropriately scored in the baseline model.", "Next in Table 3 we show examples of the four most common sub-categories of errors that were fixed by the neural MT reranker, and note the total number of improvements and degradations of each.", "The first subcategory is related to the 
general reordering of phrases in the sentence.", "As there Table 3 : An example of more common varieties of improvements caused by the neural MT reranking.", "is a large amount of reordering involved in translating from Japanese to English, mistaken longdistance reordering is one of the more common causes for errors, and the neural MT model was effective at fixing these problems, resulting in 26 improvements and only 4 degradations.", "In the sentence shown in the example, the baseline system swaps the verb phrase and subject positions, making it difficult to tell that the list of conditions are what \"occurred,\" while the reranked system appropriately puts this list as the subject of \"occurred.\"", "The second subcategory includes insertions or deletions of auxiliary verbs, for which there were 15 improvements and not a single degradation.", "The reason why these errors occurred in the first place is that when a transitive verb, for example \"obtained,\" occurs on its own, it is often translated as \"X was obtained by Y,\" 8 but when it occurs as a relative clause decorating the noun X it will be translated as \"X obtained by Y,\" as shown in the example.", "The baseline system does not include any explicit features to make this distinction between whether a verb is part of a relative clause or not, and thus made a number of mistakes of this variety.", "However, it is evident that the neural MT model has learned to make this distinction, greatly reducing the number of these errors.", "The third subcategory is similar to the first, but explicitly involves the correct interpretation of co-ordinate structures.", "It is well known that syntactic parsers often make mistakes in their interpretation of coordinate structures (Kummerfeld et al., 2012) .", "Of course, the parser used in our syntaxbased MT system is no exception to this rule, and parse errors often cause coordinate phrases to be broken apart on the target side, as is the case in the example's \"local heating and ablation.\"", "The fact that the neural MT models were able to correct a large number of errors related to these structures suggests that they are able to successfully determine whether two phrases are coordinated or not, and keep them together on the target side.", "The final sub-category of the top four is related to verb conjugation agreement.", "Many of the examples related to verb conjugation, including the one shown in Table 3 , were related to when two singular nouns were connected by a conjunction.", "In this case, the local context provided by a standard n-gram language model is not enough to resolve the ambiguity, but the longer context handled by the neural MT model is able to resolve this easily.", "What is notable about these four categories is that they all are related to improving the correctness of the output from a grammatical point of view, as opposed to fixing mistakes in lexical choice or terminology.", "In fact, neural MT reranking had an overall negative effect on choice of terminology with only 2 improvements at the cost of 4 degradations.", "This was due to the fact that the neural MT model tended to prefer more com- mon words, mistaking \"radiant heat\" as \"radiation heat\" or \"slipring\" as \"ring.\"", "While these tendencies will be affected by many factors such as the size of the vocabulary or the number and size of hidden layers of the net, we feel it is safe to say that neural MT reranking can be expected to have a large positive effect on syntactic correctness of output, while results for 
lexical choice are less conclusive.", "Effect of n-best Size on Reranking In the previous sections, we confirmed the effectiveness of n-best list reranking using neural MT models.", "However, reranking using n-best lists (like other search methods for MT) is an approximate search method, and its effectiveness is limited by the size of the n-best list used.", "In order to quantify the effect of this inexact search, we performed experiments to examine the post-reranking automatic evaluation scores of the MT results for all n-best list sizes from 1 to 1000.", "Figure 1 shows the results of this examination, with the x-axis referring to the log-scaled number of hypotheses in the n-best list, and the y-axis referring to the quality of the translation, either with regards to model score (for the model including the neural MT likelihood as a feature) or BLEU score.", "From these results we can note several interesting points.", "(Footnote 9: The BLEU scores differ slightly from Table 1 due to differences in tokenization standards between these experiments and the official evaluation server.)", "First, we can see that the improvement in scores is very slightly sub-linear in the log number of hypotheses in the n-best list.", "In other words, every time we double the n-best list size we will see an improvement in accuracy that is slightly smaller than the last time we doubled the size.", "Second, we can note that in most cases this trend continues all the way up to our limit of 1000-best lists, indicating that gains are not saturating, and we can likely expect even more improvements from using larger lists, or perhaps directly performing decoding using neural models (Alkhouli et al., 2015) .", "The en-ja results, however, are an exception to this rule, with BLEU gains more or less saturating around the 50-best list point.", "Conclusion In this paper we described results applying neural MT reranking to a baseline syntax-based machine translation system in 4 languages.", "In particular, we performed an in-depth analysis of what kinds of translation errors were fixed by neural MT reranking.", "Based on this analysis, we found that the majority of the gains were related to improvements in the accuracy of transfer of correct grammatical structure to the target sentence, with the most prominent gains being related to errors regarding reordering of phrases, insertion/deletion of copulas, coordinate structures, and verb agreement.", "We also found that, within the neural MT reranking framework, accuracy gains scaled approximately log-linearly with the size of the n-best list, and in most cases were not saturated even after examining 1000 unique hypotheses." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Baseline System", "Neural MT Models", "Experimental Results", "Analysis", "Effect of n-best Size on Reranking", "Conclusion" ] }
GEM-SciDuet-train-67#paper-1143#slide-1
Reranking with Neural MT Models
Neural Reranking Improves Subjective Quality of Machine Translation Input N-best w/MT Features Neural Features he has a cold
Neural Reranking Improves Subjective Quality of Machine Translation Input N-best w/MT Features Neural Features he has a cold
[]
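The reranking step described in the paper_content above — adding the neural model's log likelihood as one more feature of the baseline log-linear model and re-sorting a unique 1,000-best list — can be sketched as follows. The feature names and weights are hypothetical stand-ins for what MERT would actually tune on the dev set; the "he has a cold" strings echo the toy example in the slide text:

from typing import Dict, List

def rerank_nbest(nbest: List[dict], weights: Dict[str, float]) -> List[dict]:
    # Log-linear score: dot product of each hypothesis's feature values
    # and the tuned weights. 'neural_loglik' is the extra feature holding
    # the (ensembled) neural MT model's log likelihood for the hypothesis.
    def score(hyp: dict) -> float:
        return sum(weights[name] * value for name, value in hyp["features"].items())
    return sorted(nbest, key=score, reverse=True)

nbest = [
    {"text": "he have a cold", "features": {"lm": -3.9, "tm": -5.8, "neural_loglik": -7.5}},
    {"text": "he has a cold",  "features": {"lm": -4.2, "tm": -6.1, "neural_loglik": -3.0}},
]
weights = {"lm": 1.0, "tm": 0.7, "neural_loglik": 0.9}  # illustrative, not MERT output
print(rerank_nbest(nbest, weights)[0]["text"])  # -> "he has a cold"

Keeping the baseline features and only adding the neural likelihood, rather than replacing the score outright, lets MERT decide how much to trust the neural model relative to the tree-to-string system.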
GEM-SciDuet-train-67#paper-1143#slide-2
1143
Neural Reranking Improves Subjective Quality of Machine Translation: NAIST at WAT2015
This year, the Nara Institute of Science and Technology (NAIST)'s submission to the 2015 Workshop on Asian Translation was based on syntax-based statistical machine translation, with the addition of a reranking component using neural attentional machine translation models. Experiments re-confirmed results from previous work stating that neural MT reranking provides a large gain in objective evaluation measures such as BLEU, and also confirmed for the first time that these results also carry over to manual evaluation. We further perform a detailed analysis of reasons for this increase, finding that the main contributions of the neural models lie in improvement of the grammatical correctness of the output, as opposed to improvements in lexical choice of content words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87 ], "paper_content_text": [ "Introduction Neural network models for machine translation (MT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) , while still in a nascent stage, have shown impressive results in a number of translation tasks.", "Specifically, a number of works have demonstrated gains in BLEU score (Papineni et al., 2002) over state-of-the-art non-neural systems, both when using the neural MT model standalone (Luong et al., 2015a; Jean et al., 2015; Luong et al., 2015b) , or to rerank the output of more traditional systems phrase-based MT systems (Sutskever et al., 2014) .", "However, despite these impressive results with regards to automatic measures of translation quality, there has been little examination of the effect that these gains have on the subjective impressions of human users.", "Because BLEU generally has some correlation with translation quality, 1 it is fair to hypothesize that these gains will carry over to gains in human evaluation, but empirical evidence for this hypothesis is still scarce.", "In this paper, we attempt to close this gap by examining the gains provided by using neural MT models to rerank the hypotheses a state-of-the-art non-neural MT system, both from the objective and subjective perspectives.", "Specifically, as part of the Nara Institute of Science and Technology (NAIST) submission to the Workshop on Asian Translation (WAT) 2015 (Nakazawa et al., 2015) , we generate reranked and non-reranked translation results in four language pairs (Section 2).", "Based on these translation results, we calculate scores according to automatic evaluation measures BLEU and RIBES (Isozaki et al., 2010) , and a manual evaluation that involves comparing hypotheses to a baseline system (Section 3).", "Next, we perform a detailed analysis of the cases in which subjective impressions improved or degraded due to neural MT reranking, and identify major areas in which neural reranking improves results, and areas in which reranking is less helpful (Section 4).", "Finally, as an auxiliary result, we also examine the effect that the size of the n-best list used in reranking has on the improvement of translation results (Section 5).", "Generation of Translation Results Baseline System All experiments are performed on WAT2015 translation task from Japanese (ja) to/from English (en) and Chinese (zh).", "As a baseline, we used the NAIST system for WAT 2014 (Neubig, 2014) , a state-of-the-art system that achieved the highest accuracy on all four tracks in the last year's eval-uation.", "2 The details of construction are described in Neubig (2014) , but we briefly outline it here for completeness.", "The system is based on the Travatar toolkit (Neubig, 2013) , using tree-to-string statistical MT (Graehl and Knight, 2004; Liu et al., 2006) , in which the source is first syntactically parsed, then subtrees of the input parse are converted into strings on the target side.", "This translation paradigm has proven effective for translation between syntactically distant language pairs such as those handled by the WAT tasks.", "In addition, following our findings in Neubig and Duh (2014) , to 
improve the accuracy of translation we use forestbased encoding of many parse candidates (Mi et al., 2008) , and a supervised alignment technique for ja-en and en-ja (Riesa and Marcu, 2010) .", "To train the systems, we used the ASPEC corpus provided by WAT.", "For the zh-ja and ja-zh systems, we used all of the data, amounting to 672k sentences.", "For the en-ja and ja-en systems, we used all 3M sentences for training the language models, and the first 2M sentences of the training data for training the translation models.", "For English, Japanese, and Chinese, tokenization was performed using the Stanford Parser (Klein and Manning, 2003) , the KyTea toolkit (Neubig et al., 2011) , and the Stanford Segmenter (Tseng et al., 2005) respectively.", "For parsing, we use the Egret parser, 3 which implements the latent variable parsing model of (Petrov et al., 2006) .", "4 For all systems, we trained a 6-gram language model smoothed with modified Kneser-Ney smoothing (Chen and Goodman, 1996) using KenLM (Heafield et al., 2013) .", "To optimize the parameters of the log-linear model, we use standard minimum error rate training (MERT; Och (2003) ) with BLEU as an objective.", "Neural MT Models As our neural MT model, we use the attentional model of Bahdanau et al.", "(2015) .", "The model first encodes the source sentence f using bidirectional long short-term memory (LSTM; Hochreiter and Schmidhuber (1997) ) recurrent networks.", "This results in an encoding vector h j for each word f j in f .", "The model then proceeds to generate the target translationê one word at a time, at each time step calculating soft alignments a i that are used to generate a context vector g i , which is referenced when generating the target word g i = |f | ∑ j=1 a i,j h j .", "(1) Attentional models have a number of appealing properties, such as being theoretically able to encode variable length sequences without worrying about memory constraints imposed by the fixed-size vectors used in encoder-decoder models.", "These advantages are confirmed in empirical results, with attentional models performing markedly better on longer sequences (Bahdanau et al., 2015) .", "To train the neural MT models, we used the implementation provided by the lamtram toolkit.", "5 The forward and reverse LSTM models each had 256 nodes, and word embeddings were also set to size 256.", "For ja-en and en-ja models we chose the first 500k sentences in the training corpus, and for ja-zh and zh-ja models we used all 672k sentences.", "Training was performed using stochastic gradient descent (SGD) with an initial learning rate of 0.1, which was halved every epoch in which the development likelihood decreased.", "For each language pair, we trained two models and ensembled the probabilities by linearly interpolating between the two probability distributions.", "6 These probabilities were used to rerank unique 1,000-best lists from the baseline model.", "To perform reranking, the log likelihood of the neural MT model was added as an additional feature to the standard baseline model features, and the weight of this feature was decided by running MERT on the dev set.", "Experimental Results First, we calculate overall numerical results for our systems with and without the neural MT reranking model.", "As automatic evaluation we use the standard BLEU (Papineni et al., 2002) and reorderingoriented RIBES (Isozaki et al., 2010) Table 1 : Overall BLEU, RIBES, and HUMAN scores for our baseline system and system with neural MT reranking.", "Bold indicates a 
significant improvement according to bootstrap resampling at p < 0.05 (Koehn, 2004) .", "manual evaluation, we use the WAT \"HUMAN\" evaluation score (Nakazawa et al., 2015) , which is essentially related to the number of wins over a baseline phrase-based system.", "In the case that the system beats the baseline on all sentences, the HUMAN score will be 100, and if it loses on all sentences the score will be -100.", "From the results in Table 1 , we can first see that adding the neural MT reranking resulted in a significant increase in the evaluation scores for all language pairs under consideration, except for the manual evaluation in ja-zh translation.", "7 It should be noted that these gains are achieved even though the original baseline was already quite strong (outperforming most other WAT2015 systems without a neural component).", "While neural MT reranking has been noted to improve traditional systems with respect to BLEU score in previous work (Sutskever et al., 2014) , to our knowledge this is the first work that notes that these gains also carry over convincingly to human evaluation scores.", "In the following section, we will examine the results in more detail and attempt to explain exactly what is causing this increase in translation quality.", "Analysis To perform a deeper analysis, we manually examined the first 200 sentences of the ja-en part of the official WAT2015 human evaluation set.", "Specifically, we (1) compared the baseline and reranked outputs, and decided whether one was better or if they were of the same quality and (2) in the case that one of the two was better, classified the example by the type of error that was fixed or caused by the reranking leading to this change in subjective impression.", "Specifically, when annotating the type of error, we used a simplified version of 7 The overall scores for ja-zh are lower than others, perhaps a result of word-order between Japanese and Chinese being more similar than Japanese and English, the parser for Japanese being weaker than that of the other languages, and less consistent evaluation scores for the Chinese output (Nakazawa et al., 2014 the error typology of Vilar et al.", "(2006) consisting of insertion, deletion, word conjugation, word substitution, and reordering, as well as subcategories of each of these categories (the number of sub-categories totalled approximately 40).", "If there was more than one change in the sentence, only the change that we subjectively felt had the largest effect on the translation quality was annotated.", "The number of improvements and degradations afforded by neural MT reranking is shown in Table 2.", "From this figure, we can see that overall, neural reranking caused an improvement in 117 sentences, and a degradation in 33 sentences, corroborating the fact that the reranking process is giving consistent improvements in accuracy.", "Further breaking down the changes, we can see that improvements in word reordering are by far the most prominent, slightly less than three times the number of improvements in the next most common category.", "This demonstrates that the neural MT model is successfully capturing the overall structure of the sentence, and effectively disambiguating reorderings that could not be appropriately scored in the baseline model.", "Next in Table 3 we show examples of the four most common sub-categories of errors that were fixed by the neural MT reranker, and note the total number of improvements and degradations of each.", "The first subcategory is related to the 
general reordering of phrases in the sentence.", "As there Table 3 : An example of more common varieties of improvements caused by the neural MT reranking.", "is a large amount of reordering involved in translating from Japanese to English, mistaken longdistance reordering is one of the more common causes for errors, and the neural MT model was effective at fixing these problems, resulting in 26 improvements and only 4 degradations.", "In the sentence shown in the example, the baseline system swaps the verb phrase and subject positions, making it difficult to tell that the list of conditions are what \"occurred,\" while the reranked system appropriately puts this list as the subject of \"occurred.\"", "The second subcategory includes insertions or deletions of auxiliary verbs, for which there were 15 improvements and not a single degradation.", "The reason why these errors occurred in the first place is that when a transitive verb, for example \"obtained,\" occurs on its own, it is often translated as \"X was obtained by Y,\" 8 but when it occurs as a relative clause decorating the noun X it will be translated as \"X obtained by Y,\" as shown in the example.", "The baseline system does not include any explicit features to make this distinction between whether a verb is part of a relative clause or not, and thus made a number of mistakes of this variety.", "However, it is evident that the neural MT model has learned to make this distinction, greatly reducing the number of these errors.", "The third subcategory is similar to the first, but explicitly involves the correct interpretation of co-ordinate structures.", "It is well known that syntactic parsers often make mistakes in their interpretation of coordinate structures (Kummerfeld et al., 2012) .", "Of course, the parser used in our syntaxbased MT system is no exception to this rule, and parse errors often cause coordinate phrases to be broken apart on the target side, as is the case in the example's \"local heating and ablation.\"", "The fact that the neural MT models were able to correct a large number of errors related to these structures suggests that they are able to successfully determine whether two phrases are coordinated or not, and keep them together on the target side.", "The final sub-category of the top four is related to verb conjugation agreement.", "Many of the examples related to verb conjugation, including the one shown in Table 3 , were related to when two singular nouns were connected by a conjunction.", "In this case, the local context provided by a standard n-gram language model is not enough to resolve the ambiguity, but the longer context handled by the neural MT model is able to resolve this easily.", "What is notable about these four categories is that they all are related to improving the correctness of the output from a grammatical point of view, as opposed to fixing mistakes in lexical choice or terminology.", "In fact, neural MT reranking had an overall negative effect on choice of terminology with only 2 improvements at the cost of 4 degradations.", "This was due to the fact that the neural MT model tended to prefer more com- mon words, mistaking \"radiant heat\" as \"radiation heat\" or \"slipring\" as \"ring.\"", "While these tendencies will be affected by many factors such as the size of the vocabulary or the number and size of hidden layers of the net, we feel it is safe to say that neural MT reranking can be expected to have a large positive effect on syntactic correctness of output, while results for 
lexical choice are less conclusive.", "Effect of n-best Size on Reranking In the previous sections, we confirmed the effectiveness of n-best list reranking using neural MT models.", "However, reranking using n-best lists (like other search methods for MT) is an approximate search method, and its effectiveness is limited by the size of the n-best list used.", "In order to quantify the effect of this inexact search, we performed experiments to examine the post-reranking automatic evaluation scores of the MT results for all n-best list sizes from 1 to 1000.", "Figure 1 shows the results of this examination, with the x-axis referring to the log-scaled number of hypotheses in the n-best list, and the y-axis referring to the quality of the translation, either with regards to model score (for the model including the neural MT likelihood as a feature) or BLEU score.", "From these results we can note several interesting points.", "(Footnote 9: The BLEU scores differ slightly from Table 1 due to differences in tokenization standards between these experiments and the official evaluation server.)", "First, we can see that the improvement in scores is very slightly sub-linear in the log number of hypotheses in the n-best list.", "In other words, every time we double the n-best list size we will see an improvement in accuracy that is slightly smaller than the last time we doubled the size.", "Second, we can note that in most cases this trend continues all the way up to our limit of 1000-best lists, indicating that gains are not saturating, and we can likely expect even more improvements from using larger lists, or perhaps directly performing decoding using neural models (Alkhouli et al., 2015) .", "The en-ja results, however, are an exception to this rule, with BLEU gains more or less saturating around the 50-best list point.", "Conclusion In this paper we described results applying neural MT reranking to a baseline syntax-based machine translation system in 4 languages.", "In particular, we performed an in-depth analysis of what kinds of translation errors were fixed by neural MT reranking.", "Based on this analysis, we found that the majority of the gains were related to improvements in the accuracy of transfer of correct grammatical structure to the target sentence, with the most prominent gains being related to errors regarding reordering of phrases, insertion/deletion of copulas, coordinate structures, and verb agreement.", "We also found that, within the neural MT reranking framework, accuracy gains scaled approximately log-linearly with the size of the n-best list, and in most cases were not saturated even after examining 1000 unique hypotheses." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Baseline System", "Neural MT Models", "Experimental Results", "Analysis", "Effect of n-best Size on Reranking", "Conclusion" ] }
GEM-SciDuet-train-67#paper-1143#slide-2
What Do We Know About Reranking
Neural Reranking Improves Subjective Quality of Machine Translation Reranking greatly improves BLEU score, even over strong baseline systems:
Neural Reranking Improves Subjective Quality of Machine Translation Reranking greatly improves BLEU score, even over strong baseline systems:
[]
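The paper_content above also mentions ensembling the two trained models per language pair "by linearly interpolating between the two probability distributions". A hedged sketch of that interpolation over next-word distributions, using a toy five-word vocabulary; the interpolation weight of 0.5 is an assumption, since the paper does not state the value used:

import numpy as np

def ensemble(p_a: np.ndarray, p_b: np.ndarray, lam: float = 0.5) -> np.ndarray:
    # Linear interpolation of two models' next-word distributions.
    # With 0 <= lam <= 1 and valid inputs, the result is again a
    # probability distribution (non-negative, sums to 1).
    return lam * p_a + (1.0 - lam) * p_b

p_model1 = np.array([0.50, 0.20, 0.10, 0.10, 0.10])
p_model2 = np.array([0.30, 0.30, 0.20, 0.10, 0.10])
p_ens = ensemble(p_model1, p_model2)
assert abs(p_ens.sum() - 1.0) < 1e-9

In the setup described above, the log of these ensembled word probabilities, accumulated over a hypothesis, is what supplies the neural log-likelihood feature used during n-best reranking.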
GEM-SciDuet-train-67#paper-1143#slide-3
1143
Neural Reranking Improves Subjective Quality of Machine Translation: NAIST at WAT2015
This year, the Nara Institute of Science and Technology (NAIST)'s submission to the 2015 Workshop on Asian Translation was based on syntax-based statistical machine translation, with the addition of a reranking component using neural attentional machine translation models. Experiments re-confirmed results from previous work stating that neural MT reranking provides a large gain in objective evaluation measures such as BLEU, and also confirmed for the first time that these results also carry over to manual evaluation. We further perform a detailed analysis of reasons for this increase, finding that the main contributions of the neural models lie in improvement of the grammatical correctness of the output, as opposed to improvements in lexical choice of content words.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87 ], "paper_content_text": [ "Introduction Neural network models for machine translation (MT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) , while still in a nascent stage, have shown impressive results in a number of translation tasks.", "Specifically, a number of works have demonstrated gains in BLEU score (Papineni et al., 2002) over state-of-the-art non-neural systems, both when using the neural MT model standalone (Luong et al., 2015a; Jean et al., 2015; Luong et al., 2015b) , or to rerank the output of more traditional systems phrase-based MT systems (Sutskever et al., 2014) .", "However, despite these impressive results with regards to automatic measures of translation quality, there has been little examination of the effect that these gains have on the subjective impressions of human users.", "Because BLEU generally has some correlation with translation quality, 1 it is fair to hypothesize that these gains will carry over to gains in human evaluation, but empirical evidence for this hypothesis is still scarce.", "In this paper, we attempt to close this gap by examining the gains provided by using neural MT models to rerank the hypotheses a state-of-the-art non-neural MT system, both from the objective and subjective perspectives.", "Specifically, as part of the Nara Institute of Science and Technology (NAIST) submission to the Workshop on Asian Translation (WAT) 2015 (Nakazawa et al., 2015) , we generate reranked and non-reranked translation results in four language pairs (Section 2).", "Based on these translation results, we calculate scores according to automatic evaluation measures BLEU and RIBES (Isozaki et al., 2010) , and a manual evaluation that involves comparing hypotheses to a baseline system (Section 3).", "Next, we perform a detailed analysis of the cases in which subjective impressions improved or degraded due to neural MT reranking, and identify major areas in which neural reranking improves results, and areas in which reranking is less helpful (Section 4).", "Finally, as an auxiliary result, we also examine the effect that the size of the n-best list used in reranking has on the improvement of translation results (Section 5).", "Generation of Translation Results Baseline System All experiments are performed on WAT2015 translation task from Japanese (ja) to/from English (en) and Chinese (zh).", "As a baseline, we used the NAIST system for WAT 2014 (Neubig, 2014) , a state-of-the-art system that achieved the highest accuracy on all four tracks in the last year's eval-uation.", "2 The details of construction are described in Neubig (2014) , but we briefly outline it here for completeness.", "The system is based on the Travatar toolkit (Neubig, 2013) , using tree-to-string statistical MT (Graehl and Knight, 2004; Liu et al., 2006) , in which the source is first syntactically parsed, then subtrees of the input parse are converted into strings on the target side.", "This translation paradigm has proven effective for translation between syntactically distant language pairs such as those handled by the WAT tasks.", "In addition, following our findings in Neubig and Duh (2014) , to 
improve the accuracy of translation we use forestbased encoding of many parse candidates (Mi et al., 2008) , and a supervised alignment technique for ja-en and en-ja (Riesa and Marcu, 2010) .", "To train the systems, we used the ASPEC corpus provided by WAT.", "For the zh-ja and ja-zh systems, we used all of the data, amounting to 672k sentences.", "For the en-ja and ja-en systems, we used all 3M sentences for training the language models, and the first 2M sentences of the training data for training the translation models.", "For English, Japanese, and Chinese, tokenization was performed using the Stanford Parser (Klein and Manning, 2003) , the KyTea toolkit (Neubig et al., 2011) , and the Stanford Segmenter (Tseng et al., 2005) respectively.", "For parsing, we use the Egret parser, 3 which implements the latent variable parsing model of (Petrov et al., 2006) .", "4 For all systems, we trained a 6-gram language model smoothed with modified Kneser-Ney smoothing (Chen and Goodman, 1996) using KenLM (Heafield et al., 2013) .", "To optimize the parameters of the log-linear model, we use standard minimum error rate training (MERT; Och (2003) ) with BLEU as an objective.", "Neural MT Models As our neural MT model, we use the attentional model of Bahdanau et al.", "(2015) .", "The model first encodes the source sentence f using bidirectional long short-term memory (LSTM; Hochreiter and Schmidhuber (1997) ) recurrent networks.", "This results in an encoding vector h j for each word f j in f .", "The model then proceeds to generate the target translationê one word at a time, at each time step calculating soft alignments a i that are used to generate a context vector g i , which is referenced when generating the target word g i = |f | ∑ j=1 a i,j h j .", "(1) Attentional models have a number of appealing properties, such as being theoretically able to encode variable length sequences without worrying about memory constraints imposed by the fixed-size vectors used in encoder-decoder models.", "These advantages are confirmed in empirical results, with attentional models performing markedly better on longer sequences (Bahdanau et al., 2015) .", "To train the neural MT models, we used the implementation provided by the lamtram toolkit.", "5 The forward and reverse LSTM models each had 256 nodes, and word embeddings were also set to size 256.", "For ja-en and en-ja models we chose the first 500k sentences in the training corpus, and for ja-zh and zh-ja models we used all 672k sentences.", "Training was performed using stochastic gradient descent (SGD) with an initial learning rate of 0.1, which was halved every epoch in which the development likelihood decreased.", "For each language pair, we trained two models and ensembled the probabilities by linearly interpolating between the two probability distributions.", "6 These probabilities were used to rerank unique 1,000-best lists from the baseline model.", "To perform reranking, the log likelihood of the neural MT model was added as an additional feature to the standard baseline model features, and the weight of this feature was decided by running MERT on the dev set.", "Experimental Results First, we calculate overall numerical results for our systems with and without the neural MT reranking model.", "As automatic evaluation we use the standard BLEU (Papineni et al., 2002) and reorderingoriented RIBES (Isozaki et al., 2010) Table 1 : Overall BLEU, RIBES, and HUMAN scores for our baseline system and system with neural MT reranking.", "Bold indicates a 
significant improvement according to bootstrap resampling at p < 0.05 (Koehn, 2004) .", "manual evaluation, we use the WAT \"HUMAN\" evaluation score (Nakazawa et al., 2015) , which is essentially related to the number of wins over a baseline phrase-based system.", "In the case that the system beats the baseline on all sentences, the HUMAN score will be 100, and if it loses on all sentences the score will be -100.", "From the results in Table 1 , we can first see that adding the neural MT reranking resulted in a significant increase in the evaluation scores for all language pairs under consideration, except for the manual evaluation in ja-zh translation.", "7 It should be noted that these gains are achieved even though the original baseline was already quite strong (outperforming most other WAT2015 systems without a neural component).", "While neural MT reranking has been noted to improve traditional systems with respect to BLEU score in previous work (Sutskever et al., 2014) , to our knowledge this is the first work that notes that these gains also carry over convincingly to human evaluation scores.", "In the following section, we will examine the results in more detail and attempt to explain exactly what is causing this increase in translation quality.", "Analysis To perform a deeper analysis, we manually examined the first 200 sentences of the ja-en part of the official WAT2015 human evaluation set.", "Specifically, we (1) compared the baseline and reranked outputs, and decided whether one was better or if they were of the same quality and (2) in the case that one of the two was better, classified the example by the type of error that was fixed or caused by the reranking leading to this change in subjective impression.", "Specifically, when annotating the type of error, we used a simplified version of 7 The overall scores for ja-zh are lower than others, perhaps a result of word-order between Japanese and Chinese being more similar than Japanese and English, the parser for Japanese being weaker than that of the other languages, and less consistent evaluation scores for the Chinese output (Nakazawa et al., 2014 the error typology of Vilar et al.", "(2006) consisting of insertion, deletion, word conjugation, word substitution, and reordering, as well as subcategories of each of these categories (the number of sub-categories totalled approximately 40).", "If there was more than one change in the sentence, only the change that we subjectively felt had the largest effect on the translation quality was annotated.", "The number of improvements and degradations afforded by neural MT reranking is shown in Table 2.", "From this figure, we can see that overall, neural reranking caused an improvement in 117 sentences, and a degradation in 33 sentences, corroborating the fact that the reranking process is giving consistent improvements in accuracy.", "Further breaking down the changes, we can see that improvements in word reordering are by far the most prominent, slightly less than three times the number of improvements in the next most common category.", "This demonstrates that the neural MT model is successfully capturing the overall structure of the sentence, and effectively disambiguating reorderings that could not be appropriately scored in the baseline model.", "Next in Table 3 we show examples of the four most common sub-categories of errors that were fixed by the neural MT reranker, and note the total number of improvements and degradations of each.", "The first subcategory is related to the 
general reordering of phrases in the sentence.", "Table 3 : An example of more common varieties of improvements caused by the neural MT reranking.", "As there is a large amount of reordering involved in translating from Japanese to English, mistaken long-distance reordering is one of the more common causes for errors, and the neural MT model was effective at fixing these problems, resulting in 26 improvements and only 4 degradations.", "In the sentence shown in the example, the baseline system swaps the verb phrase and subject positions, making it difficult to tell that the list of conditions is what \"occurred,\" while the reranked system appropriately puts this list as the subject of \"occurred.\"", "The second subcategory includes insertions or deletions of auxiliary verbs, for which there were 15 improvements and not a single degradation.", "The reason why these errors occurred in the first place is that when a transitive verb, for example \"obtained,\" occurs on its own, it is often translated as \"X was obtained by Y,\" but when it occurs as a relative clause decorating the noun X it will be translated as \"X obtained by Y,\" as shown in the example.", "The baseline system does not include any explicit features to make this distinction between whether a verb is part of a relative clause or not, and thus made a number of mistakes of this variety.", "However, it is evident that the neural MT model has learned to make this distinction, greatly reducing the number of these errors.", "The third subcategory is similar to the first, but explicitly involves the correct interpretation of coordinate structures.", "It is well known that syntactic parsers often make mistakes in their interpretation of coordinate structures (Kummerfeld et al., 2012) .", "Of course, the parser used in our syntax-based MT system is no exception to this rule, and parse errors often cause coordinate phrases to be broken apart on the target side, as is the case in the example's \"local heating and ablation.\"", "The fact that the neural MT models were able to correct a large number of errors related to these structures suggests that they are able to successfully determine whether two phrases are coordinated or not, and keep them together on the target side.", "The final sub-category of the top four is related to verb conjugation agreement.", "Many of the examples related to verb conjugation, including the one shown in Table 3 , were related to cases in which two singular nouns were connected by a conjunction.", "In this case, the local context provided by a standard n-gram language model is not enough to resolve the ambiguity, but the longer context handled by the neural MT model is able to resolve this easily.", "What is notable about these four categories is that they are all related to improving the correctness of the output from a grammatical point of view, as opposed to fixing mistakes in lexical choice or terminology.", "In fact, neural MT reranking had an overall negative effect on the choice of terminology, with only 2 improvements at the cost of 4 degradations.", "This was due to the fact that the neural MT model tended to prefer more common words, mistaking \"radiant heat\" for \"radiation heat\" or \"slipring\" for \"ring.\"", "While these tendencies will be affected by many factors such as the size of the vocabulary or the number and size of hidden layers of the net, we feel it is safe to say that neural MT reranking can be expected to have a large positive effect on the syntactic correctness of the output, while results for
lexical choice are less conclusive.", "Effect of n-best Size on Reranking In the previous sections, we confirmed the effectiveness of n-best list reranking using neural MT models.", "However, reranking using n-best lists (like other search methods for MT) is an approximate search method, and its effectiveness is limited by the size of the n-best list used.", "In order to quantify the effect of this inexact search, we performed experiments to examine the post-reranking automatic evaluation scores of the MT results for all n-best list sizes from 1 to 1000.", "Figure 1 shows the results of this examination, with the x-axis referring to the log-scaled number of hypotheses in the n-best list, and the y-axis referring to the quality of the translation, either with regards to model score (for the model including the neural MT likelihood as a feature) or BLEU score.", "(The BLEU scores differ slightly from Table 1 due to differences in tokenization standards between these experiments and the official evaluation server.)", "From these results we can note several interesting points.", "First, we can see that the improvement in scores is very slightly sub-linear in the log number of hypotheses in the n-best list.", "In other words, every time we double the n-best list size we will see an improvement in accuracy that is slightly smaller than the last time we doubled the size.", "Second, we can note that in most cases this trend continues all the way up to our limit of 1000-best lists, indicating that gains are not saturating, and we can likely expect even more improvements from using larger lists, or perhaps from directly performing decoding using neural models (Alkhouli et al., 2015) .", "The en-ja results, however, are an exception to this rule, with BLEU gains more or less saturating around the 50-best list point.", "Conclusion In this paper we described results applying neural MT reranking to a baseline syntax-based machine translation system in 4 languages.", "In particular, we performed an in-depth analysis of what kinds of translation errors were fixed by neural MT reranking.", "Based on this analysis, we found that the majority of the gains were related to improvements in the accuracy of transfer of correct grammatical structure to the target sentence, with the most prominent gains being related to errors regarding reordering of phrases, insertion/deletion of copulas, coordinate structures, and verb agreement.", "We also found that, within the neural MT reranking framework, accuracy gains scaled approximately log-linearly with the size of the n-best list, and in most cases were not saturated even after examining 1000 unique hypotheses." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Baseline System", "Neural MT Models", "Experimental Results", "Analysis", "Effect of n-best Size on Reranking", "Conclusion" ] }
GEM-SciDuet-train-67#paper-1143#slide-3
What Don't We Know About Reranking
Neural Reranking Improves Subjective Quality of Machine Translation Does reranking improve subjective impressions of results? What are the qualitative differences before/after reranking with neural MT models?
Neural Reranking Improves Subjective Quality of Machine Translation Does reranking improve subjective impressions of results? What are the qualitative differences before/after reranking with neural MT models?
[]
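To make the WAT "HUMAN" score mentioned in the paper content above concrete, here is a minimal, hypothetical Python sketch. The text only fixes the endpoints (100 if the system beats the baseline on every sentence, -100 if it loses on every one), so the linear win/loss formula, the function name, and the +1/0/-1 judgment encoding below are illustrative assumptions, not the official WAT scoring code.

```python
# Hypothetical sketch (assumption, not WAT's implementation): a HUMAN-style
# score from per-sentence pairwise judgments against a baseline, where each
# judgment is +1 (system wins), 0 (tie), or -1 (system loses).

def human_score(judgments):
    """Return a score in [-100, 100]: 100 if the system wins on every
    sentence, -100 if it loses on every sentence."""
    if not judgments:
        raise ValueError("need at least one judgment")
    wins = sum(1 for j in judgments if j > 0)
    losses = sum(1 for j in judgments if j < 0)
    return 100.0 * (wins - losses) / len(judgments)

# Example: 3 wins, 1 tie, 1 loss over 5 sentences -> 40.0
print(human_score([+1, +1, +1, 0, -1]))
```

Under this reading the score is simply the win rate minus the loss rate, rescaled to the stated -100/100 range.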
GEM-SciDuet-train-67#paper-1143#slide-4
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87 ], "paper_content_text": [ "Introduction Neural network models for machine translation (MT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) , while still in a nascent stage, have shown impressive results in a number of translation tasks.", "Specifically, a number of works have demonstrated gains in BLEU score (Papineni et al., 2002) over state-of-the-art non-neural systems, both when using the neural MT model standalone (Luong et al., 2015a; Jean et al., 2015; Luong et al., 2015b) , or to rerank the output of more traditional systems phrase-based MT systems (Sutskever et al., 2014) .", "However, despite these impressive results with regards to automatic measures of translation quality, there has been little examination of the effect that these gains have on the subjective impressions of human users.", "Because BLEU generally has some correlation with translation quality, 1 it is fair to hypothesize that these gains will carry over to gains in human evaluation, but empirical evidence for this hypothesis is still scarce.", "In this paper, we attempt to close this gap by examining the gains provided by using neural MT models to rerank the hypotheses a state-of-the-art non-neural MT system, both from the objective and subjective perspectives.", "Specifically, as part of the Nara Institute of Science and Technology (NAIST) submission to the Workshop on Asian Translation (WAT) 2015 (Nakazawa et al., 2015) , we generate reranked and non-reranked translation results in four language pairs (Section 2).", "Based on these translation results, we calculate scores according to automatic evaluation measures BLEU and RIBES (Isozaki et al., 2010) , and a manual evaluation that involves comparing hypotheses to a baseline system (Section 3).", "Next, we perform a detailed analysis of the cases in which subjective impressions improved or degraded due to neural MT reranking, and identify major areas in which neural reranking improves results, and areas in which reranking is less helpful (Section 4).", "Finally, as an auxiliary result, we also examine the effect that the size of the n-best list used in reranking has on the improvement of translation results (Section 5).", "Generation of Translation Results Baseline System All experiments are performed on WAT2015 translation task from Japanese (ja) to/from English (en) and Chinese (zh).", "As a baseline, we used the NAIST system for WAT 2014 (Neubig, 2014) , a state-of-the-art system that achieved the highest accuracy on all four tracks in the last year's eval-uation.", "2 The details of construction are described in Neubig (2014) , but we briefly outline it here for completeness.", "The system is based on the Travatar toolkit (Neubig, 2013) , using tree-to-string statistical MT (Graehl and Knight, 2004; Liu et al., 2006) , in which the source is first syntactically parsed, then subtrees of the input parse are converted into strings on the target side.", "This translation paradigm has proven effective for translation between syntactically distant language pairs such as those handled by the WAT tasks.", "In addition, following our findings in Neubig and Duh (2014) , to 
improve the accuracy of translation we use forestbased encoding of many parse candidates (Mi et al., 2008) , and a supervised alignment technique for ja-en and en-ja (Riesa and Marcu, 2010) .", "To train the systems, we used the ASPEC corpus provided by WAT.", "For the zh-ja and ja-zh systems, we used all of the data, amounting to 672k sentences.", "For the en-ja and ja-en systems, we used all 3M sentences for training the language models, and the first 2M sentences of the training data for training the translation models.", "For English, Japanese, and Chinese, tokenization was performed using the Stanford Parser (Klein and Manning, 2003) , the KyTea toolkit (Neubig et al., 2011) , and the Stanford Segmenter (Tseng et al., 2005) respectively.", "For parsing, we use the Egret parser, 3 which implements the latent variable parsing model of (Petrov et al., 2006) .", "4 For all systems, we trained a 6-gram language model smoothed with modified Kneser-Ney smoothing (Chen and Goodman, 1996) using KenLM (Heafield et al., 2013) .", "To optimize the parameters of the log-linear model, we use standard minimum error rate training (MERT; Och (2003) ) with BLEU as an objective.", "Neural MT Models As our neural MT model, we use the attentional model of Bahdanau et al.", "(2015) .", "The model first encodes the source sentence f using bidirectional long short-term memory (LSTM; Hochreiter and Schmidhuber (1997) ) recurrent networks.", "This results in an encoding vector h j for each word f j in f .", "The model then proceeds to generate the target translationê one word at a time, at each time step calculating soft alignments a i that are used to generate a context vector g i , which is referenced when generating the target word g i = |f | ∑ j=1 a i,j h j .", "(1) Attentional models have a number of appealing properties, such as being theoretically able to encode variable length sequences without worrying about memory constraints imposed by the fixed-size vectors used in encoder-decoder models.", "These advantages are confirmed in empirical results, with attentional models performing markedly better on longer sequences (Bahdanau et al., 2015) .", "To train the neural MT models, we used the implementation provided by the lamtram toolkit.", "5 The forward and reverse LSTM models each had 256 nodes, and word embeddings were also set to size 256.", "For ja-en and en-ja models we chose the first 500k sentences in the training corpus, and for ja-zh and zh-ja models we used all 672k sentences.", "Training was performed using stochastic gradient descent (SGD) with an initial learning rate of 0.1, which was halved every epoch in which the development likelihood decreased.", "For each language pair, we trained two models and ensembled the probabilities by linearly interpolating between the two probability distributions.", "6 These probabilities were used to rerank unique 1,000-best lists from the baseline model.", "To perform reranking, the log likelihood of the neural MT model was added as an additional feature to the standard baseline model features, and the weight of this feature was decided by running MERT on the dev set.", "Experimental Results First, we calculate overall numerical results for our systems with and without the neural MT reranking model.", "As automatic evaluation we use the standard BLEU (Papineni et al., 2002) and reorderingoriented RIBES (Isozaki et al., 2010) Table 1 : Overall BLEU, RIBES, and HUMAN scores for our baseline system and system with neural MT reranking.", "Bold indicates a 
significant improvement according to bootstrap resampling at p < 0.05 (Koehn, 2004) .", "manual evaluation, we use the WAT \"HUMAN\" evaluation score (Nakazawa et al., 2015) , which is essentially related to the number of wins over a baseline phrase-based system.", "In the case that the system beats the baseline on all sentences, the HUMAN score will be 100, and if it loses on all sentences the score will be -100.", "From the results in Table 1 , we can first see that adding the neural MT reranking resulted in a significant increase in the evaluation scores for all language pairs under consideration, except for the manual evaluation in ja-zh translation.", "7 It should be noted that these gains are achieved even though the original baseline was already quite strong (outperforming most other WAT2015 systems without a neural component).", "While neural MT reranking has been noted to improve traditional systems with respect to BLEU score in previous work (Sutskever et al., 2014) , to our knowledge this is the first work that notes that these gains also carry over convincingly to human evaluation scores.", "In the following section, we will examine the results in more detail and attempt to explain exactly what is causing this increase in translation quality.", "Analysis To perform a deeper analysis, we manually examined the first 200 sentences of the ja-en part of the official WAT2015 human evaluation set.", "Specifically, we (1) compared the baseline and reranked outputs, and decided whether one was better or if they were of the same quality and (2) in the case that one of the two was better, classified the example by the type of error that was fixed or caused by the reranking leading to this change in subjective impression.", "Specifically, when annotating the type of error, we used a simplified version of 7 The overall scores for ja-zh are lower than others, perhaps a result of word-order between Japanese and Chinese being more similar than Japanese and English, the parser for Japanese being weaker than that of the other languages, and less consistent evaluation scores for the Chinese output (Nakazawa et al., 2014 the error typology of Vilar et al.", "(2006) consisting of insertion, deletion, word conjugation, word substitution, and reordering, as well as subcategories of each of these categories (the number of sub-categories totalled approximately 40).", "If there was more than one change in the sentence, only the change that we subjectively felt had the largest effect on the translation quality was annotated.", "The number of improvements and degradations afforded by neural MT reranking is shown in Table 2.", "From this figure, we can see that overall, neural reranking caused an improvement in 117 sentences, and a degradation in 33 sentences, corroborating the fact that the reranking process is giving consistent improvements in accuracy.", "Further breaking down the changes, we can see that improvements in word reordering are by far the most prominent, slightly less than three times the number of improvements in the next most common category.", "This demonstrates that the neural MT model is successfully capturing the overall structure of the sentence, and effectively disambiguating reorderings that could not be appropriately scored in the baseline model.", "Next in Table 3 we show examples of the four most common sub-categories of errors that were fixed by the neural MT reranker, and note the total number of improvements and degradations of each.", "The first subcategory is related to the 
general reordering of phrases in the sentence.", "As there Table 3 : An example of more common varieties of improvements caused by the neural MT reranking.", "is a large amount of reordering involved in translating from Japanese to English, mistaken longdistance reordering is one of the more common causes for errors, and the neural MT model was effective at fixing these problems, resulting in 26 improvements and only 4 degradations.", "In the sentence shown in the example, the baseline system swaps the verb phrase and subject positions, making it difficult to tell that the list of conditions are what \"occurred,\" while the reranked system appropriately puts this list as the subject of \"occurred.\"", "The second subcategory includes insertions or deletions of auxiliary verbs, for which there were 15 improvements and not a single degradation.", "The reason why these errors occurred in the first place is that when a transitive verb, for example \"obtained,\" occurs on its own, it is often translated as \"X was obtained by Y,\" 8 but when it occurs as a relative clause decorating the noun X it will be translated as \"X obtained by Y,\" as shown in the example.", "The baseline system does not include any explicit features to make this distinction between whether a verb is part of a relative clause or not, and thus made a number of mistakes of this variety.", "However, it is evident that the neural MT model has learned to make this distinction, greatly reducing the number of these errors.", "The third subcategory is similar to the first, but explicitly involves the correct interpretation of co-ordinate structures.", "It is well known that syntactic parsers often make mistakes in their interpretation of coordinate structures (Kummerfeld et al., 2012) .", "Of course, the parser used in our syntaxbased MT system is no exception to this rule, and parse errors often cause coordinate phrases to be broken apart on the target side, as is the case in the example's \"local heating and ablation.\"", "The fact that the neural MT models were able to correct a large number of errors related to these structures suggests that they are able to successfully determine whether two phrases are coordinated or not, and keep them together on the target side.", "The final sub-category of the top four is related to verb conjugation agreement.", "Many of the examples related to verb conjugation, including the one shown in Table 3 , were related to when two singular nouns were connected by a conjunction.", "In this case, the local context provided by a standard n-gram language model is not enough to resolve the ambiguity, but the longer context handled by the neural MT model is able to resolve this easily.", "What is notable about these four categories is that they all are related to improving the correctness of the output from a grammatical point of view, as opposed to fixing mistakes in lexical choice or terminology.", "In fact, neural MT reranking had an overall negative effect on choice of terminology with only 2 improvements at the cost of 4 degradations.", "This was due to the fact that the neural MT model tended to prefer more com- mon words, mistaking \"radiant heat\" as \"radiation heat\" or \"slipring\" as \"ring.\"", "While these tendencies will be affected by many factors such as the size of the vocabulary or the number and size of hidden layers of the net, we feel it is safe to say that neural MT reranking can be expected to have a large positive effect on syntactic correctness of output, while results for 
lexical choice are less conclusive.", "Effect of n-best Size on Reranking In the previous sections, we confirmed the effectiveness of n-best list reranking using neural MT models.", "However, reranking using n-best lists (like other search methods for MT) is an approximate search method, and its effectiveness is limited by the size of the n-best list used.", "In order to quantify the effect of this inexact search, we performed experiments to examine the post-reranking automatic evaluation scores of the MT results for all n-best list sizes from 1 to 1000.", "Figure 1 shows the results of this examination, with the x-axis referring to the log-scaled number of hypotheses in the n-best list, and the y-axis referring to the quality of the translation, either with regards to model score (for the model including the neural MT likelihood as a feature) or BLEU score.", "9 From these results we can note several interest- 9 The BLEU scores differ slightly from Table 1 due to differences in tokenization standards between these experiments and the official evaluation server.", "ing points.", "First, we can see that the improvement in scores is very slightly sub-linear in the log number of hypotheses in the n-best list.", "In other words, every time we double the n-best list size we will see an improvement in accuracy that is slightly smaller than the last time we doubled the size.", "Second, we can note that in most cases this trend continues all the way up to our limit of 1000best lists, indicating that gains are not saturating, and we can likely expect even more improvements from using larger lists, or perhaps directly performing decoding using neural models (Alkhouli et al., 2015) .", "The en-ja results, however, are an exception to this rule, with BLEU gains more or less saturating around the 50-best list point.", "Conclusion In this paper we described results applying neural MT reranking to a baseline syntax-based machine translation system in 4 languages.", "In particular, we performed an in-depth analysis of what kinds of translation errors were fixed by neural MT reranking.", "Based on this analysis, we found that the majority of the gains were related to improvements in the accuracy of transfer of correct grammatical structure to the target sentence, with the most prominent gains being related to errors regarding reordering of phrases, insertion/deletion of copulas, coordinate structures, and verb agreement.", "We also found that, within the neural MT reranking framework, accuracy gains scaled ap-proximately log-linearly with the size of the n-best list, and in most cases were not saturated even after examining 1000 unique hypotheses." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Baseline System", "Neural MT Models", "Experimental Results", "Analysis", "Effect of n-best Size on Reranking", "Conclusion" ] }
GEM-SciDuet-train-67#paper-1143#slide-4
Experimental Setup
Neural Reranking Improves Subjective Quality of Machine Translation Data: ASPEC Scientific Abstracts Baseline: NAIST WAT2014 Tree-to-String System Strong baseline achieving high scores Implemented using Travatar (http://phontron.com/travatar) Neural MT Model: Attentional model Trained ~500k sent., 256 hidden nodes, 2 model ensemble Trained w/ lamtram (http://github.com/neubig/lamtram) Automatic Evaluation: BLEU, RIBES Manual Evaluation: WAT 2015 HUMAN Score
Neural Reranking Improves Subjective Quality of Machine Translation Data: ASPEC Scientific Abstracts Baseline: NAIST WAT2014 Tree-to-String System Strong baseline achieving high scores Implemented using Travatar (http://phontron.com/travatar) Neural MT Model: Attentional model Trained ~500k sent., 256 hidden nodes, 2 model ensemble Trained w/ lamtram (http://github.com/neubig/lamtram) Automatic Evaluation: BLEU, RIBES Manual Evaluation: WAT 2015 HUMAN Score
[]
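The significance claims in the paper content above rest on bootstrap resampling at p < 0.05 (Koehn, 2004). Below is a hypothetical sketch of how such a paired bootstrap test can be set up; it is not the paper's evaluation code, and the made-up per-sentence scores, the function name, and the use of a simple average in place of a true corpus-level BLEU computation are simplifying assumptions.

```python
# Hypothetical sketch of a paired bootstrap test in the spirit of Koehn (2004).
# Assumption: we have one quality score per test sentence for each system; a
# real BLEU test would instead aggregate n-gram statistics over each resample.
import random

def bootstrap_pvalue(scores_a, scores_b, n_resamples=1000, seed=0):
    """Estimate how often system A fails to beat system B when the test set
    is resampled with replacement; small values suggest A's gain is robust."""
    assert len(scores_a) == len(scores_b)
    rng = random.Random(seed)
    n = len(scores_a)
    a_not_better = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]  # paired resample
        mean_a = sum(scores_a[i] for i in idx) / n
        mean_b = sum(scores_b[i] for i in idx) / n
        if mean_a <= mean_b:
            a_not_better += 1
    return a_not_better / n_resamples

# Toy usage with invented per-sentence scores:
a = [0.31, 0.28, 0.35, 0.30, 0.33, 0.29, 0.32, 0.34]
b = [0.27, 0.26, 0.30, 0.28, 0.29, 0.25, 0.31, 0.28]
print(bootstrap_pvalue(a, b))  # a value below 0.05 would count as significant
```

Drawing the same indices for both systems is what makes the test paired: each resample compares the two systems on the identical subset of sentences.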
GEM-SciDuet-train-67#paper-1143#slide-5
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87 ], "paper_content_text": [ "Introduction Neural network models for machine translation (MT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) , while still in a nascent stage, have shown impressive results in a number of translation tasks.", "Specifically, a number of works have demonstrated gains in BLEU score (Papineni et al., 2002) over state-of-the-art non-neural systems, both when using the neural MT model standalone (Luong et al., 2015a; Jean et al., 2015; Luong et al., 2015b) , or to rerank the output of more traditional systems phrase-based MT systems (Sutskever et al., 2014) .", "However, despite these impressive results with regards to automatic measures of translation quality, there has been little examination of the effect that these gains have on the subjective impressions of human users.", "Because BLEU generally has some correlation with translation quality, 1 it is fair to hypothesize that these gains will carry over to gains in human evaluation, but empirical evidence for this hypothesis is still scarce.", "In this paper, we attempt to close this gap by examining the gains provided by using neural MT models to rerank the hypotheses a state-of-the-art non-neural MT system, both from the objective and subjective perspectives.", "Specifically, as part of the Nara Institute of Science and Technology (NAIST) submission to the Workshop on Asian Translation (WAT) 2015 (Nakazawa et al., 2015) , we generate reranked and non-reranked translation results in four language pairs (Section 2).", "Based on these translation results, we calculate scores according to automatic evaluation measures BLEU and RIBES (Isozaki et al., 2010) , and a manual evaluation that involves comparing hypotheses to a baseline system (Section 3).", "Next, we perform a detailed analysis of the cases in which subjective impressions improved or degraded due to neural MT reranking, and identify major areas in which neural reranking improves results, and areas in which reranking is less helpful (Section 4).", "Finally, as an auxiliary result, we also examine the effect that the size of the n-best list used in reranking has on the improvement of translation results (Section 5).", "Generation of Translation Results Baseline System All experiments are performed on WAT2015 translation task from Japanese (ja) to/from English (en) and Chinese (zh).", "As a baseline, we used the NAIST system for WAT 2014 (Neubig, 2014) , a state-of-the-art system that achieved the highest accuracy on all four tracks in the last year's eval-uation.", "2 The details of construction are described in Neubig (2014) , but we briefly outline it here for completeness.", "The system is based on the Travatar toolkit (Neubig, 2013) , using tree-to-string statistical MT (Graehl and Knight, 2004; Liu et al., 2006) , in which the source is first syntactically parsed, then subtrees of the input parse are converted into strings on the target side.", "This translation paradigm has proven effective for translation between syntactically distant language pairs such as those handled by the WAT tasks.", "In addition, following our findings in Neubig and Duh (2014) , to 
improve the accuracy of translation we use forestbased encoding of many parse candidates (Mi et al., 2008) , and a supervised alignment technique for ja-en and en-ja (Riesa and Marcu, 2010) .", "To train the systems, we used the ASPEC corpus provided by WAT.", "For the zh-ja and ja-zh systems, we used all of the data, amounting to 672k sentences.", "For the en-ja and ja-en systems, we used all 3M sentences for training the language models, and the first 2M sentences of the training data for training the translation models.", "For English, Japanese, and Chinese, tokenization was performed using the Stanford Parser (Klein and Manning, 2003) , the KyTea toolkit (Neubig et al., 2011) , and the Stanford Segmenter (Tseng et al., 2005) respectively.", "For parsing, we use the Egret parser, 3 which implements the latent variable parsing model of (Petrov et al., 2006) .", "4 For all systems, we trained a 6-gram language model smoothed with modified Kneser-Ney smoothing (Chen and Goodman, 1996) using KenLM (Heafield et al., 2013) .", "To optimize the parameters of the log-linear model, we use standard minimum error rate training (MERT; Och (2003) ) with BLEU as an objective.", "Neural MT Models As our neural MT model, we use the attentional model of Bahdanau et al.", "(2015) .", "The model first encodes the source sentence f using bidirectional long short-term memory (LSTM; Hochreiter and Schmidhuber (1997) ) recurrent networks.", "This results in an encoding vector h j for each word f j in f .", "The model then proceeds to generate the target translationê one word at a time, at each time step calculating soft alignments a i that are used to generate a context vector g i , which is referenced when generating the target word g i = |f | ∑ j=1 a i,j h j .", "(1) Attentional models have a number of appealing properties, such as being theoretically able to encode variable length sequences without worrying about memory constraints imposed by the fixed-size vectors used in encoder-decoder models.", "These advantages are confirmed in empirical results, with attentional models performing markedly better on longer sequences (Bahdanau et al., 2015) .", "To train the neural MT models, we used the implementation provided by the lamtram toolkit.", "5 The forward and reverse LSTM models each had 256 nodes, and word embeddings were also set to size 256.", "For ja-en and en-ja models we chose the first 500k sentences in the training corpus, and for ja-zh and zh-ja models we used all 672k sentences.", "Training was performed using stochastic gradient descent (SGD) with an initial learning rate of 0.1, which was halved every epoch in which the development likelihood decreased.", "For each language pair, we trained two models and ensembled the probabilities by linearly interpolating between the two probability distributions.", "6 These probabilities were used to rerank unique 1,000-best lists from the baseline model.", "To perform reranking, the log likelihood of the neural MT model was added as an additional feature to the standard baseline model features, and the weight of this feature was decided by running MERT on the dev set.", "Experimental Results First, we calculate overall numerical results for our systems with and without the neural MT reranking model.", "As automatic evaluation we use the standard BLEU (Papineni et al., 2002) and reorderingoriented RIBES (Isozaki et al., 2010) Table 1 : Overall BLEU, RIBES, and HUMAN scores for our baseline system and system with neural MT reranking.", "Bold indicates a 
significant improvement according to bootstrap resampling at p < 0.05 (Koehn, 2004) .", "manual evaluation, we use the WAT \"HUMAN\" evaluation score (Nakazawa et al., 2015) , which is essentially related to the number of wins over a baseline phrase-based system.", "In the case that the system beats the baseline on all sentences, the HUMAN score will be 100, and if it loses on all sentences the score will be -100.", "From the results in Table 1 , we can first see that adding the neural MT reranking resulted in a significant increase in the evaluation scores for all language pairs under consideration, except for the manual evaluation in ja-zh translation.", "7 It should be noted that these gains are achieved even though the original baseline was already quite strong (outperforming most other WAT2015 systems without a neural component).", "While neural MT reranking has been noted to improve traditional systems with respect to BLEU score in previous work (Sutskever et al., 2014) , to our knowledge this is the first work that notes that these gains also carry over convincingly to human evaluation scores.", "In the following section, we will examine the results in more detail and attempt to explain exactly what is causing this increase in translation quality.", "Analysis To perform a deeper analysis, we manually examined the first 200 sentences of the ja-en part of the official WAT2015 human evaluation set.", "Specifically, we (1) compared the baseline and reranked outputs, and decided whether one was better or if they were of the same quality and (2) in the case that one of the two was better, classified the example by the type of error that was fixed or caused by the reranking leading to this change in subjective impression.", "Specifically, when annotating the type of error, we used a simplified version of 7 The overall scores for ja-zh are lower than others, perhaps a result of word-order between Japanese and Chinese being more similar than Japanese and English, the parser for Japanese being weaker than that of the other languages, and less consistent evaluation scores for the Chinese output (Nakazawa et al., 2014 the error typology of Vilar et al.", "(2006) consisting of insertion, deletion, word conjugation, word substitution, and reordering, as well as subcategories of each of these categories (the number of sub-categories totalled approximately 40).", "If there was more than one change in the sentence, only the change that we subjectively felt had the largest effect on the translation quality was annotated.", "The number of improvements and degradations afforded by neural MT reranking is shown in Table 2.", "From this figure, we can see that overall, neural reranking caused an improvement in 117 sentences, and a degradation in 33 sentences, corroborating the fact that the reranking process is giving consistent improvements in accuracy.", "Further breaking down the changes, we can see that improvements in word reordering are by far the most prominent, slightly less than three times the number of improvements in the next most common category.", "This demonstrates that the neural MT model is successfully capturing the overall structure of the sentence, and effectively disambiguating reorderings that could not be appropriately scored in the baseline model.", "Next in Table 3 we show examples of the four most common sub-categories of errors that were fixed by the neural MT reranker, and note the total number of improvements and degradations of each.", "The first subcategory is related to the 
general reordering of phrases in the sentence.", "As there Table 3 : An example of more common varieties of improvements caused by the neural MT reranking.", "is a large amount of reordering involved in translating from Japanese to English, mistaken longdistance reordering is one of the more common causes for errors, and the neural MT model was effective at fixing these problems, resulting in 26 improvements and only 4 degradations.", "In the sentence shown in the example, the baseline system swaps the verb phrase and subject positions, making it difficult to tell that the list of conditions are what \"occurred,\" while the reranked system appropriately puts this list as the subject of \"occurred.\"", "The second subcategory includes insertions or deletions of auxiliary verbs, for which there were 15 improvements and not a single degradation.", "The reason why these errors occurred in the first place is that when a transitive verb, for example \"obtained,\" occurs on its own, it is often translated as \"X was obtained by Y,\" 8 but when it occurs as a relative clause decorating the noun X it will be translated as \"X obtained by Y,\" as shown in the example.", "The baseline system does not include any explicit features to make this distinction between whether a verb is part of a relative clause or not, and thus made a number of mistakes of this variety.", "However, it is evident that the neural MT model has learned to make this distinction, greatly reducing the number of these errors.", "The third subcategory is similar to the first, but explicitly involves the correct interpretation of co-ordinate structures.", "It is well known that syntactic parsers often make mistakes in their interpretation of coordinate structures (Kummerfeld et al., 2012) .", "Of course, the parser used in our syntaxbased MT system is no exception to this rule, and parse errors often cause coordinate phrases to be broken apart on the target side, as is the case in the example's \"local heating and ablation.\"", "The fact that the neural MT models were able to correct a large number of errors related to these structures suggests that they are able to successfully determine whether two phrases are coordinated or not, and keep them together on the target side.", "The final sub-category of the top four is related to verb conjugation agreement.", "Many of the examples related to verb conjugation, including the one shown in Table 3 , were related to when two singular nouns were connected by a conjunction.", "In this case, the local context provided by a standard n-gram language model is not enough to resolve the ambiguity, but the longer context handled by the neural MT model is able to resolve this easily.", "What is notable about these four categories is that they all are related to improving the correctness of the output from a grammatical point of view, as opposed to fixing mistakes in lexical choice or terminology.", "In fact, neural MT reranking had an overall negative effect on choice of terminology with only 2 improvements at the cost of 4 degradations.", "This was due to the fact that the neural MT model tended to prefer more com- mon words, mistaking \"radiant heat\" as \"radiation heat\" or \"slipring\" as \"ring.\"", "While these tendencies will be affected by many factors such as the size of the vocabulary or the number and size of hidden layers of the net, we feel it is safe to say that neural MT reranking can be expected to have a large positive effect on syntactic correctness of output, while results for 
lexical choice are less conclusive.", "Effect of n-best Size on Reranking In the previous sections, we confirmed the effectiveness of n-best list reranking using neural MT models.", "However, reranking using n-best lists (like other search methods for MT) is an approximate search method, and its effectiveness is limited by the size of the n-best list used.", "In order to quantify the effect of this inexact search, we performed experiments to examine the post-reranking automatic evaluation scores of the MT results for all n-best list sizes from 1 to 1000.", "Figure 1 shows the results of this examination, with the x-axis referring to the log-scaled number of hypotheses in the n-best list, and the y-axis referring to the quality of the translation, either with regards to model score (for the model including the neural MT likelihood as a feature) or BLEU score.", "9 From these results we can note several interest- 9 The BLEU scores differ slightly from Table 1 due to differences in tokenization standards between these experiments and the official evaluation server.", "ing points.", "First, we can see that the improvement in scores is very slightly sub-linear in the log number of hypotheses in the n-best list.", "In other words, every time we double the n-best list size we will see an improvement in accuracy that is slightly smaller than the last time we doubled the size.", "Second, we can note that in most cases this trend continues all the way up to our limit of 1000best lists, indicating that gains are not saturating, and we can likely expect even more improvements from using larger lists, or perhaps directly performing decoding using neural models (Alkhouli et al., 2015) .", "The en-ja results, however, are an exception to this rule, with BLEU gains more or less saturating around the 50-best list point.", "Conclusion In this paper we described results applying neural MT reranking to a baseline syntax-based machine translation system in 4 languages.", "In particular, we performed an in-depth analysis of what kinds of translation errors were fixed by neural MT reranking.", "Based on this analysis, we found that the majority of the gains were related to improvements in the accuracy of transfer of correct grammatical structure to the target sentence, with the most prominent gains being related to errors regarding reordering of phrases, insertion/deletion of copulas, coordinate structures, and verb agreement.", "We also found that, within the neural MT reranking framework, accuracy gains scaled ap-proximately log-linearly with the size of the n-best list, and in most cases were not saturated even after examining 1000 unique hypotheses." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Baseline System", "Neural MT Models", "Experimental Results", "Analysis", "Effect of n-best Size on Reranking", "Conclusion" ] }
GEM-SciDuet-train-67#paper-1143#slide-5
Results
Neural Reranking Improves Subjective Quality of Machine Translation en-ja ja-en zh-ja ja-zh en-ja ja-en zh-ja ja-zh Confirm what we know: Neural reranking helps automatic evaluation Show what we didn't know: Also helps manual evaluation.
Neural Reranking Improves Subjective Quality of Machine Translation en-ja ja-en zh-ja ja-zh en-ja ja-en zh-ja ja-zh Confirm what we know: Neural reranking helps automatic evaluation Show what we didn't know: Also helps manual evaluation.
[]
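The paper content above describes n-best reranking in which the neural MT log likelihood is added as one extra feature on top of the baseline model score, with its weight tuned on a dev set, and then studies how accuracy grows with the n-best list size. The Python sketch below illustrates that scheme; the function name, the fixed weight of 0.5, and the toy hypotheses are hypothetical stand-ins (in the actual system the weight was tuned with MERT and unique 1000-best lists were used).

```python
# Hypothetical sketch of neural n-best reranking: pick the hypothesis that
# maximizes baseline_score + weight * neural_log_likelihood. The weight here
# is an arbitrary illustrative value, not the tuned one from the paper.

def rerank(nbest, neural_logprob, weight=0.5, n=1000):
    """nbest: list of (hypothesis, baseline_score) pairs from the decoder.
    neural_logprob: callable mapping a hypothesis string to its neural MT
    log likelihood. Only the first n entries of the list are considered."""
    candidates = nbest[:n]
    return max(candidates,
               key=lambda hs: hs[1] + weight * neural_logprob(hs[0]))[0]

# Toy usage with invented scores; sweeping n over 1, 10, 100, 1000 and
# evaluating the chosen outputs is how one would trace an accuracy-vs-n curve.
toy_nbest = [("a cat sat", -2.0), ("the cat sat", -2.1), ("cat the sat", -2.2)]
toy_lp = {"a cat sat": -6.0, "the cat sat": -4.0, "cat the sat": -9.0}.get
print(rerank(toy_nbest, toy_lp, weight=0.5, n=3))  # -> "the cat sat"
```

In the toy example the neural likelihood overturns the baseline's top choice, which is exactly the behavior the slides attribute to reranking.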
GEM-SciDuet-train-67#paper-1143#slide-6
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87 ], "paper_content_text": [ "Introduction Neural network models for machine translation (MT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015) , while still in a nascent stage, have shown impressive results in a number of translation tasks.", "Specifically, a number of works have demonstrated gains in BLEU score (Papineni et al., 2002) over state-of-the-art non-neural systems, both when using the neural MT model standalone (Luong et al., 2015a; Jean et al., 2015; Luong et al., 2015b) , or to rerank the output of more traditional systems phrase-based MT systems (Sutskever et al., 2014) .", "However, despite these impressive results with regards to automatic measures of translation quality, there has been little examination of the effect that these gains have on the subjective impressions of human users.", "Because BLEU generally has some correlation with translation quality, 1 it is fair to hypothesize that these gains will carry over to gains in human evaluation, but empirical evidence for this hypothesis is still scarce.", "In this paper, we attempt to close this gap by examining the gains provided by using neural MT models to rerank the hypotheses a state-of-the-art non-neural MT system, both from the objective and subjective perspectives.", "Specifically, as part of the Nara Institute of Science and Technology (NAIST) submission to the Workshop on Asian Translation (WAT) 2015 (Nakazawa et al., 2015) , we generate reranked and non-reranked translation results in four language pairs (Section 2).", "Based on these translation results, we calculate scores according to automatic evaluation measures BLEU and RIBES (Isozaki et al., 2010) , and a manual evaluation that involves comparing hypotheses to a baseline system (Section 3).", "Next, we perform a detailed analysis of the cases in which subjective impressions improved or degraded due to neural MT reranking, and identify major areas in which neural reranking improves results, and areas in which reranking is less helpful (Section 4).", "Finally, as an auxiliary result, we also examine the effect that the size of the n-best list used in reranking has on the improvement of translation results (Section 5).", "Generation of Translation Results Baseline System All experiments are performed on WAT2015 translation task from Japanese (ja) to/from English (en) and Chinese (zh).", "As a baseline, we used the NAIST system for WAT 2014 (Neubig, 2014) , a state-of-the-art system that achieved the highest accuracy on all four tracks in the last year's eval-uation.", "2 The details of construction are described in Neubig (2014) , but we briefly outline it here for completeness.", "The system is based on the Travatar toolkit (Neubig, 2013) , using tree-to-string statistical MT (Graehl and Knight, 2004; Liu et al., 2006) , in which the source is first syntactically parsed, then subtrees of the input parse are converted into strings on the target side.", "This translation paradigm has proven effective for translation between syntactically distant language pairs such as those handled by the WAT tasks.", "In addition, following our findings in Neubig and Duh (2014) , to 
improve the accuracy of translation we use forestbased encoding of many parse candidates (Mi et al., 2008) , and a supervised alignment technique for ja-en and en-ja (Riesa and Marcu, 2010) .", "To train the systems, we used the ASPEC corpus provided by WAT.", "For the zh-ja and ja-zh systems, we used all of the data, amounting to 672k sentences.", "For the en-ja and ja-en systems, we used all 3M sentences for training the language models, and the first 2M sentences of the training data for training the translation models.", "For English, Japanese, and Chinese, tokenization was performed using the Stanford Parser (Klein and Manning, 2003) , the KyTea toolkit (Neubig et al., 2011) , and the Stanford Segmenter (Tseng et al., 2005) respectively.", "For parsing, we use the Egret parser, 3 which implements the latent variable parsing model of (Petrov et al., 2006) .", "4 For all systems, we trained a 6-gram language model smoothed with modified Kneser-Ney smoothing (Chen and Goodman, 1996) using KenLM (Heafield et al., 2013) .", "To optimize the parameters of the log-linear model, we use standard minimum error rate training (MERT; Och (2003) ) with BLEU as an objective.", "Neural MT Models As our neural MT model, we use the attentional model of Bahdanau et al.", "(2015) .", "The model first encodes the source sentence f using bidirectional long short-term memory (LSTM; Hochreiter and Schmidhuber (1997) ) recurrent networks.", "This results in an encoding vector h j for each word f j in f .", "The model then proceeds to generate the target translationê one word at a time, at each time step calculating soft alignments a i that are used to generate a context vector g i , which is referenced when generating the target word g i = |f | ∑ j=1 a i,j h j .", "(1) Attentional models have a number of appealing properties, such as being theoretically able to encode variable length sequences without worrying about memory constraints imposed by the fixed-size vectors used in encoder-decoder models.", "These advantages are confirmed in empirical results, with attentional models performing markedly better on longer sequences (Bahdanau et al., 2015) .", "To train the neural MT models, we used the implementation provided by the lamtram toolkit.", "5 The forward and reverse LSTM models each had 256 nodes, and word embeddings were also set to size 256.", "For ja-en and en-ja models we chose the first 500k sentences in the training corpus, and for ja-zh and zh-ja models we used all 672k sentences.", "Training was performed using stochastic gradient descent (SGD) with an initial learning rate of 0.1, which was halved every epoch in which the development likelihood decreased.", "For each language pair, we trained two models and ensembled the probabilities by linearly interpolating between the two probability distributions.", "6 These probabilities were used to rerank unique 1,000-best lists from the baseline model.", "To perform reranking, the log likelihood of the neural MT model was added as an additional feature to the standard baseline model features, and the weight of this feature was decided by running MERT on the dev set.", "Experimental Results First, we calculate overall numerical results for our systems with and without the neural MT reranking model.", "As automatic evaluation we use the standard BLEU (Papineni et al., 2002) and reorderingoriented RIBES (Isozaki et al., 2010) Table 1 : Overall BLEU, RIBES, and HUMAN scores for our baseline system and system with neural MT reranking.", "Bold indicates a 
significant improvement according to bootstrap resampling at p < 0.05 (Koehn, 2004) .", "manual evaluation, we use the WAT \"HUMAN\" evaluation score (Nakazawa et al., 2015) , which is essentially related to the number of wins over a baseline phrase-based system.", "In the case that the system beats the baseline on all sentences, the HUMAN score will be 100, and if it loses on all sentences the score will be -100.", "From the results in Table 1 , we can first see that adding the neural MT reranking resulted in a significant increase in the evaluation scores for all language pairs under consideration, except for the manual evaluation in ja-zh translation.", "7 It should be noted that these gains are achieved even though the original baseline was already quite strong (outperforming most other WAT2015 systems without a neural component).", "While neural MT reranking has been noted to improve traditional systems with respect to BLEU score in previous work (Sutskever et al., 2014) , to our knowledge this is the first work that notes that these gains also carry over convincingly to human evaluation scores.", "In the following section, we will examine the results in more detail and attempt to explain exactly what is causing this increase in translation quality.", "Analysis To perform a deeper analysis, we manually examined the first 200 sentences of the ja-en part of the official WAT2015 human evaluation set.", "Specifically, we (1) compared the baseline and reranked outputs, and decided whether one was better or if they were of the same quality and (2) in the case that one of the two was better, classified the example by the type of error that was fixed or caused by the reranking leading to this change in subjective impression.", "Specifically, when annotating the type of error, we used a simplified version of 7 The overall scores for ja-zh are lower than others, perhaps a result of word-order between Japanese and Chinese being more similar than Japanese and English, the parser for Japanese being weaker than that of the other languages, and less consistent evaluation scores for the Chinese output (Nakazawa et al., 2014 the error typology of Vilar et al.", "(2006) consisting of insertion, deletion, word conjugation, word substitution, and reordering, as well as subcategories of each of these categories (the number of sub-categories totalled approximately 40).", "If there was more than one change in the sentence, only the change that we subjectively felt had the largest effect on the translation quality was annotated.", "The number of improvements and degradations afforded by neural MT reranking is shown in Table 2.", "From this figure, we can see that overall, neural reranking caused an improvement in 117 sentences, and a degradation in 33 sentences, corroborating the fact that the reranking process is giving consistent improvements in accuracy.", "Further breaking down the changes, we can see that improvements in word reordering are by far the most prominent, slightly less than three times the number of improvements in the next most common category.", "This demonstrates that the neural MT model is successfully capturing the overall structure of the sentence, and effectively disambiguating reorderings that could not be appropriately scored in the baseline model.", "Next in Table 3 we show examples of the four most common sub-categories of errors that were fixed by the neural MT reranker, and note the total number of improvements and degradations of each.", "The first subcategory is related to the 
general reordering of phrases in the sentence.", "As there Table 3 : An example of more common varieties of improvements caused by the neural MT reranking.", "is a large amount of reordering involved in translating from Japanese to English, mistaken longdistance reordering is one of the more common causes for errors, and the neural MT model was effective at fixing these problems, resulting in 26 improvements and only 4 degradations.", "In the sentence shown in the example, the baseline system swaps the verb phrase and subject positions, making it difficult to tell that the list of conditions are what \"occurred,\" while the reranked system appropriately puts this list as the subject of \"occurred.\"", "The second subcategory includes insertions or deletions of auxiliary verbs, for which there were 15 improvements and not a single degradation.", "The reason why these errors occurred in the first place is that when a transitive verb, for example \"obtained,\" occurs on its own, it is often translated as \"X was obtained by Y,\" 8 but when it occurs as a relative clause decorating the noun X it will be translated as \"X obtained by Y,\" as shown in the example.", "The baseline system does not include any explicit features to make this distinction between whether a verb is part of a relative clause or not, and thus made a number of mistakes of this variety.", "However, it is evident that the neural MT model has learned to make this distinction, greatly reducing the number of these errors.", "The third subcategory is similar to the first, but explicitly involves the correct interpretation of co-ordinate structures.", "It is well known that syntactic parsers often make mistakes in their interpretation of coordinate structures (Kummerfeld et al., 2012) .", "Of course, the parser used in our syntaxbased MT system is no exception to this rule, and parse errors often cause coordinate phrases to be broken apart on the target side, as is the case in the example's \"local heating and ablation.\"", "The fact that the neural MT models were able to correct a large number of errors related to these structures suggests that they are able to successfully determine whether two phrases are coordinated or not, and keep them together on the target side.", "The final sub-category of the top four is related to verb conjugation agreement.", "Many of the examples related to verb conjugation, including the one shown in Table 3 , were related to when two singular nouns were connected by a conjunction.", "In this case, the local context provided by a standard n-gram language model is not enough to resolve the ambiguity, but the longer context handled by the neural MT model is able to resolve this easily.", "What is notable about these four categories is that they all are related to improving the correctness of the output from a grammatical point of view, as opposed to fixing mistakes in lexical choice or terminology.", "In fact, neural MT reranking had an overall negative effect on choice of terminology with only 2 improvements at the cost of 4 degradations.", "This was due to the fact that the neural MT model tended to prefer more com- mon words, mistaking \"radiant heat\" as \"radiation heat\" or \"slipring\" as \"ring.\"", "While these tendencies will be affected by many factors such as the size of the vocabulary or the number and size of hidden layers of the net, we feel it is safe to say that neural MT reranking can be expected to have a large positive effect on syntactic correctness of output, while results for 
5 Effect of n-best Size on Reranking

In the previous sections, we confirmed the effectiveness of n-best list reranking using neural MT models. However, reranking with n-best lists (like other search methods for MT) is an approximate search method, and its effectiveness is limited by the size of the n-best list used. In order to quantify the effect of this inexact search, we performed experiments examining the post-reranking automatic evaluation scores of the MT results for all n-best list sizes from 1 to 1000. Figure 1 shows the results of this examination, with the x-axis giving the log-scaled number of hypotheses in the n-best list and the y-axis giving the quality of the translation, either with regard to model score (for the model including the neural MT likelihood as a feature) or BLEU score. (The BLEU scores differ slightly from Table 1 due to differences in tokenization standards between these experiments and the official evaluation server.)

From these results we can note several interesting points. First, the improvement in scores is very slightly sub-linear in the log of the number of hypotheses in the n-best list. In other words, every time we double the n-best list size, we see an improvement in accuracy that is slightly smaller than the last time we doubled it. Second, in most cases this trend continues all the way up to our limit of 1000-best lists, indicating that gains are not saturating and that we can likely expect even more improvements from using larger lists, or perhaps from directly performing decoding using neural models (Alkhouli et al., 2015). The en-ja results, however, are an exception to this rule, with BLEU gains more or less saturating around the 50-best list point.
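As a hedged sketch of the procedure behind this sweep: the reranker scores each of the top-k hypotheses with a linear model in which the neural MT log-likelihood is one feature, and the sweep repeats this at growing k. The feature names, the fixed weights, and the `metric` callable are placeholders of our own; in the actual system the weights would be tuned on a development set rather than set by hand.

```python
from math import inf

def rerank(nbest, weights, k):
    """Return the best of the top-k hypotheses under a linear model.
    Each hypothesis is a (translation, feature_dict) pair; the feature
    dict includes the neural MT log-likelihood alongside the baseline
    system's features."""
    best, best_score = None, -inf
    for translation, feats in nbest[:k]:
        score = sum(weights[name] * value for name, value in feats.items())
        if score > best_score:
            best, best_score = translation, score
    return best

# Placeholder weights; in practice these are tuned, not hand-set.
weights = {"baseline_score": 1.0, "neural_logprob": 0.5}

def sweep_nbest_sizes(nbest_lists, references, metric,
                      sizes=(1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1000)):
    """Mimic the Figure 1 sweep: rerank every sentence's n-best list at
    each size k and score the resulting corpus with `metric`
    (e.g., a corpus-level BLEU function)."""
    return {k: metric([rerank(nbest, weights, k) for nbest in nbest_lists],
                      references)
            for k in sizes}
```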
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Baseline System", "Neural MT Models", "Experimental Results", "Analysis", "Effect of n-best Size on Reranking", "Conclusion" ] }
GEM-SciDuet-train-67#paper-1143#slide-6
What is Getting Better
Neural Reranking Improves Subjective Quality of Machine Translation

Perform detailed categorization of the changes in:
1. Is the sentence better/worse after reranking?
2. What is the main error corrected: insertion, deletion, substitution, reordering, or conjugation?
3. What is the detailed subcategory?