{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:08:13.022238Z" }, "title": "Enhancing Interpretable Clauses Semantically using Pretrained Word Representation", "authors": [ { "first": "Rohan", "middle": [ "Kumar" ], "last": "Yadav", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Agder", "location": { "postCode": "4879", "region": "Grimstad", "country": "Norway" } }, "email": "rohan.k.yadav@uia.no" }, { "first": "Lei", "middle": [], "last": "Jiao", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Agder", "location": { "postCode": "4879", "region": "Grimstad", "country": "Norway" } }, "email": "lei.jiao@uia.no" }, { "first": "Ole-Christoffer", "middle": [], "last": "Granmo", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Agder", "location": { "postCode": "4879", "region": "Grimstad", "country": "Norway" } }, "email": "ole.granmo@uia.no" }, { "first": "Morten", "middle": [], "last": "Goodwin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Agder", "location": { "postCode": "4879", "region": "Grimstad", "country": "Norway" } }, "email": "morten.goodwin@uia.no" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Tsetlin Machine (TM) is an interpretable pattern recognition algorithm based on propositional logic, which has demonstrated competitive performance in many Natural Language Processing (NLP) tasks, including sentiment analysis, text classification, and Word Sense Disambiguation. To obtain human-level interpretability, legacy TM employs Boolean input features such as bag-of-words (BOW). However, the BOW representation makes it difficult to use any pre-trained information, for instance, word2vec and GloVe word representations. This restriction has constrained the performance of TM compared to deep neural networks (DNNs) in NLP. To reduce the performance gap, in this paper, we propose a novel way of using pre-trained word representations for TM. The approach significantly enhances the performance and interpretability of TM. We achieve this by extracting semantically related words from pre-trained word representations as input features to the TM. Our experiments show that the accuracy of the proposed approach is significantly higher than the previous BOW-based TM, reaching the level of DNN-based models.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Tsetlin Machine (TM) is an interpretable pattern recognition algorithm based on propositional logic, which has demonstrated competitive performance in many Natural Language Processing (NLP) tasks, including sentiment analysis, text classification, and Word Sense Disambiguation. To obtain human-level interpretability, legacy TM employs Boolean input features such as bag-of-words (BOW). However, the BOW representation makes it difficult to use any pre-trained information, for instance, word2vec and GloVe word representations. This restriction has constrained the performance of TM compared to deep neural networks (DNNs) in NLP. To reduce the performance gap, in this paper, we propose a novel way of using pre-trained word representations for TM. The approach significantly enhances the performance and interpretability of TM. We achieve this by extracting semantically related words from pre-trained word representations as input features to the TM. 
Our experiments show that the accuracy of the proposed approach is significantly higher than the previous BOW-based TM, reaching the level of DNN-based models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Tsetlin Machine (TM) is an explainable pattern recognition approach that solves complex classification problems using propositional formulas (Granmo, 2018) . Text-(Berge et al., 2019) , numerical data- (Abeyrathna et al., 2019) , and image classification (Granmo et al., 2019) are recent areas of application. In Natural Language Processing (NLP), TM has provided encouraging trade-offs between accuracy and interpretability for various tasks. These include Sentiment Analysis (SA) Saha et al., 2020) , Word Sense Disambiguation (WSD) , and novelty detection (Bhattarai. et al., 2021) . Because TM NLP models employ bag-ofwords (BOW) that treat each word as independent features, it is easy for humans to interpret them. The models can be interpreted simply by inspecting the words that take part in the conjunctive clauses. However, using a simple BOW makes it challenging to attain the same accuracy level as deep neural network (DNN) based models.", "cite_spans": [ { "start": 141, "end": 155, "text": "(Granmo, 2018)", "ref_id": "BIBREF11" }, { "start": 158, "end": 183, "text": "Text-(Berge et al., 2019)", "ref_id": null }, { "start": 202, "end": 227, "text": "(Abeyrathna et al., 2019)", "ref_id": null }, { "start": 255, "end": 276, "text": "(Granmo et al., 2019)", "ref_id": "BIBREF1" }, { "start": 482, "end": 500, "text": "Saha et al., 2020)", "ref_id": "BIBREF27" }, { "start": 559, "end": 584, "text": "(Bhattarai. et al., 2021)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A key advantage of DNN models is distributed representation of words in a vector space. By using a single-layer neural network, Mikolov et al. introduced such a representation, allowing for relating words based on the inner product between word vectors (Mikolov et al., 2013) . One of the popular methods is skip-gram, an approach that learns word representations by predicting the context surrounding a word within a given window length. However, skip-gram has the disadvantage of not considering the co-occurrence statistics of the corpus. Later, Pennington et al. developed GloVe -a model that combines the advantages of local window-based methods and global matrix factorization (Pennington et al., 2014) . The foundation for the above vector representation of words is the distributional hypothesis that states that \"the word that occurs in the same contexts tend to have similar meanings\" (Harris, 1954) . This means that in addition to forming a rich high-dimensional representation of words, words that are closer to each other in vector space tend to represent similar meaning. 
As such, vector representations have been used to enhance for instance information retrieval (Manning et al., 2008) , name entity recognition (Turian et al., 2010) , and parsing (Socher et al., 2013) .", "cite_spans": [ { "start": 128, "end": 142, "text": "Mikolov et al.", "ref_id": null }, { "start": 253, "end": 275, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF22" }, { "start": 549, "end": 566, "text": "Pennington et al.", "ref_id": null }, { "start": 683, "end": 708, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF24" }, { "start": 895, "end": 909, "text": "(Harris, 1954)", "ref_id": "BIBREF13" }, { "start": 1180, "end": 1202, "text": "(Manning et al., 2008)", "ref_id": "BIBREF21" }, { "start": 1229, "end": 1250, "text": "(Turian et al., 2010)", "ref_id": "BIBREF34" }, { "start": 1265, "end": 1286, "text": "(Socher et al., 2013)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The state of the art in DNN-based NLP has been advanced by incorporating various pre-trained word representations such as GloVe (Pennington et al., 2014) , word2vec (Mikolov et al., 2013) , and fasttext . Indeed, building semantic representations of the words has been demonstrated to be a vital factor for improved performance. Most DNN-based models utilize the pre-trained word representations to initialize their word embeddings. This provides them with additional semantic information that goes beyond a traditional BOW.", "cite_spans": [ { "start": 128, "end": 153, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF24" }, { "start": 165, "end": 187, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, in the case of TM, such word representations cannot be directly employed because they consist of floating-point numbers. First, these numbers must be converted into Boolean form for TM to use, which may result in information loss. Secondly, replacing the straightforward BOW of a TM with a large number of floating-point numbers in fine-grained Boolean form would impede interpretability. In this paper, we propose a novel preprocessing technique that evades the above challenges entirely by extracting additional features for the BOW. The additional features are found using the pre-trained distributed word representations to identify words that enrich the BOW, based on cosine similarity. In this way, TM can use the information from word representations for increasing performance, and at the same time retaining the interpretability of the model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of the paper is organised as follows. We summarize related work in Section 2. The proposed semantic feature extraction for TM is then explained in Section 3. In Section 4, we present the TM architecture employing the proposed feature extension. We provide extensive experiment results in Section 5, demonstrating the benefits of our approach, before concluding the paper in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Conventional text classification usually focuses on feature engineering and classification algorithms. One of the most popular feature engineering approaches is the derivation of BOW features. 
Several complex variants of BOW have been designed such as n-grams (Wang and Manning, 2012) and entities in ontologies (Chenthamarakshan et al., 2011) . Apart from BOW approaches, Tang et al. demonstrated a new mechanism for feature engineering using a time series model for short text samples (Tang et al., 2020) . There are also several techniques to convert text into a graph and sub-graph (Rousseau et al., 2015; Luo et al., 2017) . In general, none of the above methods adopt any pre-trained information, hence have inferior performance.", "cite_spans": [ { "start": 260, "end": 284, "text": "(Wang and Manning, 2012)", "ref_id": "BIBREF35" }, { "start": 312, "end": 343, "text": "(Chenthamarakshan et al., 2011)", "ref_id": "BIBREF6" }, { "start": 487, "end": 506, "text": "(Tang et al., 2020)", "ref_id": "BIBREF33" }, { "start": 586, "end": 609, "text": "(Rousseau et al., 2015;", "ref_id": "BIBREF26" }, { "start": 610, "end": 627, "text": "Luo et al., 2017)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Deep learning-based text classification either depends on initializing models from pre-trained word representations, or on jointly learning both the word-and document level representations. Various studies report that incorporating such word representations, embedding the words, significantly enhances the accuracy of text classification Shen et al., 2018a) . Another approach related to pre-trained word embedding is to aggregate unsupervised word embeddings into a document embedding, which is then fed to a classifier (Le and Mikolov, 2014; Tang et al., 2015) .", "cite_spans": [ { "start": 339, "end": 358, "text": "Shen et al., 2018a)", "ref_id": "BIBREF29" }, { "start": 522, "end": 544, "text": "(Le and Mikolov, 2014;", "ref_id": "BIBREF16" }, { "start": 545, "end": 563, "text": "Tang et al., 2015)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Despite being empowered with world knowledge through pre-trained information, DNNs such as BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019) can be very hard to interpret. One interpretation approach is to use attention-based models, relying on the weights they assign to the inputs. However, more careful studies reveal that attention weights in general do not provide a useful explanation (Bai et al., 2020; Serrano and Smith, 2019) . Researchers are thus increasingly shifting focus to other kinds of machine learning, with the TM being a recent approach considered to provide human-level interpretability Granmo, 2018; . It offers a very simple model consisting of multiple Tsetlin Automata (TAs) that select which features take part in the classification. However, despite promising performance, there is still a performance gap to the DNN models that utilize pre-trained word embedding. Yet, several TM studies demonstrate high degree of interpretability through simple rules, with a marginal loss in accuracy Saha et al., 2020) . A significant reason for the performance gap between TM-based and state-of-the-art DNN-based NLP models is that TM operates on Boolean inputs, lacking a method for incorporating pre-trained word embeddings. Without pre-trained information, TMs must rely on labelled data available for supervised learning. On the other hand, incorporating high-dimensional Booleanized word embedding vectors directly into the TM would significantly reduce interpretability. 
In this paper, we address this intertwined challenge. We propose a novel technique that boosts the TM BOW approach, enhancing the BOW with additional word features. The enhancement consists of using cosine similarity between GloVe word representations to obtain semantically related words. We thus distill information from the pre-trained word representations for utilization by the TM. To this end, we propose two methods of feature extension: (1) using the k nearest words in embedding space and (2) using words within a given similarity threshold, measured as cosine angle (\u03b8). By adopting the two methods, we aim to reduce the current performance gap between interpretable TM and black-box DNN, by achieving either higher or similar accuracy.", "cite_spans": [ { "start": 96, "end": 117, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF8" }, { "start": 128, "end": 147, "text": "(Yang et al., 2019)", "ref_id": "BIBREF38" }, { "start": 398, "end": 416, "text": "(Bai et al., 2020;", "ref_id": "BIBREF2" }, { "start": 417, "end": 441, "text": "Serrano and Smith, 2019)", "ref_id": "BIBREF28" }, { "start": 616, "end": 629, "text": "Granmo, 2018;", "ref_id": "BIBREF11" }, { "start": 1023, "end": 1041, "text": "Saha et al., 2020)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Here, we introduce our novel method for boosting the BOW of TM with semantically related words. The method is based on comparing pretrained word representations using cosine similarity, leveraging distributed word representation. There are various distributional representations of words available. These are obtained from different corpora, using various techniques, such as word2vec, GloVe, and fastText. We here use GloVe because of its general applicability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Boosting TM BOW with Semantically Related Words", "sec_num": "3" }, { "text": "Distributed word representation does not necessarily derive word similarity based on synonyms but based on the words that appear in the same context. As such, the representation is essential for NLP because it captures the semantically interconnecting words. Our approach utilizes this property to expand the range of features that we can use in an interpretable manner in TM. Consider a full vocabulary", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Feature Extraction from Distributed Word Representation", "sec_num": "3.1" }, { "text": "W of m words, W = [w 1 , w 2 , w 3 . . . , w m ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Feature Extraction from Distributed Word Representation", "sec_num": "3.1" }, { "text": "Further consider a particular sentence that is represented as a Boolean BOW", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Feature Extraction from Distributed Word Representation", "sec_num": "3.1" }, { "text": "X = [x 1 , x 2 , x 3 , . . . , x m ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Feature Extraction from Distributed Word Representation", "sec_num": "3.1" }, { "text": "In a Boolean BOW, each element x r , r = 1, 2, 3, . . . , m, refers to a specific word w r in the vocabulary W . The element x r takes the value 1 if the corresponding word w r is present in the sentence and the value 0 if the word is absent. Assume that n words are present in the sentence, i.e., n of the elements in X are 1-valued. 
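To make the input representation concrete, a minimal sketch of the Boolean BOW construction is given below. It is an illustration only; the helper names are placeholders, and the tiny vocabulary and sentence are borrowed from the MR example discussed later in Section 5.6.

```python
# Minimal sketch of the Boolean BOW X described above (illustration only).
W = ["cast", "excellent", "relaxed", "extraordinarily", "good", "bad", "boring", "worst"]
word_to_index = {w: r for r, w in enumerate(W)}

def boolean_bow(tokens, word_to_index):
    """Return X = [x_1, ..., x_m] with x_r = 1 iff vocabulary word w_r occurs in the sentence."""
    x = [0] * len(word_to_index)
    for token in tokens:
        r = word_to_index.get(token)
        if r is not None:
            x[r] = 1
    return x

X = boolean_bow("the cast is uniformly excellent and relaxed".split(), word_to_index)
# X == [1, 1, 1, 0, 0, 0, 0, 0]; here n = 3 elements are 1-valued.
```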
Our strategy is to extract additional features from these by expanding them using cosine similarity. To this end, we use a GloVe embedding of each present word w r , r \u2208 {z|x z = 1, z = 1, 2, 3 . . . , m}. The embedding for word w r is represented by vector w e r \u2208 d , where d is the dimensionality of the embedding (typically varying from 25 to 300).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Feature Extraction from Distributed Word Representation", "sec_num": "3.1" }, { "text": "We next introduce two selection techniques to expand upon each word:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Feature Extraction from Distributed Word Representation", "sec_num": "3.1" }, { "text": "\u2022 Select the top k most similar words,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Feature Extraction from Distributed Word Representation", "sec_num": "3.1" }, { "text": "\u2022 Select words up to a fixed similarity angle cos(\u03b8) = \u03c6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Feature Extraction from Distributed Word Representation", "sec_num": "3.1" }, { "text": "For example, let us consider two contexts: \"very good movie\" and \"excellent film, enjoyable\". Figs. 1 and 2 list similar words showing the difference between top k words and words within angle cos(\u03b8), i.e., \u03c6. In what follows, we will explain how these words are found.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Feature Extraction from Distributed Word Representation", "sec_num": "3.1" }, { "text": "We first boost the Boolean BOW of the considered sentence by expanding X with (k \u2212 1) \u00d7 n semantically related words. That is, we add k \u2212 1 new words for each of the n present words. We do this by identifying neighbouring words in the GloVe embedding space, using cosine similarity between the embedding vectors. Consider the GloVe embedding vectors W e G = [w e 1 , w e 2 , . . . , w e m ] of the full vocabulary W . For each word w r from the sentence considered, the cosine similarity to each word w t , t = 1, 2, . . . , m, of the full vocabulary is given by Eq. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similar Words based on Top k Nearest Words", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": ", \u03c6 t r = cos(w e r , w e t ) = w e r \u2022 w e t ||w e r || \u2022 ||w e t || .", "eq_num": "(1)" } ], "section": "Similar Words based on Top k Nearest Words", "sec_num": "3.2" }, { "text": "Clearly, \u03c6 t r is the cosine similarity between w e r and w e t . By calculating the cosine similarity of w r to the words in the vocabulary, we obtain m values: \u03c6 t r , t = 1, 2, . . . , m. We arrange these values in a vector \u03a6 r :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similar Words based on Top k Nearest Words", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03a6r = [\u03c6 1 r , \u03c6 2 r , . . . , \u03c6 m r ].", "eq_num": "(2)" } ], "section": "Similar Words based on Top k Nearest Words", "sec_num": "3.2" }, { "text": "The k elements from \u03a6 r of largest value are then identified and their indices are stored in a new set A r . 
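A minimal sketch of this selection step is shown below, assuming the GloVe vectors of the vocabulary W have been loaded into a NumPy matrix W_e of shape (m, d). The function and variable names are placeholders and the snippet only illustrates Eqs. (1)-(2); it is not the released implementation.

```python
import numpy as np

def top_k_indices(W_e, r, k):
    """Return the index set A_r of the k words most cosine-similar to w_r.

    W_e: (m, d) NumPy matrix of GloVe vectors, one row per vocabulary word.
    r:   index of a word present in the sentence.
    k:   number of neighbours to keep (w_r itself scores 1 and is among them).
    """
    w_r = W_e[r]
    # Eq. (1): cosine similarity between w_r and every vocabulary word.
    phi_r = W_e @ w_r / (np.linalg.norm(W_e, axis=1) * np.linalg.norm(w_r))
    # Eq. (2) collects these m scores; keep the indices of the k largest.
    return set(np.argsort(-phi_r)[:k])
```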
Finally, a boosted BOW, referred to as X mod , can be formed by assigning element x t value 1 whenever one of the A r contains t, and 0 otherwise:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similar Words based on Top k Nearest Words", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X mod = [x1, x2, x3, . . . , xm],", "eq_num": "(3)" } ], "section": "Similar Words based on Top k Nearest Words", "sec_num": "3.2" }, { "text": "xt = 1 \u2203r, t \u2208 Ar 0 r, t \u2208 Ar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similar Words based on Top k Nearest Words", "sec_num": "3.2" }, { "text": "In addition, the vocabulary size for a particular task/dataset can be changed accordingly, which is usually less than m. Note that implementationwise, the GloVe library provides the top k similar words of w r without considering the word w r itself, having similarity score 1. Hence, using the GloVe library, w r must also be added to the boosted BOW.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similar Words based on Top k Nearest Words", "sec_num": "3.2" }, { "text": "Another approach to enrich the Boolean BOW of a sentence is thresholding the cosine angle. This is different from the first technique because the number of additional words extracted will vary rather than being fixed. Whereas the first approach always produces k \u2212 1 new features for each given word, the cosine angle thresholding brings in all those words that are sufficiently similar. The cosine similarity threshold is given by \u03c6 = cos(\u03b8), where \u03b8 is the threshold for vector angle, while \u03c6 is the corresponding similarity score. As per Eq. (2), we obtain \u03a6 r , which consists of the similarity scores of the given word w r in comparison to the m words in the vocabulary. Then, for each given word w r , the indices of those scores \u03c6 t r that are greater than or equal to \u03c6 (\u03c6 t r \u2265 \u03c6) are stored in the set A r . Similar to the first technique, the words in W with the indices in A r are utilized to create X mod as given by Eq. (3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Similar Words within Cosine Angle Threshold", "sec_num": "3.3" }, { "text": "A TM is composed by TAs that operate with literals -Boolean inputs and their negations -to form conjunctions of literals (conjunctive clauses). A dedicated team of TAs builds each clause, with each input being associated with a pair of TAs. One TA controls the original Boolean input whereas the other TA controls its negation. The TA pair selects a combination of \"Include\" or \"Exclude\" actions, which decide the form of the literal to include or exclude in the clause.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tsetlin Machine Architecture", "sec_num": "4.1" }, { "text": "Each TA decides upon an action according to its current state. There are N states per TA action, 2N states in total. When a TA finds itself in states 1 to N , it performs the \"Exclude\" action. When in states N + 1 to 2N , it performs the \"Include\" action. How the TA updates its state is shown in Fig. 3 . If it receives Reward, the TA moves to a deeper state thereby increasing its confidence in the current action. However, if it receives Penalty, it moves towards the centre, weakening the action. 
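A minimal sketch of this state update, with states numbered 1 to 2N as above, is shown below. It is our own illustration of the transition rule in Fig. 3, not the reference implementation.

```python
def ta_action(state, N):
    """States 1..N choose 'Exclude'; states N+1..2N choose 'Include'."""
    return "Exclude" if state <= N else "Include"

def ta_update(state, N, feedback):
    """One-step state transition of a two-action TA (see Fig. 3)."""
    if feedback == "Reward":
        # Move to a deeper state of the current action (saturating at 1 or 2N).
        return max(1, state - 1) if state <= N else min(2 * N, state + 1)
    if feedback == "Penalty":
        # Move towards the centre; from state N or N+1 this crosses the
        # decision boundary and flips the chosen action.
        return state + 1 if state <= N else state - 1
    return state  # Inaction leaves the state unchanged.
```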
It may eventually jump over the middle decision boundary, to the other action. It is through this game of TAs that the TM shapes the clauses into frequent and discriminative patterns.", "cite_spans": [], "ref_spans": [ { "start": 297, "end": 303, "text": "Fig. 3", "ref_id": null } ], "eq_spans": [], "section": "Tsetlin Machine Architecture", "sec_num": "4.1" }, { "text": "Penatly Reward Figure 3 : A TA with two actions: \"Include\" and \"Exclude\".", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 23, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Exclude Inlcude", "sec_num": null }, { "text": "With respect to NLP, TM heavily relies on the Boolean BOW introduced earlier in the paper. We now make use of our proposed modified BOW", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Exclude Inlcude", "sec_num": null }, { "text": "X mod = [x 1 , x 2 , x 3 , . . . , x m ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Exclude Inlcude", "sec_num": null }, { "text": "Let l be the number of clauses that represent each class of the TM, covering q classes altogether. Then, the overall pattern recognition problem is solved using l \u00d7 q clauses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Exclude Inlcude", "sec_num": null }, { "text": "Each clause C j i , 1 \u2264 j \u2264 q, 1 \u2264 i \u2264 l of the TM is given by C j i = \uf8eb \uf8ed k\u2208I j i x k \uf8f6 \uf8f8 \u2227 \uf8eb \uf8ed k\u2208\u012a j i \u00acx k \uf8f6 \uf8f8 , where I j i and\u012a j i are non-overlapping subsets of the input variable indices, I i j ,\u012a i j \u2286 {1, . . . , m}, I i j \u2229\u012a i j = \u2205.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Exclude Inlcude", "sec_num": null }, { "text": "The subsets decide which of the input variables take part in the clause, and whether they are negated or not. The indices of input variables in I i j represent the literals that are included as is, while the indices of input variables in\u012a i j correspond to the negated ones. Among the q clauses of each class, clauses with odd indexes are assigned positive polarity (+) whereas those with even indices are assigned negative polarity (-). The clauses with positive polarity vote for the target class and those with negative polarity vote against it. A summation operator aggregates the votes by subtracting the total number of negative votes from positive votes, as shown in Eq. (4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Exclude Inlcude", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "f j (X mod ) = \u03a3 l\u22121 i=1,3,... C j i (X mod )\u2212 \u03a3 l i=2,4,... C j i (X mod ).", "eq_num": "(4)" } ], "section": "Exclude Inlcude", "sec_num": null }, { "text": "For q number of classes, the final output y is given by the argmax operator to classify the input based on the highest sum of votes,\u0177 = argmax j f j (X mod ) .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Exclude Inlcude", "sec_num": null }, { "text": "Consider two contexts for sentiment classification: \"Very good movie\" and \"Excellent film, enjoyable\". Both contexts have different vocabularies but some of them are semantically related to each other. For example, \"good\" and \"excellent\" have similar semantics as well as \"film\" and \"movie\". Such semantics are not captured in the BOW-based input. 
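To make this concrete, the sketch below evaluates one hypothetical positive-polarity clause, requiring the presence of good and the absence of bad, on the plain BOWs of the two contexts. The restricted vocabulary and the clause are made up for illustration, and the voting function mirrors Eq. (4); none of this is taken from the released code.

```python
# Illustrative restricted vocabulary (made up): [good, excellent, movie, film, enjoyable, bad]
def eval_clause(x, include, include_negated):
    """A conjunctive clause outputs 1 only if every included literal is 1 and every negated one is 0."""
    return int(all(x[t] == 1 for t in include) and all(x[t] == 0 for t in include_negated))

def class_sum(x, positive_clauses, negative_clauses):
    """Eq. (4): votes of positive-polarity clauses minus votes of negative-polarity clauses.

    The predicted class is the argmax of this sum over all q classes.
    """
    return (sum(eval_clause(x, inc, neg) for inc, neg in positive_clauses)
            - sum(eval_clause(x, inc, neg) for inc, neg in negative_clauses))

clause = ([0], [5])                        # hypothetical clause: good AND NOT bad
x_context_1 = [1, 0, 1, 0, 0, 0]           # plain BOW of "very good movie"
x_context_2 = [0, 1, 0, 1, 1, 0]           # plain BOW of "excellent film, enjoyable"
print(eval_clause(x_context_1, *clause))   # 1: the clause matches the first context
print(eval_clause(x_context_2, *clause))   # 0: it misses the semantically similar second one
```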
However, as shown in Fig. 4 , adding words to the BOWs that are semantically related, as proposed in the previous section, makes distributed word representation available to the TM.", "cite_spans": [], "ref_spans": [ { "start": 369, "end": 375, "text": "Fig. 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Distributed Word Representation in TM", "sec_num": "4.2" }, { "text": "The resulting BOW-boosted TM architecture is shown in Fig. 5 . Here each input feature is first expanded using the GloVe representation, adding semantically related words. Each feature is then transferred to its corresponding TAs, both in original and negated form. Each TA, in turn, decides whether to include or exclude its literal in the clause by taking part in a decentralized game. The actions of each TA is decided by its current state and updated by the the feedback it receives based on its action. As shown in the figure, the TA actions produce a collection of conjunctive clauses, joining the words into more complex linguistic patterns.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 60, "text": "Fig. 5", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Distributed Word Representation in TM", "sec_num": "4.2" }, { "text": "There are two types of feedback that guides the TA learning. They are Type I feedback and Type II feedback, detailed in (Granmo, 2018) . Type I feedback is triggered when the ground truth label is 1, i.e., y = 1. The purpose of Type I feedback is to include more literals from the BOW to refine the clauses, or to trim them by removing literals. The balance between refinement and trimming is controlled by a parameter called specificity, s. Type I feedback guides the clauses to provide true positive output, while simultaneously controlling overfitting by producing frequent patterns. Conversely, Type II feedback is triggered in case of false positive output. Its main aim is to introduce zero-valued literals into clauses when they give false positive output. The purpose is to change them so that they correctly output zero later in the learning process.", "cite_spans": [ { "start": 120, "end": 134, "text": "(Granmo, 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Distributed Word Representation in TM", "sec_num": "4.2" }, { "text": "Based on these feedback types, each TA in a clause receives Reward, Penalty or Inaction. The overall learning process is explained in detail by . ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributed Word Representation in TM", "sec_num": "4.2" }, { "text": "In this section, we evaluate our TM-based solution with the input features enhanced by distributed word representation. Here we use Glove pretrained word vector that is trained using CommonCrawl with the configuration of 42B tokens, 1.9M vocab, uncased, and 300d vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "5" }, { "text": "We have selected various types of datasets to investigate how broadly our method is applicable: R8 and R52 of Reuters, Movie Review (MR), and TREC-6. \u2022 Reuters 21578 dataset include two subsets: R52 and R8 (all-terms version). R8 is divided into 8 sections while there are 52 categories in R52. \u2022MR is a movie analysis dataset for binary sentiment classification with just one sentence per review (Pang and Lee, 2005) . 
In this study, we used a training/test split from (Tang et al., 2015) 1 .", "cite_spans": [ { "start": 397, "end": 417, "text": "(Pang and Lee, 2005)", "ref_id": "BIBREF23" }, { "start": 470, "end": 489, "text": "(Tang et al., 2015)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "\u2022TREC-6 is a question classification dataset (Li and Roth, 2002) . The task entails categorizing a query into six distinct categories (abbreviation, description, entity, human, location, numeric value).", "cite_spans": [ { "start": 45, "end": 64, "text": "(Li and Roth, 2002)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "5.1" }, { "text": "A TM has three parameters that must be initialized before training a model: number of clauses l, voting target T , and specificity s. We configure these parameters as follows. For R8, we use 2,500 clauses, a threshold of 80, and specificity 9. The vocabulary size is 5,000. For R52, we employ 1,500 clauses, the voting target is 80, and specificity is 9.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TM Parameters", "sec_num": "5.2" }, { "text": "Here, we use a vocabulary of size 6,000. For MR, the number of clauses is 3,000, the voting target is 80, and specificity is 9, with a vocabulary of size 5,000. Finally, for TREC, we use 2,000 clause, a voting target of 80, and specificity 9, with vocabulary size 6,000. These parameters are kept static as we explore various k and \u03b8 values for selecting similar words to facilitate comparison. The code and datasets are available online 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TM Parameters", "sec_num": "5.2" }, { "text": "Here, we demonstrate the performance on each of the datasets, exploring the effect of different k-values, i.e., 3, 5 and 10. The performance of the proposed technique for selected datasets with various values of k is compared in Table 1 . It can be seen that by using feature extension, performance is significantly enhanced. Both k = 3 and k = 5 outperform the simple BOW (k = 0). However, for this particular dataset, k = 10 performs poorly because extending each word to its 10 nearest neighbors includes many unnecessary contexts that have no significant impact on the classification. In terms of accuracy, k = 5 performs best for the R8 dataset. For the R52 dataset, the feature extension with k = 5 and k = 10 performs poorly compared to using k = 0 and k = 3. Here, k = 3 is the best-performing parameter. The improvement obtained by moving from a simple BOW to a BOW enhanced with semantically similar features is obvious in the case of the R52 dataset. Similarly, in the case of the TREC dataset, the performance of simple BOW (k = 0) is markedly outperformed by the feature extension techniques for all the tested k-values, with k = 5 and k = 10 being good candidates. The advantage of k = 10 over k = 5 is that k = 10 reaches its peak accuracy in an earlier epoch. Lastly, the performance of the MR is again clear that the feature extension technique outperforms the simple BOW (k = 0) with a high margin.", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 236, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Performance When Using Top k Nearest Neighbors", "sec_num": "5.3" }, { "text": "This section demonstrates the performance of our BOW enhancement approach when using various similarity thresholds \u03c6 for feature extension. 
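As a reminder of what is being evaluated here, the threshold-based variant of Section 3.3 can be sketched as follows, reusing the assumed embedding matrix W_e from the earlier sketch; this is an illustration only.

```python
import numpy as np

def threshold_indices(W_e, r, phi):
    """Return A_r: indices of all vocabulary words whose cosine similarity to w_r is at least phi.

    Unlike the top-k variant, the number of added words varies from word to word.
    """
    w_r = W_e[r]
    sims = W_e @ w_r / (np.linalg.norm(W_e, axis=1) * np.linalg.norm(w_r))
    return set(np.flatnonzero(sims >= phi))
```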
Here, \u03c6 refers to the cosine similarity between a word in the BOW and a target word from the overall vocabulary. Again, similarity is measured in the GloVe embedding space as the cosine of the angle \u03b8 between the embedding vectors compared, cos(\u03b8). For \u03c6, we here explore the values 0.5, 0.6, 0.7, 0.8, and 0.9, whose corresponding angles are 60 \u2022 , 53.13 \u2022 , 45.57 \u2022 , 36.86 \u2022 , and 25.84 \u2022 , respectively. The performance of the various \u03c6-values for the selected dataset is shown in Table 2 . For R8 dataset, feature extension using \u03c6 = 0.7, \u03c6 = 0.8, and \u03c6 = 0.9 outperforms the simple BOW (\u03c6 = 0) where \u03c6 = 0.7 being the best. In case of the R52 dataset, all of the investigated \u03c6-values outperform the simple BOW (\u03c6 = 0) where \u03c6 = 0.5 and \u03c6 = 0.8 performs the best. Similar trend is observed in case of TREC and MR dataset where feature extension outperforms the simple BOW. In most of the cases, however, a too strict similarity threshold \u03c6 tends to reduce performance because fewer features are added to the BOW. Even though using a looser similarity score thresholds also introduces unnecessary features, these do not seem to impact the formation of accurate clauses. Overall, our experiments show that using \u03c6-values from 0.5 to 0.7 peaks performance.", "cite_spans": [], "ref_spans": [ { "start": 625, "end": 632, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Performance When Using Neighbors Within a Similarity Threshold", "sec_num": "5.4" }, { "text": "We here compare our proposed model with selected text classification-and embedding methods. We have selected representative techniques from various main approaches, both those that leverage similar kinds of pre-trained word embedding and those that only use BOW. The selected baselines are: \u2022TF-IDF+LR: This is a bag-of-words model employing Term Frequency-Inverse Document Frequency (TF-IDF) weighting. Logistic Regression is used as a softmax classifier. \u2022CNN: The CNN-baselines cover both initialization with random word embedding (CNN-rand) as well as initialization with pretrained word embedding (CNNnon-static) (Kim, 2014) . \u2022 LSTM: The LSTM model that we employ here is from (Liu et al., 2016) , representing the entire text using the last hidden state. We tested this model with and without pre-trained word embeddings. \u2022 Bi-LSTM: Bidirectional LSTMs are widely used for text classification. We compare our model with Bi-LSTM fed with pre-trained word embeddings. \u2022PV-DBOW: PV-DBOW is a paragraph vector model where the word order is ignored. Logistic Regression is used as a softmax classifier (Le and Mikolov, 2014) . \u2022 PV-DM: PV-DM is also a paragraph vector model, however with word ordering taken into account. Logistic Regression is used as a softmax classifier (Le and Mikolov, 2014) . \u2022fastText: This baseline is a simple text classification technique that uses the average of the word embeddings provided by fastText as document embedding. The embedding is then fed to a linear classifier . We evaluate both the use of uni-grams and bigrams. 
\u2022 SWEM : SWEM applies simple pooling techniques over the word embeddings to obtain a document embedding (Shen et al., 2018b) .", "cite_spans": [ { "start": 618, "end": 629, "text": "(Kim, 2014)", "ref_id": "BIBREF15" }, { "start": 683, "end": 701, "text": "(Liu et al., 2016)", "ref_id": "BIBREF18" }, { "start": 1104, "end": 1126, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF16" }, { "start": 1277, "end": 1299, "text": "(Le and Mikolov, 2014)", "ref_id": "BIBREF16" }, { "start": 1664, "end": 1684, "text": "(Shen et al., 2018b)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Baselines", "sec_num": "5.5" }, { "text": "\u2022Graph-CNN-C: A graph CNN model uses convolutions over a word embedding similarity graph (Defferrard et al., 2016) , employing a Chebyshev filter. \u2022S 2 GC: This technique uses a modified Markov Diffusion Kernel to derive a variant of Graph Convolutional Network (GCN) (Zhu and Koniusz, 2021 ). \u2022LguidedLearn: It is a label-guided learning framework for text classification. This technique is applied to BERT as well ), which we use for comparison purposes here.", "cite_spans": [ { "start": 89, "end": 114, "text": "(Defferrard et al., 2016)", "ref_id": "BIBREF7" }, { "start": 268, "end": 290, "text": "(Zhu and Koniusz, 2021", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Baselines", "sec_num": "5.5" }, { "text": "\u2022Feature Projection (FP): It is a novel approach to improve representation learning through feature projection. Existing features are projected into an orthogonal space (Qin et al., 2020) .", "cite_spans": [ { "start": 169, "end": 187, "text": "(Qin et al., 2020)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison with Baselines", "sec_num": "5.5" }, { "text": "From Table 3 , we observe that the TM approaches that employ either of our feature extension techniques outperform several word embedding-based Logistic Regression approaches, such as PV-DBOW, PV-DM, and fastText. Similarly, the legacy TM outperforms sophisticated models like CNN and LSTM based on randomly initialized word embedding. Still, the legacy TM falls of other models when they are initialized by pre-trained word embeddings. By boosting the Boolean BOW with semantically similar features using our proposed technique, however, TM outperforms LSTM (pretrain) on the R8 dataset and performs similarly on R52 and MR. In addition to this, our proposed approach achieves quite similar performance compared to BERT, even though BERT has been pre-trained on a huge text corpus. However, it falls slightly short of sophisticated finetuned models like Lguided-BERT-1 and Lguided-BERT-3. Overall, our results show that our proposed feature extension technique for TMs significantly enhances accuracy, reaching state of the art accuracy. Importantly, this accuracy enhancement does not come at the cost of reduced interpretability, unlike DNNs, which we discuss below. The state of the art for the TREC dataset is different from the other three datasets, hence we report results separately in Table 4 . These results clearly show that although the basic TM model does not outperform the recent DNN-and transformer-based models, the feature-boosted TM outperforms all of (Dragos , et al., 2021) 87.20 TM 88.05\u00b1 1.52 TM with k 89.82\u00b1 1.18 TM with \u03c6 90.04\u00b1 0.94 Table 4 : Comparison of feature extended TM with the state of the art for TREC. 
Reported accuracy of TM is the mean of last 50 epochs of 5 independent experiments with their standard deviation.", "cite_spans": [ { "start": 1471, "end": 1494, "text": "(Dragos , et al., 2021)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 3", "ref_id": "TABREF6" }, { "start": 1294, "end": 1301, "text": "Table 4", "ref_id": null }, { "start": 1560, "end": 1567, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Comparison with Baselines", "sec_num": "5.5" }, { "text": "The proposed feature extension-based TM does not only impact accuracy. Perhaps surprisingly, our proposed technique also simplify the clauses that the TM produces, making them more meaningful in a semantic sense. To demonstrate this property, let us consider two samples from the MR dataset: S 1 =\"the cast is uniformly excellent and relaxed\" and S 2 =\"the entire cast is extraordinarily good\". Let the vocabulary, in this case, be [cast, excellent, relaxed, extraordinarily, good, bad, boring, worst] as shown in Fig. 6 .", "cite_spans": [ { "start": 432, "end": 501, "text": "[cast, excellent, relaxed, extraordinarily, good, bad, boring, worst]", "ref_id": null } ], "ref_spans": [ { "start": 514, "end": 520, "text": "Fig. 6", "ref_id": null } ], "eq_spans": [], "section": "Interpretation", "sec_num": "5.6" }, { "text": "the cast is uniformly excellent and relaxed the entire cast is extraordinarily good the cast is uniformly excellent/good and relaxed the entire cast is extraordinarily good/excellent TM with a simple BOW TM with a feature extended BOW Figure 6 : Clause learning semantic for multiple examples compared to simple BOW based TM.", "cite_spans": [], "ref_spans": [ { "start": 235, "end": 243, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Interpretation", "sec_num": "5.6" }, { "text": "As we can see, that the TM initialized with normal BOW uses two separate clauses to represent two examples. However, augmenting feature on TM uses only one clause that learns the semantic for multiple examples.This indeed makes interpretation of TM more powerful and meaningful as compared to simple BOW based TM.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Interpretation", "sec_num": "5.6" }, { "text": "In this paper, we aimed to enhance the performance of Tsetlin Machines (TMs) by introducing a novel way to exploit distributed feature representation for TMs. Given that a TM relies on Bag-of-words (BOW), it is not possible to introduce pre-trained word representation into a TM directly, without sacrificing the interpretability of the model. To address this intertwined challenge, we extended each word feature by using cosine similarity on the distributed word representation. We proposed two techniques for feature extension: (1) using the k nearest words in embedding space and (2) including words within a given cosine angle (\u03b8). Through this enhancement, the TM BOW can be boosted with pre-trained world knowledge in a simple yet effective way. Our experiment results showed that the enhanced TM not only achieve competitive accuracy compared to state of the art, but also outperform some of the sophisticated deep neural network (DNN) models. In addition, our BOW boosting also improved the interpretability of the model by increasing the scope of each clause, semantically relating more samples. 
We thus believe that our proposed approach significantly enhance the TM in the accuracy/interpretability continuum, establishing a new standard in the field of explainable NLP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "https://github.com/mnqu/PTE/tree/master/data/mr.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF1": { "ref_id": "b1", "title": "A scheme for continuous input to the Tsetlin machine with applications to forecasting disease outbreaks", "authors": [ { "first": "Xuan", "middle": [], "last": "Granmo", "suffix": "" }, { "first": "Morten", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "", "middle": [], "last": "Goodwin", "suffix": "" } ], "year": 2019, "venue": "Advances and Trends in Artificial Intelligence. From Theory to Practice", "volume": "", "issue": "", "pages": "564--578", "other_ids": {}, "num": null, "urls": [], "raw_text": "Granmo, Xuan Zhang, and Morten Goodwin. 2019. A scheme for continuous input to the Tsetlin machine with applications to forecasting disease outbreaks. In Advances and Trends in Artificial In- telligence. From Theory to Practice, pages 564-578. Springer International Publishing.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Why is attention not so interpretable. arXiv: Machine Learning", "authors": [ { "first": "Bing", "middle": [], "last": "Bai", "suffix": "" }, { "first": "J", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Guanhua", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kun", "middle": [], "last": "Bai", "suffix": "" }, { "first": "F", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bing Bai, J. Liang, Guanhua Zhang, Hao Li, Kun Bai, and F. Wang. 2020. Why is attention not so inter- pretable. arXiv: Machine Learning.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Using the tsetlin machine to learn human-interpretable rules for highaccuracy text categorization with medical applications", "authors": [ { "first": "Ole-Christoffer", "middle": [], "last": "Geir Thore Berge", "suffix": "" }, { "first": "", "middle": [], "last": "Granmo", "suffix": "" }, { "first": "Morten", "middle": [], "last": "Tor Oddbj\u00f8rn Tveit", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Goodwin", "suffix": "" }, { "first": "Bernt", "middle": [], "last": "Jiao", "suffix": "" }, { "first": "", "middle": [], "last": "Viggo Matheussen", "suffix": "" } ], "year": 2019, "venue": "IEEE Access", "volume": "7", "issue": "", "pages": "115134--115146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geir Thore Berge, Ole-Christoffer Granmo, Tor Odd- bj\u00f8rn Tveit, Morten Goodwin, Lei Jiao, and Bernt Viggo Matheussen. 2019. Using the tsetlin machine to learn human-interpretable rules for high- accuracy text categorization with medical applica- tions. 
IEEE Access, 7:115134-115146.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Measuring the novelty of natural language text using the conjunctive clauses of a tsetlin machine text classifier", "authors": [ { "first": "Bimal", "middle": [], "last": "Bhattarai", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Ole-Christoffer Granmo", "suffix": "" }, { "first": "", "middle": [], "last": "Jiao", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 13th International Conference on Agents and Artificial Intelligence", "volume": "2", "issue": "", "pages": "410--417", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bimal Bhattarai., Ole-Christoffer Granmo., and Lei Jiao. 2021. Measuring the novelty of natural lan- guage text using the conjunctive clauses of a tsetlin machine text classifier. In Proceedings of the 13th International Conference on Agents and Artificial In- telligence -Volume 2: ICAART,, pages 410-417. IN- STICC, SciTePress.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Concept labeling: Building text classifiers with minimal supervision", "authors": [ { "first": "Prem", "middle": [], "last": "Vijil Chenthamarakshan", "suffix": "" }, { "first": "Vikas", "middle": [], "last": "Melville", "suffix": "" }, { "first": "Richard", "middle": [ "D" ], "last": "Sindhwani", "suffix": "" }, { "first": "", "middle": [], "last": "Lawrence", "suffix": "" } ], "year": 2011, "venue": "IJCAI", "volume": "", "issue": "", "pages": "1225--1230", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vijil Chenthamarakshan, Prem Melville, Vikas Sind- hwani, and Richard D. Lawrence. 2011. Concept labeling: Building text classifiers with minimal su- pervision. In IJCAI, pages 1225-1230.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "authors": [ { "first": "Micha\u00ebl", "middle": [], "last": "Defferrard", "suffix": "" }, { "first": "Xavier", "middle": [], "last": "Bresson", "suffix": "" }, { "first": "Pierre", "middle": [], "last": "Vandergheynst", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "29", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Micha\u00ebl Defferrard, Xavier Bresson, and Pierre Van- dergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, volume 29. 
Curran Associates, Inc.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "ACL: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In ACL: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171- 4186, Minneapolis, Minnesota. ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Question classification using interpretable tsetlin machine", "authors": [ { "first": ",", "middle": [], "last": "Dragos", "suffix": "" }, { "first": "Constantin", "middle": [], "last": "Nicolae", "suffix": "" } ], "year": 2021, "venue": "International Workshop of Machine Reasoning. ACM International Conference on Web Search and Data Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dragos , , Constantin Nicolae, and dragosnicolae. 2021. Question classification using interpretable tsetlin machine. In International Workshop of Machine Reasoning. ACM International Conference on Web Search and Data Mining.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Bae: Bert-based adversarial examples for text classification", "authors": [ { "first": "Siddhant", "middle": [], "last": "Garg", "suffix": "" }, { "first": "Goutham", "middle": [], "last": "Ramakrishnan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddhant Garg and Goutham Ramakrishnan. 2020. Bae: Bert-based adversarial examples for text clas- sification.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The tsetlin machinea game theoretic bandit driven approach to optimal pattern recognition with propositional logic", "authors": [ { "first": "Ole-Christoffer", "middle": [], "last": "Granmo", "suffix": "" } ], "year": 2018, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ole-Christoffer Granmo. 2018. The tsetlin machine - a game theoretic bandit driven approach to optimal pattern recognition with propositional logic. ArXiv, abs/1804.01508.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The convolutional tsetlin machine. arXiv, 1905", "authors": [ { "first": "Ole-Christoffer", "middle": [], "last": "Granmo", "suffix": "" }, { "first": "Sondre", "middle": [], "last": "Glimsdal", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Jiao", "suffix": "" }, { "first": "Morten", "middle": [], "last": "Goodwin", "suffix": "" }, { "first": "Christian", "middle": [ "W" ], "last": "Omlin", "suffix": "" }, { "first": "Geir Thore", "middle": [], "last": "Berge", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ole-Christoffer Granmo, Sondre Glimsdal, Lei Jiao, Morten Goodwin, Christian W. Omlin, and Geir Thore Berge. 2019. The convolutional tsetlin machine. 
arXiv, 1905.09688.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Distributional structure. WORD", "authors": [ { "first": "S", "middle": [], "last": "Zellig", "suffix": "" }, { "first": "", "middle": [], "last": "Harris", "suffix": "" } ], "year": 1954, "venue": "", "volume": "10", "issue": "", "pages": "146--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zellig S. Harris. 1954. Distributional structure. WORD, 10(2-3):146-162.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "", "volume": "2", "issue": "", "pages": "427--431", "other_ids": {}, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In EACL: Volume 2, Short Pa- pers, pages 427-431, Valencia, Spain. ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Convolutional neural networks for sentence classification", "authors": [ { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1746--1751", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751, Doha, Qatar. ACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Distributed representations of sentences and documents", "authors": [ { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 31st International Conference on Machine Learning", "volume": "32", "issue": "", "pages": "1188--1196", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed repre- sentations of sentences and documents. In Proceed- ings of the 31st International Conference on Ma- chine Learning, volume 32 of Proceedings of Ma- chine Learning Research, pages 1188-1196, Bejing, China. PMLR.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Learning question classifiers", "authors": [ { "first": "Xin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2002, "venue": "COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xin Li and Dan Roth. 2002. Learning question classi- fiers. In COLING.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Recurrent neural network for text classification with multi-task learning", "authors": [ { "first": "Pengfei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xipeng", "middle": [], "last": "Qiu", "suffix": "" }, { "first": "Xuanjing", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2016, "venue": "IJCAI", "volume": "", "issue": "", "pages": "2873--2879", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. 
Recurrent neural network for text classification with multi-task learning. In IJCAI, page 2873-2879.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Label-guided learning for text classification", "authors": [ { "first": "X", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Song", "middle": [], "last": "Wang", "suffix": "" }, { "first": "X", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xinxin", "middle": [], "last": "You", "suffix": "" }, { "first": "J", "middle": [], "last": "Wu", "suffix": "" }, { "first": "D", "middle": [], "last": "Dou", "suffix": "" } ], "year": 2020, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Liu, Song Wang, X. Zhang, Xinxin You, J. Wu, and D. Dou. 2020. Label-guided learning for text classi- fication. ArXiv, abs/2002.10772.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Bridging semantics and syntax with graph algorithms -state-of-the-art of extracting biomedical relations", "authors": [ { "first": "Yuan", "middle": [], "last": "Luo", "suffix": "" }, { "first": "\u00d6zlem", "middle": [], "last": "Uzuner", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Szolovits", "suffix": "" } ], "year": 2017, "venue": "Briefings in bioinformatics", "volume": "18", "issue": "1", "pages": "160--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuan Luo, \u00d6zlem Uzuner, and Peter Szolovits. 2017. Bridging semantics and syntax with graph algo- rithms -state-of-the-art of extracting biomedical re- lations. Briefings in bioinformatics, 18 1:160-178.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Introduction to Information Retrieval", "authors": [ { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Prabhakar", "middle": [], "last": "Raghavan", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to Information Retrieval. Cambridge University Press.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Distributed representations of words and phrases and their compositionality", "authors": [ { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Greg", "middle": [ "S" ], "last": "Corrado", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "NIPS", "volume": "26", "issue": "", "pages": "3111--3119", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS, Nevada, USA, volume 26, pages 3111- 3119. 
Curran Associates, Inc.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", "authors": [ { "first": "Bo", "middle": [], "last": "Pang", "suffix": "" }, { "first": "Lillian", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2005, "venue": "ACL", "volume": "", "issue": "", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Exploit- ing class relationships for sentiment categorization with respect to rating scales. In ACL, page 115-124, Michigan, USA. ACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "EMNLP", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In EMNLP, Doha, Qatar, page 1532-1543.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Feature projection for improved text classification", "authors": [ { "first": "Qi", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Wenpeng", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "ACL", "volume": "", "issue": "", "pages": "8161--8171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qi Qin, Wenpeng Hu, and Bing Liu. 2020. Feature projection for improved text classification. In ACL, pages 8161-8171, Online. ACL.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Text categorization as a graph classification problem", "authors": [ { "first": "Fran\u00e7ois", "middle": [], "last": "Rousseau", "suffix": "" }, { "first": "Emmanouil", "middle": [], "last": "Kiagias", "suffix": "" }, { "first": "Michalis", "middle": [], "last": "Vazirgiannis", "suffix": "" } ], "year": 2015, "venue": "Long Papers)", "volume": "1", "issue": "", "pages": "1702--1712", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fran\u00e7ois Rousseau, Emmanouil Kiagias, and Michalis Vazirgiannis. 2015. Text categorization as a graph classification problem. In ACL (Volume 1: Long Pa- pers), pages 1702-1712, Beijing, China. ACL.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Mining interpretable rules for sentiment and semantic relation analysis using tsetlin machines", "authors": [ { "first": "Rupsa", "middle": [], "last": "Saha", "suffix": "" }, { "first": "Ole-Christoffer", "middle": [], "last": "Granmo", "suffix": "" }, { "first": "Morten", "middle": [], "last": "Goodwin", "suffix": "" } ], "year": 2020, "venue": "Artificial Intelligence XXXVII", "volume": "", "issue": "", "pages": "67--78", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rupsa Saha, Ole-Christoffer Granmo, and Morten Goodwin. 2020. Mining interpretable rules for sen- timent and semantic relation analysis using tsetlin machines. In Artificial Intelligence XXXVII, pages 67-78, Cham. Springer International Publishing.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Is attention interpretable? 
In ACL", "authors": [ { "first": "Sofia", "middle": [], "last": "Serrano", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "2931--2951", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sofia Serrano and Noah A. Smith. 2019. Is attention interpretable? In ACL, pages 2931-2951, Florence, Italy. ACL.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms", "authors": [ { "first": "Dinghan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Guoyin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenlin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Renqiang Min", "suffix": "" }, { "first": "Qinliang", "middle": [], "last": "Su", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chunyuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Henao", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Carin", "suffix": "" } ], "year": 2018, "venue": "ACL", "volume": "1", "issue": "", "pages": "440--450", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dinghan Shen, Guoyin Wang, Wenlin Wang, Mar- tin Renqiang Min, Qinliang Su, Yizhe Zhang, Chun- yuan Li, Ricardo Henao, and Lawrence Carin. 2018a. Baseline needs more love: On simple word-embedding-based models and associated pool- ing mechanisms. In ACL Volume 1: Long Papers, pages 440-450, Melbourne, Australia. ACL.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Baseline needs more love: On simple word-embedding-based models and associated pooling mechanisms", "authors": [ { "first": "Dinghan", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Guoyin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wenlin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Renqiang Min", "suffix": "" }, { "first": "Qinliang", "middle": [], "last": "Su", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chunyuan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Henao", "suffix": "" }, { "first": "Lawrence", "middle": [], "last": "Carin", "suffix": "" } ], "year": 2018, "venue": "In ACL", "volume": "1", "issue": "", "pages": "440--450", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dinghan Shen, Guoyin Wang, Wenlin Wang, Mar- tin Renqiang Min, Qinliang Su, Yizhe Zhang, Chun- yuan Li, Ricardo Henao, and Lawrence Carin. 2018b. Baseline needs more love: On simple word-embedding-based models and associated pool- ing mechanisms. In ACL (Volume 1: Long Papers), pages 440-450, Melbourne, Australia. 
ACL.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Parsing with compositional vector grammars", "authors": [ { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "John", "middle": [], "last": "Bauer", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "455--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing with compo- sitional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 455-465, Sofia, Bulgaria. Association for Computa- tional Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Pte: Predictive text embedding through large-scale heterogeneous text networks", "authors": [ { "first": "Jian", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Qiaozhu", "middle": [], "last": "Mei", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15", "volume": "", "issue": "", "pages": "1165--1174", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Tang, Meng Qu, and Qiaozhu Mei. 2015. Pte: Pre- dictive text embedding through large-scale hetero- geneous text networks. In Proceedings of the 21th ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, KDD '15, page 1165-1174, Sydney, NSW, Australia. Association for Computing Machinery.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Enriching feature engineering for short text samples by language time series analysis", "authors": [ { "first": "Yichen", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Kelly", "middle": [], "last": "Blincoe", "suffix": "" }, { "first": "A", "middle": [], "last": "Kempa-Liehr", "suffix": "" } ], "year": 2020, "venue": "EPJ Data Science", "volume": "9", "issue": "", "pages": "1--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yichen Tang, Kelly Blincoe, and A. Kempa-Liehr. 2020. Enriching feature engineering for short text samples by language time series analysis. EPJ Data Science, 9:1-59.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Word representations: A simple and general method for semi-supervised learning", "authors": [ { "first": "Joseph", "middle": [], "last": "Turian", "suffix": "" }, { "first": "Lev-Arie", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "384--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceed- ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384-394, Up- psala, Sweden. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Baselines and bigrams: Simple, good sentiment and topic classification", "authors": [ { "first": "Sida", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2012, "venue": "In ACL", "volume": "2", "issue": "", "pages": "90--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sida Wang and Christopher Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic clas- sification. In ACL (Volume 2: Short Papers), pages 90-94, Jeju Island, Korea.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Humanlevel interpretable learning for aspect-based sentiment analysis", "authors": [ { "first": "Lei", "middle": [], "last": "Rohan Kumar Yadav", "suffix": "" }, { "first": "Ole-Christoffer", "middle": [], "last": "Jiao", "suffix": "" }, { "first": "Morten", "middle": [], "last": "Granmo", "suffix": "" }, { "first": "", "middle": [], "last": "Goodwin", "suffix": "" } ], "year": 2021, "venue": "The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rohan Kumar Yadav, Lei Jiao, Ole-Christoffer Granmo, and Morten Goodwin. 2021. Human- level interpretable learning for aspect-based senti- ment analysis. In The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21). AAAI.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Interpretability in word sense disambiguation using tsetlin machine", "authors": [ { "first": "", "middle": [], "last": "Rohan Kumar Yadav", "suffix": "" }, { "first": "", "middle": [], "last": "Lei Jiao", "suffix": "" }, { "first": "Morten", "middle": [], "last": "Ole-Christoffer Granmo", "suffix": "" }, { "first": "", "middle": [], "last": "Goodwin", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 13th International Conference on Agents", "volume": "2", "issue": "", "pages": "402--409", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rohan Kumar Yadav., Lei Jiao., Ole-Christoffer Granmo., and Morten Goodwin. 2021. Interpretabil- ity in word sense disambiguation using tsetlin ma- chine. In Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART,, pages 402-409. INSTICC, SciTePress.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "R", "middle": [], "last": "Russ", "suffix": "" }, { "first": "Quoc V", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "32", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural In- formation Processing Systems, volume 32. 
Curran Associates, Inc.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Simple spectral graph convolution", "authors": [ { "first": "Hao", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Koniusz", "suffix": "" } ], "year": 2021, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhu and Piotr Koniusz. 2021. Simple spectral graph convolution. In International Conference on Learning Representations.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Similar words for an example \"very good movie\" using 300d GloVe word representation.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "(a) BOW input representation without distributed word representation. (b) BOW input using similar words based on distributed word representation.", "type_str": "figure" }, "FIGREF3": { "num": null, "uris": null, "text": "Architecture of TM using modified BOW based on word similarity.", "type_str": "figure" }, "TABREF2": { "html": null, "text": "Comparison of feature extended TM with several parameters for k.", "type_str": "table", "content": "", "num": null }, "TABREF4": { "html": null, "text": "Comparison of feature extended TM with several parameters for \u03c6.", "type_str": "table", "content": "
", "num": null }, "TABREF6": { "html": null, "text": "Comparison of feature extended TM with the state of the art for R8, R52 and MR. Reported accuracy of TM is the mean of last 50 epochs of 5 independent experiments with their standard deviation.", "type_str": "table", "content": "
those models except, understandably, BAE: BERT (Garg and Ramakrishnan, 2020).
Model            TREC
LSTM             87.19
FP+LSTM          88.83
Transformer      87.33
FP+Transformer   89.51
BAE: BERT        97.6
TM
", "num": null } } } }