{ "paper_id": "D19-1028", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:06:41.393066Z" }, "title": "Learning to Bootstrap for Entity Set Expansion", "authors": [ { "first": "Lingyong", "middle": [], "last": "Yan", "suffix": "", "affiliation": { "laboratory": "Chinese Information Processing Laboratory", "institution": "", "location": {} }, "email": "lingyong2014@iscas.ac.cn" }, { "first": "Xianpei", "middle": [], "last": "Han", "suffix": "", "affiliation": { "laboratory": "Chinese Information Processing Laboratory", "institution": "", "location": {} }, "email": "xianpei@iscas.ac.cn" }, { "first": "", "middle": [], "last": "Le Sun", "suffix": "", "affiliation": { "laboratory": "Chinese Information Processing Laboratory", "institution": "", "location": {} }, "email": "" }, { "first": "Ben", "middle": [], "last": "He", "suffix": "", "affiliation": { "laboratory": "Chinese Information Processing Laboratory", "institution": "", "location": {} }, "email": "benhe@ucas.ac.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Bootstrapping for Entity Set Expansion (ESE) aims at iteratively acquiring new instances of a specific target category. Traditional bootstrapping methods often suffer from two problems: 1) delayed feedback, i.e., the pattern evaluation relies on both its direct extraction quality and the extraction quality in later iterations. 2) sparse supervision, i.e., only few seed entities are used as the supervision. To address the above two problems, we propose a novel bootstrapping method combining the Monte Carlo Tree Search (MCTS) algorithm with a deep similarity network, which can efficiently estimate delayed feedback for pattern evaluation and adaptively score entities given sparse supervision signals. 
Experimental results confirm the effectiveness of the proposed method.", "pdf_parse": { "paper_id": "D19-1028", "_pdf_hash": "", "abstract": [ { "text": "Bootstrapping for Entity Set Expansion (ESE) aims at iteratively acquiring new instances of a specific target category. Traditional bootstrapping methods often suffer from two problems: 1) delayed feedback, i.e., the evaluation of a pattern relies not only on its direct extraction quality but also on the extraction quality in later iterations; 2) sparse supervision, i.e., only a few seed entities are used as supervision. To address the above two problems, we propose a novel bootstrapping method combining the Monte Carlo Tree Search (MCTS) algorithm with a deep similarity network, which can efficiently estimate delayed feedback for pattern evaluation and adaptively score entities given sparse supervision signals. Experimental results confirm the effectiveness of the proposed method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Bootstrapping is widely used for Entity Set Expansion (ESE), which acquires new instances of a specific category by iteratively evaluating and selecting patterns, while extracting and scoring entities. For example, given seeds {London, Paris, Beijing} for capital entity expansion, a bootstrapping system for ESE iteratively selects effective patterns, e.g., "the US Embassy in *", and extracts new capital entities, e.g., Moscow, using the selected patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main challenges of effective bootstrapping for ESE stem from delayed feedback and sparse supervision. Firstly, bootstrapping is an iterative process, where noise introduced by currently selected patterns can affect successive iterations (Movshovitz-Attias and Cohen, 2012; Qadir et al., 2015) . 
Indeed, the pattern evaluation relies not only on its direct extraction quality but also on the extraction quality in later iterations, which are correspondingly denoted as instant feedback and delayed feedback in this paper. Figure 1 : Example of the delayed feedback problem, which expands the capital seeds {London, Paris, Beijing}. We demonstrate the top entities extracted by a pattern (correct ones are shown in blue), where P is the precision of extracted entities.", "cite_spans": [ { "start": 244, "end": 279, "text": "(Movshovitz-Attias and Cohen, 2012;", "ref_id": "BIBREF18" }, { "start": 280, "end": 299, "text": "Qadir et al., 2015)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 493, "end": 501, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For instance, as shown in Figure 1 , although "* is a big city" and "the US Embassy in *" have equal direct extraction quality, the former is considered less useful since its later extracted entities are mostly unrelated. As a result, selecting patterns with high instant feedback but low delayed feedback can cause the semantic drift problem (Curran et al., 2007) , where the later extracted entities belong to other categories. Secondly, the above difficulty is further compounded by sparse supervision, i.e., using only seed entities as supervision, since it provides little evidence for deciding whether an extracted entity belongs to the same category as the seed entities.", "cite_spans": [ { "start": 371, "end": 392, "text": "(Curran et al., 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 58, "end": 66, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address the above two challenges, we propose a novel bootstrapping method combining the Monte Carlo Tree Search (MCTS) algorithm with a deep similarity network, aiming to effectively evaluate the delayed feedback of patterns and adaptively score entities given sparse supervision signals. Specifically, our method tackles the delayed feedback problem by enhancing the traditional bootstrapping method using the MCTS algorithm, which effectively estimates each pattern's delayed feedback via efficient multi-step lookahead search. In this way, our method can select the pattern based on its delayed feedback rather than instant feedback, which is beneficial in that the former feedback is regarded as more reliable and accurate for bootstrapping. To resolve the sparse supervision problem, we propose a deep similarity network-pattern mover similarity network (PMSN), which uniformly embeds entities and patterns using the distribution vectors on context pattern embeddings, and measures their semantic similarity to seeds as their ranking scores based on those embeddings; furthermore, we combine the PMSN with the MCTS, and fine-tune the distribution vectors using the estimated delayed feedback. 
In this way, our method can adaptively embed and score entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Major contributions of this paper are three-fold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Enhanced bootstrapping via the MCTS algorithm to estimate delayed feedback in bootstrapping. To the best of our knowledge, this is the first work to combine bootstrapping with MCTS for entity set expansion. \u2022 A novel deep similarity network to evaluate different categories of entities in the bootstrapping for Entity Set Expansion. \u2022 Adaptive entity scoring by combining the deep similarity network with MCTS.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Traditional bootstrapping systems for ESE are usually provided with sparse supervision, i.e., only a few seed entities, and iteratively extract new entities from the corpus by performing the following steps, as demonstrated in Figure 2 (a). Pattern generation. Given seed entities and the extracted entities (known entities), a bootstrapping system for ESE first generates patterns from the corpus. In this paper, we use lexico-syntactic surface words around known entities as patterns.", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 231, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Pattern evaluation. This step evaluates generated patterns using sparse supervision and other sources of evidence, e.g., pattern embedding similarity. 
Many previous studies (Riloff and Jones, 1999; Curran et al., 2007; Gupta and Manning, 2014) use the RlogF function or its variants to evaluate patterns, which usually estimate only the instant feedback of each pattern.", "cite_spans": [ { "start": 173, "end": 197, "text": "(Riloff and Jones, 1999;", "ref_id": "BIBREF24" }, { "start": 198, "end": 218, "text": "Curran et al., 2007;", "ref_id": "BIBREF5" }, { "start": 219, "end": 243, "text": "Gupta and Manning, 2014)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Entity expansion. This step selects the top patterns to match new candidate entities from the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Entity scoring. This step scores candidate entities using sparse supervision, bootstrapping, or other external sources of evidence. The top-scored entities are then added to the extracted entity set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "As aforementioned, traditional bootstrapping systems for ESE do not consider the delayed feedback when evaluating patterns, leaving considerable room for further improvement. To estimate the delayed feedback of a pattern, a simple solution is to perform lookahead search for a fixed number of steps and estimate the quality of its successive extracted entities. However, lookahead search may suffer from efficiency issues brought by the enormous search space, since there are many candidate patterns at each step. 
Besides, due to the sparse supervision problem, entity scoring is often unreliable, which in turn influences the delayed feedback estimation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "To address the above two problems, this paper enhances the bootstrapping system using the Monte Carlo Tree Search (MCTS) algorithm for lookahead search, combined with a pattern mover similarity network (PMSN) for better entity scoring. Specifically, we use MCTS for efficient lookahead search by pruning bad patterns. We additionally use the PMSN to score entities when estimating the delayed feedback, given the sparse supervision signals, and fine-tune the PMSN using the delayed feedback estimated by MCTS. In this way, the MCTS algorithm and the PMSN are devised to enhance each other, resulting in efficient delayed feedback estimation for pattern evaluation and reliable entity scoring.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "In this section, we describe how to enhance the traditional bootstrapping for ESE using the MCTS algorithm for efficient delayed feedback estimation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Enhancing Bootstrapping via Monte Carlo Tree Search", "sec_num": "3" }, { "text": "To estimate the delayed feedback, this paper uses the Monte Carlo Tree Search (MCTS) algorithm for efficient lookahead search. At each bootstrapping iteration, the MCTS algorithm performs multiple lookahead search procedures (MCTS simulations). Starting from the same root node, each MCTS simulation performs forward search by iteratively constructing sub-nodes and moving to one of them. 
Therefore, the whole MCTS algorithm forms a tree structure (see Figure 2 (b)), where a node s represents known entities, i.e., both seed entities and previously extracted entities, and an edge linking a node s to one of its sub-nodes s' represents selecting a pattern p to expand s. Figure 2 (b): The Monte Carlo Tree Search for the pattern evaluation in a bootstrapping system for entity set expansion. The red circles refer to the search nodes (i.e., seed entities plus extracted entities). A unidirectional edge represents the selection of a pattern to match new entities, which results in a new node. At the very beginning, the root tree node s_0 is constructed using seed entities and the extracted entities from previous iterations, and is fixed across different simulations. Besides, for each (s, p) pair, we store a cumulative reward Q(s, p) and a visit count N(s, p) during tree search for the subsequent reward function defined in Section 3.3.", "cite_spans": [], "ref_spans": [ { "start": 469, "end": 477, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "Specifically, each MCTS simulation contains four stages, as demonstrated in Figure 2 ", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 84, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "(b):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "Selection. Starting from s_0 , each simulation first traverses the MCTS search tree until reaching a leaf node s_L (one that has not been visited before) or reaching a fixed depth. 
Each traversal step i corresponds to selecting a pattern p_i by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_i = \\arg\\max_p \\left( Q(s, p) + \\mu(s, p) \\right)", "eq_num": "(1)" } ], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "\\mu(s, p) \\propto \\frac{p_\\sigma(s, p)}{1 + N(s, p)}, and p_\u03c3(s, p)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "is the prior probability of p returned by the policy network p_\u03c3 , which is described in detail in Section 3.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "Within an MCTS simulation, the lookahead search space is mainly reduced in this stage. Specifically, according to Eq. (1), the lookahead search is more likely to select potential patterns with high cumulative rewards or high prior probabilities, rather than exploring all patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "Expansion. When the traversal reaches a leaf node s_L , we expand this node by selecting a new pattern to match more entities. Since new patterns lack cumulative rewards, we select the new pattern with the highest prior probability returned by p_\u03c3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "Evaluation. 
Once the above expansion stage finishes or the simulation reaches a fixed depth, the reward R for the leaf node is returned by first quickly selecting multiple patterns to expand the leaf node and then evaluating the quality of all newly extracted entities in this simulation. We herein use the RlogF function rather than the policy network to quickly select patterns (running the RlogF function is much faster than running the policy network). The reward function is described in detail in Section 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "Backup. At this stage, the returned reward is used to update the cumulative rewards and visit counts of previous (s, p) pairs by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "N(s, p) = \\sum_{j=1}^{n} 1(s, p, j), \\quad Q(s, p) = \\frac{1}{N(s, p)} \\sum_{j=1}^{n} 1(s, p, j) \\cdot R \\quad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "where 1(s, p, j) indicates whether the edge (s, p) was traversed during the j-th simulation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "After finishing all MCTS simulations, we use the cumulative reward of each (s_0 , p) pair as the delayed feedback for pattern p. Because the cumulative rewards are updated many times using the quality evaluation of their future extracted entities, the cumulative reward of each (s_0 , p) pair can be regarded as a precise approximation of the delayed feedback if we have a proper reward function. 
As a result, our method selects the top patterns with the highest cumulative rewards, which are more likely to extract correct entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Efficient Delayed Feedback Estimation via MCTS", "sec_num": "3.1" }, { "text": "The prior policy network is mainly used to prune bad patterns and therefore reduce the search space in the MCTS. Intuitively, if a pattern is not similar to the other patterns around some entities, it is likely a general or noisy pattern for these entities, and therefore should be pruned. To this end, this paper proposes a novel deep similarity network, namely the Pattern Mover Similarity Network (PMSN), which uniformly embeds patterns, entities, and entity sets, and estimates their embedding similarities. We describe this model in detail in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior Policy using Pattern Mover Similarity Network", "sec_num": "3.2" }, { "text": "Particularly, for a pattern p linked from a tree node s, the PMSN is used as the prior policy network to compute the prior probability by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior Policy using Pattern Mover Similarity Network", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p_\\sigma(s, p) = \\frac{\\mathrm{SIM}(p, E)}{\\sum_{p'} \\mathrm{SIM}(p', E)}", "eq_num": "(3)" } ], "section": "Prior Policy using Pattern Mover Similarity Network", "sec_num": "3.2" }, { "text": "where E is the set of known entities included in node s, and SIM(p, E) is the similarity of the pattern p to the entity set E using the PMSN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior Policy using Pattern Mover Similarity Network", "sec_num": "3.2" }, { "text": "To further reduce the search space (note that the number of patterns at each step can be tens of thousands), we use the RlogF 
function to retain only the top k patterns for lookahead search. In the experiments, we set k to a reasonably large value, i.e., 200, to balance efficiency and effectiveness.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Prior Policy using Pattern Mover Similarity Network", "sec_num": "3.2" }, { "text": "The reward function is critical for efficiently estimating the real delayed feedback of each pattern. Intuitively, a pattern should have higher delayed feedback if it extracts more similar entities and fewer unrelated entities. Based on this intuition, we devise the reward function as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reward Function in MCTS", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R = \\frac{\\sum_{e \\in E'} \\mathrm{SIM}(e, E_0)}{|E'|} \\cdot \\sigma\\left(\\frac{|E'|}{a}\\right)", "eq_num": "(4)" } ], "section": "Reward Function in MCTS", "sec_num": "3.3" }, { "text": "where E_0 is the set of known entities in the root node, E' is the set of newly extracted entities, SIM(e, E_0) is the similarity score of a newly extracted entity e to the known entities, \u03c3(\u2022) is the sigmoid function, and a is a "temperature" hyperparameter. The above reward function can be regarded as a trade-off between the extraction quality (the first term) and the extraction amount (the second term). To compute the similarity score, we also exploit the PMSN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reward Function in MCTS", "sec_num": "3.3" }, { "text": "The pattern mover similarity network (PMSN) is a unified model for adaptively scoring the similarity of entities or patterns to seed entities. 
Specifically, the pattern mover similarity network contains two components: 1) the adaptive distributional pattern embeddings (ADPE), which adaptively represent patterns, entities, and entity sets in a unified way; 2) the pattern mover similarity (PMS) measurement, which estimates the similarity of two ADPEs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pattern Mover Similarity Network", "sec_num": "4" }, { "text": "The PMSN model is mainly used in three ways. 1) The PMSN is used as the prior policy network in the MCTS algorithm to evaluate the similarity of patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pattern Mover Similarity Network", "sec_num": "4" }, { "text": "2) The PMSN is used to evaluate the newly extracted entities within the MCTS simulation, whose evaluation scores are subsequently used to update rewards.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pattern Mover Similarity Network", "sec_num": "4" }, { "text": "3) The PMSN is also used as the entity scoring function at the Entity Scoring stage of the bootstrapping process, as mentioned in Section 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pattern Mover Similarity Network", "sec_num": "4" }, { "text": "In this section, we first describe how to embed patterns; then, we introduce how to obtain the basic distributional pattern embeddings that uniformly represent entities and patterns without adaptation; finally, we introduce the adaptation mechanism combined with the MCTS algorithm. Pattern Embedding. As a basic step of our PMSN model, we first embed a context pattern of an entity as a single embedding vector. Specifically, we use the average of the pre-trained GloVe (Pennington et al., 2014) word embeddings of a pattern's surface text as the pattern embedding. We filter out patterns containing at least two OOV terms. 
According to our pilot experiments, replacing the average GloVe word embeddings with alternatives such as Convolutional Neural Networks and Recurrent Neural Networks does not influence performance.", "cite_spans": [ { "start": 492, "end": 517, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Adaptive Distributional Pattern Embeddings", "sec_num": "4.1" }, { "text": "Basic distributional pattern embeddings without adaptation. Based on the single embedding of each context pattern, this paper represents an entity using a distributional vector over its context pattern embeddings, called distributional pattern embeddings (DPE). The intuition behind this is that each context pattern represents one aspect of the meaning of an entity according to the distributional hypothesis in linguistics (Harris, 1954) , while the importance of different patterns to the semantics of an entity varies. Therefore, we use a distributional vector, which stores the importance scores of different patterns, together with the context pattern embeddings to represent the semantics of an entity. To estimate each pattern's importance score, we suggest that a context pattern is important to the semantics of an entity if it matches the entity frequently and matches as few other entities as possible. 
Therefore, we design an importance score function for each context pattern p of an entity e as follows:", "cite_spans": [ { "start": 415, "end": 429, "text": "(Harris, 1954)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Adaptive Distributional Pattern Embeddings", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w(p, e) = N(e, p) \\times \\log \\frac{N(e, p)}{C(p)}", "eq_num": "(5)" } ], "section": "Adaptive Distributional Pattern Embeddings", "sec_num": "4.1" }, { "text": "where C(p) is the number of different entities matched by p, and N(e, p) is the frequency of p matching entity e. Ultimately, the above importance scores are mapped to the distributional probabilities of each entity's context patterns such that all probabilities sum up to 1. In addition, we estimate the basic DPE for a single pattern and a whole entity set. Specifically, a single pattern can be regarded as a special "entity", whose context pattern is only the pattern itself. Similarly, as shown in Figure 3 , an entity set can be regarded as another special "entity", whose context patterns are the union of all context patterns of the entities in this set. Besides, the importance score function for context patterns of an entity set is slightly different from the one in Eq. (5):", "cite_spans": [], "ref_spans": [ { "start": 501, "end": 509, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Adaptive Distributional Pattern Embeddings", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w(p, E) = N(E, p) \\times \\log \\frac{N(E, p)}{C(p)}", "eq_num": "(6)" } ], "section": "Adaptive Distributional Pattern Embeddings", "sec_num": "4.1" }, { "text": "where E is the entity set, and N(E, p) is the frequency of p matching all entities in E. 
Finally, we denote the DPE as < X, w >, where X is the context pattern embedding matrix and w is the vector of distributional probabilities. For efficiency, we only keep the top n most important patterns. Therefore, X is actually an n \u00d7 d matrix, where d is the dimension of each pattern embedding, and w is an n-dimensional vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptive Distributional Pattern Embeddings", "sec_num": "4.1" }, { "text": "Adaptive distributional pattern embeddings combined with MCTS. Although the basic DPE can provide unified representations for both patterns and entities, it could still fail to represent the underlying semantics of seed entities, as unrelated patterns may match many other entities. To address this problem, we combine the MCTS algorithm with the PMSN to fine-tune the distributional vector of the basic DPE, resulting in an Adaptive Distributional Pattern Embedding (ADPE). As shown in Figure 3 , at each iteration, multiple MCTS simulations are performed to accumulate the long-term feedback of selecting a pattern, where the PMSN is used for entity scoring; after that, the delayed feedback can be efficiently calculated for each pattern. Apart from being used to evaluate the reward of selecting a pattern, the delayed feedback can also be used to estimate the importance score of a pattern, since patterns with high delayed feedback can extract more correct entities and fewer incorrect entities in the future. 
Therefore, at each iteration, we fine-tune the distributional probabilities after the MCTS simulations as follows:", "cite_spans": [], "ref_spans": [ { "start": 487, "end": 495, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Adaptive Distributional Pattern Embeddings", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w_t(p, e) \\propto w_{t-1}(p, e) \\cdot Q(s_0, p)", "eq_num": "(7)" } ], "section": "Adaptive Distributional Pattern Embeddings", "sec_num": "4.1" }, { "text": "where w_{t-1}(p, e) is the probability of p at iteration t-1, and Q(s_0, p) is the returned cumulative reward of p at iteration t.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Adaptive Distributional Pattern Embeddings", "sec_num": "4.1" }, { "text": "The similarity measurement for two ADPEs is important, since it is used for both delayed feedback estimation and entity scoring. Intuitively, two entities embedded by two ADPEs can be regarded as similar to each other if they have similar context patterns and similar distributions over these patterns. Therefore, the similarity measurement should take both the context pattern embeddings and the corresponding distributional vectors into consideration. Inspired by Kusner et al. (2015) on the sentence similarity measurement, we devise a similarity measurement for two ADPEs as follows:", "cite_spans": [ { "start": 463, "end": 483, "text": "Kusner et al. (2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Pattern Mover Similarity", "sec_num": "4.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\max_{T \\geq 0} \\sum_{i,j=1}^{n} T_{ij} \\cdot \\mathrm{SIM}(i, j) \\quad \\mathrm{s.t.} \\quad \\sum_{j=1}^{n} T_{ij} = w_i, \\forall i \\in \\{1, ..., n\\}; \\quad \\sum_{i=1}^{n} T_{ij} = w'_j, \\forall j \\in \\{1, ..., n\\}", "eq_num": "(8)" } ], "section": "Pattern Mover Similarity", "sec_num": "4.2" }, { "text": "where SIM(i, j) is the cosine similarity between the i-th pattern embedding of one entity and the j-th pattern embedding of the other entity. We denote the above measurement as the Pattern Mover Similarity (PMS) measurement. Datasets. We conduct experiments on three public datasets: Google Web 1T (Brants and Franz, 2006) , APR, and Wiki (Shen et al., 2017) . 1) Google Web 1T contains a large-scale collection of n-grams compiled from a one-trillion-word corpus. Following Shi et al. (2014) , we use 5-grams as the entity context and filter out those 5-grams containing only stopwords or common words. We use the 13 categories of entities (see Table 1 ) listed in Shi et al. (2014) and compare our method with traditional bootstrapping methods for ESE on this corpus. 2) APR (2015 news from AP and Reuters) and Wiki (a subset of English Wikipedia) are two datasets published by Shen et al. (2017) . Each of them contains about 1 million sentences. We use the 12 categories of entities listed in Shen et al. (2017) and compare the final entity scoring performance on both datasets.", "cite_spans": [ { "start": 298, "end": 322, "text": "(Brants and Franz, 2006)", "ref_id": "BIBREF3" }, { "start": 339, "end": 358, "text": "(Shen et al., 2017)", "ref_id": "BIBREF25" }, { "start": 464, "end": 481, "text": "Shi et al. (2014)", "ref_id": "BIBREF26" }, { "start": 648, "end": 665, "text": "Shi et al. (2014)", "ref_id": "BIBREF26" }, { "start": 862, "end": 880, "text": "Shen et al. (2017)", "ref_id": "BIBREF25" }, { "start": 986, "end": 1004, "text": "Shen et al. (2017)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 630, "end": 637, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Pattern Mover Similarity", "sec_num": "4.2" }, { "text": "Baselines. 
To evaluate the effectiveness of the MCTS and the PMSN, we use several baselines: 1) POS: a bootstrapping method which only uses positive seeds without any other constraint;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pattern Mover Similarity", "sec_num": "4.2" }, { "text": "2) MEB (Curran et al., 2007) : a mutual exclusion bootstrapping method, which uses category exclusion as a constraint on bootstrapping;", "cite_spans": [ { "start": 7, "end": 28, "text": "(Curran et al., 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Pattern Mover Similarity", "sec_num": "4.2" }, { "text": "3) COB (Shi et al., 2014) : a probabilistic bootstrapping method which uses both positive and negative seeds. 4) SetExpan (Shen et al., 2017) : a corpus-based entity set expansion method, which adaptively selects context features and ensembles them without supervision to score entities.", "cite_spans": [ { "start": 7, "end": 25, "text": "(Shi et al., 2014)", "ref_id": "BIBREF26" }, { "start": 122, "end": 141, "text": "(Shen et al., 2017)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Pattern Mover Similarity", "sec_num": "4.2" }, { "text": "Specifically, we compare baselines (1)-(3) and our method on Google Web 1T; we compare baseline (4) and our method on APR and Wiki 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pattern Mover Similarity", "sec_num": "4.2" }, { "text": "Metrics. We use P@n (precision at top n) and the mean average precision (MAP) on Google Web 1T, as in Shi et al. (2014) . As for the APR and the Wiki, we use MAP@n (n=10,20,50) to evaluate the entity scoring performance of our method. In our experiments, we manually select frequent entities as the seeds from these datasets; the correctness of all extracted entities is manually judged with external supporting resources, e.g., the entity list collected from Wikipedia 2 .", "cite_spans": [ { "start": 102, "end": 119, "text": "Shi et al. 
(2014)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Pattern Mover Similarity", "sec_num": "4.2" }, { "text": "Comparison with three baseline methods on Google Web 1T. Table 2 shows the performance of different bootstrapping methods on Google Web 1T. We can see that our full model outperforms the three baseline methods: compared with POS, our method achieves a 41% improvement in P@100, a 35% improvement in P@200 and a 45% improvement in MAP; compared with MEB, our method achieves a 24% improvement in P@100 and an 18% improvement in P@200; compared with COB, our method achieves a 3% improvement in both the P@100 and P@200 metrics, and a 2% improvement in MAP. The above findings indicate that our method can extract more correct entities with higher ranking scores than the baselines.", "cite_spans": [], "ref_spans": [ { "start": 57, "end": 64, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.2" }, { "text": "Comparison with SetExpan on APR and Wiki. To further verify that our method can learn a better representation and adaptively score entities in ESE, we compare our method with the state-of-the-art entity set expansion method, SetExpan, which is a non-bootstrapping method, on the APR and Wiki datasets (see Table 3 ). From Table 3 , we can see that our method outperforms SetExpan on both datasets: on APR, our method achieves a 6% improvement in MAP@10 and a 2% improvement in MAP@50; on Wiki, our method achieves a 2% improvement in MAP@10 and a 4% improvement in MAP@50.
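The Pattern Mover Similarity of Section 4.2 (equation (8), with cosine SIM(i, j) scores) is a small optimal-transport problem: maximize the total transported similarity subject to the row-sum and column-sum constraints. A minimal sketch, assuming explicit pattern weights and using SciPy's generic LP solver; the function name `pattern_mover_similarity` and the toy vectors are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import linprog

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def pattern_mover_similarity(E1, w1, E2, w2):
    """Solve max_T sum_ij T_ij * SIM(i, j) with row sums w1 and column sums w2.

    E1: (n, d) pattern embeddings of one entity, w1: (n,) weights summing to 1.
    E2: (m, d) pattern embeddings of the other, w2: (m,) weights summing to 1.
    """
    n, m = len(E1), len(E2)
    sim = np.array([[cosine(E1[i], E2[j]) for j in range(m)] for i in range(n)])
    c = -sim.reshape(-1)  # linprog minimizes, so negate to maximize
    A_eq, b_eq = [], []
    for i in range(n):  # row-sum constraints: sum_j T_ij = w1[i]
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row); b_eq.append(w1[i])
    for j in range(m):  # column-sum constraints: sum_i T_ij = w2[j]
        col = np.zeros(n * m); col[j::m] = 1.0
        A_eq.append(col); b_eq.append(w2[j])
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
    return -res.fun
```

For small pattern sets (the paper uses around 100 patterns per entity) this LP is cheap; dedicated transport solvers would be preferable at scale.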
The above results further confirm that our method can improve the performance of bootstrapping for Entity Set Expansion.", "cite_spans": [], "ref_spans": [ { "start": 325, "end": 332, "text": "Table 3", "ref_id": "TABREF5" }, { "start": 341, "end": 348, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "5.2" }, { "text": "Comparison with the Ours-MCTS method and the Ours-PMSN method. From Table 2 , we can also see that if we replace the Monte Carlo Tree Search with top-n pattern selection, the performance decreases by 19% in P@100 and 17% in P@200; if we replace the PMSN with word embeddings, the performance decreases by 34% in P@100 and 27% in P@200. The results show that both the PMSN and the MCTS algorithm are critical for our model's performance. Remarkably, the PMSN and the MCTS algorithm can enhance each other, in that the PMSN can learn a better representation by combining with the MCTS algorithm, and the MCTS can in turn effectively estimate delayed feedback using the PMSN.", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 77, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Detailed Analysis", "sec_num": "5.3" }, { "text": "Performance of our full method on different categories of Google Web 1T. From Table 4 , we can see that our method achieves high performance in most categories except for ELE, TTL and FAC entities. The lesser performance of our method on ELE entities is likely caused by data sparseness: there are fewer than 150 ELE entities in total.
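The MCTS pattern selection ablated above (Ours-MCTS) balances a pattern's estimated delayed feedback against exploration. As a rough illustration of such a selection rule, here is a generic UCT/UCB1 scorer; the exploration constant and the dictionary layout are assumptions, not the paper's exact formulation:

```python
import math

def uct_select(children, c=1.4):
    """Return the candidate pattern node with the highest UCT score.

    children: list of dicts, each with visit count "n" and total
    simulated reward "w" accumulated during MCTS rollouts.
    """
    total = sum(ch["n"] for ch in children) or 1
    def score(ch):
        if ch["n"] == 0:
            return float("inf")  # always try unvisited patterns first
        exploit = ch["w"] / ch["n"]  # average estimated delayed feedback
        explore = c * math.sqrt(math.log(total) / ch["n"])
        return exploit + explore
    return max(children, key=score)
```

Plain top-n selection corresponds to keeping only the exploit term, which is exactly the ablation that loses 19% in P@100 above.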
The lower performance on TTL and FAC entities is likely due to the fact that their context patterns are similar to those of person and location names respectively, which makes them easily mistaken for special person names and location names.", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 86, "text": "Table 4", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Detailed Analysis", "sec_num": "5.3" }, { "text": "Influence of the number of context patterns. Figure 4 shows the performance of our full method under different numbers of context patterns. It can be seen that the number of context patterns used to embed entities and entity sets heavily influences the performance. Compared with the settings using fewer context patterns (e.g., 10, 20, or 50), using more context patterns, i.e., 100, in the PMSN yields superior performance since it provides more context information for comparing two entities. However, adding even more context patterns to the PMSN causes a performance decrease of 3%, likely because noise is introduced when too many patterns are considered. Top patterns selected by different methods. To demonstrate the effectiveness of delayed feedback in our method, we illustrate the top-1 pattern in the first five iterations of three methods in Table 5 . From Table 5 , we can see that the top patterns selected by our full method are more related to the seed entities than those of the other two baselines.
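The P@n and MAP numbers reported throughout this section follow the standard ranking-metric definitions; a minimal sketch with hypothetical `ranked`/`gold` inputs (note that MAP conventions differ in whether AP divides by the number of correct entities retrieved or the total number of gold entities):

```python
def precision_at_n(ranked, gold, n):
    """P@n: fraction of the top-n ranked entities that are correct."""
    return sum(1 for e in ranked[:n] if e in gold) / n

def average_precision(ranked, gold):
    """AP: average of P@k over the ranks k where a correct entity appears."""
    hits, total = 0, 0.0
    for k, e in enumerate(ranked, start=1):
        if e in gold:
            hits += 1
            total += hits / k
    return total / hits if hits else 0.0
```

MAP is then the mean of `average_precision` over all categories, and MAP@n restricts `ranked` to the top n entities.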
Besides, we can see that without the MCTS algorithm or the PMSN, most top patterns are less relevant and easily drift semantically to other categories.", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 53, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 869, "end": 877, "text": "Table 5", "ref_id": "TABREF8" }, { "start": 885, "end": 892, "text": "Table 5", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Detailed Analysis", "sec_num": "5.3" }, { "text": "Entity set expansion (ESE) is a weakly supervised task, which is typically given seed entities as supervision and aims to expand the set with related new entities. Depending on the corpus used, there are two types of ESE: over a limited corpus (Shi et al., 2014; Shen et al., 2017) and over a large open corpus, e.g., using a web search engine (Wang and Cohen, 2007).", "cite_spans": [ { "start": 229, "end": 247, "text": "(Shi et al., 2014;", "ref_id": "BIBREF26" }, { "start": 248, "end": 266, "text": "Shen et al., 2017)", "ref_id": "BIBREF25" }, { "start": 344, "end": 366, "text": "(Wang and Cohen, 2007)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Weakly supervised methods for information extraction (IE) are often provided with insufficient supervision signals, such as knowledge base facts as distant supervision (Mintz et al., 2009; Hoffmann et al., 2011; Zeng et al., 2015; Han and Sun, 2016), and a small number of supervision samples in bootstrapping (Riloff and Jones, 1999).
As a classical technique, bootstrapping usually exploits pattern (Curran et al., 2007), document (Liao and Grishman, 2010), or syntactic and semantic contextual features (He and Grishman, 2015) to extract and classify new instances.", "cite_spans": [ { "start": 163, "end": 183, "text": "(Mintz et al., 2009;", "ref_id": "BIBREF17" }, { "start": 184, "end": 206, "text": "Hoffmann et al., 2011;", "ref_id": "BIBREF11" }, { "start": 207, "end": 225, "text": "Zeng et al., 2015;", "ref_id": "BIBREF33" }, { "start": 226, "end": 244, "text": "Han and Sun, 2016)", "ref_id": "BIBREF8" }, { "start": 304, "end": 328, "text": "(Riloff and Jones, 1999)", "ref_id": "BIBREF24" }, { "start": 396, "end": 417, "text": "(Curran et al., 2007)", "ref_id": "BIBREF5" }, { "start": 429, "end": 455, "text": "(Liao and Grishman, 2010)", "ref_id": null }, { "start": 502, "end": 525, "text": "(He and Grishman, 2015)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Limited by the sparse supervision, previous work estimates patterns mainly based on their direct extraction features, e.g., their matching statistics with known entities (Riloff and Jones, 1999; Agichtein and Gravano, 2000), which often suffers from the semantic drift problem. To avoid semantic drift, most existing approaches exploit extra constraints, such as parallel multiple categories (Thelen and Riloff, 2002; Yangarber, 2003; McIntosh, 2010), coupling constraints (Carlson et al., 2010), and mutual exclusion bootstrapping (Curran et al., 2007; McIntosh and Curran, 2008).
Besides, graph-based methods (Li et al., 2011; Tao et al., 2015) and a probability-based method (Shi et al., 2014) are also used to improve bootstrapping performance.", "cite_spans": [ { "start": 165, "end": 189, "text": "(Riloff and Jones, 1999;", "ref_id": "BIBREF24" }, { "start": 190, "end": 218, "text": "Agichtein and Gravano, 2000)", "ref_id": "BIBREF0" }, { "start": 388, "end": 413, "text": "(Thelen and Riloff, 2002;", "ref_id": "BIBREF29" }, { "start": 414, "end": 430, "text": "Yangarber, 2003;", "ref_id": "BIBREF31" }, { "start": 431, "end": 446, "text": "McIntosh, 2010)", "ref_id": "BIBREF15" }, { "start": 470, "end": 492, "text": "(Carlson et al., 2010)", "ref_id": "BIBREF4" }, { "start": 530, "end": 551, "text": "(Curran et al., 2007;", "ref_id": "BIBREF5" }, { "start": 552, "end": 578, "text": "McIntosh and Curran, 2008)", "ref_id": "BIBREF16" }, { "start": 610, "end": 627, "text": "(Li et al., 2011;", "ref_id": "BIBREF13" }, { "start": 628, "end": 645, "text": "Tao et al., 2015)", "ref_id": "BIBREF28" }, { "start": 679, "end": 697, "text": "(Shi et al., 2014)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "To address the sparse supervision problem, many previous studies score entities by leveraging lexical and statistical features (Yangarber et al., 2000; Stevenson and Greenwood, 2005; Pantel and Pennacchiotti, 2006; Pa\u015fca, 2007; Pantel et al., 2009), which, despite their promising effectiveness, often fail since the sparse statistical features provide little semantic information for evaluating entities. Word-embedding-based methods (Batista et al., 2015; Gupta and Manning, 2015) use fixed word embeddings learned from external resources and evaluate entities by their similarity to the seeds. Recently, Berger et al.
(2018) propose to learn custom embeddings at each bootstrapping iteration, trading efficiency for effectiveness.", "cite_spans": [ { "start": 127, "end": 151, "text": "(Yangarber et al., 2000;", "ref_id": "BIBREF32" }, { "start": 152, "end": 182, "text": "Stevenson and Greenwood, 2005;", "ref_id": "BIBREF27" }, { "start": 183, "end": 214, "text": "Pantel and Pennacchiotti, 2006;", "ref_id": "BIBREF20" }, { "start": 215, "end": 227, "text": "Pa\u015fca, 2007;", "ref_id": "BIBREF21" }, { "start": 228, "end": 248, "text": "Pantel et al., 2009)", "ref_id": "BIBREF19" }, { "start": 446, "end": 468, "text": "(Batista et al., 2015;", "ref_id": "BIBREF1" }, { "start": 469, "end": 493, "text": "Gupta and Manning, 2015)", "ref_id": "BIBREF7" }, { "start": 611, "end": 631, "text": "Berger et al. (2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "In this paper, we propose a deep similarity network-based model combined with the MCTS algorithm to bootstrap Entity Set Expansion. Specifically, we leverage the Monte Carlo Tree Search (MCTS) algorithm to efficiently estimate the delayed feedback of each pattern during bootstrapping; we propose a Pattern Mover Similarity Network (PMSN) to uniformly embed entities and patterns using a distribution over context pattern embeddings; and we combine the MCTS and the PMSN to adaptively learn a better embedding for evaluating both patterns and entities. Experimental results confirm the superior performance of our PMSN combined with the MCTS algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "Due to scalability issues, we omit the result of SetExpan on Google Web 1T. On the two smaller datasets, results of the under-performing baselines (1-3) are also omitted for space reasons.
The uncompetitive performance of those three baselines could be caused by the difficulty of deriving robust statistical features from the sparse context patterns on the small APR and Wiki datasets.2 The code is released at https://www.github.com/lingyongyan/mcts-bootstrapping.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Snowball: Extracting Relations from Large Plain-text Collections", "authors": [ { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Gravano", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the Fifth ACM Conference on Digital Libraries", "volume": "", "issue": "", "pages": "85--94", "other_ids": { "DOI": [ "10.1145/336597.336644" ] }, "num": null, "urls": [], "raw_text": "Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting Relations from Large Plain-text Collections. In Proceedings of the Fifth ACM Conference on Digital Libraries, pages 85-94, NY, USA.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Semi-Supervised Bootstrapping of Relationship Extractors with Distributional Semantics", "authors": [ { "first": "David", "middle": [ "S" ], "last": "Batista", "suffix": "" }, { "first": "Bruno", "middle": [], "last": "Martins", "suffix": "" }, { "first": "M\u00e1rio", "middle": [ "J" ], "last": "Silva", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "499--504", "other_ids": {}, "num": null, "urls": [], "raw_text": "David S. Batista, Bruno Martins, and M\u00e1rio J. Silva. 2015. Semi-Supervised Bootstrapping of Relationship Extractors with Distributional Semantics. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 499-504, Lisbon, Portugal.
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Visual Supervision in Bootstrapped Information Extraction", "authors": [ { "first": "Matthew", "middle": [], "last": "Berger", "suffix": "" }, { "first": "Ajay", "middle": [], "last": "Nagesh", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Levine", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Helen", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2043--2053", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Berger, Ajay Nagesh, Joshua Levine, Mihai Surdeanu, and Helen Zhang. 2018. Visual Supervision in Bootstrapped Information Extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2043-2053, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The google web 1t 5-gram corpus version 1.1. LDC2006T13", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Franz", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants and Alex Franz. 2006. The google web 1t 5-gram corpus version 1.1.
LDC2006T13.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Coupled Semi-supervised Learning for Information Extraction", "authors": [ { "first": "Andrew", "middle": [], "last": "Carlson", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Betteridge", "suffix": "" }, { "first": "Richard", "middle": [ "C" ], "last": "Wang", "suffix": "" }, { "first": "Estevam", "middle": [ "R" ], "last": "Hruschka", "suffix": "" }, { "first": "Jr", "middle": [], "last": "", "suffix": "" }, { "first": "Tom", "middle": [ "M" ], "last": "Mitchell", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Third ACM International Conference on Web Search and Data Mining", "volume": "", "issue": "", "pages": "101--110", "other_ids": { "DOI": [ "10.1145/1718487.1718501" ] }, "num": null, "urls": [], "raw_text": "Andrew Carlson, Justin Betteridge, Richard C. Wang, Estevam R. Hruschka, Jr., and Tom M. Mitchell. 2010. Coupled Semi-supervised Learning for Information Extraction. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, pages 101-110, NY, USA.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Minimising semantic drift with mutual exclusion bootstrapping", "authors": [ { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" }, { "first": "Tara", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "Bernhard", "middle": [], "last": "Scholz", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics", "volume": "6", "issue": "", "pages": "172--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "James R. Curran, Tara Murphy, and Bernhard Scholz. 2007. Minimising semantic drift with mutual exclusion bootstrapping. In Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics, volume 6, pages 172-180.
Citeseer.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Improved Pattern Learning for Bootstrapped Entity Extraction", "authors": [ { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "98--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sonal Gupta and Christopher Manning. 2014. Improved Pattern Learning for Bootstrapped Entity Extraction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 98-108, Ann Arbor, Michigan. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Distributed Representations of Words to Guide Bootstrapped Entity Classifiers", "authors": [ { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "D", "middle": [], "last": "Christopher", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1215--1220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sonal Gupta and Christopher D. Manning. 2015. Distributed Representations of Words to Guide Bootstrapped Entity Classifiers. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1215-1220, Denver, Colorado.
Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Global Distant Supervision for Relation Extraction", "authors": [ { "first": "Xianpei", "middle": [], "last": "Han", "suffix": "" }, { "first": "Le", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "Thirtieth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xianpei Han and Le Sun. 2016. Global Distant Supervision for Relation Extraction. In Thirtieth AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Distributional structure. Word", "authors": [ { "first": "S", "middle": [], "last": "Zellig", "suffix": "" }, { "first": "", "middle": [], "last": "Harris", "suffix": "" } ], "year": 1954, "venue": "", "volume": "10", "issue": "", "pages": "146--162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zellig S Harris. 1954. Distributional structure. Word, 10(2-3):146-162.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "ICE: Rapid Information Extraction Customization for NLP Novices", "authors": [ { "first": "Yifan", "middle": [], "last": "He", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", "volume": "", "issue": "", "pages": "31--35", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yifan He and Ralph Grishman. 2015. ICE: Rapid Information Extraction Customization for NLP Novices. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 31-35, Denver, Colorado.
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Knowledge-based Weak Supervision for Information Extraction of Overlapping Relations", "authors": [ { "first": "Raphael", "middle": [], "last": "Hoffmann", "suffix": "" }, { "first": "Congle", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Ling", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "541--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based Weak Supervision for Information Extraction of Overlapping Relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 541-550, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "From word embeddings to document distances", "authors": [ { "first": "Matt", "middle": [], "last": "Kusner", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Kolkin", "suffix": "" }, { "first": "Kilian", "middle": [], "last": "Weinberger", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on Machine Learning", "volume": "37", "issue": "", "pages": "957--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances.
In Proceedings of the 32nd International Conference on Machine Learning, volume 37, pages 957-966, Lille, France.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Using Graph Based Method to Improve Bootstrapping Relation Extraction", "authors": [ { "first": "Haibo", "middle": [], "last": "Li", "suffix": "" }, { "first": "Danushka", "middle": [], "last": "Bollegala", "suffix": "" }, { "first": "Yutaka", "middle": [], "last": "Matsuo", "suffix": "" }, { "first": "Mitsuru", "middle": [], "last": "Ishizuka", "suffix": "" } ], "year": 2011, "venue": "Computational Linguistics and Intelligent Text Processing", "volume": "6609", "issue": "", "pages": "127--138", "other_ids": { "DOI": [ "10.1007/978-3-642-19437-5_10" ] }, "num": null, "urls": [], "raw_text": "Haibo Li, Danushka Bollegala, Yutaka Matsuo, and Mitsuru Ishizuka. 2011. Using Graph Based Method to Improve Bootstrapping Relation Extraction. In Computational Linguistics and Intelligent Text Processing, volume 6609, pages 127-138.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Filtered Ranking for Bootstrapping in Event Extraction", "authors": [ { "first": "Shasha", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "680--688", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shasha Liao and Ralph Grishman. 2010. Filtered Ranking for Bootstrapping in Event Extraction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 680-688, Stroudsburg, PA, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Unsupervised Discovery of Negative Categories in Lexicon Bootstrapping", "authors": [ { "first": "Tara", "middle": [], "last": "Mcintosh", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "356--365", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tara McIntosh. 2010. Unsupervised Discovery of Negative Categories in Lexicon Bootstrapping. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 356-365, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Weighted Mutual Exclusion Bootstrapping for Domain Independent Lexicon and Template Acquisition", "authors": [ { "first": "Tara", "middle": [], "last": "Mcintosh", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Australasian Language Technology Association Workshop", "volume": "", "issue": "", "pages": "97--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tara McIntosh and James R. Curran. 2008. Weighted Mutual Exclusion Bootstrapping for Domain Independent Lexicon and Template Acquisition.
In Proceedings of the Australasian Language Technology Association Workshop 2008, pages 97-105, Hobart, Australia.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Distant Supervision for Relation Extraction Without Labeled Data", "authors": [ { "first": "Mike", "middle": [], "last": "Mintz", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bills", "suffix": "" }, { "first": "Rion", "middle": [], "last": "Snow", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP", "volume": "", "issue": "", "pages": "1003--1011", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant Supervision for Relation Extraction Without Labeled Data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1003-1011, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Bootstrapping Biomedical Ontologies for Scientific Text Using NELL", "authors": [ { "first": "Dana", "middle": [], "last": "Movshovitz", "suffix": "" }, { "first": "-", "middle": [], "last": "Attias", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Cohen", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "11--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dana Movshovitz-Attias and William W. Cohen. 2012. Bootstrapping Biomedical Ontologies for Scientific Text Using NELL.
pages 11-19.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Web-scale Distributional Similarity and Entity Set Expansion", "authors": [ { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Crestan", "suffix": "" }, { "first": "Arkady", "middle": [], "last": "Borkovsky", "suffix": "" }, { "first": "Ana-Maria", "middle": [], "last": "Popescu", "suffix": "" }, { "first": "Vishnu", "middle": [], "last": "Vyas", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", "volume": "2", "issue": "", "pages": "938--947", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Pantel, Eric Crestan, Arkady Borkovsky, Ana-Maria Popescu, and Vishnu Vyas. 2009. Web-scale Distributional Similarity and Entity Set Expansion. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 - Volume 2, pages 938-947, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Espresso: Leveraging Generic Patterns for Automatically Harvesting Semantic Relations", "authors": [ { "first": "Patrick", "middle": [], "last": "Pantel", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pennacchiotti", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "113--120", "other_ids": { "DOI": [ "10.3115/1220175.1220190" ] }, "num": null, "urls": [], "raw_text": "Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: Leveraging Generic Patterns for Automatically Harvesting Semantic Relations.
In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 113-120, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Weakly-supervised Discovery of Named Entities Using Web Search Queries", "authors": [ { "first": "Marius", "middle": [], "last": "Pa\u015fca", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Sixteenth ACM Conference on Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "683--690", "other_ids": { "DOI": [ "10.1145/1321440.1321536" ] }, "num": null, "urls": [], "raw_text": "Marius Pa\u015fca. 2007. Weakly-supervised Discovery of Named Entities Using Web Search Queries. In Proceedings of the Sixteenth ACM Conference on Conference on Information and Knowledge Management, pages 683-690, New York, NY, USA. ACM.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Glove: Global Vectors for Word Representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532-1543, Doha, Qatar.
Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Semantic Lexicon Induction from Twitter with Pattern Relatedness and Flexible Term Length", "authors": [ { "first": "Ashequl", "middle": [], "last": "Qadir", "suffix": "" }, { "first": "Pablo", "middle": [ "N" ], "last": "Mendes", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gruhl", "suffix": "" }, { "first": "Neal", "middle": [], "last": "Lewis", "suffix": "" } ], "year": 2015, "venue": "Twenty-Ninth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "2432--2439", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashequl Qadir, Pablo N Mendes, Daniel Gruhl, and Neal Lewis. 2015. Semantic Lexicon Induction from Twitter with Pattern Relatedness and Flexible Term Length. In Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 2432-2439.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Learning dictionaries for information extraction by multi-level bootstrapping", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Rosie", "middle": [], "last": "Jones", "suffix": "" } ], "year": 1999, "venue": "AAAI/IAAI", "volume": "", "issue": "", "pages": "474--479", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff and Rosie Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping.
In AAAI/IAAI, pages 474-479.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "SetExpan: Corpus-Based Set Expansion via Context Feature Selection and Rank Ensemble", "authors": [ { "first": "Jiaming", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Zeqiu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Dongming", "middle": [], "last": "Lei", "suffix": "" }, { "first": "Jingbo", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2017, "venue": "ECML PKDD", "volume": "10534", "issue": "", "pages": "288--304", "other_ids": { "DOI": [ "10.1007/978-3-319-71249-9_18" ] }, "num": null, "urls": [], "raw_text": "Jiaming Shen, Zeqiu Wu, Dongming Lei, Jingbo Shang, Xiang Ren, and Jiawei Han. 2017. SetExpan: Corpus-Based Set Expansion via Context Feature Selection and Rank Ensemble. In ECML PKDD, volume 10534, pages 288-304, Cham. Springer International Publishing.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A probabilistic co-bootstrapping method for entity set expansion", "authors": [ { "first": "Bei", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Zhenzhong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Le", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Xianpei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2014, "venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "2280--2290", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bei Shi, Zhenzhong Zhang, Le Sun, and Xianpei Han. 2014. A probabilistic co-bootstrapping method for entity set expansion.
In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics, pages 2280-2290.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A Semantic Approach to IE Pattern Induction", "authors": [ { "first": "Mark", "middle": [], "last": "Stevenson", "suffix": "" }, { "first": "Mark", "middle": [ "A" ], "last": "Greenwood", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "379--386", "other_ids": { "DOI": [ "10.3115/1219840.1219887" ] }, "num": null, "urls": [], "raw_text": "Mark Stevenson and Mark A. Greenwood. 2005. A Semantic Approach to IE Pattern Induction. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 379-386, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Leveraging Pattern Semantics for Extracting Entities in Enterprises", "authors": [ { "first": "Fangbo", "middle": [], "last": "Tao", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Ariel", "middle": [], "last": "Fuxman", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "1078--1088", "other_ids": { "DOI": [ "10.1145/2736277.2741670" ] }, "num": null, "urls": [], "raw_text": "Fangbo Tao, Bo Zhao, Ariel Fuxman, Yang Li, and Jiawei Han. 2015. Leveraging Pattern Semantics for Extracting Entities in Enterprises. In Proceedings of the 24th International Conference on World Wide Web, pages 1078-1088, Republic and Canton of Geneva, Switzerland.
International World Wide Web Conferences Steering Committee.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A Bootstrapping Method for Learning Semantic Lexicons Using Extraction Pattern Contexts", "authors": [ { "first": "Michael", "middle": [], "last": "Thelen", "suffix": "" }, { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "214--221", "other_ids": { "DOI": [ "10.3115/1118693.1118721" ] }, "num": null, "urls": [], "raw_text": "Michael Thelen and Ellen Riloff. 2002. A Bootstrapping Method for Learning Semantic Lexicons Using Extraction Pattern Contexts. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing, pages 214-221, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Language-independent set expansion of named entities using the web", "authors": [ { "first": "C", "middle": [], "last": "Richard", "suffix": "" }, { "first": "William", "middle": [ "W" ], "last": "Wang", "suffix": "" }, { "first": "", "middle": [], "last": "Cohen", "suffix": "" } ], "year": 2007, "venue": "Seventh IEEE International Conference On", "volume": "", "issue": "", "pages": "342--350", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard C. Wang and William W. Cohen. 2007. Language-independent set expansion of named entities using the web. In Data Mining, 2007. ICDM 2007. Seventh IEEE International Conference On, pages 342-350.
IEEE.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Counter-training in Discovery of Semantic Patterns", "authors": [ { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "343--350", "other_ids": { "DOI": [ "10.3115/1075096.1075140" ] }, "num": null, "urls": [], "raw_text": "Roman Yangarber. 2003. Counter-training in Discovery of Semantic Patterns. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, pages 343-350, Stroudsburg, PA, USA. Association for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Automatic Acquisition of Domain Knowledge for Information Extraction", "authors": [ { "first": "Roman", "middle": [], "last": "Yangarber", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Grishman", "suffix": "" }, { "first": "Pasi", "middle": [], "last": "Tapanainen", "suffix": "" }, { "first": "Silja", "middle": [], "last": "Huttunen", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 18th Conference on Computational Linguistics", "volume": "2", "issue": "", "pages": "940--946", "other_ids": { "DOI": [ "10.3115/992730.992782" ] }, "num": null, "urls": [], "raw_text": "Roman Yangarber, Ralph Grishman, Pasi Tapanainen, and Silja Huttunen. 2000. Automatic Acquisition of Domain Knowledge for Information Extraction. In Proceedings of the 18th Conference on Computational Linguistics - Volume 2, pages 940-946, Stroudsburg, PA, USA.
Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks", "authors": [ { "first": "Daojian", "middle": [], "last": "Zeng", "suffix": "" }, { "first": "Kang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1753--1762", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1753-1762, Lisbon, Portugal. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "(a) Traditional bootstrapping system for Entity set expansion. The main difference between our method and traditional methods is that we enhance the pattern evaluation by the MCTS algorithm, as demonstrated in Figure (b)." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "Traditional Bootstrapping for ESE (a) and the enhanced pattern evaluation using MCTS (b)." }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "Overall architecture of the Pattern Mover Similarity Network (PMSN), as shown in the rounded square in the middle of the figure." }, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "Performance of our full method using different context patterns in the PMSN."
}, "TABREF2": { "html": null, "num": null, "text": "Target categories used on Google Web 1T.", "content": "", "type_str": "table" }, "TABREF4": { "html": null, "num": null, "text": "Overall results for entity set expansion on Google Web 1T, where Ours full is the full version of our method, Ours -MCTS is our method with the MCTS disabled, and Ours -PMSN is our method but replacing the PMSN with fixed word embeddings. * indicates COB using the human feedback for seed entity selection.", "content": "
Method   | APR MAP@10 | APR MAP@20 | APR MAP@50 | Wiki MAP@10 | Wiki MAP@20 | Wiki MAP@50
SetExpan | 0.90       | 0.86       | 0.79       | 0.96        | 0.90        | 0.75
Ours     | 0.96       | 0.90       | 0.81       | 0.98        | 0.93        | 0.79
", "type_str": "table" }, "TABREF5": { "html": null, "num": null, "text": "The adaptive entity scoring performance of different methods on the APR and Wiki.", "content": "
Category | P@20 | P@50 | P@100 | P@200
CAP      | 1.00 | 1.00 | 0.94  | 0.69
ELE      | 1.00 | 0.84 | 0.51  | 0.36
FEN      | 1.00 | 1.00 | 1.00  | 0.95
MALE     | 1.00 | 1.00 | 1.00  | 0.98
LAST     | 1.00 | 1.00 | 1.00  | 1.00
TTL      | 0.85 | 0.64 | 0.49  | 0.34
NORP     | 0.95 | 0.96 | 0.89  | 0.60
FAC      | 0.95 | 0.86 | 0.59  | 0.42
ORG      | 1.00 | 1.00 | 1.00  | 0.94
GPE      | 1.00 | 1.00 | 1.00  | 0.94
LOC      | 0.80 | 0.76 | 0.70  | 0.67
DAT      | 1.00 | 1.00 | 0.82  | 0.56
LANG     | 1.00 | 0.98 | 0.81  | 0.52
", "type_str": "table" }, "TABREF6": { "html": null, "num": null, "text": "The performance of our full method on different categories on Google Web 1T.", "content": "", "type_str": "table" }, "TABREF8": { "html": null, "num": null, "text": "The top patterns selected by different methods when expanding capital entities in the first 5 iterations.", "content": "
MAP vs. Context Pattern Amount
[Chart: MAP at 10 patterns: 0.69; 20: 0.75; 50: 0.82; 100: 0.87; 200: 0.84]
", "type_str": "table" } } } }