{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:29:26.342300Z" }, "title": "Leveraging Wikipedia Navigational Templates for Curating Domain-Specific Fuzzy Conceptual Bases", "authors": [ { "first": "Krati", "middle": [], "last": "Saxena", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tata Consultancy Services Research Pune", "location": { "country": "India" } }, "email": "" }, { "first": "Tushita", "middle": [], "last": "Singh", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tata Consultancy Services Research Pune", "location": { "country": "India" } }, "email": "" }, { "first": "Ashwini", "middle": [], "last": "Patil", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tata Consultancy Services Research Pune", "location": { "country": "India" } }, "email": "" }, { "first": "Sagar", "middle": [], "last": "Sunkle", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tata Consultancy Services Research Pune", "location": { "country": "India" } }, "email": "" }, { "first": "Vinay", "middle": [], "last": "Kulkarni", "suffix": "", "affiliation": { "laboratory": "", "institution": "Tata Consultancy Services Research Pune", "location": { "country": "India" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Domain-specific conceptual bases use key concepts to capture domain scope and relevant information. Conceptual bases serve as a foundation for various downstream tasks, including ontology construction, information mapping, and analysis. However, building conceptual bases necessitates domain awareness and takes time. Wikipedia navigational templates offer multiple articles on the same/similar domain. It is possible to use the templates to recognize fundamental concepts that shape the domain. Earlier work in this domain used Wikipedia's structured and unstructured data to construct open-domain ontologies, domain terminologies, and knowledge bases. We present a novel method for leveraging navigational templates to create domain-specific fuzzy conceptual bases in this work. Our system generates knowledge graphs from the articles mentioned in the template, which we then process using Wikidata and machine learning algorithms. We filter important concepts using fuzzy logic on network metrics to create a crude conceptual base. Finally, the expert helps by refining the conceptual base. We demonstrate our system using an example of RNA virus antiviral drugs.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Domain-specific conceptual bases use key concepts to capture domain scope and relevant information. Conceptual bases serve as a foundation for various downstream tasks, including ontology construction, information mapping, and analysis. However, building conceptual bases necessitates domain awareness and takes time. Wikipedia navigational templates offer multiple articles on the same/similar domain. It is possible to use the templates to recognize fundamental concepts that shape the domain. Earlier work in this domain used Wikipedia's structured and unstructured data to construct open-domain ontologies, domain terminologies, and knowledge bases. We present a novel method for leveraging navigational templates to create domain-specific fuzzy conceptual bases in this work. Our system generates knowledge graphs from the articles mentioned in the template, which we then process using Wikidata and machine learning algorithms. 
We filter important concepts using fuzzy logic on network metrics to create a crude conceptual base. Finally, a domain expert refines the conceptual base. We demonstrate our system using an example of RNA virus antiviral drugs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Domain-specific conceptual bases are a method for grasping a domain at a high level by capturing the notions that generally make it up. While ontologies focus on formal representations and systems of categories encompassing the domain information, and conceptual models focus on linking the general ontological categories (Fonseca and Martin, 2007) , conceptual bases are abstract models addressing the most crucial concepts that are invariably found in a domain. Aside from defining the scope and outlining the concepts, conceptual bases may be used for a variety of downstream activities, such as developing less abstract constructs such as ontologies, or applications such as entity mapping in knowledge graphs, creating instances for named entity recognition, and summarizing or analyzing the domain.", "cite_spans": [ { "start": 326, "end": 352, "text": "(Fonseca and Martin, 2007)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Creating a conceptual base is a difficult task that necessitates a thorough understanding of the domain and a considerable amount of time to establish the importance of concepts. Online sources such as Wikipedia contain a vast amount of information on many domains (Wikipedia, 2021a) . In this research, we propose a novel approach to create domain-specific conceptual bases using Wikipedia navigational templates (Wikipedia, 2021b) . Navigational templates reliably connect related topics, which appear as navigation boxes at the bottom of an article or as sidebars on its right side.", "cite_spans": [ { "start": 265, "end": 283, "text": "(Wikipedia, 2021a)", "ref_id": null }, { "start": 414, "end": 432, "text": "(Wikipedia, 2021b)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our system uses knowledge from the articles in the navigational templates and identifies relevant notions consistently present in various articles of the same field. For this, we parse the articles' information and create a basic knowledge graph. We map the information to its Wikidata instances and cluster similar concepts. We apply fuzzy rules based on network metrics to decide the importance of concepts. In the end, the expert cleans and refines the resultant conceptual base to create the final version.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our specific contributions are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Our framework allows users to build domain-specific conceptual bases from knowledge graphs in various domains using Wikipedia navigational templates. \u2022 The novelty lies in the application of fuzzy rules on network metrics. We also provide modifiable fuzzy rules to expand or contract the conceptual bases as required. We organize the paper as follows. We discuss the method in Section 2. We illustrate the outcomes of the approach using an example of RNA virus antivirals in Section 3. We also review the outcomes and limitations in that section, followed by related works in Section 4. We conclude the paper in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
{ "text": "We show an overview of the method in Figure 1 . Our system consists of two parts: Knowledge Curator and Conceptual Base Curator. The Knowledge Curator extracts information from articles and Wikidata to construct a basic knowledge graph, and the Conceptual Base Curator employs machine learning techniques for processing and fuzzy rules to filter the relevant concepts.", "cite_spans": [], "ref_spans": [ { "start": 38, "end": 47, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Proposed Method", "sec_num": "2" }, { "text": "Collecting articles Our framework uses the template name as an input to gather information from a particular domain. Wikipedia templates follow the pattern \"Template:<Template name>\". We use Wikipedia's special export webpage 1 to export the template's data into XML for faster processing. To remove unnecessary text from the XML, we use pattern-based cleaning, and to retrieve the article names in the template, we use rule-based parsing. We then export the information as XML for each article and clean it with pattern-based cleaning. Information extraction from articles Wikipedia articles contain structured material, such as content information and infoboxes, as well as unstructured information in the article's running text. We extract both kinds of information by rule-based text processing on the cleaned articles' XMLs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Curator", "sec_num": "2.1" },
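{ "text": "To make the collection step concrete, the sketch below pulls a template's wikitext through the Special:Export endpoint and extracts the linked article titles. It is a minimal sketch under stated assumptions, not our exact implementation: the User-Agent string and function names are illustrative, and the link-scanning rule is a simplified stand-in for the pattern-based cleaning and rule-based parsing described above.

import requests

def fetch_template_export(template_name):
    # Special:Export wraps a page's wikitext in XML; navigational templates
    # live under the Template: namespace.
    url = 'https://en.wikipedia.org/wiki/Special:Export/Template:' + template_name
    response = requests.get(url, headers={'User-Agent': 'conceptual-base-demo/0.1'})
    response.raise_for_status()
    return response.text

def extract_article_titles(exported_xml):
    # Simplified rule: article names appear as [[Target]] or [[Target|label]]
    # wiki links in the exported wikitext; namespaced links (File:, Category:)
    # and duplicates are skipped.
    titles = []
    for chunk in exported_xml.split('[[')[1:]:
        target = chunk.split(']]')[0].split('|')[0].strip()
        if target and ':' not in target and target not in titles:
            titles.append(target)
    return titles

# Usage: extract_article_titles(fetch_template_export('RNA_antivirals'))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Curator", "sec_num": null },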
{ "text": "We represent the extracted information as a graph for further processing. For each article, we create a separate knowledge graph in which the article node is the central node. We add section-subsection information using the relations has_section and has_subsection. We add infobox information by prefixing has_ to the labels in the infobox's first column to form relations, with the label itself as the node. For example, the Earth 2 infobox contains information on mass, so we connect the Earth node to a mass node through a has_mass relation. We process the text in the article by text normalization and sentence segmentation 3 . We tokenize 4 the sentences, extract noun chunks 5 from them, and treat the noun chunks as nodes. We join the first noun chunk of each sentence to the section node using the relation has_info_about. The trailing noun chunks are attached to the preceding noun-chunk nodes using the in-between tokens as the relation. We also keep a list of nodes that are links to other articles. Retrieving Wikidata instance For all the nodes that are links to other Wikipedia articles, we parse the instanceOf 6 property using web crawling and save the results to a file.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph representation", "sec_num": null },
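{ "text": "The sketch below illustrates this graph-representation step for a single article, assuming spaCy with the en_core_web_sm model for sentence segmentation and noun chunks, and networkx for the graph. build_article_graph and its sections argument are illustrative names standing in for our parsed XML structures, and the fallback relation label is an assumption, not part of the paper.

import networkx as nx
import spacy

nlp = spacy.load('en_core_web_sm')  # assumes the small English model is installed

def build_article_graph(article, sections):
    # sections: dict of section title -> cleaned section text.
    graph = nx.MultiDiGraph()
    for section, text in sections.items():
        graph.add_edge(article, section, relation='has_section')
        for sent in nlp(text).sents:
            chunks = list(sent.noun_chunks)
            if not chunks:
                continue
            # The first noun chunk of a sentence hangs off the section node.
            graph.add_edge(section, chunks[0].text, relation='has_info_about')
            # Trailing noun chunks chain to the previous chunk, with the
            # in-between tokens serving as the relation label.
            for prev, curr in zip(chunks, chunks[1:]):
                between = sent.doc[prev.end:curr.start].text.strip()
                relation = between.replace(' ', '_') or 'related_to'  # assumed fallback
                graph.add_edge(prev.text, curr.text, relation=relation)
    return graph", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph representation", "sec_num": null },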
{ "text": "Mapping We map the nodes to their Wikidata instances. If an instance is present, we replace the node with the instance name. If there are multiple instances, we create multiple nodes and add all the connecting nodes to the instance nodes. Figure 2 : Screenshot of a small part of the graph for the Ciluprevir drug: the dark green node is the article node. Red, navy blue, and light green nodes carry information from infoboxes, sections, and text, respectively. Light green nodes constitute noun-chunk information connected via in-between tokens or has_info_about relations.", "cite_spans": [], "ref_spans": [ { "start": 347, "end": 355, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Conceptual Base Curator", "sec_num": "2.2" }, { "text": "For example, suppose a node A is connected to nodes B and C, and A has the Wikidata instanceOf values A_i1 and A_i2. Then we replace A-B and A-C with A_i1-B, A_i1-C, A_i2-B, and A_i2-C in the graph. Clustering nodes There are several similar nodes in the graphs of all the articles. We calculate a Levenshtein distance (Levenshtein, 1966) based feature matrix for all the nodes. We perform affinity propagation clustering (Frey and Dueck, 2007) , which outputs clusters and cluster exemplars. We replace the nodes in the clusters with their exemplars for further use.", "cite_spans": [ { "start": 196, "end": 215, "text": "(Levenshtein, 1966)", "ref_id": "BIBREF8" }, { "start": 299, "end": 321, "text": "(Frey and Dueck, 2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conceptual Base Curator", "sec_num": "2.2" },
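{ "text": "A minimal sketch of the clustering step, assuming scikit-learn's AffinityPropagation and a plain dynamic-programming edit distance; since affinity propagation expects similarities, the negated Levenshtein distances serve as the precomputed affinity matrix. The function names are illustrative.

import numpy as np
from sklearn.cluster import AffinityPropagation

def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def cluster_by_exemplar(nodes):
    similarity = np.array([[-levenshtein(a, b) for b in nodes] for a in nodes])
    ap = AffinityPropagation(affinity='precomputed', random_state=0).fit(similarity)
    # Map every node to the exemplar of its cluster, as described above.
    return {node: nodes[ap.cluster_centers_indices_[label]]
            for node, label in zip(nodes, ap.labels_)}

# Usage: cluster_by_exemplar(['side effect', 'side effects', 'trade name'])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conceptual Base Curator", "sec_num": null },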
{ "text": "The uncertainty factor of the concepts is the impetus for using fuzzy logic to construct a conceptual base. If we fill the conceptual base with all possible notions, the structure assumes that all concepts and relations are equally representative of the domain. However, this is not the case: some notions are more applicable than others. Consider the following three medications: Remdesivir 7 , Ledipasvir 8 , and Dasabuvir 9 . Medical uses, side effects, and trade names are common concepts in all three, so these can be said to be true in the drug domain with some certainty. The Remdesivir article, however, contains information about a medical usage controversy, which is absent from the other drugs' articles. As a consequence, this concept can be categorized as less significant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null }, { "text": "We use fuzzy logic to find relevant concepts in a particular domain. For this, we filter out the nodes whose relation does not contain a word with a VERB POS tag. We collate the graphs for all the articles and remove \"a, an, the\" from the nodes. Fuzzy logic on network metrics We calculate two network metrics: degree centrality and betweenness centrality (Freeman, 1977) . A centrality metric identifies the network's most influential nodes. The number of connections a node has determines its degree centrality. The degree centrality of a vertex v, for a given graph G := (V, E) with |V| vertices and |E| edges, is defined as:", "cite_spans": [ { "start": 496, "end": 511, "text": "(Freeman, 1977)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C_{Deg}(v) = \\deg(v)", "eq_num": "(1)" } ], "section": "Node filtering and knowledge graphs collation", "sec_num": null }, { "text": "where \\deg(v) is the degree of vertex v. The number of times a node appears on the shortest paths between other nodes is known as betweenness centrality. It is a metric that reflects a node's influence over the other nodes of the network. It is defined by the equation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null }, { "text": "C_{Btw}(v) = \\sum_{i \\neq v \\neq j} \\frac{\\sigma_{ij}(v)}{\\sigma_{ij}} \\quad (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null }, { "text": "where \\sigma_{ij} is the total number of shortest paths from node i to node j and \\sigma_{ij}(v) is the number of those paths that pass through v. The fuzzy logic uses the above-defined network metrics to decide the relevancy of the concepts. It consists of four main components: fuzzifier, rule base, inference engine, and defuzzifier. The fuzzifier converts inputs to fuzzy sets characterized by membership functions (MF). The rule base consists of IF-THEN rules used to drive the inference engine. The inference engine makes fuzzy inferences on the fuzzy input based on the defined rules. The defuzzifier converts the fuzzy set to the required output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null }, { "text": "In our system, the input is the degree centrality and betweenness centrality measures for all the nodes. We have experimented with the Gaussian membership function. The Gaussian MF is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null }, { "text": "GaussMF(x; \\mu, \\sigma) = e^{-\\frac{1}{2} \\left( \\frac{x - \\mu}{\\sigma} \\right)^2} \\quad (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null }, { "text": "where x is the input, \\mu is the mean, and \\sigma is the standard deviation of x. We generate a Gaussian MF for each of the two centrality measures. We use categorical inference on the concept relevance (HIGH, MEDIUM, LOW) and the Mamdani Implication for getting the output. Assuming a rule R_i = (D_i \\, OR \\, B_i) \\rightarrow N_j, defined by \\mu_{R_i} = \\mu_{(D_i \\, OR \\, B_i) \\rightarrow N_j}(d, b; n), where \\mu is the membership function, D_i and B_i are the fuzzy sets for degree and betweenness centrality, and N_j with j \\in [1, 2, 3] \\equiv [HIGH, MEDIUM, LOW] denotes the relevance set for the nodes, the Mamdani Implication uses the minimum operator (\\wedge) for fuzzy implication:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mu'_{N_j}(n) = \\alpha_i \\wedge \\mu_{N_j}(n), \\quad \\alpha_i = \\mu_{D_i} \\wedge \\mu_{B_i}", "eq_num": "(4)" } ], "section": "Node filtering and knowledge graphs collation", "sec_num": null }, { "text": "We define three rules for inference:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null }, { "text": "\u2022 IF \\mu_{dn} \\wedge \\mu_{bn} \\leq 0.6 THEN \\mu_{N_j}(n) = HIGH \u2022 IF 0.6 < \\mu_{dn} \\wedge \\mu_{bn} \\leq 0.8 THEN \\mu_{N_j}(n) = MEDIUM \u2022 IF \\mu_{dn} \\wedge \\mu_{bn} > 0.8 THEN \\mu_{N_j}(n) = LOW", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null }, { "text": "The values in the rules are modifiable to increase or decrease the span of concepts covered at the various relevance levels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null },
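{ "text": "A simplified numpy/networkx sketch of the relevance step follows. It collapses the full Mamdani inference and defuzzification into crisp thresholding on the fired rule strength, so it is an approximation of the pipeline above rather than our exact implementation; note also that networkx returns a normalized degree centrality, whereas Eq. (1) uses the raw degree.

import networkx as nx
import numpy as np

def gauss_mf(x, mu, sigma):
    # Gaussian membership function from Eq. (3).
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def node_relevance(graph):
    degree = nx.degree_centrality(graph)
    betweenness = nx.betweenness_centrality(graph)
    d = np.array([degree[n] for n in graph])
    b = np.array([betweenness[n] for n in graph])
    mu_d = gauss_mf(d, d.mean(), d.std() or 1.0)
    mu_b = gauss_mf(b, b.mean(), b.std() or 1.0)
    labels = {}
    # The minimum operator mirrors the Mamdani implication of Eq. (4); the
    # 0.6 / 0.8 cut-offs mirror the three rules above and can be modified to
    # widen or narrow each relevance level.
    for node, fired in zip(graph, np.minimum(mu_d, mu_b)):
        if fired <= 0.6:
            labels[node] = 'HIGH'
        elif fired <= 0.8:
            labels[node] = 'MEDIUM'
        else:
            labels[node] = 'LOW'
    return labels", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null },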
{ "text": "We filter out node-edge-node pairs using the nodes of varying significance. We consider a node-edge-node pair highly relevant if any node in the pair is highly relevant and the pair has appeared in more than two articles. Similarly, we translate the medium and low importance at the node level to the node-edge-node pair level. We only use highly relevant node-edge-node pairs in this paper, but medium and low relevance pairs may be added to extend the conceptual base if required. We take the resultant network's largest connected component and present it to the domain expert for further refinement. Refining the concept base The domain expert refines the crude conceptual base. Removal or modification of semantically related concepts and removal or modification of notions that reflect the same object are both parts of the refinement process. The expert connects modified nodes by prefixing has_ to the new relations. Relations whose nodes are not modified are left unchanged.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null },
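{ "text": "Tying the pieces together, here is a minimal networkx sketch of the pair filtering and largest-connected-component step just described. pair_counts, a mapping from a (node, relation, node) triple to the number of articles it appeared in, is an assumed bookkeeping structure from the collation step, not something the paper names.

import networkx as nx

def crude_conceptual_base(graph, labels, pair_counts):
    kept = nx.Graph()
    for u, v, data in graph.edges(data=True):
        relation = data.get('relation')
        highly_relevant = labels.get(u) == 'HIGH' or labels.get(v) == 'HIGH'
        # Keep a pair only if a node in it is highly relevant and the pair
        # appeared in more than two articles.
        if highly_relevant and pair_counts.get((u, relation, v), 0) > 2:
            kept.add_edge(u, v, relation=relation)
    if kept.number_of_nodes() == 0:
        return kept
    # Present only the largest connected component to the domain expert.
    largest = max(nx.connected_components(kept), key=len)
    return kept.subgraph(largest).copy()", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Node filtering and knowledge graphs collation", "sec_num": null },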
{ "text": "We present the results of our approach using an example of RNA virus antiviral drugs 10 . The system is implemented in Python. All the steps retrieve or process the data automatically unless stated otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study on RNA Virus Antiviral Drugs", "sec_num": "3.1" }, { "text": "The system first curates the knowledge graph from all the articles using the section, infobox, and text information. We show a screenshot of a small part of the knowledge graph for the Ciluprevir drug in Figure 2 . The system also retrieves and maps the Wikidata instanceOf property to all the link-based nodes.", "cite_spans": [], "ref_spans": [ { "start": 204, "end": 212, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Case Study on RNA Virus Antiviral Drugs", "sec_num": "3.1" }, { "text": "Next, we apply affinity propagation to cluster similar information together. We experimented with clustering sections, infoboxes, and text together and independently. Infoboxes present a structured summary of an article's information. We note that clustering infoboxes tends to lose information because different information identifiers may fall into a single cluster even though they represent independent information. As a result, the final conceptual base contains minimal information from infoboxes. In our experiment, the automatically created conceptual base that clusters all information together contains 49.4%, 4%, and 46.6% of its concepts from sections, infoboxes, and text, respectively. Hence, we cluster only the section and text information (independently) and use the infobox information as it is.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study on RNA Virus Antiviral Drugs", "sec_num": "3.1" }, { "text": "After this, the application of fuzzy logic results in a crude conceptual base. Due to space restrictions, we show snippets of the model with only a few mentions of the edge names: concepts from section and text information in Figure 3 (a) and infobox information in Figure 4(a) . The respective refined models are shown in Figure 3 (b) and 4(b). Edges or relations mostly consist of names such as has_info_about, has_section, has_subsection, and has_type, and verbs such as is, approved_by, is_not_recommended_during, etc. We call our output a conceptual base and not a conceptual model because relations such as has_info_about, has_section, and has_subsection do not provide any meaningful link between the concepts. Meaningful modification of such relations can be considered a downstream task.", "cite_spans": [], "ref_spans": [ { "start": 226, "end": 234, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 266, "end": 277, "text": "Figure 4(a)", "ref_id": "FIGREF2" }, { "start": 323, "end": 331, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Case Study on RNA Virus Antiviral Drugs", "sec_num": "3.1" }, { "text": "The templates also contain a few articles from other domains. For instance, the RNA antivirals template contains disease and virus names as well. The proposed approach, however, ensures that we consider only statistically significant concepts for the conceptual base. We manually validate that the crude conceptual base contains 14%, 16%, and 70% of its concepts from sections, text, and infoboxes, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Case Study on RNA Virus Antiviral Drugs", "sec_num": "3.1" }, { "text": "Our observations suggest that the crude conceptual base captures most of the relevant information from the section, text, and infobox information. There are a few ambiguous node names, like pore, south, and rate (marked using crosses) in Figure 3(a) and legal_us, legal_uk, etc. (colored nodes) in Figure 4(a) , which the domain expert removes or corrects. The expert also modifies edges where nodes are modified.", "cite_spans": [], "ref_spans": [ { "start": 261, "end": 272, "text": "Figure 3(a)", "ref_id": "FIGREF1" }, { "start": 321, "end": 332, "text": "Figure 4(a)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Discussion", "sec_num": "3.2" }, { "text": "The crude base contains two types of nodes: 1) nodes that represent the same object in the current context but can have different meanings, and 2) nodes that are instances of another concept. For example, in Figure 3(a) , the nodes medication, antiviral drug, and antiviral medication all represent antiviral drugs in the current context. These nodes appear because an article node is the most central node in its knowledge graph, and article nodes can have multiple Wikidata instanceOf properties. Similarly, there are many instances of legal status and pregnancy category in Figure 4(a) . Instances appear in the infobox conceptual base because we do not cluster those nodes; as a result, the original data is retained for the relevance calculation.", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 217, "text": "Figure 3(a)", "ref_id": "FIGREF1" }, { "start": 560, "end": 571, "text": "Figure 4(a)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Discussion", "sec_num": "3.2" }, { "text": "Since the refinement process is manual, the expert can decide how to modify the crude conceptual base as needed. In Figure 3 (b) and 4(b), we show basic refinement. In Figure 3(a) , we show red crosses on the nodes that are removed because of ambiguity or lack of meaningful information, yellow crosses on the nodes that are modified because of inappropriate but meaningful names, and a blue cross on the node that is merged with another similar node. Here, antiviral drug is merged into antiviral medication. The refined version in Figure 3 (b) depicts with bold boundaries the nodes and edges that the expert modifies. In Figure 4(a) , same-colored nodes represent instances of the same concept, which the expert merges into one in Figure 4(b) .", "cite_spans": [], "ref_spans": [ { "start": 122, "end": 130, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 180, "end": 191, "text": "Figure 3(a)", "ref_id": "FIGREF1" }, { "start": 540, "end": 548, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 620, "end": 631, "text": "Figure 4(a)", "ref_id": "FIGREF2" }, { "start": 726, "end": 737, "text": "Figure 4(b)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Discussion", "sec_num": "3.2" }, { "text": "In the presented case study, the expert modifies about 30% of the total nodes (section + text + infoboxes). However, this is subject to the structure of the Wikipedia articles in the navigational template. For example, most of the articles in the Distillation 11 template do not contain infoboxes, which reduces the percentage of nodes that need to be modified.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.2" }, { "text": "Following are the limitations of our approach:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.2" },
{ "text": "\u2022 Parsing information from web pages is a time-consuming task, so we use XML and text processing for information gathering. Sometimes, rule-based text processing extracts the information incorrectly, and occasionally the XML does not contain the full information. We manually check the infobox content after cleaning the XMLs and find that approximately 47.5% of infobox entries are incorrect or empty in our case study. In the future, we plan to check the performance and scalability of other tools. \u2022 Sections constitute a small part of an article's information, but we lose a considerable amount of textual information because of the filtering process. We are currently exploring techniques to create enhanced knowledge graphs using language models, in which node filtering results in minimal or no information loss. \u2022 Currently, we do not provide any aid for refining the conceptual base. We plan to create a GUI for this purpose that will include controllers for the fuzzy logic and an interface for effortless refinement, further reducing the time and effort needed to create the conceptual base.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "3.2" }, { "text": "Many researchers have worked on fuzzy ontology creation and its downstream applications, such as generating taxonomies, ontologies, and conceptual models from various data sources. Fuzzy ontology creation has been studied previously (De Maio et al., 2009) , (Tho et al., 2006) , as has the use of fuzzy ontologies and concept models in various domain-specific tasks and datasets (Parry, 2006) , (Abulaish, 2009) , (Quach and Hoang, 2018) . As opposed to the previous work, we employ fuzzy logic on network metric attributes.", "cite_spans": [ { "start": 361, "end": 383, "text": "(De Maio et al., 2009)", "ref_id": "BIBREF1" }, { "start": 386, "end": 404, "text": "(Tho et al., 2006)", "ref_id": "BIBREF16" }, { "start": 492, "end": 505, "text": "(Parry, 2006)", "ref_id": "BIBREF10" }, { "start": 508, "end": 524, "text": "(Abulaish, 2009)", "ref_id": "BIBREF0" }, { "start": 527, "end": 550, "text": "(Quach and Hoang, 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "4" }, { "text": "There are a few significant open-domain, community-driven projects for structured knowledge creation. DBpedia (Lehmann et al., 2015) extracts structured information in multiple languages from Wikipedia infoboxes. Yago (Suchanek et al., 2007) , which has released various versions, also uses Wikipedia infoboxes; it additionally employs Wikipedia categories to determine the type of information, which is then mapped to the WordNet taxonomy. Wikidata (Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014) , a collaborative database, also links Wikipedia data with unique identifiers. Apart from the community-driven projects, researchers have used Wikipedia in other open-domain tasks such as document topic classification (Hassan et al., 2012) , collaborative ontology creation (Hepp et al., 2006) , semantic conceptual modeling and semantic relatedness interpretation (Saif et al., 2018) , explaining facts in AI (Sarker et al., 2020), learning named entities (Nothman et al., 2013) , and large-scale taxonomy generation (Ponzetto and Strube, 2007) . Researchers have also used Wikipedia in domain-specific tasks like exploiting Wikipedia knowledge for classification tasks (Warren, 2012) and extracting domain terms and terminologies from Wikipedia (Vivaldi and Rodr\u00edguez, 2010) , (Vivaldi and Rodr\u00edguez, 2011) , (Vivaldi et al., 2012) . In this research, we provide domain-specific conceptual base construction from a small set of articles extracted on the fly from Wikipedia navigational templates instead of full Wiki dumps or other domain-specific corpora/texts. We also exploit unstructured text in addition to structured information like Wikipedia infoboxes and article content structure.",
"cite_spans": [ { "start": 110, "end": 132, "text": "(Lehmann et al., 2015)", "ref_id": "BIBREF7" }, { "start": 192, "end": 241, "text": "(Suchanek et al., 2007)", "ref_id": "BIBREF15" }, { "start": 439, "end": 469, "text": "(Vrande\u010di\u0107 and Kr\u00f6tzsch, 2014)", "ref_id": "BIBREF20" }, { "start": 685, "end": 706, "text": "(Hassan et al., 2012)", "ref_id": "BIBREF5" }, { "start": 741, "end": 760, "text": "(Hepp et al., 2006)", "ref_id": "BIBREF6" }, { "start": 832, "end": 851, "text": "(Saif et al., 2018)", "ref_id": "BIBREF13" }, { "start": 924, "end": 946, "text": "(Nothman et al., 2013)", "ref_id": "BIBREF9" }, { "start": 981, "end": 1008, "text": "(Ponzetto and Strube, 2007)", "ref_id": "BIBREF11" }, { "start": 1129, "end": 1143, "text": "(Warren, 2012)", "ref_id": "BIBREF21" }, { "start": 1205, "end": 1234, "text": "(Vivaldi and Rodr\u00edguez, 2010)", "ref_id": "BIBREF18" }, { "start": 1237, "end": 1266, "text": "(Vivaldi and Rodr\u00edguez, 2011)", "ref_id": "BIBREF19" }, { "start": 1269, "end": 1291, "text": "(Vivaldi et al., 2012)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works", "sec_num": "4" }, { "text": "We use Wikipedia navigational templates to build domain-specific conceptual bases in this study. To compute the relevance of concepts, our system generates a graph representation of the articles' knowledge and applies fuzzy logic on top of its network metrics. With a small amount of human intervention, the system outputs a refined conceptual base that can be used for various downstream purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "1 https://en.wikipedia.org/wiki/Special:Export", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "2 https://en.wikipedia.org/wiki/Earth 3 https://spacy.io/usage/linguistic-features#sbd 4 https://spacy.io/usage/linguistic-features#tokenization 5 https://spacy.io/usage/linguistic-features#dependency-parse 6 https://www.wikidata.org/wiki/Property:P31", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "7 https://en.wikipedia.org/wiki/Remdesivir 8 https://en.wikipedia.org/wiki/Ledipasvir/sofosbuvir 9 https://en.wikipedia.org/wiki/Dasabuvir", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "10 https://en.wikipedia.org/wiki/Template:RNA_antivirals", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "11 https://en.wikipedia.org/wiki/Template:Distillation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "An ontology enhancement framework to accommodate imprecise concepts and relations", "authors": [ { "first": "Muhammad", "middle": [], "last": "Abulaish", "suffix": "" } ], "year": 2009, "venue": "Journal of Emerging technologies in web intelligence", "volume": "", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Muhammad Abulaish. 2009. An ontology enhancement framework to accommodate imprecise concepts and relations. Journal of Emerging technologies in web intelligence, 1(1).", "links": null },
"BIBREF1": { "ref_id": "b1", "title": "Towards an automatic fuzzy ontology generation", "authors": [ { "first": "Carmen", "middle": [], "last": "De Maio", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Fenza", "suffix": "" }, { "first": "Vincenzo", "middle": [], "last": "Loia", "suffix": "" }, { "first": "Sabrina", "middle": [], "last": "Senatore", "suffix": "" } ], "year": 2009, "venue": "2009 IEEE International Conference on Fuzzy Systems", "volume": "", "issue": "", "pages": "1044--1049", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carmen De Maio, Giuseppe Fenza, Vincenzo Loia, and Sabrina Senatore. 2009. Towards an automatic fuzzy ontology generation. In 2009 IEEE International Conference on Fuzzy Systems, pages 1044-1049. IEEE.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning the differences between ontologies and conceptual schemas through ontology-driven information systems", "authors": [ { "first": "Frederico", "middle": [], "last": "Fonseca", "suffix": "" }, { "first": "James", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2007, "venue": "Journal of the Association for Information Systems", "volume": "8", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frederico Fonseca and James Martin. 2007. Learning the differences between ontologies and conceptual schemas through ontology-driven information systems. Journal of the Association for Information Systems, 8(2):4.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A set of measures of centrality based on betweenness", "authors": [ { "first": "Linton", "middle": [ "C" ], "last": "Freeman", "suffix": "" } ], "year": 1977, "venue": "Sociometry", "volume": "", "issue": "", "pages": "35--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linton C Freeman. 1977. A set of measures of centrality based on betweenness. Sociometry, pages 35-41.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Clustering by passing messages between data points", "authors": [ { "first": "Brendan", "middle": [ "J" ], "last": "Frey", "suffix": "" }, { "first": "Delbert", "middle": [], "last": "Dueck", "suffix": "" } ], "year": 2007, "venue": "Science", "volume": "315", "issue": "", "pages": "972--976", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brendan J Frey and Delbert Dueck. 2007. Clustering by passing messages between data points. Science, 315(5814):972-976.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automatic document topic identification using wikipedia hierarchical ontology", "authors": [ { "first": "Mostafa", "middle": [ "M" ], "last": "Hassan", "suffix": "" }, { "first": "Fakhri", "middle": [], "last": "Karray", "suffix": "" }, { "first": "Mohamed", "middle": [ "S" ], "last": "Kamel", "suffix": "" } ], "year": 2012, "venue": "2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA)", "volume": "", "issue": "", "pages": "237--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mostafa M Hassan, Fakhri Karray, and Mohamed S Kamel. 2012. Automatic document topic identification using wikipedia hierarchical ontology. In 2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA), pages 237-242. IEEE.", "links": null },
"BIBREF6": { "ref_id": "b6", "title": "Harvesting wiki consensus-using wikipedia entries as ontology elements", "authors": [ { "first": "Martin", "middle": [], "last": "Hepp", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Bachlechner", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Siorpaes", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Hepp, Daniel Bachlechner, and Katharina Siorpaes. 2006. Harvesting wiki consensus-using wikipedia entries as ontology elements. In SemWiki. Citeseer.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Dbpedia-a large-scale, multilingual knowledge base extracted from wikipedia", "authors": [ { "first": "Jens", "middle": [], "last": "Lehmann", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Isele", "suffix": "" }, { "first": "Max", "middle": [], "last": "Jakob", "suffix": "" }, { "first": "Anja", "middle": [], "last": "Jentzsch", "suffix": "" }, { "first": "Dimitris", "middle": [], "last": "Kontokostas", "suffix": "" }, { "first": "Pablo", "middle": [ "N" ], "last": "Mendes", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Hellmann", "suffix": "" }, { "first": "Mohamed", "middle": [], "last": "Morsey", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Van Kleef", "suffix": "" }, { "first": "S\u00f6ren", "middle": [], "last": "Auer", "suffix": "" } ], "year": 2015, "venue": "", "volume": "6", "issue": "", "pages": "167--195", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, S\u00f6ren Auer, et al. 2015. Dbpedia - a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web, 6(2):167-195.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Binary codes capable of correcting deletions, insertions, and reversals", "authors": [ { "first": "Vladimir", "middle": [ "I" ], "last": "Levenshtein", "suffix": "" } ], "year": 1966, "venue": "Soviet physics doklady", "volume": "10", "issue": "", "pages": "707--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet physics doklady, volume 10, pages 707-710. Soviet Union.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning multilingual named entity recognition from wikipedia", "authors": [ { "first": "Joel", "middle": [], "last": "Nothman", "suffix": "" }, { "first": "Nicky", "middle": [], "last": "Ringland", "suffix": "" }, { "first": "Will", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Tara", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "James R", "middle": [], "last": "Curran", "suffix": "" } ], "year": 2013, "venue": "Artificial Intelligence", "volume": "194", "issue": "", "pages": "151--175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R Curran. 2013. Learning multilingual named entity recognition from wikipedia. Artificial Intelligence, 194:151-175.", "links": null },
"BIBREF10": { "ref_id": "b10", "title": "Fuzzy ontologies for information retrieval on the www", "authors": [ { "first": "David", "middle": [], "last": "Parry", "suffix": "" } ], "year": 2006, "venue": "Capturing Intelligence", "volume": "", "issue": "", "pages": "21--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Parry. 2006. Fuzzy ontologies for information retrieval on the www. In Capturing Intelligence, volume 1, pages 21-48. Elsevier.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Deriving a large scale taxonomy from wikipedia", "authors": [ { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2007, "venue": "AAAI", "volume": "7", "issue": "", "pages": "1440--1445", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simone Paolo Ponzetto and Michael Strube. 2007. Deriving a large scale taxonomy from wikipedia. In AAAI, volume 7, pages 1440-1445.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Fuzzy ontology modeling by utilizing fuzzy set and fuzzy description logic", "authors": [ { "first": "Xuan", "middle": [ "Hung" ], "last": "Quach", "suffix": "" }, { "first": "Thi Lan Giao", "middle": [], "last": "Hoang", "suffix": "" } ], "year": 2018, "venue": "Modern Approaches for Intelligent Information and Database Systems", "volume": "", "issue": "", "pages": "15--26", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuan Hung Quach and Thi Lan Giao Hoang. 2018. Fuzzy ontology modeling by utilizing fuzzy set and fuzzy description logic. In Modern Approaches for Intelligent Information and Database Systems, pages 15-26. Springer.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Semantic concept model using wikipedia semantic features", "authors": [ { "first": "Abdulgabbar", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Nazlia", "middle": [], "last": "Omar", "suffix": "" }, { "first": "Mohd Juzaiddin", "middle": [], "last": "Ab Aziz", "suffix": "" }, { "first": "Ummi Zakiah", "middle": [], "last": "Zainodin", "suffix": "" }, { "first": "Naomie", "middle": [], "last": "Salim", "suffix": "" } ], "year": 2018, "venue": "Journal of Information Science", "volume": "44", "issue": "4", "pages": "526--551", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abdulgabbar Saif, Nazlia Omar, Mohd Juzaiddin Ab Aziz, Ummi Zakiah Zainodin, and Naomie Salim. 2018. Semantic concept model using wikipedia semantic features. Journal of Information Science, 44(4):526-551.", "links": null },
"BIBREF14": { "ref_id": "b14", "title": "Wikipedia knowledge graph for explainable ai", "authors": [ { "first": "Md Kamruzzaman", "middle": [], "last": "Sarker", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Hitzler", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Srikanth", "middle": [], "last": "Nadella", "suffix": "" }, { "first": "Brandon", "middle": [], "last": "Minnery", "suffix": "" }, { "first": "Ion", "middle": [], "last": "Juvina", "suffix": "" }, { "first": "Michael", "middle": [ "L" ], "last": "Raymer", "suffix": "" }, { "first": "William", "middle": [ "R" ], "last": "Aue", "suffix": "" } ], "year": 2020, "venue": "Iberoamerican Knowledge Graphs and Semantic Web Conference", "volume": "", "issue": "", "pages": "72--87", "other_ids": {}, "num": null, "urls": [], "raw_text": "Md Kamruzzaman Sarker, Joshua Schwartz, Pascal Hitzler, Lu Zhou, Srikanth Nadella, Brandon Minnery, Ion Juvina, Michael L Raymer, and William R Aue. 2020. Wikipedia knowledge graph for explainable ai. In Iberoamerican Knowledge Graphs and Semantic Web Conference, pages 72-87. Springer.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Yago: a core of semantic knowledge", "authors": [ { "first": "Fabian", "middle": [ "M" ], "last": "Suchanek", "suffix": "" }, { "first": "Gjergji", "middle": [], "last": "Kasneci", "suffix": "" }, { "first": "Gerhard", "middle": [], "last": "Weikum", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 16th international conference on World Wide Web", "volume": "", "issue": "", "pages": "697--706", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, pages 697-706.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Automatic fuzzy ontology generation for semantic web", "authors": [ { "first": "Quan Thanh", "middle": [], "last": "Tho", "suffix": "" }, { "first": "Siu Cheung", "middle": [], "last": "Hui", "suffix": "" }, { "first": "Alvis Cheuk M", "middle": [], "last": "Fong", "suffix": "" }, { "first": "Tru Hoang", "middle": [], "last": "Cao", "suffix": "" } ], "year": 2006, "venue": "IEEE transactions on knowledge and data engineering", "volume": "18", "issue": "6", "pages": "842--856", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quan Thanh Tho, Siu Cheung Hui, Alvis Cheuk M Fong, and Tru Hoang Cao. 2006. Automatic fuzzy ontology generation for semantic web. IEEE Transactions on Knowledge and Data Engineering, 18(6):842-856.", "links": null },
"BIBREF17": { "ref_id": "b17", "title": "Using wikipedia to validate the terminology found in a corpus of basic textbooks", "authors": [ { "first": "Jorge", "middle": [], "last": "Vivaldi", "suffix": "" }, { "first": "Luis Adri\u00e1n", "middle": [], "last": "Cabrera-Diego", "suffix": "" }, { "first": "Gerardo", "middle": [], "last": "Sierra", "suffix": "" }, { "first": "Mar\u00eda", "middle": [], "last": "Pozzi", "suffix": "" } ], "year": 2012, "venue": "LREC", "volume": "", "issue": "", "pages": "3820--3827", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jorge Vivaldi, Luis Adri\u00e1n Cabrera-Diego, Gerardo Sierra, and Mar\u00eda Pozzi. 2012. Using wikipedia to validate the terminology found in a corpus of basic textbooks. In LREC, pages 3820-3827.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Finding domain terms using wikipedia", "authors": [ { "first": "Jorge", "middle": [], "last": "Vivaldi", "suffix": "" }, { "first": "Horacio", "middle": [], "last": "Rodr\u00edguez", "suffix": "" } ], "year": 2010, "venue": "LREC", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jorge Vivaldi and Horacio Rodr\u00edguez. 2010. Finding domain terms using wikipedia. In LREC.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Extracting terminology from wikipedia", "authors": [ { "first": "Jorge", "middle": [], "last": "Vivaldi", "suffix": "" }, { "first": "Horacio", "middle": [], "last": "Rodr\u00edguez", "suffix": "" } ], "year": 2011, "venue": "Procesamiento del lenguaje natural", "volume": "47", "issue": "", "pages": "65--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jorge Vivaldi and Horacio Rodr\u00edguez. 2011. Extracting terminology from wikipedia. Procesamiento del lenguaje natural, 47:65-73.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Wikidata: a free collaborative knowledgebase", "authors": [ { "first": "Denny", "middle": [], "last": "Vrande\u010di\u0107", "suffix": "" }, { "first": "Markus", "middle": [], "last": "Kr\u00f6tzsch", "suffix": "" } ], "year": 2014, "venue": "Communications of the ACM", "volume": "57", "issue": "10", "pages": "78--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Denny Vrande\u010di\u0107 and Markus Kr\u00f6tzsch. 2014. Wikidata: a free collaborative knowledgebase. Communications of the ACM, 57(10):78-85.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Creating specialized ontologies using wikipedia: The muninn experience", "authors": [ { "first": "Robert", "middle": [], "last": "Warren", "suffix": "" } ], "year": 2012, "venue": "Proceedings of Wikipedia Academy: Research and Free Knowledge", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Warren. 2012. Creating specialized ontologies using wikipedia: The muninn experience. Berlin, DE: Proceedings of Wikipedia Academy: Research and Free Knowledge (WPAC2012). URL: http://hangingtogether.org.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Wikipedia category contents", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wikipedia. 2021a. Wikipedia category contents. https://en.wikipedia.org/wiki/Wikipedia:Contents/Categories.", "links": null },
"BIBREF23": { "ref_id": "b23", "title": "Wikipedia navigation template", "authors": [ { "first": "", "middle": [], "last": "Wikipedia", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wikipedia. 2021b. Wikipedia navigation template. https://en.wikipedia.org/wiki/Wikipedia:Navigation_template.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "text": "Method overview", "uris": null, "num": null }, "FIGREF1": { "type_str": "figure", "text": "(a) Left: automatically generated crude conceptual base; (b) Right: refined conceptual base, consisting of concepts from sections and text. (a) Left: red crosses depict removed nodes, yellow crosses depict modified nodes, and blue crosses depict a node merged into another node. (b) Right: refined nodes and edges are shown in bold. Yellow-ticked nodes are modified, and blue-ticked nodes are merged. For clarity, we show the five most central nodes and the nodes connecting to them in different colors.", "uris": null, "num": null }, "FIGREF2": { "type_str": "figure", "text": "(a) Left: crude and (b) Right: refined conceptual base from infoboxes. Same-colored nodes represent instances of the same concept.", "uris": null, "num": null } } } }