{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:13:26.833364Z"
},
"title": "Can Knowledge Graph Embeddings Tell Us What Fact-checked Claims Are About?",
"authors": [
{
"first": "Valentina",
"middle": [],
"last": "Beretta",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IMT Mines Al\u00e8s",
"location": {
"settlement": "Al\u00e8s",
"country": "France"
}
},
"email": ""
},
{
"first": "Katarina",
"middle": [],
"last": "Boland",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "GESIS",
"location": {
"settlement": "Cologne",
"country": "Germany"
}
},
"email": "katarina.boland@gesis.org"
},
{
"first": "Luke",
"middle": [],
"last": "Lo Seen",
"suffix": "",
"affiliation": {
"laboratory": "LIRMM",
"institution": "CNRS",
"location": {
"settlement": "Montpellier",
"country": "France"
}
},
"email": ""
},
{
"first": "S\u00e9bastien",
"middle": [],
"last": "Harispe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IMT Mines Al\u00e8s",
"location": {
"settlement": "Al\u00e8s",
"country": "France"
}
},
"email": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Todorov",
"suffix": "",
"affiliation": {
"laboratory": "LIRMM",
"institution": "CNRS",
"location": {
"settlement": "Montpellier",
"country": "France"
}
},
"email": "todorov@lirmm.fr"
},
{
"first": "Andon",
"middle": [],
"last": "Tchechmedjiev",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IMT Mines Al\u00e8s",
"location": {
"settlement": "Al\u00e8s",
"country": "France"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The web offers a wealth of discourse data that help researchers from various fields analyze debates about current societal issues and gauge the effects on society of important phenomena such as misinformation spread. Such analyses often revolve around claims made by people about a given topic of interest. Fact-checking portals offer partially structured information that can assist such analysis. However, exploiting the network structure of such online discourse data is as of yet under-explored. We study the effectiveness of using neural-graph embedding features for claim topic prediction and their complementarity with text embeddings. We show that graph embeddings are modestly complementary with text embeddings, but the low performance of graph embedding features alone indicate that the model fails to capture topological features pertinent of the topic prediction task.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The web offers a wealth of discourse data that help researchers from various fields analyze debates about current societal issues and gauge the effects on society of important phenomena such as misinformation spread. Such analyses often revolve around claims made by people about a given topic of interest. Fact-checking portals offer partially structured information that can assist such analysis. However, exploiting the network structure of such online discourse data is as of yet under-explored. We study the effectiveness of using neural-graph embedding features for claim topic prediction and their complementarity with text embeddings. We show that graph embeddings are modestly complementary with text embeddings, but the low performance of graph embedding features alone indicate that the model fails to capture topological features pertinent of the topic prediction task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Analysing claims shared on social media is of growing interest, from social/political sciences to Artificial Intelligence (AI). Such analyses are often performed with respect to a specific set of topics (e.g. \"immigration\" or \"abortion\") that allow carrying out targeted studies of trends, understanding/quantifying hidden biases (Garimella et al., 2018) , discovering stances towards those topics (Wang et al., 2018) or their underlying falsehood propagation patterns (Vosoughi et al., 2018) . Fact-checking portals offer a wealth of information about claims, their truth values and their sources. To analyse claims about a given topic, scientists need (1) access to heterogeneous repositories of claims and (2) the prior knowledge of which entities are mentioned in claims that belong to a topic (as defined by thematic keywords in each portal).",
"cite_spans": [
{
"start": 330,
"end": 354,
"text": "(Garimella et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 398,
"end": 417,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 469,
"end": 492,
"text": "(Vosoughi et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As a (partial) response to (1), recent work has presented ClaimsKG-a large dynamic knowledge graph (KG) of fact-checked claims harvested from various fact-checking portals (like politifact.com) and their metadata (e.g. truth values, authors, sources, links to DBpedia) (Tchechmedjiev et al., 2019 ) (cf. Figure 1 ). 1 ClaimsKG includes thematic keywords provided by the fact-checking portals (e.g. \"elections\" or \"taxes\"). However, using them to filter claims by topic is problematic as: (1) not all claims are annotated;",
"cite_spans": [
{
"start": 269,
"end": 296,
"text": "(Tchechmedjiev et al., 2019",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 304,
"end": 312,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) the keywords are very heterogeneous (granularity or level of abstraction; e.g. \"economy\" vs. \"Kim Kardashian\"); (3) there is no standardization within or across portals;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(4) there are no links between keywords grouping related concepts and (5) existing annotations are often incomplete. We address this need for normalization and for providing missing topic annotations of claims by investigating representation learning methods for claims. Representation learning for text (Devlin et al., 2018; Li and Yang, 2018) and graphs (Cai et al., 2018; Goyal and Ferrara, 2018) has been successfully applied to many tasks from entity linking (Radhakrishnan et al., 2018) to link prediction in large KGs (Kazemi and Poole, 2018) allowing for KG completion/fusion. However, the ability of these methods to represent claims and to transfer to other machine learning (ML) tasks (e.g. predicting the topic(s) of a claim) has not been investigated. We evaluate the capability of link prediction graph embeddings to capture pertinent information from the graph structure in order to benefit downstream tasks. We compare the performance resulting from using (1) graph embeddings (CP/N3 model on ClaimsKG enriched with relations between mentions coming from DBPedia) (2) claim textual embeddings, or (3) different combinations thereof, as features in the task of supervised multi- label claim topic prediction on a gold dataset. This task was chosen given that (1) it is significantly more challenging than typical topic classification tasks and (2) we can control the parameters of the evaluation by design and check for desirable properties captured by link prediction graph embeddings. We evaluate the use of claim vectors as features with or without the addition of neighbourhood vectors (outgoing relations and targets). We then perform ablation studies over different features to better characterise what is captured by the graph embeddings. Our results show that state-of-the-art link prediction models fail to capture equivalence structures and transfer poorly.",
"cite_spans": [
{
"start": 304,
"end": 325,
"text": "(Devlin et al., 2018;",
"ref_id": "BIBREF2"
},
{
"start": 326,
"end": 344,
"text": "Li and Yang, 2018)",
"ref_id": "BIBREF7"
},
{
"start": 356,
"end": 374,
"text": "(Cai et al., 2018;",
"ref_id": "BIBREF1"
},
{
"start": 375,
"end": 399,
"text": "Goyal and Ferrara, 2018)",
"ref_id": "BIBREF4"
},
{
"start": 464,
"end": 492,
"text": "(Radhakrishnan et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We present our semi-automatic approach to build a gold standard dataset of annotated claims for topic classification. Since ClaimsKG covers a wide range of different topics, annotating a random sample of claims would not yield a sufficient number of claims per topic. Thus, we identified a set of 7 topics that have a high number of claims in ClaimsKG and are relevant for claim-related studies: \"healthcare \" (1777), \"taxes\" (1519), \"elections\" (1074), \"crime\" (947), \"education\" (1263), \"immigration\" (1147) and \"environment\" (567). We then automatically identify claims potentially referring to these topics using the keywords assigned by the fact-checking sites. First, we mapped all keywords to common high-level concepts in two upper level taxonomies: the TheSoz thesaurus of social sciences (Zapilko et al., 2013) and the UNESCO Thesaurus 2 employing a dictionary-based entity linking approach (the concepts are noted as TOPIC in Figure 1 ). We then extracted a random subset of claims that are linked to at least one of the chosen topics through their keywords. Note that one claim can correspond to several concepts thus creating a multi-label dataset. To validate and complete the semi-automatically assigned labels, we finally asked 5 annotators to re-annotate the dataset and assign the claims to all applicable topics. This gold standard, composed of 629 annotated claims, has a Krippendorff's \u03b1 annotator agreement (Masi distance) (Passonneau, 2006) of 0.75 which is a reasonably high agreement but also shows that the task is not trivial. For example, consider the claim \"Nobody is leaving Memphis. That's a myth.\" uttered by a city councilman, with the keywords \"Population\" and \"Census\" assigned by the fact-checking site. 3 At first glance, none of the selected topics seems to apply. However, the claim review explains that this claim had been uttered in context of a debate concerning the fear that a proposed onetime tax for schools might make people leave the city with this claim defending the tax. Thus, this claim may be interpreted as being about \"taxes\" and even \"education\", depending on how much of the pragmatic context is taken into account. In the final dataset 4 the topic distribution is the following: \"healthcare\" (25%), \"taxes\" (21%), \"elections\" (17%), \"crime\" (16%), \"education\" (13%), \"immigration\" (12%) and \"environment\" (10%).",
"cite_spans": [
{
"start": 798,
"end": 820,
"text": "(Zapilko et al., 2013)",
"ref_id": "BIBREF14"
},
{
"start": 1445,
"end": 1463,
"text": "(Passonneau, 2006)",
"ref_id": "BIBREF8"
},
{
"start": 1740,
"end": 1741,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 937,
"end": 945,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Claim Topic Classification Dataset",
"sec_num": "2"
},
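The agreement figure above (Krippendorff's alpha with the MASI set distance) can be reproduced with off-the-shelf tooling. The following is a minimal sketch, not the authors' code (the paper does not name a tool), assuming NLTK's AnnotationTask with masi_distance; the annotator names, claim IDs and label sets are invented for illustration.

```python
# Minimal sketch: multi-label inter-annotator agreement with Krippendorff's alpha
# and the MASI set distance (Passonneau, 2006), using NLTK. Toy data only.
from nltk.metrics import masi_distance
from nltk.metrics.agreement import AnnotationTask

# Each record is (annotator, item, frozenset of topic labels assigned to the claim).
annotations = [
    ("ann1", "claim_001", frozenset({"taxes", "education"})),
    ("ann2", "claim_001", frozenset({"taxes"})),
    ("ann1", "claim_002", frozenset({"healthcare"})),
    ("ann2", "claim_002", frozenset({"healthcare"})),
]

task = AnnotationTask(data=annotations, distance=masi_distance)
print(f"Krippendorff's alpha (MASI): {task.alpha():.2f}")
```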
{
"text": "Graph embedding models. We train 5 a CAN-DECOM/PARAFAC model with N3 regularization (CP-N3) (Lacroix et al., 2018) . 6 We computed a model for ClaimsKG (CKG) and a variant without keywords (CKG-KW) needed in the ablation studies. The link prediction performance, reported in Table 1 , is lower than for YAGO3-10, the standard dataset most similar to ClaimsKG at an equivalent rank (MRR = 0.54, HITS@[1, 3, 10]=[47%, 58%, 68%]): ClaimsKG is larger and sparser (fewer triples per relation, more disconnected structure), which could explain this.",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "(Lacroix et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 117,
"end": 118,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 275,
"end": 282,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Representation Learning and Evaluation Pipeline",
"sec_num": "3"
},
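For concreteness, here is a minimal sketch of the CP-N3 scorer described above. It is a simplified re-implementation for illustration, not the kbc code linked in the footnotes; the rank (50) and N3 coefficient (0.005) mirror footnote 6, while the training-loop details in the closing comment are assumptions.

```python
# Minimal sketch (simplified, not the kbc library) of a CP link prediction model
# with the weighted N3 regularizer of Lacroix et al. (2018).
import torch
import torch.nn as nn

class CP(nn.Module):
    def __init__(self, n_entities: int, n_relations: int, rank: int = 50):
        super().__init__()
        # CP keeps separate subject (lhs) and object (rhs) embeddings per entity.
        self.lhs = nn.Embedding(n_entities, rank)
        self.rel = nn.Embedding(n_relations, rank)
        self.rhs = nn.Embedding(n_entities, rank)
        for emb in (self.lhs, self.rel, self.rhs):
            nn.init.normal_(emb.weight, std=1e-3)

    def forward(self, triples: torch.Tensor) -> torch.Tensor:
        # triples: (batch, 3) of (subject, relation, object) indices.
        lhs = self.lhs(triples[:, 0])
        rel = self.rel(triples[:, 1])
        # Score every entity as a candidate object: (batch, n_entities).
        return (lhs * rel) @ self.rhs.weight.t()

    def n3_penalty(self, triples: torch.Tensor, coeff: float = 0.005) -> torch.Tensor:
        # Nuclear 3-norm regularizer: sum of cubed absolute factor values.
        factors = (self.lhs(triples[:, 0]), self.rel(triples[:, 1]), self.rhs(triples[:, 2]))
        return coeff * sum(f.abs().pow(3).sum() for f in factors) / triples.shape[0]

# Training (assumed setup) would minimize a full cross-entropy over object scores
# plus the penalty, e.g.:
#   loss = F.cross_entropy(model(batch), batch[:, 2]) + model.n3_penalty(batch)
```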
{
"text": "Feature Fusion and Evaluation Pipeline. The graph embeddings are used as features along with text embeddings in a multi-class, multi-label topic classification task. Given the small size of the dataset it was difficult to use supervised neural encoding architectures to learn intermediary representations, e.g. Bi-LSTM or Transformer, (no meaningful convergence), we rather used a classical machine learning pipeline with standard classifiers from Scikit-learn (+grid-search on held-out training data and 10-fold cross-validation). 7 Text embeddings for claims were computed through a SOTA unsupervised pooling method (Akbik et al., 2019) implemented in the flair 8 library on the basis of language models from the transformers repository. We tested most base and large models: DistilRoberta (base models) and GPT-2 (large models) consistently performed best and were retained in the evaluation.",
"cite_spans": [
{
"start": 618,
"end": 638,
"text": "(Akbik et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representation Learning and Evaluation Pipeline",
"sec_num": "3"
},
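A minimal sketch of how such pooled claim text embeddings can be produced with flair follows. The exact flair classes and checkpoints used by the authors are not specified, so "distilroberta-base" and the mean-pooling configuration are assumptions standing in for the DistilRoberta setting.

```python
# Minimal sketch (assumed configuration, not the authors' code): pooled claim text
# embeddings via flair, built on a transformers language model.
from flair.data import Sentence
from flair.embeddings import DocumentPoolEmbeddings, TransformerWordEmbeddings

# Contextual token embeddings, mean-pooled into one fixed-size vector per claim.
token_embeddings = TransformerWordEmbeddings("distilroberta-base")
claim_encoder = DocumentPoolEmbeddings([token_embeddings], pooling="mean")

claim = Sentence("Nobody is leaving Memphis. That's a myth.")
claim_encoder.embed(claim)
text_features = claim.embedding.detach().cpu().numpy()  # feature vector for the classifier
print(text_features.shape)
```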
{
"text": "Comparison and combination of graph and text embeddings. We explore the performance of graph embedding vs. text embedding features and whether there is any complementarity of the two. We train and evaluate a ridge classifier (bayesian ridge regressor used as a classifier) 9 as per section 3 by using [ 1 . We use the topic associations extracted from the graph for the construction of the dataset (pre human-annotation) as a baseline. If graph embeddings can capture the equivalence structures that were used to create the baseline effectively, we expect that using them as features for the topic classification task will allow us to reach similar performance to that of the baseline. Graph embeddings alone lead to poor perfor-7 Code: https://github.com/claimskg/claimskg-embeddings 8 https://github.com/zalandoresearch/flair 9 We evaluated several classifiers from scikit-learn, but report only RidgeClassifier as it consistently led to better average accuracy by a significant margin mance, but there is a small complementarity with text embeddings. Adding graph embeddings to GPT-2 Large lowers performance: it is possible that most of the claims and associated reviews are part of GPT2's training data, thus making any information captured from the metadata superfluous. The baseline being the basis for the gold annotations prior to human annotation, it is expected to achieve a very high performance: given the poor performance of graph embedding features alone, it is likely that the model fails to capture these equivalence structures effectively.",
"cite_spans": [
{
"start": 828,
"end": 829,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "4"
},
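The evaluation setup described above can be sketched as follows. This is not the released claimskg-embeddings pipeline but a hedged illustration: X_graph, X_text and Y are random placeholders for the claim LHS vectors, pooled text embeddings and multi-label topic matrix, and the one-vs-rest wrapper around RidgeClassifier is one reasonable way to obtain the multi-label behaviour the paper reports.

```python
# Minimal sketch of the evaluation setup: concatenated graph + text features feeding
# a one-vs-rest ridge classifier under 10-fold cross-validation. Placeholder data only.
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
n_claims, n_topics = 629, 7
X_graph = rng.normal(size=(n_claims, 50))          # claim LHS vectors from the CP-N3 model
X_text = rng.normal(size=(n_claims, 768))          # pooled DistilRoberta claim embeddings
Y = rng.integers(0, 2, size=(n_claims, n_topics))  # multi-label topic indicator matrix

# Feature fusion by simple concatenation, as in the (1) & (2) configuration.
X = np.hstack([X_graph, X_text])

clf = OneVsRestClassifier(RidgeClassifier())
scores = cross_val_score(clf, X, Y, cv=10, scoring="accuracy")
print(f"10-fold subset accuracy: {scores.mean():.3f}")
```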
{
"text": "Impact of neighbourhood features. The LHS claim embeddings did not capture much useful information for the task. Given the local nature of the link prediction training criterion, do we need to consider the embeddings of the neighbourhood to find useful features that capture the equivalence structures of the baseline? For each neighbour (author, date, sources, mentions in review and claim), we retrieve the RHS and relation vectors. We aggregate by (1) flat concatenation (Flat Concat.); (2) concat. of triple vectors (claim LHS\u00d7relation\u00d7neighbour RHS -Triple Concat.). Table 2 presents the results: using the neighbourhood brings a small improvement (+8.39/CKG, +2.80/CKG+TEDR, +0.60/CKG+GPT2), compared to CKG alone or in combination with text embeddings, particularly using concatenation, although we are far from the baseline.",
"cite_spans": [],
"ref_spans": [
{
"start": 572,
"end": 579,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "4"
},
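The two aggregation schemes can be illustrated with a short sketch. This is an interpretation of the description above rather than the authors' code: "Flat Concat." appends each neighbour's relation and RHS vectors, while "Triple Concat." appends the elementwise product claim LHS * relation * neighbour RHS per neighbour; the vectors below are random toy values.

```python
# Minimal sketch (one interpretation) of the two neighbourhood aggregation schemes.
import numpy as np

rank = 50
rng = np.random.default_rng(0)
claim_lhs = rng.normal(size=rank)
# (relation vector, neighbour RHS vector) for e.g. author, date, sources, mentions.
neighbours = [(rng.normal(size=rank), rng.normal(size=rank)) for _ in range(4)]

def flat_concat(lhs: np.ndarray, neigh) -> np.ndarray:
    # Append each neighbour's relation and RHS vectors to the claim vector.
    parts = [lhs] + [np.concatenate([rel, rhs]) for rel, rhs in neigh]
    return np.concatenate(parts)

def triple_concat(lhs: np.ndarray, neigh) -> np.ndarray:
    # Append the elementwise triple product lhs * relation * rhs per neighbour.
    parts = [lhs] + [lhs * rel * rhs for rel, rhs in neigh]
    return np.concatenate(parts)

print(flat_concat(claim_lhs, neighbours).shape)    # (rank * (1 + 2 * n_neighbours),)
print(triple_concat(claim_lhs, neighbours).shape)  # (rank * (1 + n_neighbours),)
```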
{
"text": "Ablation studies. For the link prediction models, the most informative features arise from the claim/keyword/topic equivalence structures, as they are used to generate the graph baseline (81% accuracy). To understand if those structures are captured beyond relying on classification performance, we investigate three settings: (1) text embeddings of keywords only (KW only) (2) graph embedding without the keyword subgraph (no keywords, no topic concepts, in green in Figure 1 -CKG No KW); (3) Text embedding of all text fields (claim, review headline, author, keywords, date). Table 2 presents the results. When we remove the keyword subgraph, the graph embedding features become irrelevant for the task (0.60% for CKG No KW). Text embeddings of only keywords lead to a classification performance similar to CKG embeddings with keywords (-4.20/DR, -2.40/GPT2), but capture somewhat different information as their combination leads to an improvement over CKG alone (+10.60 with CKG+GPT2). Concatenating neighbourhood vectors for CKG without keywords leads to lower performance, meaning that the information captured that is useful for this task is captured from the keyword structures. In the last setting, we can verify if this additional information captured by claim graph embeddings is similar to what we get from augmented text embeddings that include all the text from the immediate neighbourhood: the results indicate a small complementary with GPT2 (best overall result at 76.2% accuracy), but degraded performance with DR.",
"cite_spans": [],
"ref_spans": [
{
"start": 468,
"end": 476,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 578,
"end": 585,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "4"
},
{
"text": "Discussion. We have been able to determine, as hypothesized that most of the useful information learned by the link prediction graph embeddings comes from the subgraph pertaining to keywords (green nodes in Figure 1 ), however the overall resulting classification performance with only embedding features is low (with or without neighbourhood), especially compared to the baseline. One hypothesis could be that the structure of the keyword subgraph is captured to some extent in the embeddings of claims and in the neighbourhood, but since the link prediction performance itself is low compared to standard graphs, there is only some part of the structure that the graph embedding model manages to capture. Of course, the size of the topic classification dataset plays a role in the classification performance, however if the representations learned on CKG (which is in no-way a small dataset by link prediction standards) were able to capture the relevant structures, we should be able to reach results closer to the baseline and to text embedding features (on the same dataset).",
"cite_spans": [],
"ref_spans": [
{
"start": 207,
"end": 215,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "4"
},
{
"text": "In the setting of this controlled topic classification task, the structures in question are the equivalence cliques between claims, keywords and topic concepts, which are more complex than the direct links that the local link prediction objective is meant to capture. Although recent advanced in link prediction make models capable of capturing specific formal properties of a relation (transitive, reflexive, anti-symmetric, etc.) in multi-relational graphs, they do not go beyond direct links. Given that such models are increasingly used to infer new relations in complex KGs (e.g., in biomedical informatics), this is a significant limitation of using these approaches for the inference of complex relations or for a downstream classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation and Discussion",
"sec_num": "4"
},
{
"text": "We evaluated the effectiveness of claim embeddings as features in a topic classification dataset, produced specifically to allow probing how specific features impact classification performance. We evaluate several strategies for feature retrieval from graph embeddings and combine them with text embedding features (flair + DistilRoberta/GPT2). We found a small complimentary between the features, however, the low accuracy resulting from using graph embeddings alone (compared to the baseline) and the ablation studies show that the graph embedding model's reliance on a local link prediction objective likely limits the ability of the model to capture more complex relationships (e.g. equivalence cliques between claims, keywords and topic concepts). This echoes some of the open-problems identified in the 2019 Graph Representation Learning workshop at NeurIPS (Sumba and Ortiz, 2019) . Given that link prediction models are increasingly used with complex KGs to infer new relations (KG completion), this limitation is something to keep in mind and should drive researchers working on knowledge graphs to explore more general graph representation learning approaches such as graph neural networks or random-walk approaches.",
"cite_spans": [
{
"start": 864,
"end": 887,
"text": "(Sumba and Ortiz, 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "https://data.gesis.org/claimskg/site/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://tinyurl.com/y6ysg4ju 4 https://github.com/claimskg/claim topics dataset 5 Code: https://github.com/twktheainur/kbc 6 Current SOTA. Optimal parameters within hardware constraints (GeForce 2080Ti with 11GB VRAM) -CP Model, Rank 50, Adagrad optimizer, 0.1 learning rate, N3 regularizer with coefficient 0.005, 30 epochs max, batch size 150 -Approx. 3h/epoch \u00d7 3 models \u00d7 30 epochs \u00d7275W 270h \u00d7 275W74.25KW h@$0.31/KW h $23",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Pooled contextualized embeddings for named entity recognition",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Akbik",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Bergmann",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Vollgraf",
"suffix": ""
}
],
"year": 2019,
"venue": "Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "724--728",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1078"
]
},
"num": null,
"urls": [],
"raw_text": "Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named entity recognition. In NACACL: Human Language Technologies, Volume 1 (Long and Short Papers), pages 724-728, Minneapolis, Minnesota. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A comprehensive survey of graph embedding: Problems, techniques, and applications",
"authors": [
{
"first": "Hongyun",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Kevin Chen-Chuan",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "30",
"issue": "9",
"pages": "1616--1637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyun Cai, Vincent W Zheng, and Kevin Chen- Chuan Chang. 2018. A comprehensive survey of graph embedding: Problems, techniques, and appli- cations. IEEE Transactions on Knowledge and Data Engineering, 30(9):1616-1637.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Quantifying controversy on social media",
"authors": [
{
"first": "Kiran",
"middle": [],
"last": "Garimella",
"suffix": ""
},
{
"first": "Gianmarco",
"middle": [],
"last": "De Francisci",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Morales",
"suffix": ""
}
],
"year": 2018,
"venue": "Aristides Gionis, and Michael Mathioudakis",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiran Garimella, Gianmarco De Francisci Morales, Aristides Gionis, and Michael Mathioudakis. 2018. Quantifying controversy on social media. ACM Trans. on Soc. Comp., 1(1):3.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Graph embedding techniques, applications, and performance: A survey. Knowledge-Based Systems",
"authors": [
{
"first": "Palash",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Emilio",
"middle": [],
"last": "Ferrara",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "151",
"issue": "",
"pages": "78--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Palash Goyal and Emilio Ferrara. 2018. Graph embed- ding techniques, applications, and performance: A survey. Knowledge-Based Systems, 151:78-94.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Simple embedding for link prediction in knowledge graphs",
"authors": [
{
"first": "David",
"middle": [],
"last": "Seyed Mehran Kazemi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Poole",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31",
"volume": "",
"issue": "",
"pages": "4284--4295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seyed Mehran Kazemi and David Poole. 2018. Simple embedding for link prediction in knowledge graphs. In S. Bengio, H. Wallach, H. Larochelle, K. Grau- man, N. Cesa-Bianchi, and R. Garnett, editors, Ad- vances in Neural Information Processing Systems 31, pages 4284-4295. Curran Associates, Inc.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Canonical tensor decomposition for knowledge base completion",
"authors": [
{
"first": "Timothee",
"middle": [],
"last": "Lacroix",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Obozinski",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "2863--2872",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothee Lacroix, Nicolas Usunier, and Guillaume Obozinski. 2018. Canonical tensor decomposi- tion for knowledge base completion. In Proceed- ings of the 35th International Conference on Ma- chine Learning, volume 80 of Proceedings of Ma- chine Learning Research, pages 2863-2872, Stock- holmsm\u00e4ssan, Stockholm Sweden. PMLR.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word embedding for understanding natural language: a survey",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "Guide to Big Data Applications",
"volume": "",
"issue": "",
"pages": "83--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Li and Tao Yang. 2018. Word embedding for un- derstanding natural language: a survey. In Guide to Big Data Applications, pages 83-104. Springer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Passonneau",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Passonneau. 2006. Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06), Genoa, Italy. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Elden: Improved entity linking using densified knowledge graphs",
"authors": [
{
"first": "Priya",
"middle": [],
"last": "Radhakrishnan",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2018,
"venue": "NACACL: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1844--1853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Priya Radhakrishnan, Partha Talukdar, and Vasudeva Varma. 2018. Elden: Improved entity linking us- ing densified knowledge graphs. In NACACL: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 1844-1853.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Between the interaction of graph neural networks and semantic web",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Sumba",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [],
"last": "Ortiz",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 NeurIPS Workshop on Graph Representation Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Sumba and Jos\u00e9 Ortiz. 2019. Between the inter- action of graph neural networks and semantic web. In Proceedings of the 2019 NeurIPS Workshop on Graph Representation Learning.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Claimskg -a knowledge graph of fact-checked claims",
"authors": [
{
"first": "Andon",
"middle": [],
"last": "Tchechmedjiev",
"suffix": ""
},
{
"first": "Pavlos",
"middle": [],
"last": "Fafalios",
"suffix": ""
},
{
"first": "Katarina",
"middle": [],
"last": "Boland",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Dietze",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Zapilko",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Todorov",
"suffix": ""
}
],
"year": 2019,
"venue": "International Semantic Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andon Tchechmedjiev, Pavlos Fafalios, Katarina Boland, Stefan Dietze, Benjamin Zapilko, and Kon- stantin Todorov. 2019. Claimskg -a knowledge graph of fact-checked claims. In International Se- mantic Web Conference. Springer.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The spread of true and false news online",
"authors": [
{
"first": "Soroush",
"middle": [],
"last": "Vosoughi",
"suffix": ""
},
{
"first": "Deb",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Sinan",
"middle": [],
"last": "Aral",
"suffix": ""
}
],
"year": 2018,
"venue": "Science",
"volume": "359",
"issue": "6380",
"pages": "1146--1151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science, 359(6380):1146-1151.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Relevant document discovery for factchecking articles",
"authors": [
{
"first": "Xuezhi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Cong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Baumgartner",
"suffix": ""
},
{
"first": "Flip",
"middle": [],
"last": "Korn",
"suffix": ""
}
],
"year": 2018,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "525--533",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhi Wang, Cong Yu, Simon Baumgartner, and Flip Korn. 2018. Relevant document discovery for fact- checking articles. In WWW, pages 525-533.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Thesoz: A skos representation of the thesaurus for the social sciences",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Zapilko",
"suffix": ""
},
{
"first": "Johann",
"middle": [],
"last": "Schaible",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Mayr",
"suffix": ""
},
{
"first": "Brigitte",
"middle": [],
"last": "Mathiak",
"suffix": ""
}
],
"year": 2013,
"venue": "Semantic Web",
"volume": "4",
"issue": "3",
"pages": "257--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Zapilko, Johann Schaible, Philipp Mayr, and Brigitte Mathiak. 2013. Thesoz: A skos representa- tion of the thesaurus for the social sciences. Seman- tic Web, 4(3):257-263.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Simplified structure of ClaimsKG and graph baseline structures. KW=Keyword, C=Claim.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "CKG] graph embedding features (claim left-hand side vector), [(2) TEDR, (3) TEGPT2] text embedding features (pooled token vectors) from DistilRoberta (DR) and GPT2, [(1) & (2), (1) & (3)] the combination of both (concatenation), as reported in the first segment ofTable 2",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "Link prediction performance for ClaimsKG graph embeddings (Standard metrics: Mean Reciprocal Rank (MRR), HITS@1, HITS@3, HITS@10).",
"html": null,
"content": "<table/>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "",
"html": null,
"content": "<table/>"
}
}
}
}