Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
License:
File size: 10,305 Bytes
{"forum": "ajrveGQBl0", "submission_url": "https://openreview.net/forum?id=ajrveGQBl0", "submission_content": {"keywords": ["Knowledge Graph", "Contextualized Embeddings"], "TL;DR": "A new paradigm for contextualized knowledge graph embeddings", "authorids": ["AKBC.ws/2020/Conference/Paper10/Authors"], "title": "DOLORES: Deep Contextualized Knowledge Graph Embeddings", "authors": ["Anonymous"], "pdf": "/pdf/04b8e92a75e47c3f82fea13623e8daa192beb673.pdf", "subject_areas": ["Information Extraction"], "abstract": "We introduce Dolores, a new knowledge graph embeddings, that effectively capture contextual cues and dependencies among entities and relations. First, we note that short paths on knowledge graphs comprising of chains of entities and relations can encode valuable information regarding their contextual usage. We operationalize this notion by representing knowledge graphs not as a collection of triples but as a collection of entity-relation chains, and learn embeddings using deep neural models that capture such contextual usage. Based on Bi-Directional LSTMs, our model learns deep representations from constructed entity-relation chains. We show that these representations can be easily incorporated into existing models to significantly advance the performance on several knowledge graph tasks like link prediction, triple classification, and multi-hop knowledge base completion (in some cases by 11%).", "paperhash": "anonymous|dolores_deep_contextualized_knowledge_graph_embeddings", "archival_status": "Archival"}, "submission_cdate": 1581705786961, "submission_tcdate": 1581705786961, "submission_tmdate": 1588645357897, "submission_ddate": null, "review_id": ["Z0lci4n7iNw", "qhhvd4aJeD", "UaWF7hxW-80"], "review_url": ["https://openreview.net/forum?id=ajrveGQBl0¬eId=Z0lci4n7iNw", "https://openreview.net/forum?id=ajrveGQBl0¬eId=qhhvd4aJeD", "https://openreview.net/forum?id=ajrveGQBl0¬eId=UaWF7hxW-80"], "review_cdate": [1585208467812, 1585475673581, 1585615562571], "review_tcdate": [1585208467812, 1585475673581, 1585615562571], "review_tmdate": [1585695564051, 1585695563779, 1585695563514], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["AKBC.ws/2020/Conference/Paper10/AnonReviewer2"], ["AKBC.ws/2020/Conference/Paper10/AnonReviewer1"], ["AKBC.ws/2020/Conference/Paper10/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["ajrveGQBl0", "ajrveGQBl0", "ajrveGQBl0"], "review_content": [{"title": "The paper shows good improvement over the existing work. The paper is interesting and the idea is novel.", "review": "This work proposes Dolores that captures contextual dependency between entity and relation pairs. The proposed method improves existing work. The idea of the paper is to incorporate Random walks and a language model. The idea is interesting and novel. However, the explanation of the training of the method is missing.\n\nComments\n- The method has a shortcoming that it does not include the last entity. Let's assume that we have a sequence e_1,r_1,e_2,r_2,...,e_n,r_n,e_n+1. For the forward LSTM, e_n+1 is not included while e1 is not included for the backward LSTM.\n- The loss function of the method is not defined. \n- How to train the method is not clear. Is the method pretrained before each task?\n- For the vector of Dolores, how are the multiple paths incorporated? In the experimental setting, the author generates 20 chains for each node. 
but how to incorporate the multiple chains is not clear\n- For the link prediction task, it would be better to include ConvE + Dolores.\n\nThe paper shows good improvement over the existing work. The paper is interesting and the idea is novel.", "rating": "8: Top 50% of accepted papers, clear accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Bi-LSTM is proposed to learn knowledge graph embeddings. The key idea is not new and the evaluation has some flaws.", "review": "This paper presents a knowledge graph embedding approach that generates chains of entities and relations from a knowledge graph and learns the embeddings using Bi-LSTM. Results are shown to demonstrate that the proposed model can be incorporated into existing predictive models for different knowledge graph related tasks.\n\nThe key idea of using recurrent nets to learn embeddings from knowledge graph paths is not new. The authors try arguing the novelty, e.g., with respect to Das et al. [2017], in terms of 1) the different goal of learning generic embeddings rather than reasoning, and 2) the different way that paths are generated. However, the model by Das et al. also has a representation learning part; path generation of the proposed method cannot be seen as a contribution either as it is from the Node2Vec work.\n\nIn terms of the experimental evaluation, my main question is whether the comparison is fair. The authors compare original versions of existing methods with such methods incorporated with Dolores. This appears to be a comparison between a model without pretraining vs. the model with pretraining using Dolores. We all know that pretraining helps improve model performance and so, it is not surprising that a model incorporated with Dolores (e.g., ConvKB+Dolores) outperforms its original version and other comparison models without pretraining (e.g., Dolores, RNN-Path-entity). A more fair comparison would be comparing the effect of Dolores as a pretraining method with other pretraining methods.\n\nSome technical details in the method and experiment sections need to be clarified:\n- Section 3.4, \"By accepting triples or paths from certain tasks as input (not the paths generated by path generator)\" <= how exactly are paths obtained frm given tasks?\n- How are the weights of embeddings at different layers learned in Eq. 4?\n- In Section 4.1, it is mentioned that 20 chains are generated for each node. Is it always possible to extract 20 chains for any node? And, why is the parameter set to 20? How do different settings of this parameter affect the result?\n", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Representation learning yielding nice gains on a number of tasks. Some definitions could be clarified.", "review": "This paper presents a method of representing knowledge graph nodes and relations\nby sampling paths and applying a sequence model. The approach is motivated by\nrecent advances in building contextualized word representations (in particular\nElMO) and the learned representations are applied to a number of downstream\ntasks, with positive effects. This approach differs from other applications of\nRNNs to path modeling in its focus on learning reusable representations by\nmodeling random walks, rather than attempting to learn to model paths of some\nspecific type.\n\nThe results are compelling. 
Dolores seems to yield representations that can be\napplied effectively to a range of downstream tasks and models. I would like to\nsee more discussion of the models enhanced (ConvKB is introduced in the caption\nof Table 3. only), and I would also like to see how much Dolores could improve\nthe non SOTA approaches. However, the current set of evaluations show that\nDolores provides significant gains over existing work in a number of settings.\n\nPoints for improvement:\n\nI found the model description to be confusing. We are told repeatedly that the\napproach is building representations of [entity, relation] pairs. It is not\nclear from the description whether we are supposed to assume that the\nrepresentation of this pair is decomposed into separate, concatenated, entity\nand relation components. From the description of the model, it seems that the\noutput layer applies a softmax over all possible (entity, relation)\npairs. Conversely, Figure 2 seems to illustrate a decomposition of the output\nlayer into concatenated entity and relation representations and Table 5\nillustrates nearest neighbors of a single entity node (in context). Section 3\nshould be adapted to very explicitly state the nature of the predictive output\nlayer, and the loss that is used to train. \n\n\nSince Dolores' training procedure is so different from that of the downstream\ntasks, I would like to see some discussion of how the authors avoid overlap\nbetween pre-training and test graphs for e.g. FB15k.", "rating": "7: Good paper, accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1588285349062, "meta_review_tcdate": 1588285349062, "meta_review_tmdate": 1588341535846, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "The paper introduces a simple and effective approach to obtaining entity embeddings (relying on RNN-encoded walks and ELMo style losses). The approach works well, is simple, and well-motivated.\n\nWhile the underlying principles have been studied (e.g., RNN embeddings of walks or learning representations relying on walks as in DeepWalk), there is enough novelty in the proposed method. The other two reviewers are positive. \n\nWe would encourage the authors to address the reviewers' comments (e.g., regarding clarity in R3; I had similar issues with understanding the model structure and the learning procedure / objective). \n\nIt may be interesting to discuss the relation with graph neural networks (esp. with relational GCNs), which also learn a contextualized representation of entities, using similar types of losses. It may make sense to discuss why linearization can be beneficial (from representation learning or efficiency perspectives).", "meta_review_readers": ["everyone"], "meta_review_writers": ["AKBC.ws/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=ajrveGQBl0¬eId=E3jpoC6rGBs"], "decision": "Accept"} |
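
Each record in this dataset is a single JSON object like the one above, holding the submission metadata, the full review texts with ratings, and the final decision. A minimal loading sketch, assuming the records are stored one per line as JSON Lines; the file name below is hypothetical, not part of this card:

```python
import json

def load_records(path):
    """Yield one parsed record per non-empty line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

def review_ratings(record):
    """Parse the leading integer out of rating strings such as
    '8: Top 50% of accepted papers, clear accept'."""
    return [int(r["rating"].split(":")[0]) for r in record["review_content"]]

# Hypothetical usage; the path is an assumption.
for rec in load_records("akbc2020_reviews.jsonl"):
    title = rec["submission_content"]["title"]
    print(title, rec["decision"], review_ratings(rec))
```

Pairing the review texts or ratings with the `decision` field gives one natural target for the Text Classification task listed above.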
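For readers skimming the abstract in the record: the core idea it describes is representing a knowledge graph not as triples but as entity-relation chains produced by random walks (the reviews mention 20 chains per node, generated node2vec-style). A toy illustration of that chain construction, not the authors' code; the graph and all names are invented:

```python
import random

# Toy knowledge graph as adjacency: head -> list of (relation, tail) edges.
KG = {
    "Paris": [("capital_of", "France")],
    "France": [("member_of", "EU"), ("borders", "Spain")],
    "Spain": [("member_of", "EU")],
    "EU": [],
}

def random_chain(kg, start, max_hops=3):
    """Walk the graph from `start`, emitting an alternating chain
    e_1, r_1, e_2, r_2, ... as described in the abstract."""
    chain = [start]
    node = start
    for _ in range(max_hops):
        edges = kg.get(node)
        if not edges:  # dead end: no outgoing relations
            break
        rel, nxt = random.choice(edges)
        chain += [rel, nxt]
        node = nxt
    return chain

# e.g. 20 chains per node, matching the setting reported in the reviews.
chains = [random_chain(KG, n) for n in KG for _ in range(20)]
```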