Tasks: Text Classification
Modalities: Text
Formats: text
Languages: English
Size: 10K - 100K
File size: 16,107 bytes
{"forum": "Hkg7rbcp67", "submission_url": "https://openreview.net/forum?id=Hkg7rbcp67", "submission_content": {"title": "Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications", "authors": ["Pouya Pezeshkpour", "Yifan Tian", "Sameer Singh"], "authorids": ["pezeshkp@uci.edu", "yifant@uci.edu", "sameer@uci.edu"], "keywords": ["Adversarial Attack", "Knowledge Base Completion"], "TL;DR": "", "abstract": "Representing entities and relations in an embedding space is a well-studied approach for machine learning on relational data. Existing approaches, however, primarily focus on improving ranking metrics and ignore other aspects of knowledge base representations, such as robustness, interpretability, and ability to detect errors. In this paper, we propose adversarial attacks on link prediction models (AALP): identifying the fact to add into or remove from the knowledge graph that changes the prediction of a target fact. Using these attacks, we are able to identify the most influential related fact for a predicted link and investigate the sensitivity of the model to additional made-up facts. We introduce an efficient approach to estimate the effect of making a change by approximating the change in the embeddings upon altering the knowledge graph. In order to avoid the combinatorial search over all possible facts, we introduce an inverter function and gradient-based search to identify the adversary in a continuous space. We demonstrate that our models effectively attack the link prediction models by reducing their accuracy between 6-45% for different metrics. Further, we study patterns in the most influential neighboring facts, as identified by the adversarial attacks. Finally, we use the proposed approach to detect incorrect facts in the knowledge base, achieving up to 55% accuracy in identifying errors.", "pdf": "", "archival status": "", "subject areas": [], "paperhash": "pezeshkpour|investigating_robustness_and_interpretability_of_link_prediction_via_adversarial_modifications", "_bibtex": "@inproceedings{\npezeshkpour2019investigating,\ntitle={Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications},\nauthor={Pouya Pezeshkpour and Yifan Tian and Sameer Singh},\nbooktitle={Automated Knowledge Base Construction (AKBC)},\nyear={2019},\nurl={https://openreview.net/forum?id=Hkg7rbcp67}\n}"}, "submission_cdate": 1542459707115, "submission_tcdate": 1542459707115, "submission_tmdate": 1580993622069, "submission_ddate": null, "review_id": [], "review_url": [], "review_cdate": [], "review_tcdate": [], "review_tmdate": [], "review_readers": [], "review_writers": [], "review_reply_count": [], "review_replyto": [], "review_content": [], "comment_id": ["H1gAlBPYEE", "rklHU8HhXN", "BkgelUH27N", "HJg3jBS37V", "Byg7QXBh74", "B1ez2MBnXN"], "comment_cdate": [1549526261914, 1548666445094, 1548666344069, 1548666276318, 1548665626949, 1548665514024], "comment_tcdate": [1549526261914, 1548666445094, 1548666344069, 1548666276318, 1548665626949, 1548665514024], "comment_tmdate": [1549526261914, 1548666445094, 1548666344069, 1548666276318, 1548665626949, 1548665514024], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["AKBC.ws/2019/Conference/Paper50/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper50/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper50/Authors", "AKBC.ws/2019/Conference"], 
["AKBC.ws/2019/Conference/Paper50/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper50/Authors", "AKBC.ws/2019/Conference"], ["AKBC.ws/2019/Conference/Paper50/Authors", "AKBC.ws/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Request to Update Reviews", "comment": "We would like to ask the reviewers to let us know if they have any thoughts on our response to their concerns, and if we have addressed the concerns, to please update the scores. Thank you!"}, {"title": "Response to Reviewer1", "comment": "We would like to thank the reviewer for their helpful comments. \n\n- \u201cFuture research directions\u201d\n\nAs the future work, we aim to investigate practical uses of \\AALP in improving existing link prediction models through two scenarios, 1) by identifying the error links in the training data and removing them from the KG, and 2) by incorporating the intuition gained through interpretability to improve existing representations (e.g. rule extracting experiment). We are also interested extending our approach to adversarial modifications that are more than just a single edge, i.e. what is the minimal set of facts to be changed to change the prediction of the model.\n\n- \u201cFigure 1 clarity\u201d\n\nWe address this issue in the revised version.\n\n- \u201cExpandability to additive models\u201d\n\nThe approach that we consider for approximating the effect of adversarial modification can potentially generalized to additive models as well. As an example, we drive a first-order approximation of the change for TransE model in the appendix. We focus on the multiplicative models due to their recent success.\n\n- \"Uncertain Test data\"\n\nYes, the evaluation set in this case is model-dependent. For each model, from the test triples that the model predicts correctly, we pick 100 triples with the minimum difference between their scores and the negative sample with the highest score. The distribution of relations differs because the different models are confident on different relations (we can provide further details if needed).\n\n- \u201cThe alignment of the numbers in the Table 3\u201d\n\nIn the revised version we improved the alignment of the numbers.\n"}, {"title": "Response to Reviewer2 (part 1)", "comment": "We would like to thank the reviewer for their helpful comments, and have attempted to address the concerns. \n\n- D1. \u201cThe definition of the loss function\u201d\n\nWe follow [1] to define the loss function, and thus omit the details. The summation is over the observed triples (<s, r, o>) in the training data. As a result, based on the definition of the Y_o^{s,r}, the zero values of Y_o^{s,r} represent the negative samples. In the revised version of the paper, we add more details to loss function to provide a more precise definition.\n\n- D2. \u201cThe optimization problems\u201d\n\nWe agree that we could define the optimization problem as $\\argmin \\bar \\psi(s,r,o)$, but that is identical to $\\argmax \\psi(s,r,o) - \\bar \\psi(s,r,o)$, since \\psi(s,r,o) is a constant for the search. We use the latter for notational convenience for the final approximation. \n\n- D3. \u201cThe focus on fixed-object attacks and easily expandability claim\u201d\n\nWe address this issue in the revised version and provide more justifications on the expandability of the method in the appendix.\n\n- D4. \u201cThe clarity of Sec. 
4.1\u201d.\n\nWe have polished this section in the revised version.\n\n- D5. \u201chaving the same object argument\u201d\n\nWe add more clarification in the new version of the paper.\n\n- D6. \u201cClarifying the process of finding z_{s',r'}\u201d\n\nThe explanation of our method for finding z_{s\u2019, r\u2019} is provided in Section 4.2. We further add more clarification in the revised version of the work.\n\n- D7. \u201cAccuracy of the inverter network and maximum inner-product search\u201d\n\nWe add a new study on the accuracy of the inverter networks to the paper. Our networks achieve more than 90% accuracy, demonstrating their capability of correctly inverting the vector z_{s,r} to {s,r}. Furthermore, although we could use maximum inner-product search for DistMult, we were looking for a general algorithm that would work across multiple models. We elaborate on this issue in the revised version. \n\n- D8. \u201cThe effect of normalization\u201d\n\nWe did not consider the effect of normalization in our approximations, because a recent implementation [1] found it unnecessary. However, normalization can easily be incorporated into our formulation as an additional term when deriving the approximation. We can include the details in the appendix if the reviewer feels that would be valuable.\n\n- D9. \u201cThe description of the experimental setup\u201d\n\nFor training the link prediction task, we adopt the implementation and hyperparameters from [1]. To retrain the models, we simply alter the training data and run the training from scratch with the same hyperparameters. In Section 4.2, to find the optimal z_{s\u2019, r\u2019} we use a gradient-based method whose step size we tuned through a grid search. As we mentioned, we adopt the same configuration as [1], which uses the filtered scenario. We have added more clarification on this part to the revised version.\n\n- D10. \u201cThe natural baseline for AALP-Remove\u201d\n\nWe consider this new baseline in our revised version and provide its performance in Table 3. As for AALP-Add, the reasonable manifestation of this baseline is $-f(s,r)$, which we have already considered in the paper.\n"}, {"title": "Response to Reviewer2 (part 2)", "comment": "- D11. \u201cThe influence function as well as the relationship to Koh and Liang [2017]\u201d\n\nWe provide more clarification on the influence function in the revised version.\n\n- D12. \u201cAALP vs influence function experiment\u201d\n\nThe scores are based on the correlation between the **true** ranking (which is calculated by literally removing each triple one by one and observing the effect of retraining) and the ranking of the triples\u2019 effect based on AALP and the other baselines. Since the approaches make different approximations, the rankings are different. The targets are triples randomly selected from the data, and we average the results over 10 random target samples. It is intractable to evaluate all triples as potential adversaries even for these small KBs, since we would have to retrain the model for every triple. We have added more details to this section in the revised version.\n\n- D13. \u201cThe discussion in the experimental study and the reason behind the better performance of attacks on some relations\u201d\n\nIt is difficult to conjecture precisely why some relations are more robust, and we hope these results will seed future research in this direction. 
We have revised the text for the experiments in the current version of the paper to be clearer.\n\n- D14. \u201cHow much interpretability we can get from attacks\u201d\n\nInterpretability comes primarily from removal, which aids our understanding by identifying the observed fact that has the **most influence** on the model for predicting a specific target. Note that uninsightful or unintuitive adversaries do not mean the approach is not useful for interpretability; rather, they might indicate a problem with the model or the dataset.\n\n- D15. \u201cHow rules have been extracted\u201d\n\nWe provide a more detailed explanation of our rule extraction method and compare the extracted rules with those of an existing rule-extraction method [2] in the revised version.\n\n- D16. \u201cError detection experiment\u201d\n\nIn our setting, we assume that the target triples that have an error in their neighborhood are given. We can make the error detection fully automatic by setting a threshold (or even learning this threshold) on the value of $\Delta_{(s',r')}(s,r,o)$ (defined in the revised version) and choosing as errors the triples whose removal causes a change smaller than this threshold.\n\n- D17. \u201cThe relationship to [Minervini et al., 2017, Cai and Wang, 2017]\u201d\n\nThe authors in [Minervini et al., 2017] consider adversarial training using predefined rules to provide a more accurate representation of KBs. Furthermore, [Cai and Wang, 2017] utilizes a GAN to provide more meaningful negative samples during training. As a result, neither of these works is a suitable baseline for our method. We have added more details about them in the related work section.\n\n[1] Dettmers, Tim, et al. \"Convolutional 2D Knowledge Graph Embeddings.\" Thirty-Second AAAI Conference on Artificial Intelligence (AAAI 2018).\n[2] Yang, Bishan, et al. \"Embedding Entities and Relations for Learning and Inference in Knowledge Bases.\" International Conference on Learning Representations (ICLR 2015)."}, {"title": "Response to Reviewer3", "comment": "We would like to thank the reviewer for their helpful comments. \n\n- \u201cBeing clear and reproducible\u201d\n\nWe have added more explanation of the experiments and implementations in the revised version of the work. Further, we will make the code publicly available with the final version of our paper."}, {"title": "General Response", "comment": "We sincerely appreciate all the reviews; they provided useful and constructive feedback. In the revised paper, we address the concerns and suggestions to strengthen our paper. We hope the reviewers revisit their ratings in light of our revision and response. The following summarizes our changes. \n\n1) Inverter function accuracy: To study the accuracy of our inverter functions, we evaluate the performance of our networks on the test sets of our benchmarks. Our networks achieve more than 90% accuracy, demonstrating their capability of correctly inverting the vector z_{s,r} to {s,r}. Details are provided in the revised paper (see Table 1).\n\n2) Scalability test: To better compare the performance of AALP (in the revised version we rename the method to AMLP) with the influence function (IF), we measure the time to compute a single adversary with IF and with AALP as we steadily grow the number of entities (randomly chosen subgraphs), averaged over 10 random triples. This experiment shows that AALP is mostly unaffected by the number of entities, while the cost of IF grows quadratically. 
Considering that real-world KGs have tens of thousands of times more entities than our setting, this demonstrates that IF is infeasible for them (see Figure 4).\n\n3) New variant of our method, AALP-flip: Using the AALP method, we introduce a new variant that identifies the adversary that would increase the model's prediction for a fake fact. We study the effect of this new adversarial attack in Table 3.\n\n4) New baseline for AALP-Remove: We consider a new baseline that removes the neighbor whose f(s',r') is closest to f(s,r), and demonstrate its behavior in Table 3.\n\n5) More on rule extraction: We provide a more detailed explanation of our rule extraction method and compare the extracted rules with those of an existing rule-extraction method [1] in the revised version.\n\nMore details: We add more detailed explanations of our problem setup and experiments, and provide a more detailed discussion of the models' behavior in each experiment. We include more details on the expandability of our methods to other settings in the appendix.\n\n[1] Yang, Bishan, et al. \"Embedding Entities and Relations for Learning and Inference in Knowledge Bases.\" International Conference on Learning Representations (ICLR 2015).\n"}], "comment_replyto": ["Hkg7rbcp67", "BylFwOqgGV", "H1enoaZGfV", "H1enoaZGfV", "rye-yPICGN", "Hkg7rbcp67"], "comment_url": ["https://openreview.net/forum?id=Hkg7rbcp67&noteId=H1gAlBPYEE", "https://openreview.net/forum?id=Hkg7rbcp67&noteId=rklHU8HhXN", "https://openreview.net/forum?id=Hkg7rbcp67&noteId=BkgelUH27N", "https://openreview.net/forum?id=Hkg7rbcp67&noteId=HJg3jBS37V", "https://openreview.net/forum?id=Hkg7rbcp67&noteId=Byg7QXBh74", "https://openreview.net/forum?id=Hkg7rbcp67&noteId=B1ez2MBnXN"], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": null}
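To work with this record programmatically, a minimal sketch follows. It assumes the file is stored as JSON Lines (one forum record per line, as in the record above) and uses a hypothetical filename `akbc2019_forums.jsonl`; the field names come from the record shown.

```python
import json

# Minimal sketch, assuming one JSON forum record per line (JSON Lines).
# "akbc2019_forums.jsonl" is a hypothetical filename; adjust to the actual file.
with open("akbc2019_forums.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        submission = record["submission_content"]
        print(submission["title"])        # paper title
        print(submission["authors"])      # list of author names
        # Each entry in comment_content pairs a comment title with its body.
        for comment in record["comment_content"]:
            print(comment["title"], "->", comment["comment"][:80])
```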
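Note that the per-comment metadata is stored as parallel arrays: index i of `comment_id`, `comment_replyto`, `comment_cdate`, and so on all describe the same note. A hedged sketch of reconstructing the discussion thread from those arrays, under the same filename assumption as above:

```python
import json

# Sketch: rebuild the reply structure from the parallel arrays in one record.
# Comments whose reply target equals the forum id are top-level; the rest
# reply to another note (e.g., a review). Filename is hypothetical, as above.
with open("akbc2019_forums.jsonl", encoding="utf-8") as f:
    record = json.loads(f.readline())

forum_id = record["forum"]
for cid, parent in zip(record["comment_id"], record["comment_replyto"]):
    kind = "top-level comment" if parent == forum_id else f"reply to {parent}"
    print(f"{cid}: {kind}")
```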