AMSR/conferences_raw/iclr20/ICLR.cc_2020_Conference_B1gXYR4YDH.json
{"forum": "B1gXYR4YDH", "submission_url": "https://openreview.net/forum?id=B1gXYR4YDH", "submission_content": {"title": "DSReg: Using Distant Supervision as a Regularizer", "authors": ["Yuxian Meng", "Muyu Li", "Xiaoya Li", "Wei Wu", "Fei Wu", "Jiwei Li"], "authorids": ["yuxian_meng@shannonai.com", "muyu_li@shannonai.com", "xiaoya_li@shannonai.com", "wei_wu@shannonai.com", "wufei@zju.edu.cn", "jiwei_li@shannonai.com"], "keywords": [], "abstract": "In this paper, we aim at tackling a general issue in NLP tasks where some of the negative examples are highly similar to the positive examples, i.e., hard-negative examples). We propose the distant supervision as a regularizer (DSReg) approach to tackle this issue. We convert the original task to a multi-task learning problem, in which we first utilize the idea of distant supervision to retrieve hard-negative examples. The obtained hard-negative examples are then used as a regularizer, and we jointly optimize the original target objective of distinguishing positive examples from negative examples along with the auxiliary task objective of distinguishing soften positive examples (comprised of positive examples and hard-negative examples) from easy-negative examples. In the neural context, this can be done by feeding the final token representations to different output layers. Using this unbelievably simple strategy, we improve the performance of a range of different NLP tasks, including text classification, sequence labeling and reading comprehension. ", "pdf": "/pdf/dc311d99d631764f3da837dac2d31a6841ea7181.pdf", "paperhash": "meng|dsreg_using_distant_supervision_as_a_regularizer", "original_pdf": "/attachment/dc311d99d631764f3da837dac2d31a6841ea7181.pdf", "_bibtex": "@misc{\nmeng2020dsreg,\ntitle={{\\{}DSR{\\}}eg: Using Distant Supervision as a Regularizer},\nauthor={Yuxian Meng and Muyu Li and Xiaoya Li and Wei Wu and Fei Wu and Jiwei Li},\nyear={2020},\nurl={https://openreview.net/forum?id=B1gXYR4YDH}\n}"}, "submission_cdate": 1569439354868, "submission_tcdate": 1569439354868, "submission_tmdate": 1577168270461, "submission_ddate": null, "review_id": ["r1x4lqk0Yr", "r1lGMTihtr", "rkeE7qr0YH"], "review_url": ["https://openreview.net/forum?id=B1gXYR4YDH&noteId=r1x4lqk0Yr", "https://openreview.net/forum?id=B1gXYR4YDH&noteId=r1lGMTihtr", "https://openreview.net/forum?id=B1gXYR4YDH&noteId=rkeE7qr0YH"], "review_cdate": [1571842539657, 1571761417692, 1571867163833], "review_tcdate": [1571842539657, 1571761417692, 1571867163833], "review_tmdate": [1572972494389, 1572972494345, 1572972494303], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2020/Conference/Paper1241/AnonReviewer2"], ["ICLR.cc/2020/Conference/Paper1241/AnonReviewer1"], ["ICLR.cc/2020/Conference/Paper1241/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1gXYR4YDH", "B1gXYR4YDH", "B1gXYR4YDH"], "review_content": [{"experience_assessment": "I have read many papers in this area.", "rating": "6: Weak Accept", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #2", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review": "The authors propose a novel approach to leverage Distant 
Supervision for discriminating between positive examples and \"negative examples that share salient features with the positive ones.\" In spite of its simplicity, the method appears to be quite promising.\n\nThe main feedback for the authors is to describe \"early & in detail\" the distant supervision techniques used in the experiments. The paper would be greatly improved by adding:\n- an intuitive paragraph in the intro that explains a concrete example of DS (high level, but with enough details for the reader to grasp the idea)\n- a new section right after related work (and before the current \"3. Models\") in which you present in great detail (and with concrete & complete examples) the two main DS techniques that are used in the experiments; with that solid understanding in place, the reader can then follow the experiments\n\nOther comments:\n- for the sake of simplicity & understandability, you should avoid the use of the term \"soften positive examples\" in the abstract\n- avoid using the term \"unbelievable\" (once in abstract & twice in intro)\n- the last paragraph before \"Conclusion\" seems to refer to an earlier version of Figure 4, which, in its current form, does NOT \nhave S(pos,E) or the word \"interesting\""}, {"experience_assessment": "I have published one or two papers in this area.", "rating": "3: Weak Reject", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #1", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review": "This paper proposes to improve the performance of NLP tasks by focusing on negative examples that are similar to positive examples (e.g. hard negatives). This is achieved by regularizing the model using extra output classifiers trained to classify examples into up to three classes: positive, negative-easy, and negative-hard. Since those labels are not provided in the original data, examples are classified using heuristics (e.g. negative examples that contain a lot of features predictive of a positive class will be considered hard-negative examples), which are used to provide distant supervision. This general approach is evaluated on phrase classification tasks, one information extraction task, and one MRCQA task.\n\nAlthough the proposed approach is interesting, this paper has several weaknesses: (i) the method is not sufficiently justified or analyzed; (ii) there are missing links with previous work (notably on domain adversarial training); (iii) the experimental setting is rather weak.\n\n1) About the justification of the approach:\n1.1) I feel like the proposed objectives are not intuitively justified enough. They point out that \"L_2 can be thought as an objective to capture the shared features in positive examples and hard-negative examples\". Why would that be good from an intuitive perspective?\n1.2) L_3 is forcing the model to group all the hard-negative examples together. Do you have an intuition why that would be useful?\n1.3) What happens if the model overfits the hard negative examples in the training set? This would mean that it has captured some features that can distinguish positive/negative labels. 
Why would L_3 help in that case?\n\n2) About related approaches:\n2.1) How does this method relate to domain adversarial training applied to positives and hard-negatives, and to adversarial examples in general?\n2.2) Would similar performance be obtained by virtual adversarial training, for example?\n\n3) About the experimental setting:\n3.1) The performance reported is well below that of recent work on these datasets with recent models such as BERT. Would these improvements carry over to bigger architectures?\n3.2) In SST, the paper says they use BERT large, but the baseline performance (81.5) is well below BERT large performance in the original paper (94.9, https://arxiv.org/pdf/1810.04805.pdf). Why the mismatch?\n3.3) What's the proportion of hard-negative examples mined for the training and test sets? While the heuristics used seem reasonable, without those numbers, it is impossible to know if the heuristics truly predict hard-negative examples.\n3.4) Does the performance gain come from better predicting hard-negative examples in the test set? One could analyze the performance per error type (i.e. true positive, false negative, false positive (easy), false positive (hard)) with the baseline model and with the various proposed regularizing tasks (e.g. L_2 and L_3, both in training and test).\n3.5) The heuristics are used to pick what should be adversarial examples, but there is no mention of this concept in the text. Oversampling those adversarial examples could, potentially, improve the performance of the baseline model. It would be interesting to try this.\n3.6) If possible, it would be good to add standard deviations to the results over multiple runs.\n3.7) The visualization sub-section is anecdotal and not especially illuminating, and its text seems to refer to a different example than the figures (\"interesting\" is not in the figures).\n\nMinor points:\n\n- Section 3 (Models) may be made shorter; the models used are utterly simple. This could free up space for more experiments.\n- In the tables, simply adding the name of the used model to the \"L1\" rows would be clearer.\n- The description of the pipelined results in Section 4.1 does not match the results shown in the table.\n- The citations are not well integrated with the text (\\citep vs \\cite), and the formatting of CRF changes in the last paragraph of Section 3.2.\n"}, {"experience_assessment": "I have published one or two papers in this area.", "rating": "3: Weak Reject", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #3", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review": "This paper is aimed at tackling a general issue in NLP: hard-negative training data (negative but very similar to positive) can easily confuse standard NLP models. To solve this problem, the authors first apply a distant supervision technique to harvest hard-negative training examples and then transform the original task into a multi-task learning problem by splitting the original labels into positive, hard-negative, and easy-negative examples. 
The authors consider using 3 different objective functions: L1, the original cross-entropy loss; L2, capturing the shared features in positive and hard-negative examples as a regularizer of L1 by introducing a new label z; and L3, a three-class classification objective using softmax.\nThe authors evaluated their approach on two tasks: text classification and sequence labeling. The implementation showed improved performance on both tasks.\n\nStrengths:\n+ the paper proposes a reasonable way to try to improve accuracy by identifying hard-negative examples\n+ the paper is well written, but it would benefit from another round of proofreading for grammar and clarity\n\nWeaknesses:\n- the performance of the proposed method depends heavily on the labels of hard-negative examples. The paper lacks insight about a principled way to label such examples, the costs associated with such labeling, and the impact of labeling quality on accuracy. The experiments are not making a convincing case that similar improvements could be obtained on a larger class of problems.\n- The objective function L3 is not well justified.\n- It would be important to see if the proposed method is also beneficial with state-of-the-art neural networks on the two applications. \n- Table 3 (text classification result) does not list baselines."}], "comment_id": ["rkgfsgE4jH", "ryeGoDXNoB"], "comment_cdate": [1573302425646, 1573300121635], "comment_tcdate": [1573302425646, 1573300121635], "comment_tmdate": [1573302425646, 1573300121635], "comment_readers": [["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2020/Conference/Paper1241/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1241/Authors", "ICLR.cc/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "response to Review #1", "comment": "About the justification of the approach:\n1.1) I feel like the proposed objectives are not intuitively justified enough. They point out that \"L_2 can be thought as an objective to capture the shared features in positive examples and hard-negative examples\". Why would that be good from an intuitive perspective?\n1.2) L_3 is forcing the model to group all the hard-negative examples together. Do you have an intuition why that would be useful?\n1.3) What happens if the model overfits the hard negative examples in the training set? This would mean that it has captured some features that can distinguish positive/negative labels. Why would L_3 help in that case?\n\nRegarding the intuition: \nSorry for the confusion. Let me use an analogy to describe the intuition behind what this paper is doing. Think about a situation where we want to separate eagles (positive examples) from animals that are not eagles (negative examples). The tricky part here is that we have birds-that-are-not-eagles. Since they are not eagles, they are negative examples. But they do share a lot of common features with eagles, making it hard for the model to distinguish them from eagles. Birds-that-are-not-eagles are the hard-neg examples in this paper. The rest, animals-that-are-not-birds, are the easy-neg examples. \n\nre: They point out that \"L_2 can be thought as an objective to capture the shared features in positive examples and hard-negative examples\". Why would that be good from an intuitive perspective?\nL2 separates pos+hard-neg from easy-neg. In our analogy, its function is to separate birds (eagles + birds-that-are-not-eagles = birds) from animals-that-are-not-birds. 
Then \"Why would that be good from an intuitive perspective\"? Since L2 learns the birds features, which are shared by both eagles and birds-that-are-not-eagles, it would be then easier for models to learn what are not shared between them using L1 and L3, leading to better performance. \n\nre: 1.2) L_3 is forcing the model to group all the hard-negative examples together. Do you have an intuition why would that be useful ?\nWhy is grouping hard-negative examples useful? We are grouping animals-that-are-not-birds and training a classifier to separate eagles vs birds-that-are-not-eagles vs animals-that-are-not-birds. It is intuitive that this might help. "}, {"title": "response to review#3", "comment": "thank you for the sensible comments. \n\nre: performance of the proposed method highly depends on labels of hard-negative examples . The paper lacks insight about a principled way to label such examples,\n\nWe really appreciate your sensible and sharp comment. We completely agree with the point that there is no unified way to harvest hard-negative examples. But I think this does not hinder the contribution of this paper: \n\nFirst, just as the Mintz et al.,09 's paper shows that it is useful to use the idea of distant supervision to harvest/augment training data, we think it is similarly meaningful to show that you can use the distant supervision idea identify hard-negative examples, and your model will improve. \n\nSecond, the fact that there is no unified and principled way to harvest negative examples is not because the flaws of the proposed model, but because different tasks are just different, and definitions for hard-negative examples are intrinsically different for different tasks. \nIf we look at the distant supervision literature, we can find that how people use the idea of distant supervision to help different tasks are very different, e.g., for relation extraction, researchers use triples in freebase to harvest sentences containing the mention of these triples (Mintz et al.,2009 Distant supervision for relation extraction without labeled data); for sentiment analysis, researchers use emoticon to harvest training data ( Go et al., 2009 Twitter sentiment classification using distant supervision); in security researchers use events in calendar to harvest training data. Using task-specific data harvesting methods do not prevent them from being very influential (and highly cited) papers. \n\n\nre: - The objective function L3 is not well justified. \nThank you for your comment. We will make this point clear and justified. L3 (pos vs easy-neg vs hard-neg) is of the same nature with the combination of L1 (pos vs easy-neg + hard-neg) and L2 (pos + hard-neg vs easy-neg). L1+L2 can actually fully express L3: using L1 we can know which examples are positive, using L2 we can know which examples are easy-neg, and the rest are hard-neg. \nAdding L3 can empirically help the model, since it makes it very explicit that easy-neg and hard-neg are different. \n\n\nre: It would be important to see if the proposed method is also beneficial with the state of the art neural networks on the two applications.\nThank you for the comment. For text classification, we use BERT_large as a baseline. For sequence labeling, we use BERT_large+CRF as a baseline. For these two tasks, we believe we did adopt SOTA (or nearly SOTA) neural structures. \nFor reading comprehension, BiDAF was used, which is not SOTA. We redid experiments and used SpanBERT. 
The proposed model still consistently outperforms baselines when using SpanBERT as a backbone: \n \n L1(baseline) L1+L2 L1+L2+L3 Human\nBLEU-1 35.20/35.33 36.59/36.71 37.04/37.11 44.24/44.43\nBLEU-4 16.29/16.41 17.08/17.11 17.32/17.40 18.17/19.65\nMeteor 16.90/16.65 18.32/18.38 18.77/18.92 23.87/24.14\nROUGE-L 39.89/39.78 41.43/41.38 42.21/42.17 57.17/57.02\n\nre: - Table 3 (text classification result) does not list baselines.\nSorry for the confusion. Actually, the first line, L1, is the baseline, since L1 denotes the objective based only on the original gold labels (see Eq. 1 and Eq. 5). We will make this point clearer in the updated version. \n"}], "comment_replyto": ["r1lGMTihtr", "rkeE7qr0YH"], "comment_url": ["https://openreview.net/forum?id=B1gXYR4YDH&noteId=rkgfsgE4jH", "https://openreview.net/forum?id=B1gXYR4YDH&noteId=ryeGoDXNoB"], "meta_review_cdate": 1576798718387, "meta_review_tcdate": 1576798718387, "meta_review_tmdate": 1576800918165, "meta_review_ddate": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "This paper proposes a way to handle hard-negative examples (those very close to positive ones) in NLP, using a distant supervision approach that serves as a regularizer. The paper addresses an important issue and is well written; however, reviewers pointed out several concerns, including testing the approach on state-of-the-art neural nets and making the experiments more convincing by testing on larger problems.\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1gXYR4YDH&noteId=KASHz9JtHB"], "decision": "Reject"}
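A minimal sketch of the multi-task objective described in the abstract and defended in the author responses above (L1: positive vs. negative; L2: "softened positive", i.e. positive plus hard-negative, vs. easy-negative; L3: three-way classification over the same final representation). This is a hypothetical PyTorch reconstruction for illustration only: the class name DSRegHeads, the weights w2/w3, and the 0/1/2 label encoding are assumptions, not the authors' released code.

# Hypothetical reconstruction of the DSReg multi-task objective; assumed label
# encoding: 0 = positive, 1 = easy-negative, 2 = hard-negative, where hard
# negatives are pre-identified by a task-specific distant-supervision heuristic
# (e.g., negative examples carrying many positive-class features).

import torch
import torch.nn as nn


class DSRegHeads(nn.Module):
    """Three output layers fed by the same final representation h, following
    'feeding the final token representations to different output layers'."""

    def __init__(self, hidden_dim: int, w2: float = 1.0, w3: float = 1.0):
        super().__init__()
        self.head1 = nn.Linear(hidden_dim, 2)  # L1: positive vs. all negatives
        self.head2 = nn.Linear(hidden_dim, 2)  # L2: (pos + hard-neg) vs. easy-neg
        self.head3 = nn.Linear(hidden_dim, 3)  # L3: pos vs. easy-neg vs. hard-neg
        self.ce = nn.CrossEntropyLoss()
        self.w2, self.w3 = w2, w3  # assumed weights for the auxiliary losses

    def forward(self, h: torch.Tensor, y3: torch.Tensor) -> torch.Tensor:
        # Derive the two binary label sets from the three-way labels.
        y1 = (y3 != 0).long()  # original task: positive vs. any negative
        y2 = (y3 == 1).long()  # softened positive (pos + hard-neg) vs. easy-neg
        return (self.ce(self.head1(h), y1)
                + self.w2 * self.ce(self.head2(h), y2)
                + self.w3 * self.ce(self.head3(h), y3))


# Usage sketch: h could be [CLS] vectors from a BERT_large encoder, shape (B, 1024).
heads = DSRegHeads(hidden_dim=1024)
h, y3 = torch.randn(8, 1024), torch.randint(0, 3, (8,))
heads(h, y3).backward()  # the original task's prediction comes from head1

The sketch also makes the authors' point about L3 concrete: head1 identifies positives and head2 identifies easy negatives, so L1+L2 already determine which examples are hard negatives; adding L3 simply makes the easy/hard distinction explicit.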