id
int64
0
549
review
stringlengths
314
12.7k
spans
sequence
labels
sequence
0
This paper describes a state-of-the-art CCG parsing model that decomposes into tagging and dependency scores, and has an efficient A* decoding algorithm. Interestingly, the paper slightly outperforms Lee et al. (2016)'s more expressive global parsing model, presumably because this factorization makes learning easier. It's great that they also report results on another language, showing large improvements over existing work on Japanese CCG parsing. One surprising original result is that modeling the first word of a constituent as the head substantially outperforms linguistically motivated head rules. Overall this is a good paper that makes a nice contribution. I only have a few suggestions: -I liked the way that the dependency and supertagging models interact, but it would be good to include baseline results for simpler variations (e.g. not conditioning the tag on the head dependency). -The paper achieves new state-of-the-art results on Japanese by a large margin. However, there has been a lot less work on this data - would it also be possible to train the Lee et al. parser on this data for comparison? -Lewis, He and Zettlemoyer (2015) explore combined dependency and supertagging models for CCG and SRL, and may be worth citing.
[ [ 320, 381 ], [ 382, 453 ], [ 609, 669 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Major_claim" ]
1
The paper considers a synergistic combination of two non-HMM based speech recognition techniques: CTC and attention-based seq2seq networks. The combination is two-fold: 1. first, similarly to Kim et al. 2016 multitask learning is used to train a model with a joint CTC and seq2seq cost. 2. second (novel contribution), the scores of the CTC model and seq2seq model are ensembled during decoding (results of beam search over the seq2seq model are rescored with the CTC model). The main novelty of the paper is in using the CTC model not only as an auxiliary training objective (originally proposed by Kim et al. 2016), but also during decoding. - Strengths: The paper identifies several problems stemming from the flexibility offered by the attention mechanism and shows that by combining the seq2seq network with CTC the problems are mitigated. - Weaknesses: The paper is an incremental improvement over Kim et al. 2016 (since two models are trained, their outputs can just as well be ensembled). However, it is nice to see that such a simple change offers important performance improvements of ASR systems. - General Discussion: A lot of the paper is spent on explaining the well-known, classical ASR systems. A description of the core improvement of the paper (better decoding algorithm) starts to appear only on p. 5. The description of CTC is nonstandard and maybe should either be presented in a more standard way, or the explanation should be expanded. Typically, the relation p(C|Z) (eq. 5) is deterministic - there is one and only one character sequence that corresponds to the blank-expanded form Z. I am also unsure about the last transformation of the eq. 5.
[ [ 658, 845 ], [ 860, 920 ], [ 922, 995 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Jus_neg_1" ]
2
The authors propose ‘morph-fitting’, a method that retrofits any given set of trained word embeddings based on a morphologically-driven objective that (1) pulls inflectional forms of the same word together (as in ‘slow’ and ‘slowing’) and (2) pushes derivational antonyms apart (as in ‘expensive’ and ‘inexpensive’). With this, the authors aim to improve the representation of low-frequency inflections of words as well as mitigate the tendency of corpus-based word embeddings to assign similar representations to antonyms. The method is based on relatively simple manually-constructed morphological rules and is demonstrated on both English, German, Italian and Russian. The experiments include intrinsic word similarity benchmarks, showing notable performance improvements achieved by applying morph-fitting to several different corpus-based embeddings. Performance improvement yielding new state-of-the-art results is also demonstrated for German and Italian on an extrinsic task - dialog state tracking. Strengths: - The proposed method is simple and shows nice performance improvements across a number of evaluations and in several languages. Compared to previous knowledge-based retrofitting approaches (Faruqui et al., 2015), it relies on a few manually-constructed rules, instead of a large-scale knowledge base, such as an ontology. - Like previous retrofitting approaches, this method is easy to apply to existing sets of embeddings and therefore it seems like the software that the authors intend to release could be useful to the community. - The method and experiments are clearly described. 
 Weaknesses: - I was hoping to see some analysis of why the morph-fitted embeddings worked better in the evaluation, and how well that corresponds with the intuitive motivation of the authors. - The authors introduce a synthetic word similarity evaluation dataset, Morph-SimLex. They create it by applying their presumably semantic-meaning-preserving morphological rules to SimLex999 to generate many more pairs with morphological variability. They do not manually annotate these new pairs, but rather use the original similarity judgements from SimLex999. The obvious caveat with this dataset is that the similarity scores are presumed and therefore less reliable. Furthermore, the fact that this dataset was generated by the very same rules that are used in this work to morph-fit word embeddings, means that the results reported on this dataset in this work should be taken with a grain of salt. The authors should clearly state this in their paper. - (Soricut and Och, 2015) is mentioned as a future source for morphological knowledge, but in fact it is also an alternative approach to the one proposed in this paper for generating morphologically-aware word representations. The authors should present it as such and differentiate their work. - The evaluation does not include strong morphologically-informed embedding baselines. General Discussion: With the few exceptions noted, I like this work and I think it represents a nice contribution to the community. The authors presented a simple approach and showed that it can yield nice improvements using various common embeddings on several evaluations and four different languages. I’d be happy to see it in the conference. Minor comments: - Line 200: I found this phrasing unclear: “We then query … of linguistic constraints”. - Section 2.1: I suggest to elaborate a little more on what the delta is between the model used in this paper and the one it is based on in Wieting 2015. It seemed to me that this was mostly the addition of the REPEL part. - Line 217: “The method’s cost function consists of three terms” - I suggest to spell this out in an equation. - Line 223: x and t in this equation (and following ones) are the vector representations of the words. I suggest to denote that somehow. Also, are the vectors L2-normalized before this process? Also, when computing ‘nearest neighbor’ examples do you use cosine or dot-product? Please share these details. - Line 297-299: I suggest to move this text to Section 3, and make the note that you did not fine-tune the params in the main text and not in a footnote. - Line 327: (create, creates) seems like a wrong example for that rule. 
 - I have read the author response
[ [ 1021, 1051 ], [ 1057, 1147 ], [ 1345, 1443 ], [ 1448, 1552 ], [ 1556, 1607 ], [ 2288, 2407 ], [ 2420, 2507 ], [ 2996, 3075 ], [ 3249, 3290 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_pos_3", "Eval_pos_3", "Eval_pos_4", "Jus_neg_1", "Eval_neg_1", "Major_claim", "Major_claim" ]
3
This paper outlines a method to learn sense embeddings from unannotated corpora using a modular sense selection and representation process. The learning is achieved by a message passing scheme between the two modules that is cast as a reinforcement learning problem by the authors. - Strengths: The paper is generally well written, presents most of its ideas clearly and makes apt comparisons to related work where required. The experiments are well structured and the results are overall good, though not outstanding. However, there are several problems with the paper that prevent me from endorsing it completely. - Weaknesses: My main concern with the paper is the magnification of its central claims, beyond their actual worth. 1) The authors use the term "deep" in their title and then several times in the paper. But they use a skip-gram architecture (which is not deep). This is misrepresentation. 2) Also reinforcement learning is one of the central claims of this paper. However, to the best of my understanding, the motivation and implementation lacks clarity. Section 3.2 tries to cast the task as a reinforcement learning problem but goes on to say that there are 2 major drawbacks, due to which a Q-learning algorithm is used. This algorithm does not relate to the originally claimed policy. Furthermore, it remains unclear how novel their modular approach is. Their work seems to be very similar to EM learning approaches, where an optimal sense is selected in the E step and an objective is optimized in the M step to yield better sense representations. The authors do not properly distinguish their approach, nor motivative why RL should be preferred over EM in the first place. 3) The authors make use of the term pure-sense representations multiple times, and claim this as a central contribution of their paper. I am not sure what this means, or why it is beneficial. 4) They claim linear-time sense selection in their model. Again, it is not clear to me how this is the case. A highlighting of this fact in the relevant part of the paper would be helpful. 5) Finally, the authors claim state-of-the-art results. However, this is only on a single MaxSimC metric. Other work has achieved overall better results using the AvgSimC metric. So, while state-of-the-art isn't everything about a paper, the claim that this paper achieves it - in the abstract and intro - is at least a little misleading.
[ [ 295, 330 ], [ 332, 424 ], [ 425, 460 ], [ 465, 518 ], [ 519, 615 ], [ 630, 731 ], [ 732, 2417 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Eval_pos_3", "Major_claim", "Eval_neg_1", "Jus_neg_1" ]
4
- Strengths: This is a well written paper. The paper is very clear for the most part. The experimental comparisons are very well done. The experiments are well designed and executed. The idea of using KD for zero-resource NMT is impressive. - Weaknesses: There were many sentences in the abstract and in other places in the paper where the authors stuff too much information into a single sentence. This could be avoided. One can always use an extra sentence to be more clear. There could have been a section where the actual method used could be explained in a more detailed. This explanation is glossed over in the paper. It's non-trivial to guess the idea from reading the sections alone. During test time, you need the source-pivot corpus as well. This is a major disadvantage of this approach. This is played down - in fact it's not mentioned at all. I could strongly encourage the authors to mention this and comment on it. - General Discussion: This paper uses knowledge distillation to improve zero-resource translation. The techniques used in this paper are very similar to the one proposed in Yoon Kim et. al. The innovative part is that they use it for doing zero-resource translation. They compare against other prominent works in the field. Their approach also eliminates the need to do double decoding. Detailed comments: -Line 21-27 - the authors could have avoided this complicated structure for two simple sentences. Line 41 - Johnson et. al has SOTA on English-French and German-English. Line 77-79 there is no evidence provided as to why combination of multiple languages increases complexity. Please retract this statement or provide more evidence. Evidence in literature seems to suggest the opposite. Line 416-420 - The two lines here are repeated again. They were first mentioned in the previous paragraph. Line 577 - Figure 2 not 3!
[ [ 13, 44 ], [ 45, 88 ], [ 89, 138 ], [ 139, 187 ], [ 188, 245 ], [ 260, 404 ], [ 699, 862 ], [ 863, 936 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2" ]
5
- Strengths: * Elaborate evaluation data creation and evaluation scheme. * Range of compared techniques: baseline/simple/complex - Weaknesses: * No in-depth analysis beyond overall evaluation results. - General Discussion: This paper compares several techniques for robust HPSG parsing. Since the main contribution of the paper is not a novel parsing technique but the empirical evaluation, I would like to see a more in-depth analysis of the results summarized in Table 1 and 2. It would be nice to show some representative example sentences and sketches of its analyses, on which the compared methods behaved differently. Please add EDM precision and recall figures to Table 2. The EDM F1 score is a result of a mixed effects of (overall and partial) coverage, parse ranking, efficiency of search, etc. The overall coverage figures in Table 1 are helpful but addition of EDM recall to Table 2 would make the situations clearer. Minor comment: -Is 'pacnv+ut' in Table 1 and 2 the same as 'pacnv' described in 3.4.3?
[ [ 16, 74 ], [ 149, 204 ] ]
[ "Eval_pos_1", "Eval_neg_1" ]
7
This paper presents a corpus of annotated essay revisions. It includes two examples of application for the corpus: 1) Student Revision Behavior Analysis and 2) Automatic Revision Identification The latter is essentially a text classification task using an SVM classifier and a variety of features. The authors state that the corpus will be freely available for research purposes. The paper is well-written and clear. A detailed annotation scheme was used by two annotators to annotate the corpus which added value to it. I believe the resource might be interesting to researcher working on writing process research and related topics. I also liked that you provided two very clear usage scenarios for the corpus. I have two major criticisms. The first could be easily corrected in case the paper is accepted, but the second requires more work. 1) There are no statistics about the corpus in this paper. This is absolutely paramount. When you describe a corpus, there are some information that should be there. I am talking about number of documents (I assume the corpus has 180 documents (60 essays x 3 drafts), is that correct?), number of tokens (around 400 words each essay?), number of sentences, etc. I assume we are talking about 60 unique essays x 400 words, so about 24,000 words in total. Is that correct? If we take the 3 drafts we end up with about 72,000 words but probably with substantial overlap between drafts. A table with this information should be included in the paper. 2) If the aforementioned figures are correct, we are talking about a very small corpus. I understand the difficulty of producing hand-annotated data, and I think this is one of the strengths of your work, but I am not sure about how helpful this resource is for the NLP community as a whole. Perhaps such a resource would be better presented in a specialised workshop such as BEA or a specialised conference on language resources like LREC instead of a general NLP conference like ACL. You mentioned in the last paragraph that you would like to augment the corpus with more annotation. Are you also willing to include more essays? Comments/Minor: - As you have essays by native and non-native speakers, one further potential application of this corpus is native language identification (NLI). - p. 7: "where the unigram feature was used as the baseline" - "word unigram". Be more specific. - p. 7: "and the SVM classifier was used as the classifier." - redundant.
[ [ 381, 417 ], [ 522, 635 ], [ 636, 714 ], [ 715, 845 ], [ 849, 935 ], [ 935, 1493 ], [ 1498, 1581 ], [ 1582, 1786 ], [ 1786, 1979 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Jus_neg_2", "Eval_pos_2", "Major_claim" ]
8
This paper presents several weakly supervised methods for developing NERs. The methods rely on some form of projection from English into another language. The overall approach is not new and the individual methods proposed are improvements of existing methods. For an ACL paper I would have expected more novel approaches. One of the contributions of the paper is the data selection scheme. The formula used to calculate the quality score is quite straightforward and this is not a bad thing. However, it is unclear how the thresholds were calculated for Table 2. The paper says only that different thresholds were tried. Was this done on a development set? There is no mention of this in the paper. The evaluation results show clearly that data selection is very important, but one may not know how to tune the parameters for a new data set or a new language pair. Another contribution of the paper is the combination of the outputs of the two systems developed in the paper. I tried hard to understand how it works, but the description provided is not clear. The paper presents a number of variants for each of the methods proposed. Does it make sense to combine more than two weakly supervised systems? Did the authors try anything in this direction. It would be good to know a bit more about the types of texts that are in the "in-house" dataset.
[ [ 155, 322 ], [ 323, 563 ], [ 564, 699 ], [ 867, 1061 ] ]
[ "Eval_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3" ]
9
The paper proposes a model for the Stanford Natural Language Inference (SNLI) dataset, that builds on top of sentence encoding models and the decomposable word level alignment model by Parikh et al. (2016). The proposed improvements include performing decomposable attention on the output of a BiLSTM and feeding the attention output to another BiLSTM, and augmenting this network with a parallel tree variant. - Strengths: This approach outperforms several strong models previously proposed for the task. The authors have tried a large number of experiments, and clearly report the ones that did not work, and the hyperparameter settings of the ones that did. This paper serves as a useful empirical study for a popular problem. - Weaknesses: Unfortunately, there are not many new ideas in this work that seem useful beyond the scope the particular dataset used. While the authors claim that the proposed network architecture is simpler than many previous models, it is worth noting that the model complexity (in terms of the number of parameters) is fairly high. Due to this reason, it would help to see if the empirical gains extend to other datasets as well. In terms of ablation studies, it would help to see 1) how well the tree-variant of the model does on its own and 2) the effect of removing inference composition from the model. Other minor issues: 1) The method used to enhance local inference (equations 14 and 15) seem very similar to the heuristic matching function used by Mou et al., 2015 (Natural Language Inference by Tree-Based Convolution and Heuristic Matching). You may want to cite them. 2) The first sentence in section 3.2 is an unsupported claim. This either needs a citation, or needs to be stated as a hypothesis. While the work is not very novel, the the empirical study is rigorous for the most part, and could be useful for researchers working on similar problems. Given these strengths, I am changing my recommendation score to 3. I have read the authors' responses.
[ [ 424, 506 ], [ 506, 660 ], [ 661, 729 ], [ 744, 864 ], [ 864, 1339 ], [ 1615, 1673 ], [ 1674, 1742 ], [ 1743, 1964 ] ]
[ "Eval_pos_1", "Jus_pos_2", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Major_claim" ]
10
- Strengths: This paper contributes to the field of knowledge base-based question answering (KB-QA), which is to tackle the problem of retrieving results from a structured KB based on a natural language question. KB-QA is an important and challenging task. The authors clearly identify the contributions and the novelty of their work, provide a good overview of the previous work and performance comparison of their approach to the related methods. Previous approaches to NN-based KB-QA represent questions and answers as fixed length vectors, merely as a bag of words, which limits the expressiveness of the models. And previous work also don’t leverage unsupervised training over KG, which potentially can help a trained model to generalize. This paper makes two major innovative points on the Question Answering problem. 1) The backbone of the architecture of the proposed approach is a cross-attention based neural network, where attention is used for capture different parts of questions and answer aspects. The cross-attention model contains two parts, benefiting each other. The A-Q attention part tries to dynamically capture different aspects of the question, thus leading to different embedding representations of the question. And the Q-A attention part also offer different attention weight of the question towards the answer aspects when computing their Q-A similarity score. 2) Answer embeddings are not only learnt on the QA task but also modeled using TransE which allows to integrate more prior knowledge on the KB side. Experimental results are obtained on Web questions and the proposed approach exhibits better behavior than state-of-the-art end-to-end methods. The two contributions were made particularly clear by ablation experiment. Both the cross-attention mechanism and global information improve QA performance by large margins. The paper contains a lot of contents. The proposed framework is quite impressive and novel compared with the previous works. - Weaknesses: The paper is well-structured, the language is clear and correct. Some minor typos are provided below. 1. Page 5, column 1, line 421: re-read  reread 2. Page 5, column 2, line 454: pairs be  pairs to be - General Discussion: In Equation 2: the four aspects of candidate answer aspects share the same W and b. How about using separate W and b for each aspect? I would suggest considering giving a name to your approach instead of "our approach", something like ANN or CA-LSTM…(yet something different from Table 2). In general, I think it is a good idea to capture the different aspects for question answer similarity, and cross-attention based NN model is a novel solution for the above task. The experimental results also demonstrate the effectiveness of the authors’ approach. Although the overall performance is weaker than SP-based methods or some other integrated systems, I think this paper is a good attempt in end-to-end KB-QA area and should be encouraged.
[ [ 257, 333 ], [ 335, 448 ], [ 745, 824 ], [ 825, 1858 ], [ 1859, 1897 ], [ 1897, 1983 ], [ 1998, 2062 ], [ 2941, 3029 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_pos_6", "Major_claim" ]
11
paper_summary Since only minor revisions have been made to the paper, my views of the paper have not changed. For details, please see my previous review comments. The author’s response has answered my previous questions very well and added relevant analysis to the revised draft. In my opinion, the analysis of the negative phenomenon on NLU corpora in this paper is comprehensive. But as its contribution is incremental, it is unlikely to be improved through minor modifications. In summary, I think it is a borderline paper of ACL, or as a Findings paper. summary_of_strengths How to deal with negation semantic is one of the most fundamental and important issues in NLU, which is especially often ignored by existing models. This paper verifies the significance of the problem on multiple datasets, and in particular, proposes to divide the negations into important and unimportant types and analyzes them (Table 2). The work of the paper is comprehensive and solid. summary_of_weaknesses However, I think the innovation of this paper is general. The influence of negation expressions on NLP/NLU tasks has been widely proposed in many specialized studies, as well as in the case/error analysis of many NLP/NLU tasks. In my opinion, this paper is the only integration of these points of view and does not provide deeper insights to inspire audiences in related fields. comments,_suggestions_and_typos NA
[ [ 280, 381 ], [ 389, 420 ], [ 422, 479 ], [ 481, 557 ], [ 729, 920 ], [ 921, 971 ], [ 994, 1051 ], [ 1052, 1221 ], [ 1237, 1295 ], [ 1300, 1371 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Eval_neg_2", "Major_claim", "Jus_pos_2", "Eval_pos_2", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Eval_neg_3" ]
12
paper_summary The paper defines a CBMI metric over the NMT source and a target word (given the target history) and then uses it to re-weight the NMT training loss. The definition is simplified to the quotient of NMT probability and the LM probability. Experiments shows that the training strategy improves the translation quality, over two training datasets, outperforming previous works. The paper further shows the method also improves the human evaluation. summary_of_strengths - The proposed method appears to be simple, but works; -Paper appears to be well written; -Experiments comparison and analysis, human evaluation; Overall, paper did a good job in presenting and examining the effectiveness of a simple idea. summary_of_weaknesses I think the paper (and related works) presented the works in a way that they presented a hypothesis (eg, importance of token reweighing), then conduct experiments and analysis showing the effectiveness of the method, then saying re-weighing the token importance works. After finishing reading, I felt the need to go back go re-examine the hypothesis to understand more and realized that I still don't understand the problem in a machine learning sense. The authors are encouraged to (at least) post some "aha" examples showing re-weighting this way indeed is the one that matters. Also, discussing and revealing the reason why NMT still needs this re-weighting even though the NMT model can in principle implicitly capture them would be really helpful. comments,_suggestions_and_typos Please see the weakness section.
[ [ 482, 536 ], [ 537, 571 ], [ 628, 721 ], [ 1014, 1197 ], [ 1198, 1498 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1" ]
13
paper_summary This paper describes the development of a data set that can be used to develop a system that can generate automated feedback to learners' responses to short-answer questions. The data set includes questions, their answers, and their feedback in the domain of computer networking, mostly in English but with a sizable German subset as well. The paper describes the construction of the data set, including agreement and encountered challenges, as well as experimental results that can serve as a baseline for future work. summary_of_strengths Although the domain is niche, since the authors do an extremely thorough job of thoughtfully constructing their data set with expert annotators and guidance, agreement-measurement, and validity evidence, this paper should serve as a model to the community with respect to how to compile similar data sets. While the authors mention that the data set is small -- 4,519 response/feedback pairs covering 22 English and 8 German questions -- it's actually quite large for something that is completely human-compiled and human-reviewed. This paper is very clear, easy to follow, and well-organized. summary_of_weaknesses Unfortunately, the final data set contains imbalanced classes, something the authors aim to address in future versions of the data set. I wouldn't use this as a reason to reject this paper, however. Some in our community may find this work, and its domain, rather niche; this paper would be a great fit for the BEA workshop. comments,_suggestions_and_typos Can the authors mention the dates during which the data was collected? Since this was such a big manual effort, I wouldn't be surprised if the bulk of the work was done in 2021 on data collected in 2020, for instance. This is also important since the domain is computer networking which changes fairly rapidly. On line 005, insert "many" between "by" and "Automatic". On line 040, change "interesting" to "useful". On line 054, "in the last decades" should read "over the past decades". On line 154, "detrimental for" should be "detrimental to". The last sentence of section 2.2, beginning with "Lastly, structured collections..." seems out-of-place here. Should this be a separate paragraph? Or can you do more to tie it in with the preceding sentences? On line 395, "refined for multiple years" should be "refined over multiple years". In this field, it's typical to refer to learners' responses to questions as "responses" rather than "submissions". Just a minor thing you may want to consider :)
[ [ 558, 761 ], [ 762, 862 ], [ 1090, 1152 ], [ 1175, 1310 ], [ 1375, 1446 ], [ 1447, 1501 ] ]
[ "Jus_pos_1", "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Eval_neg_2", "Major_claim" ]
14
paper_summary This paper presents a cross-lingual information retrieval approach using knowledge distillation. The underlying model is ColBERT with XLM-R as the pretained language model. The approach makes use of a teacher model based on query translation and monolingual IR in English. The student model is trained with two objectives. One is an IR objective to match the teacher model's query-passage relevance predictions. The second objective is to learn a representation of the non-english text that most closely matches the teacher's representation at the token level. This relies on a cross lingual token alignment based on greedily aligning tokens with the highest cosine similarity. The authors do abalations of their two objectives and find they are both useful and also compare against fine-tuning ColBERT directly on cross lingual data. On the XOR-TyDi leaderboard, one of this paper's models is the current best. summary_of_strengths - Novel approach that does cross lingual IR where the resulting model does not use MT -New cross lingual token alignment based on multilingual pretrained langauge model -Good abalations and comparisons with fine-tuning on cross lingual data -Strong performance on zero-shot settings as well -The paper has best performance on XOR-TyDi summary_of_weaknesses No major weaknesses comments,_suggestions_and_typos line 62-64 asks whether a high performance CLIR model can be trained that can be operate without having to rely on MT. But the training process still relies on MT, so this approach does still rely on MT, right? I guess the point is that it only relies on MT at training time and not at evaluation / inference. It might be possible to try to make this clearer.
[ [ 950, 1033 ], [ 1036, 1117 ], [ 1119, 1189 ], [ 1191, 1239 ], [ 1307, 1326 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Major_claim" ]
15
paper_summary The paper investigates methods to automatically generate morally framed arguments (relying on a specific stance, on the given topic focusing on the given morals), and analyses the effect of these arguments on different audiences (namely, as liberals and conservatives). summary_of_strengths - The topic of the paper is potentially interesting to the ACL audience in general, and extremely interesting in particular to the Argument Mining (and debating technology) research community. Investigating methods to inject morals into argument generation systems to make arguments more effective and convincing is a very valuable step in the field (opening at the same time ethical issues). -The paper is clear, well written and nicely structured -The experimental setting is well described and the applied methods are technically sound. It relies on the solid framework of the IBM Debater technology. summary_of_weaknesses - very limited size of the user study (6 people in total, 3 liberals and 3 conservatives). Moreover, a "stereotypical" hypothesis of their political vision is somehow assumed) -the Cohen’s κ agreement was 0.32 on the moral assignment -> while the authors claim that this value is in line with other subjective argument-related annotations, I still think it is pretty low and I wonder about the reliability of such annotation. comments,_suggestions_and_typos [line 734] Ioana Hulpu? - > check reference
[ [ 308, 498 ], [ 499, 698 ], [ 700, 754 ], [ 756, 845 ], [ 846, 910 ], [ 935, 970 ], [ 972, 1023 ], [ 1025, 1109 ], [ 1274, 1358 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Jus_pos_4", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Eval_neg_3" ]
17
paper_summary This paper is about improving the prosody of neural text-to-speech (NTTS) systems using the surrounding context of a given input text. The study introduced an extension to a well known NTTS system i.e., FastSpeech-2. The extension is a phoneme level conditional VAE. As cited in the current paper both FastSpeech-2 and conditional VAE are already proposed in the literature. The main novelty of this paper is representation of surrounding utterances using a pre-trained BERT model and generation of prosodically varied samples with the help of learned contextual information. Authors followed standard TTS evaluation protocols to evaluate their proposed architecture, and evaluation results are in favor of the proposed architecture. summary_of_strengths - This paper introduced a new component to FastSpeech-2, a well known non-autoregressive NTTS architecture, called as cross utterance conditional VAE (CUC-VAE). -The CUC-VAE contains two main components 1) cross utterance (CU) embedding and 2) CU enhanced conditional VAE. summary_of_weaknesses - As a reviewer, I found the paper slightly difficult to read -- some long sentences can be rewritten to improve the clarity of the paper reading. -The subjective results are derived on a small set of utterances (11 audios) using a small number of listeners (23 subjects), this may not be substantial enough for statistical significance of the results published in the paper. -It is not clear why CUC-VAE TTS system with L=1 performed worse than baseline system -- an appropriate reason or further analysis may be required to validate this. -In general, there are quite a few things missing -- details provided in comments section. comments,_suggestions_and_typos **Typos:** -Background section: "...high fidelity thank to…" -> "...high fidelity thanks to…" -Background section: " … Fang et al., 2019).Many…" -> " … Fang et al., 2019). Many…" -Figure-1: "...which integrated to into…" -> "...which integrated into…" **Comments:** -Author did not mention how the initial durations of phonemes are obtained. -Are durations of phonemes predicted in frames or seconds? -Figure-1 did not mention how the proposed CUC-VAE TTS system works in the inference time. Moreover, it is hard to understand the color schema followed in the Figure-1, there is no legend. -There is no mentioning of train, valid and test set splits in the dataset section. -In Table-2 the baseline system received a better MOS score than the baseline + fine-grained VAE and baseline + CVAE, why is it? Whereas in Table-4 the baseline system show high MCD and FFE error than the baseline + fine-grained VAE and baseline + CVAE systems, why is it? -How do you represent the reference mel-spectrogram at phoneme level? -Did you use pre-trained HiFi-GAN to synthesize speech from the predicted mel-spectrograms?
[ [ 1070, 1129 ], [ 1130, 1214 ], [ 1216, 1340 ], [ 1342, 1444 ], [ 1446, 1530 ], [ 1531, 1609 ], [ 1612, 1660 ], [ 1661, 1701 ] ]
[ "Eval_neg_1", "Jus_neg_1", "Jus_neg_2", "Eval_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4" ]
18
paper_summary This paper proposes a novel refinement method to synchronously refine the previously generated words and generate the next word for language generation models. The authors accomplish this goal with an interesting implementation without introducing additional parameters. Specifically, the authors reuses the context vectors at previous decoding steps (i.e., c_1, c_2, ..., c_{i-2}) to calculate the refined probabilities in a similar way to the standard generation probabilities (the only difference is that using c_{0<n<i-1} instead of c_{i-1}). A refinement operation will be conducted at a previous position, where the refinement probability is greater than the generation probability. To reduce the computational cost and potential risk of "over-refinement", the authors design a local constraint that narrow the refinement span to the N nearest tokens. In model training, the authors randomly select future target words not greater than N to cover a variety of different future contexts as bleu parts. summary_of_strengths 1. A novel approach to accomplish the modeling of future context. 2. Comprehensive experiments to validate the effectiveness of the proposed approach across different tasks (e.g., standard and simultaneous machine translation, storytelling, and text summarization). 3. Detailed analyses to show how each component (e.g., the hyper parameter N, local constraints and refinement mask) works. summary_of_weaknesses The main concern is the measure of the inference speed. The authors claimed that "the search complexity of decoding with refinement as consistent as that of the original decoding with beam search" (line 202), and empirically validated that in Table 1 (i.e., #Speed2.). Even with local constraint, the model would conduct 5 (N=5) more softmax operations over the whole vocabulary (which is most time-consuming part in inference) to calculate the distribution of refinement probabilities for each target position. Why does such operations only marginally decrease the inference speed (e.g., form 3.7k to 3.5k tokens/sec for Transformer-base model)? How do we measure the inference speed? Do you follow Kasai et al., (2021) to measure inference speed when translating in mini-batches as large as the hardware allows. I guess you report the batch decoding speed since the number is relatively high. Please clarify the details and try to explain why the refinement model hardly affect the inference speed. The score will be increased if the authors can address the concern. [1] Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah Smith. Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation. ICLR 2021. comments,_suggestions_and_typos 1. Line118: SelfAtt_c => Cross Attention, the attention network over the encoder representations is generally called as cross attention. 2. Ablation study in Section 4.1.3 should be conducted on validation sets instead of test sets (similar to Section 4.1.2). In addition, does the refinement mask in Table 2 denote that randomly selecting future target words no greater than N in model training (i.e., Line 254)? 3. Is PPL a commonly-used metric for storytelling?
[ [ 1046, 1108 ], [ 1113, 1216 ], [ 1218, 1308 ], [ 1314, 1358 ], [ 1359, 1427 ], [ 1428, 1433 ], [ 1458, 1513 ], [ 1514, 2458 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_pos_3", "Jus_pos_3", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1" ]
19
paper_summary The paper proposes 6 test corpora for vision and language captioning systems that target specific competency. For each competency, examples are generated semi-automatically from existing language + vision tasks, such QA in V7W, and are created in a FOIL style, where one example correctly describes the image, while another example makes a minimal change to caption and does not describe the image. Systems are then challenged to prefer captions that correctly identify the image. The competencies tested include existence, plurality/counting, spatial reasoning (via prepositions), situational knowledge (via imSitu data), and coreference. The paper evaluates several recent pre-training based models, finding that many fail at their challenges, and that the multi-task model 12-in-1, works best. summary_of_strengths Proposes a fairly diverse set of challenges that could be a useful diagnostic going forward. The paper evaluates currently relevant model on the diagnostic, establishing clear baselines for their dataset moving forward. Because the paper encompasses essentially 5 independent datasets, it a very substantial body of work. It seems larger than a standard paper. summary_of_weaknesses (being a previous reviewer R BWRg, I will respond to previously identified weakness) I still find the argument of what is and is not included in the diagnostic unclear. In many ways, this seems like a case of a subset of competencies that we have enough visual annotations to semi-automatically create data for. In my opinion, the paper should steer away from making arguments that these examples are deeply linguistic, beyond, involving nouns, counting, verbs, and coreference. As such, I find the title and some of the introduction over-claiming, but, this is really a matter of opinion, resting on what exactly 'linguistic' means. The main body of the paper still lacks examples but I appreciate their inclusion in the appendix. It's very hard to imagine the foils from the descriptions alone. This may be asking a lot, but the paper would be significantly improved if the last page were almost entirely made of examples from the appendix. This is a CVPR style of presentation, and would require significant text trimming. The examples were good overall, but the co-ref part of the benchmark stands out. It is essentially a QA task, which isn't really compatible with just caption based training that most of the evaluated most are setup to do (with the exception of 12-1). This isn't an issue, because its not really the benchmark's problem, but I am not sure the format of the foil is that sensible. I suspect this will be the least used of the new foils, but I don't have a concrete proposal how it could be improved to really be a captioning task. comments,_suggestions_and_typos -
[ [ 833, 925 ], [ 1054, 1118 ], [ 1120, 1195 ], [ 1303, 1385 ], [ 1386, 1851 ], [ 1852, 1949 ], [ 1950, 2243 ], [ 2245, 2325 ], [ 2326, 2774 ] ]
[ "Eval_pos_1", "Jus_pos_2", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3" ]
20
paper_summary Motivated by empirical findings that training models with Prompt Tuning can achieve the same performance as fully fine-tuning a model, but the training takes much longer to reach the same performance, they explore ways to exploit knowledge from already trained prompts. They explore using already trained prompts to transfer knowledge between tasks (using the same frozen model) and also the transfer of prompt between _different_ frozen models. For between task transfer, they either directly re-use a prompt from a source task on the target task or they use the prompt learned from the source task as the initialization point for the target task. For between model transfer, they uses these same methods but include a learnable `projector` (a small, 2 layer neural-network) that maps the prompts from one frozen model to another be using the projected prompt in one of the methods mentioned above. They have two methods for learning this `projector`. In the first method, which they call _Distance Minimization_, they minimize the $L_2$ distance between a projected source prompt (trained on the source frozen model) and a target prompt (a prompt trained on the same task using the target model). In the second method (_Task Tuning_) they learn the `projector` via backpropagation. In this case they take a prompt trained on a source task $P_s$, project it ($Proj(P_s)$) and then use that when prompt tuning the target model. Gradient updates are only applied to the projector. They also look at several methods of prompt similarity and use them to predict prompt transferability. They main methods are Cosine and Euclidean distances between prompt tokens and their novel model activation similarity where prompts are fed into frozen models and the activations of the feed-forward layers are recorded. The call this method _ON_. ### Results Their first results look at the performance of directly re-using a prompt trained on a source task for a downstream task. They find that this can produce strong performance (measured in relative performance, the direct source to target prompt transfer performance divided by the performance researched from directly training on the target task) within clusters of similar tasks. Their second results look at the performance of using a prompt learned on a source task to initialize the prompt for a target task and then doing Prompt Tuning. They find that this method can give consistent gains in terms of task performance as well as speed of convergence. Their third set results examine transfer across models. They find that direct re-use of a prompt projected by the `projector` learned via the _Distance Minimization_ method results in poor performance, especially within the Sentiment tasks. They find that direct reuse of a prompt projected by a `projector` learned with their _Task Tuning_ method does better especially when the tasks are within the same cluster. They also look at how using a _Task Tuning_ prompt to initialize training of a new prompt performs and finds that it can lead to some improvements in task performance and small improvements in convergence speed. Their final set of results examine use prompt similarity methods to predict prompt transferablity (in the context of direct prompt reuse). They find that all methods are able to distinguish between multiple prompts (created by training with different random seeds) trained for the same task from prompts trained for other tasks. They also find that _ON_ produces a ranking of similar prompts that best correlate with direct reuse performance (using Spearman's rank correlation scores). They also find that the correlation decreases as the size of the frozen model grows. summary_of_strengths The strengths of the paper include: * Experiments on many different and diverse datasets, 17 with a good mixture of sentiment, NLI, EJ, Paraphrase detection, and Question answers. * Experiments across many model sizes and architectures, including encoder-only models like RoBERTa instead of just the encoder-decoder and decoder-only models we see else where. * The inclusion of small motivating experiments like the convergence speed are a great way to establish the importance of the work and the impact it would have. * The use of the same methods (direct reuse of prompts and using prompts as initialization) in different settings (cross task transfer with the same model and cross model transfer with the same task) and similar results in each demonstrate the robustness of the method. * Not only does their novel prompt similarity method (_ON_ based on model activations when processing the prompt) work great at predicting direct use similarity, it also captures the non-linear way the model interacts with the prompt in a way that simple methods like token similarity can. summary_of_weaknesses The majority of the weaknesses in the paper seem to stem from confusion and inconsistencies between some of the prose and the results. 1. Figure 2, as it is, isn't totally convincing there is a gap in convergence times. The x-axis of the graph is time, when it would have been more convincing using steps. Without an efficient, factored attention for prompting implementation a la [He et al. (2022)](https://arxiv.org/abs/2110.04366) prompt tuning can cause slow downs from the increased sequence length. With time on the x-axis it is unclear if prompt tuning requires more steps or if each step just takes more time. Similarly, this work uses $0.001$ for the learning rate. This is a lot smaller than the suggested learning rate of $0.3$ in [Lester et al (2021)](https://aclanthology.org/2021.emnlp-main.243/), it would have been better to see if a larger learning rate would have closed this gap. Finally, this gap with finetuning is used as a motivating examples but the faster convergence times of things like their initialization strategy is never compared to finetuning. 2. Confusion around output space and label extraction. In the prose (and Appendix A.3) it is stated that labels are based on the predictions at `[MASK]` for RoBERTa Models and the T5 Decoder for generation. Scores in the paper, for example the random vector baseline for T5 in Table 2 suggest that the output space is restricted to only valid labels as a random vector of T5 generally produces nothing. Using this rank classification approach should be stated plainly as direct prompt reuse is unlikely to work for actual T5 generation. 3. The `laptop` and `restaurant` datasets don't seem to match their descriptions in the appendix. It is stated that they have 3 labels but their random vector performance is about 20% suggesting they actually have 5 labels? 4. Some relative performance numbers in Figure 3 are really surprising, things like $1$ for `MRPC` to `resturant` transfer seem far too low, `laptop` source to `laptop` target on T5 doesn't get 100, Are there errors in the figure or is where something going wrong with the datasets or implementation? 5. Prompt similarities are evaluated based on correlation with zero-shot performance for direct prompt transfer. Given that very few direct prompt transfers yield gain in performance, what is actually important when it comes to prompt transferability is how well the prompt works as an initialization and does that boost performance. Prompt similarity tracking zero-shot performance will be a good metric if that is in turn correlated with transfer performance. The numbers from Table 1 generally support that this as a good proxy method as 76% of datasets show small improvements when using the best zero-shot performing prompt as initialization when using T5 (although only 54% of datasets show improvement for RoBERTa). However Table 2 suggests that this zero-shot performance isn't well correlated with transfer performance. In only 38% of datasets does the best zero-shot prompt match the best prompt to use for transfer (And of these 5 successes 3 of them are based on using MNLI, a dataset well known for giving strong transfer results [(Phang et al., 2017)](https://arxiv.org/abs/1811.01088)). Given that zero-shot performance doesn't seem to be correlated with transfer performance (and that zero-shot transfer is relatively easy to compute) it seems like _ON_'s strong correlation would not be very useful in practice. 6. While recent enough that it is totally fair to call [Vu et al., (2021)](https://arxiv.org/abs/2110.07904) concurrent work, given the similarity of several approaches there should be a deeper discussion comparing the two works. Both the prompt transfer via initialization and the prompt similarity as a proxy for transferability are present in that work. Given the numerous differences (Vu et al transfer mostly focuses on large mixtures transferring to tasks and performance while this work focuses on task to task transfer with an eye towards speed. _ ON_ as an improvement over the Cosine similarities which are also present in Vu et al) it seems this section should be expanded considering how much overlap there is. 7. The majority of Model transfer results seem difficult to leverage. Compared to cross-task transfer, the gains are minimal and the convergence speed ups are small. Coupled with the extra time it takes to train the projector for _Task Tuning_ (which back propagation with the target model) it seems hard to imagine situations where this method is worth doing (that knowledge is useful). Similarly, the claim on line 109 that model transfer can significantly accelerate prompt tuning seems lie an over-claim. 8. Line 118 claims `embedding distances of prompts do not well indicate prompt transferability` but Table 4 shows that C$_{\text{average}}$ is not far behind _ON_. This claim seems over-reaching and should instead be something like "our novel method of measuring prompt similarity via model activations is better correlated with transfer performance than embedding distance based measures" comments,_suggestions_and_typos 1. Line 038: They state that GPT-3 showed extremely large LM can give remarkable improvements. I think it would be correct to have one of their later citations on continually developed LM as the one that showed that. GPT-3 mostly showed promise for Few-Shot evaluation, not that it get really good performance on downstream tasks. 2. Line 148: I think it would make sense to make a distinction between hard prompt work updates the frozen model (Schick and Schütez, etc) from ones that don't. 3. Line 153: I think it makes sense to include [_Learning How to Ask: Querying LMs with Mixtures of Soft Prompts_ (Qin and Eisner, 2021)](https://aclanthology.org/2021.naacl-main.410.pdf) in the citation list for work on soft prompts. 4. Figure 3: The coloring of the PI group makes the text very hard to read in Black and White. 5. Table 1: Including the fact that the prompt used for initialization is the one that performed best in direct transfer in the caption as well as the prose would make the table more self contained. 6. Table 2: Mentioning that the prompt used as cross model initialization is from _Task Tuning_ in the caption would make the table more self contained. 7. Line 512: It is mentioned that _ON_ has a drop when applied to T$5_{\text{XXL}}$ and it is suggested this has to do with redundancy as the models grow. I think this section could be improved by highlighting that the Cosine based metrics have a similar drop (suggesting this is a fact of the model rather than the fault of the _ON_ method). Similarly, Figure 4 shows the dropping correlation as the model grows. Pointing out the that the _ON_ correlation for RoBERTA$_{\text{large}}$ would fit the tend of correlation vs model size (being between T5 Base and T5 Large) also strengths the argument but showing it isn't an artifact of _ON_ working poorly on encoder-decoder models. I think this section should also be reordered to show that this drop is correlated with model size. Then the section can be ended with hypothesizing and limited exploration of model redundancy. 8. Figure 6. It would have been interesting to see how the unified label space worked for T5 rather than RoBERTAa as the generative nature of T5's decoding is probably more vulnerable to issue stemming from different labels. 9. _ ON_ could be pushed farther. An advantage of prompt tuning is that the prompt is transformed by the models attention based on the value of the prompt. Without having an input to the model, the prompts activations are most likely dissimilar to the kind of activations one would expect when actually using the prompt. 10. Line 074: This sentence is confusing. Perhaps something like "Thus" over "Hence only"? 11. Line 165: Remove "remedy,"
[ [ 3769, 3821 ], [ 3823, 3911 ], [ 3918, 3972 ], [ 3972, 4095 ], [ 4850, 4984 ], [ 4985, 9951 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_neg_1", "Jus_neg_1" ]
21
paper_summary This paper focuses on using bandit learning to learn from user feedback for Extractive QA (EQA), the binary supervisory signals from user feedback serve as rewards pushing QA systems to evolve. The learning algorithm aims to maximise the rewards of all QA examples, which consists of online learning and offline learning, the online learning receives user feedback and updates model parameters after seeing one QA example, whereas offline learning updates model parameters after seeing all QA examples. The experimental results on QA datasets from MRQA support the effectiveness of the proposed bandit learning approach, proving that the proposed approach can consistently improve model’s performance on SQuAD, HotpotQA and NQ in in-domain experiments under online learning especially when there are extremely little QA examples available for SQuAD. Besides, a set of experiments are conducted to investigate the difference between online learning and offline learning, and the importance of model initialisation in the proposed bandit learning approach. summary_of_strengths 1. The proposed bandit learning approach that learns from user feedback for EQA is novel, which simulates real deployment environment and provides insights for further exploration in bridging the gap between QA model training and deployment. 2. Empirical results show the effectiveness of the proposed approach, especially the in-domain experimental results for online learning. 3. Conducting extensive experiments studying the effect of domain transfer and model initialisation. summary_of_weaknesses 1. The binary reward from user feedback is weak due to the large search space for EQA, resulting in the incapability of providing precise supervisory signals. Need to design a more sophisticated reward. 2. The proposed approach heavily relies on how accurate the initial model is, which means it is highly sensitive to model initialisation, limiting its usefullness. 3. In in-domain experiments of online and offline learning, bandit learning approach hurts model’s performance under some scenarios especially for TriviaQA and SearchQA. 4. Some other papers of learning from feedback for QA should be compared, such as Learning by Asking Questions, Misra et al. CVPR 2017. comments,_suggestions_and_typos Questions: 1. Why only use single-pass in online learning?
[ [ 1095, 1334 ], [ 1338, 1471 ], [ 1476, 1574 ], [ 1600, 1755 ], [ 1756, 1799 ], [ 1804, 1938 ], [ 1939, 1963 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_neg_1", "Eval_neg_1", "Jus_neg_2", "Eval_neg_2" ]
22
paper_summary This paper works on the problem of personalization in knowledge grounded conversation (KGC). To develop a benchmark, the authors collected a new KGC dataset based on Reddit containing personalized information (e.g. user profile and dialogue history). The authors propose a probabilistic model for utterance generation conditioned on both personalized profile (personal memory) and personal knowledge. Dual learning is employed to better learn the unconstrained relation between personal memory $Z^m$ and knowledge $Z^k$ , and variational method is proposed to approximately marginalize out $Z^m$ and $Z^k$ during inference. The results with automatic evaluation show promising improvement and human evaluation also validates this. Finally, various ablation studies are conducted to reveal the contribution of each model component. summary_of_strengths - The problem of personalization in KGC is a relatively overlooked yet important problem. The authors developed a promising method and benchmark for this new challenge. - The idea of incorporating dual learning to link personalized sources (e.g. personal memory and knowledge) is very interesting and convincing. I’d like to see follow-up works comparing the ideas against this paper’s. - The improvement in automatic evaluation is significant (though not fully reliable, as the author’s acknowledge in line 522). Human evaluation also corroborates the proposed model’s superiority, though the improvement becomes less significant. summary_of_weaknesses - The paper is generally well-written and easy to follow, but the definition of personal memory was quite ambiguous and not fully defined. For instance, does this concept include false beliefs (incorrect knowledge), subjective opinions (unsupported knowledge) or inferential knowledge? What would be the unit of personal memory in the context of visually grounded dialogues (line 134)? How can we extend the idea to inter-personal knowledge, i.e. common ground? - I understand the space is limited, but I think more information/explanation on the collected dataset should be added (e.g. data collecting procedure and reviewing process). comments,_suggestions_and_typos - In lines 198-220, explanation of $\phi$, $\psi$ and $\pi$ is not clear. Can they be better explained or incorporated in Figure 2? - In Figure 2, should the distilled distribution of $Z^p$ not be conditioned on $Z^k$? In the text, $q_\phi (Z^p | C, R)$ is not conditioned on $Z^k$ (lines 199, 207) - Typo: “the the” in line 278 - For Table 3, did you also evaluate against human answers (e.g. original response)? If available, it may be better to be incorporated. - What exactly is personal memory? How is this defined, esp. in other domains? I’d like to see more discussion on this in the updated paper.
[ [ 957, 1034 ], [ 1038, 1179 ], [ 1256, 1498 ], [ 1584, 1659 ], [ 1661, 1983 ], [ 2025, 2102 ], [ 2103, 2157 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2" ]
24
paper_summary This paper proposes a solution for "Contrastive Conflicts". What exactly are “Contrastive Conflicts”? They occur when multiple questions are derived from a passage, each with different semantics. The questions are going to be close to the passage in representation space and by transitivity they are going to be close among themselves even though they are semantically different (Transitivity Conflict). In addition to this, if multiple questions derived from the same passage are in the same training batch, then the questions will see that passage as both positive and negative (In-Batch Conflict). The solution proposed by the paper is to use smaller granularity units, i.e. contextualized sentences. Per sentence representations are computed by using per sentence special indicator tokens, then a similar approach to DPR is used to finetune sentence representations. Because different questions have answers in different sentences the contrastive conflict is generally resolved. Improvements are reported on NQ, TriviaQA and SQuAD, especially on SQuAD where conflicts are reported to be severe (i.e. often multiple different questions are extracted from the same passage). Extensive experiments show that the method does well even in transfer learning. summary_of_strengths Strengths: -The paper obtains small but convincing improvements on NQ and TriviaQA, and large but a bit puzzling results on SQuAD (considering that one of the baselines does not match the DPR paper and that SQuAD can benefit dramatically from combining DPR with BM25, but it is not done in this paper). -The paper presents many interesting ablations and transfer learning experiments that help further convince the reader of the efficacy of the method. summary_of_weaknesses Weaknesses: -Retrieving (avg # sentences) * 100 sentences (see section 3.3) instead of just 100 sentences seems to be a bit of a cheat. For a strict comparison to DPR, Top-20 and Top-100 performance should be reported with exactly those numbers of retrieved elements and without post-processing on larger sets of retrieved passages. One could argue that allowing for more expensive passage retrieval is what is giving the improvements in this paper, other than for SQuAD where the lower granularity does seem to be helping, except it doesn’t help as much as BM25. -The idea of having more granular representations for passage retrieval is far from new. The authors do cite DensePhrases (Lee et al. 2021), but don’t mention that it’s already at lower granularity than passage level. They could also cite for example ColBERT (Khattab et al. 2021). -The big improvement reported in Table 2 for SQuAD “Single” is a bit confusing since it relies on a Top-20 number that is much lower that what is reported on the DPR paper (although this seems to be a common problem). On the positive side, the number reported for SQuAD “Multi” matches the DPR paper. comments,_suggestions_and_typos Suggestions: -Line 91: the authors claim that contrastive conflicts are *the* cause for bad performance on SQuAD, but the statement seems unjustified at that point. It might make sense to refer to later results in the paper.
[ [ 1305, 1422 ], [ 1597, 1746 ], [ 1781, 1904 ], [ 1905, 2332 ], [ 2334, 2421 ], [ 2422, 2614 ], [ 2617, 2693 ], [ 2694, 2832 ], [ 2963, 3113 ], [ 3114, 3174 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4" ]
26
paper_summary The paper presents a novel approach to understanding math problems from their textual formulations. The approach builds on those from related work, choosing syntactic representations. The key novelties are (1) an internal graph representation of the operators and (2) a novel pretraining setting. The model achieves vast improvements over prior art. summary_of_strengths The new model addresses several key problems of previous work and appears to contribute a very logically motivated extension, modeling the structure of the required mathematical operations. The model description is clear and the experimental setup and results are reasonably clear and allow for an easy comparison with related work. There is also an ablation study to analyze the contribution of the individual components of the model. The paper is easy to read. summary_of_weaknesses The model section seems to lack comparison with prior work. It is not entirely clear what is novel here and what is taken from prior work. It is also not entirely clear to me if pretraining is performed with data from all tasks and whether the same setup had been used previously. If this is different from prior work, that would be unfair and a major flaw. comments,_suggestions_and_typos I'd like to see my doubt about the pretraining cleared up.
[ [ 311, 364 ], [ 386, 575 ], [ 576, 606 ], [ 611, 719 ], [ 823, 848 ], [ 873, 932 ], [ 933, 1231 ] ]
[ "Eval_pos_8", "Eval_pos_7", "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1" ]
29
paper_summary See the prior review for a summary. Based upon the author response I do raise my score slightly from 2.5 to 3.0 to reflect that the definitions referenced in the author response might be sufficient for a target audience that is intimately familiar with WSD. On the other hand, it remains open as to what the impact of the proposed approach would be on any of the noted downstream applications, or beyond English. While WSD can be considered part of the traditional NLP preprocessing pipeline, it's impact on modern end-to-end solution is likely small. Nevertheless there might be high-impact cases such as token-based retrieval (which is used widely), and investigating the impact of the proposed approach on such applications might provide a convincing data point that can provide evidence for the impact of the proposed work. summary_of_strengths See the prior review. summary_of_weaknesses See the prior review. comments,_suggestions_and_typos See the prior review.
[]
[]
31
paper_summary The paper describes a new approach towards MeSH label prediction, utilizing the title abstract journal relative information. The proposed model combines BiLSTMs, Dilated CNNs and GCNNs to extract features from abstracts, titles and the mesh term hierarchy respectively. Limiting the search MeSH space with information extraction from metadata (such as other articles published in that journal) allows for a boost in performance by building dynamic attention masks. The final model shows good performance compared to related approaches, one of which uses the full article. summary_of_strengths - Utilized information past the document itself to limit the MeSH search space -Introduces novel end-to-end architecture that can be used in other tasks involving scholarly articles -Achieves good performance compared to related approaches. summary_of_weaknesses - Threshold is said to have a very big impact but is not discussed in detail with different ablations. How does threshold affect computational complexity (outside of performance)? -Some of the design choices are not explained well (e.g. why IDF-weighting) -Training time (epochs) and computational complexity of the kNN and GCNN component is not discussed. comments,_suggestions_and_typos - Equations 10 & 11 should be H_{abstract} instead of D_{abstract}? If not, when is H_{abstract} used? -There is a significant drop in performance for MeSH terms when metadata are not available, leading to a worse performance than other methods (Ablations-d). In case of new journals or preprints, is this the expected performance? -With the tuned threshold, how many MeSH terms are not selected during the dynamic masking on average in the different data splits? What is the hierarchical level of these terms? -A few minor typos, proof reading should fix them. Nothing major.
[ [ 688, 789 ], [ 791, 847 ], [ 875, 974 ], [ 974, 1052 ], [ 1054, 1103 ], [ 1104, 1230 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2" ]
34
paper_summary The performance of structured prediction models can be greatly improved by scaling to larger state spaces, yet the inference complexity of these models scales poorly w.r.t. the size of the state space. The goal of this work is to reduce the inference complexity of structured models by factorizing the clique potentials using low-rank tensor decompositions and performing message passing in an induced rank space instead of the original state space. This work makes three contributions: 1. Using the language of factor graph grammars, this work unifies previous low-rank tensor decomposition works such as Yang et al 2021b and Chiu et al 2021. This work shows that those works are essentially performing message passing on a factor graph with two types of nodes: the original state nodes and auxiliary rank nodes induced by the low-rank tensor decomposition. 2. On a sub-family of factor graph grammars which subsume most commonly-used structured prediction models such as HMMs, HSMMs, and PCFGs, this work proposes to marginalize the state nodes first and only perform inference in the induced rank nodes, which reduces the complexity by replacing a factor of the state size by a factor of the rank size which is usually smaller. 3. Empirically this work scales HMMs and PCFGs to very large state spaces and achieves strong performance. summary_of_strengths 1. This work is insightful in pointing out that by performing message passing only in the rank space after marginalizing the original state nodes (which is a one-time cost), a factor of the number of states in the total complexity can be replaced by a factor of the rank size. This idea is generally applicable to a large family of factor graph grammars that have one external node per hypergraph fragment, and it might enable scaling many structured prediction models. 2. This work gets strong empirical performance by scaling to very large state spaces when compared to previous structured prediction works. In particular, this work trains the largest-ever PCFG in the task of unsupervised parsing on PTB (to my knowledge) and establishes a new state-of-the-art performance in this particular task. 3. This work confirms findings of previous works such as Chiu and Rush 2020 that scaling structured prediction models can improve performance. For example, Figure 6 (b) suggests that scaling PCFGs to beyond 10k pre-terminals might further improve modeling performance. summary_of_weaknesses By showing that there is an equivalent graph in the rank space on which message passing is equivalent to message passing in the original joint state and rank space, this work exposes the fact that these large structured prediction models with fully decomposable clique potentials (Chiu et al 2021 being an exception) are equivalent to a smaller structured prediction model (albeit with over-parameterized clique potentials). For example, looking at Figure 5 (c), the original HMM is equivalent to a smaller MRF with state size being the rank size (which is the reason why inference complexity does not depend on the original number of states at all after calculating the equivalent transition and emission matrices). One naturally wonders why not simply train a smaller HMM, and where does the performance gain of this paper come from in Table 3. As another example, looking at Figure 4 (a), the original PCFG is equivalent to a smaller PCFG (with fully decomposable potentials) with state size being the rank size. This smaller PCFG is over-parameterized though, e.g., its potential $H\in \mathcal{R}^{r \times r}$ is parameterized as $V U^T$ where $U,V\in \mathcal{R}^{r \times m}$ and $r < m$, instead of directly being parameterized as a learned matrix of $\mathcal{R}^{r \times r}$. That being said, I don't consider this a problem introduced by this paper since this should be a problem of many previous works as well, and it seems an intriguing question why large state spaces help despite the existence of these equivalent small models. Is it similar to why overparameterizing in neural models help? Is there an equivalent form of the lottery ticket hypothesis here? comments,_suggestions_and_typos In regard to weakness #1, I think this work would be strengthened by adding the following baselines: 1. For each PCFG with rank r, add a baseline smaller PCFG with state size being r, but where $H, I, J, K, L$ are directly parameterized as learned matrices of $\mathcal{R}^{r \times r}$, $\mathcal{R}^{r \times o}$, $\mathcal{R}^{r}$, etc. Under this setting, parsing F-1 might not be directly comparable, but perplexity can still be compared. 2. For each HMM with rank r, add a baseline smaller HMM with state size being r.
[ [ 1380, 1403 ], [ 1404, 1653 ], [ 1851, 1987 ], [ 1988, 2179 ], [ 4207, 4280 ], [ 4282, 4707 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_neg_1", "Jus_neg_1" ]
37
paper_summary In this work, the authors proposed a unified model of task-oriented dialogue understanding and response generation. The two major enhancements are adopting task-oriented dialogue pre-training on a data collection, and introducing the prompt-based learning for the multi-task capability via one model. From the experimental results, the pre-training strategy proved useful to improve the performance on the benchmark MultiWOZ. summary_of_strengths While the idea of task-specific pre-training is not new, it is still interesting, and the proposed method proved effective in leveraging the language backbone T5, and can be potentially applied to other models and tasks. summary_of_weaknesses 1. There are some other contemporary state-of-the-art models, the authors can consider citing and including them for an extensive comparison. 2. It will be good to see some analysis and insights on different combinations of pre-training datasets introduced in Table 1. comments,_suggestions_and_typos Here are some questions: 1. Since some of the sub-tasks, like dialogue state tracking, require a fixed format of the output, if the model generation is incomplete or in an incorrect format, how can we tackle this issue? 2. The dialogue multi-task pre-training introduced in this work is quite different from the original language modeling (LM) pre-training scheme of backbones like T5. Thus I was curious about why not pre-train the language backbone on the dialogue samples first with the LM scheme, then conduct the multi-task pre-training? Will this bring some further improvement? 3. It will be good to see some results and analysis on the lengthy dialogue samples. For instance, will the performance drop on the lengthy dialogues?
[ [ 462, 542 ], [ 548, 681 ], [ 709, 848 ], [ 851, 976 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Eval_neg_2" ]
38
paper_summary The paper studies the benefits of introducing a Bayesian perspective to abstractive summarization. The authors run the MC dropout method on two pre-trained summarization models, sampling different summarization texts according to specific dropout filters. They use BLEUVarN as a metric of uncertainty for a possible summarization, showing the variations across the summary samples. The authors conduct experiments on three datasets on the correlation between the uncertainty and summarization performances, and show that the performance of the summarization can slightly improve by selecting the "median" summary across the pool of sampled ones. summary_of_strengths - To the extent of my knowledge, it is the first work that study model uncertainty (in the particular form of the variability of generated summaries) in abstractive summarization. - The paper provides an analyses on three collections, showing the (cor)relations between the metric of summarization uncertainty (or in fact summarization variability) and ROUGH. They observe that in general the higher the uncertainty score of a summary, the lower its ROUGH score. - The work shows that the performance of summarization can be slightly improved by selecting the summary that lays in the "centroid" of the pool of generated summaries. summary_of_weaknesses My main concerns are the lack of novelty and proper comparison with a previous study. - As correctly mentioned in the paper, the work of Xu et al. is not based on MC dropout. However, that work still provide a metric of uncertainty over a generated summary. In fact, the metric of Xu et al. (namely the entropy of the generation distributions) comes with no or little extra computational costs, while the MC dropout of 10 or 20 introduces considerably large feedforward overheads. I believe the method of Xu et al. can be compared against in the experiments of 5.1. This can let the reader know whether the extra cost of MC dropout method comes with considerable benefits. - There is no specific novelty in the method. The observation regarding the correlation between uncertainty and performance is in fact an expected one, and has already observed in several previous studies (also in the context of language generation), like: Not All Relevance Scores are Equal: Efficient Uncertainty and Calibration Modeling for Deep Retrieval Models Daniel Cohen, Bhaskar Mitra, Oleg Lesota, Navid Rekabsaz, Carsten Eickhoff In proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021)- - The reported improvement is marginal, while achieved with the large overhead of MC sampling. My guess is that the improvement is only due to the effect of ensembling, inherent in MC droupout. comments,_suggestions_and_typos As mentioned above: - I believe the method of Xu et al. can be compared against in the experiments of 5.1. This can let the reader know whether the extra cost of MC dropout method comes with considerable benefits. - More evidences regarding the performance improvement, showing that it is not only due to the effect of ensembling. - Studying more efficient and recent Bayesian approaches, such as: Agustinus Kristiadi, Matthias Hein, and Philipp Hennig. 2020. Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks. In Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 119), Hal Daumé III and Aarti Singh (Eds.). PMLR
[ [ 684, 861 ], [ 1338, 1423 ], [ 1424, 2010 ], [ 2011, 2057 ], [ 2057, 2579 ], [ 2581, 2675 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3" ]
39
paper_summary This work has created a benchmark dataset for multi-task learning for biomedical datasets. Based on this new benchmark dataset, this work has proposed instruction learning based multi-task learning, which has shown to outperform single-task learning as well as vallina multi-task learning. summary_of_strengths 1. This work has newly aggregated more than 20 biomedical datasets in 9 categories into a new multi-task paradigm and formalize them into a text to text format so that we can build one unified model for all different tasks. 2. This work has proposed using manually created instructions for multi-task learning so that the model can be instructed to perform each task without confusion. And this method has been shown to outperform a lot the vanilla multi-task learning and also outperform single-task learning in some cases. summary_of_weaknesses 1. In the proposed method, the BI would be concatenated with instances as the input to the BART model, and in the BI, examples are provided. Actually these examples are extracted from those instances, then why should we still have examples in BI? How about just having those instructions in the BI? 2. One important baseline is missing: in those methods proposed for DecaNLP and UnifiedQA, etc., other types of tokens or phrases are used to indicate which task/dataset each input instance belongs to, which is very important to let the model know what the input instance it is. However, in the baseline of vanilla multi-task learning (V-BB), no such kinds of special tokens are used at all, which forms a very unfair baseline to be compared with. The model are fed by so many instances from various kinds of tasks without any differentiation, which for sure would lead to deteriorate performance. For this reason, the effectiveness or the necessity of BI is questionable. 3. More deep analysis over the impacts of different kinds of designs of the BI is needed, since such designs can vary a lot among different designers or writers. If so, the performance would be very unstable due to the variance of BI, which makes this type of method not applicable to real-world problems. 4. Only Rouge-L is used for evaluation, which makes the evaluation not that reliable. Especially for some classification tasks, Rouge-L is not sensitive enough. comments,_suggestions_and_typos 1. In lines 382-384, it is mentioned that "We have discarded long samples (>1024 token length) from validation and testing data as well.". I think it is not appropriate to throw any examples from the test set.
[ [ 1178, 1212 ], [ 1213, 1622 ], [ 1623, 1772 ], [ 1773, 1848 ], [ 1852, 1938 ], [ 1939, 2010 ], [ 2159, 2242 ], [ 2242, 2317 ] ]
[ "Eval_neg_1", "Eval_neg_1", "Jus_neg_2", "Eval_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4" ]
41
paper_summary This paper propose prefix-based models for controllable text generation. Similar to [1], prefixes are token embeddings of language models (e.g., GPT-2) used for learning attribute-specific information and steering the generation of the fixed language models. The authors further add a contrastive loss to enhance the models' controllability. In addition, an unsupervised learning method is introduced to handle scenarios where labels are not available. The authors evaluated the proposed models on multiple controllable text generation tasks, such as controlling sentiment and topics. The experimental results show that comparing to baselines like PPLM and GeDi, the proposed model can achieve a good balance between fluency and controllability. [1] Prefix-tuning: Optimizing continuous prompts for generation. ACL 2021 summary_of_strengths - The proposed lightweight model achieved strong performance in multiple controllable text generation tasks. -The idea of controlling language models in unsupervised way is interesting and new. summary_of_weaknesses - Missing human evaluation for the proposed unsupervised learning method. The major technical contribution (novelty) of the paper is controlling language models in unsupervised manner. Unfortunately, human evaluation is absent (in table 4) to demonstrate its effectiveness. -For the multi-aspect controlling experiments, CTRL[1] and PPLM[2] should be good baselines. [1] CTRL: A conditional transformer language model for controllable generation. [2] Plug and play language models: A simple approach to controlled text generation. ICLR 2020 comments,_suggestions_and_typos Please consider adding new human evaluation results and baselines as mentioned in weaknesses.
[ [ 858, 964 ], [ 965, 1050 ], [ 1075, 1146 ], [ 1147, 1258 ], [ 1258, 1346 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_1" ]
42
paper_summary The paper provides a benchmark dataset that can be used for training & evaluation of automated fact checking systems. The major contribution of this paper is that they provide a large collection of 33,697 claims with associated review articles and premise articles. In the experiment, this work presents a two-stage detection framework, including evidence sentence extraction and claim veracity inference. LSTM-based baselines and RoBERTa-based baselines are included and compared. summary_of_strengths 1. The idea of using premises articles for claim inference in automated fact checking is interesting. 2. The paper is overall well-structured and the methods are explained clearly. summary_of_weaknesses 1. The methods are not novel as they are largely borrowing from existing work. 2. It would be nice to have more detailed descriptions of the data collection process, e.g., label mapping, and data statistics, (how many articles per claims? how many sentences per articles? sentence length?) If not enough space on the main text, these information could be added on appendix. 3. It would be better if the authors evaluate more state-of-the-art methods on this benchmark dataset. 4. In section 3.3, the authors claim that the disadvantage of using web search is indirect data leak. Can we eliminate the data leak through filtering with publishing time. comments,_suggestions_and_typos 1. The prequential evaluation is well-written. It would be interesting to see more such analysis and discussion of the datasets. 2. Did you try the combination of TF-IDF and dense retrieval for evidence sentence extraction? 3. As your dataset is imbalanced, it would be better to see some analysis of the outputs.
[ [ 522, 621 ], [ 625, 701 ], [ 727, 803 ], [ 807, 891 ], [ 891, 1099 ], [ 1413, 1457 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_pos_3" ]
43
paper_summary *Note: I reviewed this paper in an earlier ARR cycle. There are no changes in the updated version that warrant a change in my score or the review. I’ve updated a summary of weaknesses to reflect the updates, and have listed a few suggestions on grammar.* This work presents a method (X-GEAR) for zero-shot, cross-lingual event argument extraction. X-GEAR takes as input i) a passage, ii) a trigger word (a predicate, e.g., "killed"), and iii) a template indicating the desired roles (e.g., <Victim>NONE</Victim>). The output is the template filled with event arguments extracted from the passage (e.g., NONE might be replaced with civilian). X-GEAR is built using the standard Seq2seq framework with a copy mechanism, where the input is composed of the triplet (passage, template, trigger word) flattened as a sequence, and the output is the template filled with desired roles. The method relies on recent advances in large, multilingual pre-trained language models (PTLM) such as MT5, which have been shown to perform robust cross-lingual reasoning. The key insight of the method is to use language-agnostic special tokens (e.g., <Victim>) for the template. Fine-tuning on the source language helps learn meaningful representations for templates, which allows their approach to work across target languages supported by the PTLM. summary_of_strengths - The paper presents a simple but intuitive method for solving an important problem. The simplicity of the proposed method is a significant strength of this work. As the authors note, existing systems that perform structured extraction often rely on a pipeline of sub-modules. X-GEAR replaces that with a simple Seq2seq framework which is considerably easier to use and maintain. - The proposed method is clearly defined, the experiments are thorough and show considerable gains over the baselines. - The analysis provides several insights into the strengths and weaknesses of the proposed approach. summary_of_weaknesses The authors have addressed some of the weaknesses highlighted in the previous review. However, it would be great if the weakness of the proposed approach is also highlighted in the future version. Specifically, the method is not *truly* zero-shot as it can only work in cases where a PLTM for the target languages is available. I believe that this is an important point and should be highlighted in conclusion or related work. comments,_suggestions_and_typos - L100 “Zero-shot cross-lingual learning *is an” -L104: Various structured prediction tasks have *been studied, -The footnote markers should be placed after the punctuation mark (e.g., L557).
[ [ 1369, 1530 ], [ 1530, 1746 ], [ 1748, 1787 ], [ 1789, 1865 ], [ 1868, 1967 ], [ 2085, 2186 ], [ 2187, 2317 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_neg_1", "Jus_neg_1" ]
45
In the present paper, the authors describe the results of a quantitative analysis of various genres in terms of coreference. They analyse a number of coreference-related features and compare the genres from the point of view of their distribution. The aim is to find the differences between spoken and written genres. The work is interesting and useful, as the number of studies of coreference in spoken texts is limited. The paper address a number of important issues in both coreference analysis (e.g. how should the distance be measured) and the analysis of spoken language. As the authors use a number of existing resources, they also assure the comparability of the categories used. Here, it would be interesting to know, if there were any problems, e.g. if there were still some incompatible categories that the authors came across. Specific comments I like the discussion about the variation in distance measured by different means at the beginning of section 2. Specifically, in a cross-lingual task, token-based measure is a problem. However, there could be differences across studies using various metrics. If measured in sentences or clauses, the distance may vary depending on a genre, if there is a variation in sentence length in terms of words (in spoken texts, there could be shorter sentences, etc.). The question is, if the distance should be measured in characters, but I believe that the decision depends on the conceptual background on what one wants to find out. Another point in Section 2 in the discussion of the diverging results could be variation within spoken and written texts various authors use in their analysis. There could be further dimensions that have an impact on the choice of referring expressions in a language, e.g. narrativeness, if there are dialogues or monologues, etc. Concerning related work, Kunz et al. (2017) point out that coreference devices (especially personal pronouns) and some features of coreference chains (chain length and number of chains) contribute to the distinction between written and spoken registers. There are several works concerned with lexicogrammar suggesting that distinction between written vs. spoken, and also between formal vs. colloquial are weak in English (Mair 2006: 183). Table 1: the statistics on different parts of OntoNotes and the total number in OntoNotes are given in one table in the same column formatting, which is slightly misleading. 4.1: large NP spans vs. shot NP spans – sometimes only heads of nouns or full NPs are considered. References to examples: 1→ (1), etc. Personal pronouns: 1st and 2nd person pronouns are not considered in the analysis of coreference in some frameworks. The authors should verify which cases they include into their analysis. The finding about NPs being more dominant is not surprising (and was also expected by the authors) and has also something to do with the fact that spoken texts reveal a reduced information density if compared to the written ones. The discussion about the results on spoken vs. written is good and important. Even within written text, there could be a continuum, e.g. political speeches, which are written to be spoken or fictional texts that contain dialogues (as the authors point out themselves), could be closer to further spoken texts. At the same time, academic speeches or TED talks that contain less interaction with the audience (depending on a speaker’s style) could be closer to written texts, also in terms of referring expressions – we would expect them to contain more NPs, and probably complex Nps describing some notions. Overall, it is interesting to know if there are more dimensions than just the difference between spoken and written in the data, e.g. narrativeness (narrative vs. non-narrative) or dialogicity (dialogic vs. monologic), etc. In fact, genre classification can and should be sometimes more fine-grained than just drawing a rigid line between texts that are considered to be spoken and those that are considered to be written. Textual problems: Page 2, Section 2: interfering mentions. - These → There are some typographical problems in the text. Page 3, Section 3: I am not sure if the abbreviation Sct. is allowed by the Coling style. In the reference list, the authors should check spelling of the some entries, e.g. english→ English in Berfin et al. (2019), Zeldes (2018). There is an empty space in Godfrey et al. (1992). Cited references: Kunz, Kerstin and Degaetano-Ortlieb, Stefania and Lapshinova-Koltunski, Ekaterina and Menzel, Katrin and Steiner, Erich (2017). GECCo -- an empirically-based comparison of English-German cohesion. In De Sutter, Gert and Lefer, Marie-Aude and Delaere, Isabelle (eds.), Empirical Translation Studies: New Methodological and Theoretical Traditions. Mouton de Gruyter, pages 265–312. @BOOK{Mair2006, title = {Twentieth-Century English: History, Variation and Standardization}, publisher = {Cambridge University Press}, year = {2006}, author = {Mair, Christian}, address = {Cambridge} }
[ [ 318, 352 ], [ 354, 421 ], [ 857, 969 ], [ 970, 1043 ], [ 2257, 2400 ], [ 2401, 2431 ], [ 2989, 3067 ], [ 3067, 3595 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Jus_pos_2", "Jus_neg_1", "Eval_neg_1", "Eval_pos_3", "Jus_pos_3" ]
46
Summary - The paper studies the problem of under-translation common in auto-regressive neural machine translation. - Two main pieces are introduced in this research work, random noise to the length constraint, output length prediction using BERT. - The English-Japanese ASPEC dataset is used to evaluate the contribution of the two proposed improvements. - A stronger or similar performance is shown for all the 4 length language groups using the proposed approach. Especially in the shorted range, the authors show more than 3 points improvement over the vanilla transformer. - An interesting insight I got for the long sentences, vanilla transformer tend to produce shorter length sentences. The proposed approach generated translation close to the gold reference length, atleast for the dataset in use. Strenghts - Ablation is performed for both the new component, random noise and BERT based output length prediction. - Strong BLEU score for short sentence range and relatively close to gold reference length compared to the vanilla transformer. Concerns - The work of Lakew et al, uses English-Italian and English-German datasets for evaluation. These datasets should be used to have a consistent evaluation with the past work. - Following from the last one, any specific reason why the English-Japanese dataset is a better choice for your proposed methods? Perhaps you can **motivate on linguistic grounds** why the language Japanese is a better testing ground for your method. - Including an extra BERT-based-output-length prediction can incur additional computational overhead. The overhead of this computation should be stated in the work. - In the introduction, you mention __However, the input sentence length is not a good estimator of the output length.__. I'm not sure why is this the case.
[]
[]
47
Overviews: This paper focuses on Abusive Language Detection (ALD) and proposes a generic ALD model MACAS with multi-aspect embeddings for generalised characteristics of several types of ALD tasks across some domains. Strengths: The motivation of this paper is clear, i.e., to solve the problem that "What would be the best generic ALD ...", as described at the beginning of paragraph 2, section 1. The generic abusive language typology is categorised as two aspects, i.e., target aspect and content aspect, and multi-aspect embedding layer considers embeddings of both target and content, followed by cross-attention gate flow to refine four types of embeddings. The proposed model outperforms baselines on all the seven datasets. Detailed ablation studies have been given in section 5 as well. Weaknesses: For the structure of the paper, section 4 can be integrated into section 5 as a sub-section. the description of baselines in section 4 is too detailed, which should be refined and shortened appropriately. The paragraph 3, section 5.1 can be titled as a sub-section called "case study" since this paragraph analyzes some prediction examples in Table 3.
[ [ 227, 266 ], [ 267, 397 ], [ 900, 958 ], [ 959, 1011 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_neg_1", "Jus_neg_1" ]
48
This work built a fake news prediction model using both news and user representation from user-generated texts. Experimental results showed that the user text information contributed to predicting the fake. Moreover, the paper showed linguistic analysis to show typical expressions by users in real and fake news. Cosine similarities between users are calculated using proposed user vectors to confirm the echo chamber effect. Introducing vectors of news spreading users sounds an interesting idea. The paper's investigation, the user vector made from linguistic features contribute, is interesting and important. The results of active topics by users for both real and fake is also impressive. There are some ways to build user vectors not only from their timeline and profiles but also from their tweets itself (e.g., Persona chat model). Does the proposed method have a clear advantage to such models?
[ [ 430, 501 ], [ 502, 616 ], [ 617, 698 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3" ]
49
In this paper, the authors argue that using the topmost encoder output alone is problematic or suboptimal to neural machine translation. They propose multi-view learning, where the topmost encoding layer is regarded as the primary view and one intermediate encoding layer is used as an auxiliary view. Both views, as encoder outputs, are transferred to corresponding decoder steams, with shared model parameters except for the encoder-decoder attention. Prediction consistency loss is used to constrains these two streams. The authors claim that this method can improve the robustness of the encoding representations. Experiments on five translation tasks show better performance compared to vanilla baselines, and generalization to other neural architectures. On one hand, the experiments conducted in this paper are rich, including five translation tasks, two NMT architectures, shallow and deep models, and many ablations and analysis. On the other hand, I have several concerns regarding motivation, claims and experiments: -The authors pointed out two problems for using the topmost encoder output alone: 1) overfitting; 2) “It cannot make full use of representations extracted from lower encoder layers,”. I’m not convinced by the second one especially. For example, in PreNorm-based Transformer, the final encoder output is actually a direct addition of all previous encoding layers. Although there is a layer normalization, I believe this output carries critical information from lower layers. -The authors claim that “circumventing the necessity to change the model structure.”, but the proposed method requires to change the decoder, and manipulate the parameter sharing pattern. In my opinion, the method still requires structure modification. -The major ablations and analysis are performed on IWSLT De-En task, which is actually a low-resource task, where regularization is the main bottleneck. From Table 1, it seems like the proposed approach yields much smaller gains on large-scale WMT En-De task compared to low-resource tasks. Thus, it’s still questionable whether the conclusion from experiments on low-resource task can generalize to high-resource tasks. -Which WMT En-De test set did you use? WMT14 or WMT16? It seems like the authors used WMT16 for test, but the baseline (33.06 tokenized BLEU) is below standard (~34 BLEU). -Besides, some experiment has mixed results and is hard to draw convincing conclusions. For example, in Table 5, MV-3-6 (shared) achieves the best performance on De->En while MV-3-6 is the best on Ro->En. It seems like different tasks have different preferences (share or separate). In the paper, the author only highlights the superiority of separate settings on Ro->En task. Overall, I'm not convinced by the motivation and the analysis on low-resource tasks (In particular, this paper doesn't target at low-resource translation. Note that the authors claim that "our method has a good generalization for the scale of data size."). I think the score of this paper is around 3.5 with several unclear questions to be solved. Since we don't have this option, I prefer to give the score of 3.
[ [ 773, 823 ], [ 823, 938 ], [ 959, 1027 ], [ 1029, 1260 ], [ 1261, 1502 ], [ 1504, 1690 ], [ 1691, 1755 ], [ 1757, 2047 ], [ 2047, 2176 ], [ 2350, 2436 ], [ 2437, 2725 ], [ 2726, 2809 ], [ 2810, 2982 ], [ 2983, 3140 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Jus_neg_2", "Eval_neg_2", "Jus_neg_3", "Eval_neg_3", "Eval_neg_4", "Jus_neg_4", "Eval_neg_5", "Jus_neg_5", "Major_claim" ]
50
This paper is about characters in narrative texts, and it claims to contribute a) an operational definition of characters that is „narratologically grounded“, b) an annotated corpus (which will be released) and c) classification experiments on the automatic distinction between characters and non-characters. This paper is well written and good to read. The topic is interesting and clearly relevant. I have some concerns, however: 1. The definition of a ‚character‘ is based on the concept ‚plot‘. While this is naturally following from the narratological literature, it begs the question what a plot is. And of course, this also presumes that there is ‚the plot‘ — what if there are more than one, or if it is highly subjective? Another term that is used for defining a ‚character’ is animacy. In factual texts, there is a pretty clear distinction between animate and inanimate beings, but in fictional texts, this boundary might become blurry quickly, because it is entirely conceivable that objects have properties that are usually reserved for animate beings. Thus, this term would need to be defined more concretely. The definition thus rests on other, not defined terms. 2. The annotation experiments yields high agreement, so maybe this is not so relevant in practice. But the agreement has been measured on only one of the three sub corpora, and presumably on the easiest one: Fairy tales, which have a pretty clear plot. It would be much more convincing if the annotation comparison would have been done on a portion from each corpus, and I do not see a reason why this was not done. 3. The annotation procedure description contains the sentence „First, we read the story and find the events important to the plot.“ I am not sure what this means exactly — was there an agreement across the annotators what the events important to the plot are, before the annotation? This of course would make the annotation task much easier. 4. One of the corpora the authors use consists of broadcast news transcripts from OntoNotes. I would need a lot more arguing about this in the paper, in order to believe the authors that news broadcast is a narrative. While it clearly has narrative elements, they have very different goals and textual properties. Firstly, the ‚plot‘ (understood as a sequence of events in the real world) is only partially represented in a news text, while you have a full plot in many narrative texts. 5. From the third corpus, the authors annotated only one chapter from each novel. This also seems questionable to me, in particular because length of a coreference chain later is such an important feature. In a full novel, the picture might be very different than in a single chapter. Concretely: The evaluation of an event being relevant to a plot could be very different if the full plot is known. 6. What I feel is missing from the paper is a quantitative data analysis independent of the classification experiments. What is the distribution of character- and non-character-chains? How long are they in comparison? This would make it much easier to interpret and evaluate the results properly. 7. The length of a coreference chain has been used as „an integer feature“ (4.2.1). Should this not be normalized in some way, given the very different text lengths? 8. Why is there no baseline for the OntoNotes and CEN corpora? To sum up: While I think this is an interesting task, and the paper is very well written, it makes several assumptions that do not hold general and has a somewhat weak theoretical basis. The classification experiments are pretty straightforward (as the title suggests), and — given the assumptions and restrictions introduced earlier — deliver not very surprising results.
[ [ 309, 353 ], [ 354, 400 ], [ 401, 430 ], [ 2508, 2542 ], [ 2544, 2826 ], [ 2830, 2946 ], [ 2947, 3124 ], [ 3372, 3407 ], [ 3413, 3443 ], [ 3445, 3540 ], [ 3542, 3728 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Major_claim", "Eval_neg_2", "Jus_neg_2", "Eval_neg_1", "Jus_neg_1", "Eval_pos_3", "Eval_pos_4", "Eval_neg_3", "Eval_neg_4" ]
52
paper_summary The paper presents QuALITY, a new benchmark for question answering over long passages. All the questions are multiple-choice, composed by professional writers and validated by MTurk annotators to be answerable and unambiguous. A subset of especially challenging questions is also selected in a task where annotators answer the question under strict time constraints. The paper presents a detailed analysis of the dataset and a thorough evaluation of long-context and extractive QA models on the presented data, demonstrating that all the models are far behind human performance. summary_of_strengths - Long-passage QA datasets are harder to collect and relatively scarce, so the new dataset would be a valuable addition to the field. -The data collection and annotation process is very well thought out and includes multiple validation steps. The data is further validated in qualitative and quantitative analysis. -The experimental part is thorough: both long-context models and extractive models are evaluated, and there are additional experiments with supplementary training data and no-context baselines. The choice of the QA baselines seems reasonable to me (although my expertise in QA is limited). -The paper is clearly written and easy to follow, and both the data collection and the experimental evaluation are documented in detail. summary_of_weaknesses My only (very minor) concern: the qualitative analysis is somewhat hard to understand without reading the Appendix (see comment below). That can easily be addressed given an extra page. comments,_suggestions_and_typos - Without looking at the Appendix, I found it difficult to interpret the different reasoning strategies mentioned in Section 3.6 and Table 5. This section might read more smoothly if you include an example question or a very short explanation for a few most popular types, such as "Description" or "Symbolism". It was also not clear to me how the questions were annotated for reasoning strategy without reading the passages: was it just by looking at the question, or with the Ctrl+F type keyword search in the passage? -This is perhaps too much to ask, but I am very curious about the 4% where the annotator-voted gold label does not match the writer’s label. If the authors have done any analysis on why the annotators might disagree with the writer, I would love to see it! -L275: this inclusion criteria -> these inclusion criteria -L441: perhaps you meant Table 6, not Table 9? Not having to go to the Appendix for the results would make things easier for the reader.
[ [ 617, 685 ], [ 690, 747 ], [ 750, 857 ], [ 931, 964 ], [ 966, 1123 ], [ 1124, 1178 ], [ 1222, 1269 ], [ 1275, 1356 ], [ 1381, 1516 ], [ 1600, 2119 ] ]
[ "Jus_pos_1", "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_pos_6", "Eval_neg_1", "Jus_neg_1" ]
53
This paper presents a comparison of several vector combination techniques on the task of relation classification. - Strengths: The paper is clearly written and easy to understand. - Weaknesses: My main complaint about the paper is the significance of its contributions. I believe it might be suitable as a short paper, but certainly not a full-length paper. Unfortunately, there is little original thought and no significantly strong experimental results to back it up. The only contribution of this paper is an 'in-out' similarity metric, which is itself adapted from previous work. The results seem to be sensitive to the choice of clusters and only majorly outperforms a very naive baseline when the number of clusters is set to the exact value in the data beforehand. I think that relation classification or clustering from semantic vector space models is a very interesting and challenging problem. This work might be useful as an experimental nugget for future reference on vector combination and comparison techniques, as a short paper. Unfortunately, it does not have the substance to merit a full-length paper.
[ [ 127, 179 ], [ 194, 357 ], [ 358, 469 ], [ 470, 771 ], [ 904, 1044 ], [ 1044, 1120 ] ]
[ "Eval_pos_1", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Eval_pos_1", "Major_claim" ]
54
paper_summary The paper has not changed materially from the previous version. Please refer to my previous detailed summary. The new version addresses a few weaknesses I had pointed out previously, such as to include important results that were initially deferred to the appendix and to drop a misleading comparison. It also adds more comparisons to BitFit in table 2. I do appreciate that these changes improve the clarity of the paper, however, the present version still lacks an in-depth comparison to other related work on parameter efficient models as criticized in my previous review. Likewise, experimentation on only GLUE provides an inherently limited picture on the performance of the proposed approach and can draw an overly positive conclusion (refer to Figure 2 in [1] from the previous review). ** I am increasing my score due to improved clarity to 3, but underscore that a more in-depth comparison on other datasets and with other parameter-efficient approaches is still missing.** Currently, the paper could be interesting to a narrow audience that is knowledgeable in the area, i.e., being able to assess the proposed solutions amid the limited experimental setup. [1] He et al. (ICLR 2022) "Towards a Unified View of Parameter-Efficient Transfer Learning." https://arxiv.org/pdf/2110.04366.pdf summary_of_strengths The paper has not changed materially. Please refer to previous summary. summary_of_weaknesses A few weaknesses have been addressed, especially as to the lack of information and to remove misleading information. Some major points of criticism, however still stand: More comparisons would be necessary to get a better sense of whether AdapterBias performs universally well. This concerns both datasets and models/methods. 1) Experimentation on only the GLUE datasets is limited in that it often draws an overly positive picture. Please refer to [1] from the summary above and other references from my prior review. This raises the question in which setups the proposed approach would be usable. 2) Various baselines are missing. A comparison to other adapter architectures would be reasonable and a few other approaches such as LoRA [2], prefix tuning [3], parallel adapter [4], and Compacter [5]. [1] He et al. (ICLR 2022) "Towards a Unified View of Parameter-Efficient Transfer Learning." https://arxiv.org/pdf/2110.04366.pdf [2] Hu et al. (ArXiv 2021). " LoRA: Low-rank adaptation of large language models." https://arxiv.org/abs/2106.09685 [3] Li et al. (ACL 2021). " Prefix-tuning: Optimizing continuous prompts for generation." https://arxiv.org/abs/2101.00190 [4] Zhu et al. (ArXiv 2021). " Serial or Parallel? Plug-able Adapter for multilingual machine translation." https://arxiv.org/abs/2104.08154v1 [5] Mahabadi et al. (NeurIPS 2021). " Compacter: Efficient Low-Rank Hypercomplex Adapter Layers." https://arxiv.org/pdf/2106.04647.pdf comments,_suggestions_and_typos no further comments
[ [ 369, 436 ], [ 447, 590 ], [ 591, 808 ], [ 812, 865 ], [ 867, 995 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Eval_neg_2", "Major_claim", "Eval_neg_3" ]
55
paper_summary **Note**: *This is only a slight revision of my previous review for a previous version of this paper. I did not re-check all the details of the paper carefully, I mostly focused on checking the parts where I had reservations towards the previous version; I simply hope that the parts which I already found good in the previous version stayed good or were improved in this version. But I found already the previous version of the paper to be very good.* The paper describes a model called AlephBERT, which is a BERT language model for Hebrew that surpasses previous such models thanks to being trained on larger data and with better handling of the morphological richness of Hebrew. The paper also compiles together an evaluation toolkit for evaluating Hebrew language models, based on pre-existing tasks and datasets. The model and all code is planned to be released with the camera-ready version of the paper. The paper is definitely mostly a resource paper: most of the stuff is laborious but mostly straightforward, gathering data from available sources, training a model using existing approaches, compiling a benchmarking toolkit from existing tasks and datasets, and evaluating the trained model with this toolkit. The only part which is more research-heavy is handling the rich morphology of Hebrew, where the authors experiment with introducing a morphological segmentation component into the neural setup (a task which is highly non-trivial for Hebrew). The authors evaluate all of their contributions and prove that each of them brings improvements over the previous state of the art. summary_of_strengths The resources created by the authors seem to be extremely useful for nearly anyone dealing with Hebrew in NLP, as large pretrained language models are the core of most current approaches. The approach used for handling complex Hebrew morphology is novel and potentially inspirative for other morphologically complex languages. While I have a feeling that ACL does not prefer publishing pure resource papers, I believe that in case where the created resource is very useful, these papers should have their place at ACL. Besides, there is also a research component to the paper (although the research component itself would not suffice for a long paper). The paper is very well written and very nice to read and easy to understand. summary_of_weaknesses I found several minor problems and uncertainties in the previous version of the paper, but the authors managed to address practically all of these in their revised version. My only remaining reservation thus is towards the claimed but not demonstrated language-agnosticity of the presented approach, which I find to be too strong a claim (or maybe I have a different understanding of what "language agnostic" means). comments,_suggestions_and_typos In their response to the previous reviews, the authors list the following improvement: "We describe the Twitter data acquisition and cleanup process.", but I have not found this improvement in the current version (but I admit I might have simply overlooked it; all I am saying is I did not find it at the places where I would expect it).
[ [ 1631, 1818 ], [ 1819, 1957 ], [ 2038, 2150 ], [ 2159, 2282 ], [ 2284, 2361 ], [ 2557, 2682 ], [ 2684, 2801 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Major_claim", "Eval_pos_3", "Eval_pos_4", "Eval_neg_1", "Jus_neg_1" ]
56
paper_summary The paper introduces a pre-trained vision language model (FewVLM) for prompt-based few-shot vision language tasks such as image captioning and vision question answering. The model is pre-trained with a combined objective of masked language modeling and prefix language modeling. Compared to giant pre-trained vision language models, FewVLM is relatively smaller, but it achieves significantly better zero-shot and few-shot performances, as reported. The authors also conducted a fine-grained analysis understanding the effect of different prompts, data sizes, and pre-training objectives. Their findings include that 1) zero-shot tasks are more sensitive to prompt crafting than few-shot tasks. 2) low-quality prompt also learn fast when increasing data size 3) the masked language modeling objective helps vqa more while the prefix language modeling objective boosts captioning performance. summary_of_strengths - The idea is straightforward and the results presented are solid and strong. It shows that with the proper objective for pre-training, the pre-trained models could be more performant on zero-shot and few-shot tasks even when the model size is much smaller than those giant pre-trained vision language models -The analysis is comprehensive and interesting, and some of the conclusions align well with the findings in NLP tasks. For example, prompt crafting is essential for zero-shot prediction, which inspires better prompt searching. summary_of_weaknesses - The baselines are not very well explained in the paper, making it hard to understand the difference between the proposed model and the baselines. It would be much better if the authors could add some brief introductions for each baseline model. -The paper also lacks analysis or an intuitive explanation as to why the proposed model outperforms large pre-trained models like Frozen. The numbers look strong, but the analysis focus on how different factors affect FewVLM instead of why FewVLM outperforms baselines. comments,_suggestions_and_typos - I also wonder why some numbers are missing from table 2-5? Is it because these numbers are not reported in the original papers?
[ [ 930, 1005 ], [ 1006, 1236 ], [ 1238, 1355 ], [ 1356, 1464 ], [ 1489, 1634 ], [ 1635, 1734 ], [ 1736, 1872 ], [ 1873, 2005 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2" ]
57
paper_summary The paper presents a method for representing the relevance of a linguistic dataset to the corresponding language and its speakers. As a proxy for speakers of a certain language the authors use geographical entities, particularly countries. The representation they aim to build relies on entity linking, so the authors explore this problem on several multilingual datasets, and draw conclusions regarding the cross-lingual consistency of NER and EL systems. summary_of_strengths The paper addresses an important problem, that gives a new way of assessing the representativeness of a dataset for a specific language. Since such text collections are at the basis of every other language task, and provide language models on which much of the higher level processing is based, it is important to have collections that are representative for the language (and speakers) that are targeted. summary_of_weaknesses While the main idea of the paper is valuable and interesting, and thoroughly explored, it is based on some assumptions whose soundness is debatable. Details are in the comments section. -there is a disconnect between the visualizations and the rest of the processing. -the preprocessing of the datasets (many for low-resource languages) needs resources that are themselves scarce, incomplete, or borrowed from other languages (that may use other scripts, and hence there is a transliteration problem on top of others). This makes the kind of processing presented here a bit unrealistic, in the sense that it could not be deployed on any collected dataset, and give an objective view of the representativeness of that dataset for the corresponding language (this is linked to the first point, and explanations are below) -some information in the data is discarded (topical adjectives, historical entities), and it is not clear what impact using it would have on the final geographical mapping. comments,_suggestions_and_typos With regards to the disconnect between the visualizations and the rest of the processing: the visualizations are based on geographical statistics for entities in a text, but these entities are already marked. It would have been useful to see how an end-to-end process performs: apply NER on the NER and QA datasets, and build the same visualizations as in section 3. How do the visualization using imperfect NER/EL resources and processing tools compare to the visualizations obtained on the annotated data? Are they very far apart, or the underlying "character" of the dataset is still retrievable even in such imperfect conditions? This links to the second potential weakness, regarding the applicability of this method to newly collected datasets (which is the aim, right?). The geographical mapping presented is left to the subjective inspection of a human judge. Which is not necessarily bad in itself, but as the more detailed maps in the appendix show, the characteristics of some datasets are very very similar (e.g. for European countries for example, or other geographically close countries). It may be useful to have a more rigorous evaluation of the geographical mapping, by showing that from the geographical distribution of entities, one can predict the country corresponding to the dataset's language. This could be done in an unsupervised manner, or using a linear regression model, or something similarly simple -- maybe by deducting an "average" entity geographical distribution model, such that local characteristics become more prominent, or by computing (in an unsupervised manner) some weights that would downplay the contribution of entities from countries that are always represented (like a "country tfidf" maybe?). Some geographical indicators are disregarded, and that may have an impact on the visualizations. Annotating topical adjectives that indicate countries seems doable, based on the anchor texts of links pointing to countries, which are easy to obtain (for some languages). The same for some of the historical entities that no longer exist, but some of which have corresponding GPS coordinates that could be used. The point is that both the resources and the process used to build the geographical maps of the datasets are incomplete. Some are by necessity (because the available resources are incomplete), some by choice (the adjectives and historical figures). We need to know the impact of such processing constraints. It is interesting to analyze the correlation between socio-economic factors, but how does that impact the construction or characteristics of the datasets? Some of these factors -- e.g. the GDP, -- could be (in this experiment) a proxy for the level of web presence of the population, and the level of information digitization of that particular population. Maybe some parameters that measure these issues more explicitly -- which seem more closely relevant to the process of textual collection building -- would provide better insights into data characteristics. Using a country as a proxy for language is useful, but it may skew the data representation, as the authors themselves recognize. What happens with languages that occupy the same country-level geographical space, but are distinct, as happens with multi-lingual countries? The same with languages that cross many borders. A bit more insight into how these are reflected in dataset characteristics and how they impact the usefulness of the dataset would be very useful. Why does the cross-language consistency matter here? Each dataset (for the geographical mapping) is analyzed separately, so while cross-lingual consistency is indeed a problem, it is not clear how it is related to the problem of dataset mapping. Is the cross-lingual consistency a signal of something other than the general performance of NER/EL systems? Some little typos: were => where (almost everywhere "were" appears) then => than (line 319)
[ [ 493, 629 ], [ 630, 899 ], [ 1009, 1107 ], [ 1109, 1189 ], [ 1441, 1508 ], [ 1509, 1741 ], [ 1743, 1915 ], [ 1948, 2581 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_neg_1", "Eval_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_2" ]
58
- Strengths: The approach described in the manuscript outperformed the previous approaches and achieved the state-of-the-art result. Regarding data, the method used the combination of market and text data. The approach used word embeddings to define the weight of each lexicon term by extending it to the similar terms in the document. - Weaknesses: Deep-learning based methods were known to be able to achieve relatively good performances without much feature engineering in sentimental analysis. More literature search is needed to compare with the related works would be better. The approach generally improved performance by feature-based methods without much novelty in model or proposal of new features. - General Discussion: The manuscript described an approach in sentimental analysis. The method used a relatively new method of using word embeddings to define the weight of each lexicon term. However, the novelty is not significant enough.
[ [ 13, 132 ], [ 498, 581 ], [ 582, 709 ], [ 911, 948 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Eval_neg_2", "Eval_neg_3" ]
59
paper_summary This paper investigates the effectiveness of entity representations in multilingual language models. The proposed mLUKE model exhibits strong empirical results with the word inputs (mLUKE-W), it also also shows even better performance with the entity representations (mLUKE-E) in cross-lingual transfer tasks. The authors' analysis reveals that entity representations provide more language-agnostic features to solve the downstream tasks. Extensive experimental results suggest a promising direction to pursue further on how to leverage entity representations in multilingual tasks. summary_of_strengths 1. The authors explore the effectiveness of leveraging entity representations for downstream cross-lingual tasks. They train a multilingual language model with 24 languages with entity representations and show mLUKE model consistently outperforms word-based pretrained models in various cross-lingual transfer tasks. 2. The authors show that a cloze-prompt-style fact completion task can effectively be solved with the query and answer space in the entity vocabulary. 3. The results show that entity-based prompt elicits correct factual knowledge more likely than using only word representations. summary_of_weaknesses Most of languages in LAMA are rich-resourced languages indeed, the authors may need to test mLUKE on some low-resourced languages. comments,_suggestions_and_typos This paper has done a solid work on Multilingual Pretrained Language Models. This paper is well written and easy to read.
[ [ 1241, 1371 ], [ 1405, 1482 ], [ 1483, 1528 ] ]
[ "Eval_neg_1", "Eval_pos_1", "Eval_pos_2" ]
60
This paper describes (1) new corpus resources for the under-resourced Kinyarwanda and Kirundi languages, (2) preliminary experiments on genre classification using these corpora. The resources are described thoroughly, and a useful survey of related work on these languages is presented. A variety of models are used in the experiments, and strong baseline results on this task are achieved, including experiments on transfer learning from the better-resourced Kinyarwanda to Kirundi; an approach likely to play an important role in scaling NLP to the Bantu language family, which has a small number of reasonably-resourced languages, e.g. Swahili, Lingala, Chichewa. Overall the paper should be of interest to COLING attendees. General comments: Abstract: "datasets... for multi-class classification". It would be good to note here and in the introductions that this is specifically a genre or subject classification task. Introduction: "has made access to information more easily" => "has made access to information easier" Introduction, p.2 "In this family, they are..." => "In this family, there are..." Introduction: "fourteen classes... twelve classes". Again, as in the abstract, should make clear what these classes are! Last line of p. 2 "who have not been" => "which have not been" Related work. You might also note Jackson Muhirwe's PhD work at Makerere; some of which was published here: Muhirwe J. (2010) Morphological Analysis of Tone Marked Kinyarwanda Text. In: Yli-Jyrä A., Kornai A., Sakarovitch J., Watson B. (eds) Finite-State Methods and Natural Language Processing. FSMNLP 2009. Lecture Notes in Computer Science, vol 6062. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14684-8_6 3.3 Dataset cleaning. I know it's just a change in perspective, but I'd prefer viewing the cleaning and stopword removal as standard pre-processing steps; suggesting distributing these as tools vs. distributing the corpora with these steps applied. Classifiers should work on un-preprocessed text in any case. 3.4 I don't understand how the cleaning steps you described could reduce the vocabulary from 370K to 300K. Please clarify. 4.1 In training the word embeddings, you say "removing stopwords". Does that mean removed from the corpus before training? I'm not sure I see the value in doing so, and wonder if it negatively impacts the quality of the embeddings. 4.1 Given the morphological complexity of these languages, I wonder whether results might be improved by working at the subword level (syllables, or morphemes... cf. Muhirwe's work above). This could conceivably help is the cross-lingual training as well. You do have Char-CNN experiments but there may not be enough data to get competitive results at the character level. 4.3.2 "different epochs and number of features... different train sets"; this is fine, but you should refer to the table where these choices are actually laid out 4.4.1 Had the Char-CNN converged at 20 epochs?
[ [ 178, 217 ], [ 222, 285 ], [ 287, 334 ], [ 340, 390 ], [ 391, 665 ], [ 667, 727 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Jus_pos_4", "Major_claim" ]
61
- Strengths: This paper proposes the use of HowNet to enrich embedings. The idea is interesting and gives good results. - Weaknesses: The paper is interesting, but I am not sure the contibution is important enough for a long paper. Also, the comparision with other works may not be fair: authors should compare to other systems that use manually developed resources. The paper is understandable, but it would help some improvement on the English. - General Discussion:
[ [ 72, 119 ], [ 164, 230 ], [ 238, 286 ], [ 288, 366 ], [ 400, 446 ] ]
[ "Eval_pos_1", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2" ]
62
[update after reading author response: the alignment of the hidden units does not match with my intuition and experience, but I'm willing to believe I'm wrong in this case. Discussing the alignment in the paper is important (and maybe just sanity-checking that the alignment goes away if you initialize with a different seed). If what you're saying about how the new model is very different but only a little better performing -- a 10% error reduction -- then I wonder about an ensemble of the new model and the old one. Seems like ensembling would provide a nice boost if the failures across models are distinct, right? Anyhow this is a solid paper and I appreciate the author response, I raise my review score to a 4.] - Strengths: 1) Evidence of the attention-MTL connection is interesting 2) Methods are appropriate, models perform well relative to state-of-the-art - Weaknesses: 1) Critical detail is not provided in the paper 2) Models are not particularly novel - General Discussion: This paper presents a new method for historical text normalization. The model performs well, but the primary contribution of the paper ends up being a hypothesis that attention mechanisms in the task can be learned via multi-task learning, where the auxiliary task is a pronunciation task. This connection between attention and MTL is interesting. There are two major areas for improvement in this paper. The first is that we are given almost no explanation as to why the pronunciation task would somehow require an attention mechanism similar to that used for the normalization task. Why the two tasks (normalization and pronunciation) are related is mentioned in the paper: spelling variation often stems from variation in pronunciation. But why would doing MTL on both tasks result in an implicit attention mechanism (and in fact, one that is then only hampered by the inclusion of an explicit attention mechanism?). This remains a mystery. The paper can leave some questions unanswered, but at least a suggestion of an answer to this one would strengthen the paper. The other concern is clarity. While the writing in this paper is clear, a number of details are omitted. The most important one is the description of the attention mechanism itself. Given the central role that method plays, it should be described in detail in the paper rather than referring to previous work. I did not understand the paragraph about this in Sec 3.4. Other questions included why you can compare the output vectors of two models (Figure 4), while the output dimensions are the same I don't understand why the hidden layer dimensions of two models would ever be comparable. Usually how the hidden states are "organized" is completely different for every model, at the very least it is permuted. So I really did not understand Figure 4. The Kappa statistic for attention vs. MTL needs to be compared to the same statistic for each of those models vs. the base model. At the end of Sec 5, is that row < 0.21 an upper bound across all data sets? Lastly, the paper's analysis (Sec 5) seems to imply that the attention and MTL approaches make large changes to the model (comparing e.g. Fig 5) but the experimental improvements in accuracy for either model are quite small (2%), which seems like a bit of a contradiction.
[ [ 39, 120 ], [ 174, 328 ], [ 625, 724 ], [ 744, 799 ], [ 806, 880 ], [ 900, 944 ], [ 951, 984 ], [ 1299, 1355 ], [ 1357, 1414 ], [ 1415, 2492 ], [ 2493, 2836 ], [ 2856, 2896 ] ]
[ "Eval_neg_1", "Jus_neg_1", "Major_claim", "Eval_pos_1", "Eval_pos_2", "Eval_neg_2", "Eval_neg_3", "Eval_pos_3", "Eval_neg_4", "Jus_neg_4", "Jus_neg_5", "Eval_neg_5" ]
63
paper_summary This paper proposes a unified representation model Prix-LM for multilingual knowledge base (KB) construction and completion. Specifically, they leverage monolingual triples and cross-lingual links from existing multilingual KBs DBpedia, and formulate them as the autoregressive language modeling training objective via starting from XLM-R’s pretrained model. They conduct experiments on four tasks including Link Prediction (LP), Knowledge probing from LMs (LM-KP), Cross-lingual entity linking (XEL), and Bilingual lexicon induction (BLI). The results demonstrate the effectiveness of the proposed approach. summary_of_strengths 1. They propose a novel approach Prix-LM that can be insightful to the community about how to integrate structural knowledge from multilingual KBs into the pretrained language model. 2. They conduct comprehensive experiments on four different tasks and 17 diverse languages with significant performance gains which demonstrate the effectiveness of their approach. summary_of_weaknesses Though this paper has conducted comprehensive experiments on knowledge related tasks, it would be even stronger if they demonstrate there also exists improvement on the multilingual knowledge-intensive benchmark, like KILT. comments,_suggestions_and_typos N/A
[ [ 648, 828 ], [ 832, 1010 ] ]
[ "Eval_pos_1", "Eval_pos_2" ]
64
paper_summary *(minor edits from previous review XYZ)* Text style transfer is the task of rewriting a sentence into a target style while approximately preserving its content. Modern style transfer research operates in an "unsupervised" setting, where no parallel training data (pairs of sentences differing in style) is available, but assume access to a large unpaired corpus in each style. This paper argues that a large unpaired corpus to train style transfer systems might be hard to obtain in practice, especially in certain domains. To tackle this issue, the authors present a new meta-learning approach (DAML) which trains a style transfer system that can quickly adapt to unseen domains during inference (with a few unpaired examples). The authors build their style transfer system using a discriminative learning objective (via a style classifier) while fine-tuning T5, which they call ST5. The authors approach DAML-ST5 outperforms several baselines on sentiment transfer and Shakespeare author imitiation, and ablation studies confirm the design decisions. summary_of_strengths *(identical to my previous review GAJd, see "Weaknesses" for my response to the revised manuscript)* 1. This paper tackles a practically relevant problem. While current style transfer research does not leverage supervised data, it requires a large amount of unpaired data which may not be practical to obtain in low-resource languages or domains. Hence, building style transfer systems which can quickly adapt in low-resource settings is important, since it eliminates the expensive requirement of hand-curating unpaired datasets for each low-resource domain / language. 2. The paper presents an interesting method based on model-agnostic meta learning [1] (with modifications to make it suitable for domain adaptation) to learn a good initialization which works well across domains. During inference, the model can quickly adapt to a new domain, with decent performance with just 1% of the target domain data. Experimental results confirm the proposed approach outperforms several strong baselines. The paper also has ablation studies to justify the various design decisions used in the approach. [1] - https://arxiv.org/abs/1703.03400 summary_of_weaknesses The authors presented an excellent response and addressed all the concerns in my previous review GAJd in their revised manuscript. In particular, the authors added experiments on the new Shakespeare dataset, used extra automatic metrics to evaluate their approach and found consistent trends, clarified some questions I had about the modeling, added comparisons to recent few-shot style transfer approaches. I have increased my score to 4. It would be nice to move some of the new results into the main body of the paper with the extra 9th page, especially the experiments on the Shakespeare dataset. comments,_suggestions_and_typos Several references are missing their venues / journals / arXiv identifiers, you can get the correct bib entries for papers from https://aclanthology.org, Google Scholar or arXiv.
[ [ 1192, 1243 ], [ 1244, 1659 ], [ 1663, 1872 ], [ 1873, 2186 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Jus_pos_2" ]
65
paper_summary This paper proposes a simple but powerful approach that uses a single Transformer architecture to tackle KG link prediction and question answering treated as sequence-to-sequence tasks. This approach can reduce the model size up to 90% compared to conventional Knowledge graph embedding (KGE) models, and the performance of this approach is best among small-sized baseline models. summary_of_strengths 1. This paper uses the Transformer structure for KG link prediction and question answering tasks, and this simple approach seems powerful. 2. This paper conducts a large number of experiments on multiple datasets and analyzes the experimental results. summary_of_weaknesses Minor: The paper only contains a high-level description of the proposed approach that benefits the performance of KGQA. It would be better if the authors provide some explicit cases or discussions to explain how pre-training on KG link prediction can improve performance on KGQA compared with the previous representative works later. comments,_suggestions_and_typos Specific comments for improving the work: 1. The authors may provide some explicit cases or discussions to explain how pre-training on KG link prediction can improve performance on KGQA. 2. This paper shows KG link prediction performance from the proposed model trained on Wikidata5M in section 4.4. it would be better to show the KG link prediction performance from KGT5 after finetuning for QA, and showing performance on KG link prediction and KGQA with multi-task setting is also a good choice.
[ [ 520, 555 ], [ 562, 672 ], [ 702, 814 ], [ 815, 1029 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1" ]
66
paper_summary This paper introduces a new method for MeSH indexing, combining multiple methods including, but not limited to, dilated CNNs, masked attention, and graph CNNs. Overall, the proposed approach makes substantial improvements over prior state-of-the-art methods. For example, Micro F1 improves over BERTMeSH from 0.685 to 0.745. Similar improvements are also found for the example-based measures (e.g., example-based F1). Furthermore, a comprehensive ablation study was performed, showing that the label feature model has the largest impact on model performance, yet, other parts of the method still impact performance substantially. summary_of_strengths Overall, the paper is well-written and easy to read. Furthermore, the improvement over prior work is substantial. It is neither easy nor trivial to make such considerable performance improvements for MeSH indexing, especially for Micro F1. For instance, BERTMeSH [1] only improves DeepMeSH [2] by only 2% in Micro F1 [1] after five years of work. Hence, seeing a Micro F1 near 0.75 is a huge breakthrough. References: [1] Peng, Shengwen, et al. "DeepMeSH: deep semantic representation for improving large-scale MeSH indexing." Bioinformatics 32.12 (2016): i70-i79. [2] You, Ronghui, et al. "BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. summary_of_weaknesses Overall, there are three major weaknesses in this paper. First, the paper uses a custom training and validation dataset pulled from PubMed, making comparisons difficult. Using the data from the yearly BioASQ shared tasks would be better to use their data so new methods are more easily comparable. I understand this is common in similar studies (e.g., by BERTMeSH [3]), but a standardized dataset seems possible and useful. Second, while the hyperparameters are discussed, it is not clear whether hyperparameters were optimized for the baseline models. What were the chosen parameters? Was the validation dataset used to optimize them similarly to the proposed method? If so, why is the standard deviation not reported for the baseline models (e.g., in Table 1)? Given the substantial performance differences between the proposed model and prior work, this additional information must be reported to ensure fair comparisons. Third, while this may be the first paper to use GCNNs for MeSH indexing, it is widely used for similar biomedical text classification tasks (e.g., ICD Coding). For instance, [1] directly combines BiLSTMs with GCNNs and label features in a very similar manner to the method proposed in this paper, albeit with exceptions such as [1] does not use dilated CNNs. Furthermore, that work has been expanded on to better understand the impact of the GCNNs and whether they are needed [2]. Hence, the paper would substantially help if the related work sections were expanded to include citations with similar methodologies. In my opinion, the "Dynamic Knowledge-enhanced Mask Attention Module" is one of the most innovative parts of the paper and should be highlighted more in the introduction. References: [1] Chalkidis, Ilias, et al. "Large-Scale Multi-Label Text Classification on EU Legislation." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. [2] Chalkidis, Ilias, et al. "An Empirical Study on Large-Scale Multi-Label Text Classification Including Few and Zero-Shot Labels." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020. [3] You, Ronghui, et al. "BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. comments,_suggestions_and_typos Page 3, Line 240-252: There are a few variations of LSTMs [1]. Is the one used in this paper the same as the 1997 paper? Page 2, Line 098-100: The phrase "latent semantics" is unclear. It may help the paper if that phrase is expanded, e.g., does this mean the contextual information from combining multiple layers of neural networks? Page 4, Line 287: I believe "edges are implement MeSH hierarchies" should be "edges represent relationships in the MeSH hierarchy" Page 6, Line 416-417: I believe the phrase ", and we converted all words are lowercased" Should be ", and we converted all words to lowercase" References: [1] Graves,A. et al. (2012) Supervised Sequence Labelling with Recurrent Neural Networks. Vol. 385. Springer, Berlin.
[ [ 666, 718 ], [ 719, 779 ], [ 780, 905 ], [ 906, 1071 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_pos_3" ]
67
paper_summary This paper describes a contrastive learning approach to automatically solving math word problems (MWP) and investigates multilingual approaches to the problem. Additionally, it provides further evidence that the top layers of BERT will learn task-specific patterns, as shown in prior works. This paper treats MWP solving as a text-to-equation-tree translation problem using an encoder-decoder architecture. To motivate the use of contrastive learning, the paper opens with an analysis of the effect of training epoch and encoder layer on the clustering of MWPs by prototype equation. t-SNE plots show expected clustering effects as layer/epoch increases. Analysis of raw high dimensional representations show that problems with similar lexical semantics or topic are given different representations when the prototype equation differs, especially in layer 12, while problems with the same prototype equation are embedded closer together. Moreover, it is shown that MPWs that are represented closer to the center of the cluster of problems with the same prototype equation are more likely to be correctly solved. The contrastive learning approach proposed here involves finding difficult negative examples, which is done by choosing structurally similar equation trees with different operations in the intermediate nodes. Additional positive examples come from either trees or subtrees which consist of the same structure and operations as the target equation. For the multilingual approach, mBERT is substituted as the encoder. Results show that the contrastive learning method improves MWP solving in both the monolingual and multilingual settings compared to recent baselines. Ablations show the value of choosing difficult negative examples and other design decisions. Analysis shows that the contrastive learning objective results in well defined clusters. Accuracy is especially improved for examples farther from the cluster center. summary_of_strengths The contrastive learning for MWP solving seems to improve performance summary_of_weaknesses Technique is limited to problems that can be modeled by equation trees. A lot of paper real estate is given to an analysis that basically shows:
 -undertrained models don’t work -only using part of the encoding function (the bottom N layers) doesn’t work I don’t think this analysis will be of much use to the ACL community. It seems like the cosine similarity of lower layers in figure 3 are relatively high, while the t-SNE visualizations in Figure 2 are more mixed. Do you think t-SNE is accurately representing the latent space? comments,_suggestions_and_typos The paper would benefit from connections to prior work on BERTology. An intro to this line of research can be found at https://huggingface.co/docs/transformers/bertology
[ [ 1977, 2047 ], [ 2070, 2141 ], [ 2143, 2325 ], [ 2326, 2395 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Jus_neg_2", "Eval_neg_2" ]
68
paper_summary Existing self-explaining models mostly generate the short rationales with the assumption that short rationales are more intuitive to humans, while this work discusses the question that whether the shortest rationale is the most understandable for humans. In this work, the authors design a self-explaining model, LIMITEDINK, that can take controls on rationale length by incorporating contextual information and supporting flexibly extracting rationales at any target length. By generating rationales at different length levels, the authors study how much rationale would be sufficient for humans to confidently make predictions. Experiments on various tasks demonstrate that the proposed method outperforms most prior works, and meanwhile show that the shortest rationales are not the best for human understanding. summary_of_strengths 1. The method proposed in this work is effective and can outperform several strong baselines on the performance of both label predictions and rationale predictions. 2. The problem, the effect of the rationales at different length levels, discussed in this work is meaningful and the conclusions may serve as good guidance for further research in this field. summary_of_weaknesses 1. Although this work points out that shortest rationales are largely not the best for human understanding, the appropriate lengths are still subject to the datasets or even the instances. The length of meaningful rationales may largely depend on the density of the information related to the task. As pointed in Section 5, a more rigorous evaluation is needed to better understand what is a good rationale explanation. 2. This work does not report how "short" the rationales generated by prior works are. As shown in Section 1, recent works agree that good rationales should be "shortest yet sufficient", while this work seems to simply focus more on "shortest". This brings out the concern that whether the main question discussed in this work can really stand for the trend of current works on this task. (a). I think one potential solution to handle this concern is that - by extending or shortening the golden rationales and see whether such perturbations outperform or underperform the original one. comments,_suggestions_and_typos 1. I would like to see some examples of the generated rationales at different length levels from the proposed methods, as well as the rationales generated by the baselines. Such examples can help the readers to better understand the influence of rationale lengths.
[ [ 855, 1016 ], [ 1020, 1210 ], [ 1236, 1531 ], [ 1532, 1652 ], [ 1897, 2040 ], [ 2056, 2249 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_neg_1", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2" ]
69
paper_summary This paper compares two figures, Firth and Harris, who are often cited as foundational in modern computational linguistics but who are rarely actually read, perhaps even not by the people who cite them. It does a deep dive into their work and takes an opinionated stance that Harris was “introverted”, focused on language as a system in isolation and Firth was extroverted, focusing on language as it exists in the world. summary_of_strengths This is an interesting paper, of a type rarely seen at ACL venues: intellectual history with an opinionated thesis. I genuinely enjoyed reading it and learned from it. I imagine the same would be true for many folks in the ACL community. There is some real scholarship here, as it investigates the works of these mid-20th c scholars, derives an opinionated synthesis, and applies it to modern NLP. Given that NLP as a field is extremely forward-looking, often considering something even a year or two old to be ancient history, this is a valuable perspective. summary_of_weaknesses The paper points out that Firth’s work is somewhat scattered and hard to get a clear grip on. Yet it ends up coming down at times in a way that feels to me a bit too much “Harris bad, Firth good”. The claim is that Firth’s views are well aligned with a strand of thought, currently popular in NLP (and well articulated in Bender & Koller’s position piece and Bisk et al.) that “you can’t learn language on the radio” and that language meaning needs to be in embedded context in a way that is heavily socially mediated. The argument is that, by contrast, Harris misses the boat on this. I wasn’t quite convinced on this point. It makes for an interesting contrast for the two thinkers, but it also seems to me to be a bit unfair to Harris since it’s hard to counterfactually reason about how Harris would have reacted to the current state of NLP. And I could imagine a variety of other arguments about the relevance of his work in NLP today. Firth’s positions are, according to the paper, admittedly sometimes murky and not always spelled out, which means it is easy to attribute a wider variety of perspectives to him. So I think there should be some caution in that framing. It also seems possible that a “radically distributional”, like the kind attributed to Harris, could in fact capture a wide range of rich social contexts and individual variation. For instance, GPT-3 which is trained as if it’s trained on a single monolithic dialect, can be a quite effective code-switcher when prompted with different registers. I’ll mention one other thing, which isn’t really a weakness but is more of a meta-concern: One potential pitfall of submitting and publishing this kind of work in an ACL venue is that the reviewers (like me) and audience are not necessarily going to be experts in this methodology and so care should be taken to make sure it is well reviewed by people who have the relevant expertise. An example of the way in which ACL is not necessarily set up for this kind of work is that I have to select whether the work is reproducible: I picked "1 = They would not be able to reproduce the results here no matter how hard they tried." since it's hard to imagine some other set of authors deciding to read Harris and Firth and writing the same paper :-). But the broader point is that some of this veers methodologically into intellectual history, which I’m certainly not an expert in, and the ACL reviewing process is not necessarily set up to review a paper with this method. That's not a reason not to publish it! In fact, it's all the more reason to give it serious consdieration. But I think there should be some thought given to make sure the work is well evaluated. comments,_suggestions_and_typos -The paper says that computational linguists routinely cite Harris and Firth. This is true of textbooks and big review papers. But my impression is that many in the ACL community do not engage with them at all.
[ [ 458, 486 ], [ 488, 573 ], [ 574, 695 ], [ 696, 731 ], [ 733, 855 ], [ 856, 986 ], [ 986, 1018 ], [ 1135, 1238 ], [ 1238, 2562 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Major_claim", "Eval_pos_2", "Jus_pos_2", "Jus_pos_3", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1" ]
70
paper_summary Note - I reviewed this paper in the past and had a positive criticism about it. The authors also addressed my previous comments and I keep my positive review from before. This paper discusses methods for improving multi-domain training for dialog response generation. The authors experiment with several approaches to improve multi-domain models, namely (1) "Interleaved Learning", when data from multiple domains/corpora is concatenated and used for training, (2) "Labeled Learning" where each example is encoded using an additional corpus-specific embedding/label that guides the model, (3) "Multi-Task Labeled Learning" where the model has an additional classification head that determines the domain/corpora label based on the given context, and (4) "Weighted Learning" where the authors propose a weighted loss function that give more weight on words that are especially salient in a given domain. The authors run experiments that evaluate the different approaches using 4 dialog datasets (PersonaChat, OpenSubtitles, Ubuntu and Twitter) where they show the effect of each approach on the resulting model as measured using BLEU, perplexity and F1. While the experiments show that there is no single best approach on all metrics, the proposed approaches improve the results over simple corpora concatenation or single-corpora training. A human evaluation showed that the proposed "Weighted Learning" approach was favorable in comparison to the other methods. summary_of_strengths The main strengths of the paper are as follows: The highlighted task of multi-domain dialog generation is important, practical and relatively understudied. To the best of my knowledge, the proposed "Weighted Learning" approach is novel The experiments are thorough and convincing, especially as they include a human evaluation summary_of_weaknesses The main weakness of the paper is that some of the proposed approaches lack novelty - "interleaved learning", "labeled learning", "multi-task labeled learning" were studied extensively in the MT community. Having said that, I am not aware of works applying those approaches to open-domain dialog generation. comments,_suggestions_and_typos line 230 - "learning material" --> "training data"
[ [ 1547, 1655 ], [ 1656, 1827 ], [ 1850, 1933 ], [ 1936, 2056 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_neg_1", "Eval_neg_1" ]
71
paper_summary This paper proposes a new task formulation to solve complex tasks. In this new formulation, there are multiple agents, each of which is capable of solving some specific types of tasks. For example, there can be a QA agent that answers natural language (NL) questions and an instruction following agent that could execute actions to accomplish an NL intent. Given a complex task described in NL, the model is asked to communicate with each agent for their task-specific knowledge and use the returned answers to proceed with the task. In this work, they instantiate the complex task as multi-hop QA and the agents as a TextQA agent that is able to reason over large text corpus, TableQA agents that could answer questions given structured data like tables, and MathQA agent that could perform numerical reasoning. Each agent also has their own auxiliary data like (NL, answer) supervision and their independent KBs. They design a model that is able to decompose a multi-hop question to simple questions that could be answered by one agent. They compare this model with other black-box models that do not perform communication with agents and show significant improvements on a synthetic dataset they create. summary_of_strengths - The proposed new task formulation is novel and interesting. Intuitively, it is a promising way to resolve the complex tasks people encounter daily. The paper also provides a detailed and clear definition of this new task. summary_of_weaknesses - The instantiation of the task could not fully justify the benefit of the new task formulation. In this new proposed setting, an ideal criterion for designing individual agents is that each has mutually exclusive functionalities, and it is challenging to develop a unified model. For example, the google search agent and the Alexa shopping agent described in the introduction make such a case. However, this work design a synthetic dataset, and the agents are separated by the different forms of knowledge (text vs table) and the different proportions of knowledge in the KB. This separation is OK as long as it could reveal the true distribution in reality -- there is some knowledge that is more accessible through text than structured data and vice versa. However, the data construction process did not consider this and did a random split. A more realistic setting will bring up some interesting questions like "how does the decomposer know which agent is more suitable to answer the current question?", " how can we curate such annotations?" etc, which are not explicitly touched by the current work. To me, my main takeaway is that question decomposition is helpful, which has been studied in previous works like BREAK (Wolfson el at + 2020). Related to this concern, I also have a question regarding training the question decomposition component. According to F3, the NL questions to the text agent and the table agent look pretty similar (e.g. [table] What movies has #1 written? vs. [text] #1 produces which materials?), what are the supervision signals that hint the model to predict one agent over another? - Some descriptions of the experiment setting are somewhat vague, and therefore it is not super clear whether the comparisons are fair. My main question is how factual knowledge is provided to each model? * In *Models with Access to Agent Knowledge*, how do you construct the context? Do you randomly sample some context from the *possible world* of the question? * Do you somehow distinguish the source (e.g., knowledge of TextQA, knowledge of TableQA)? * After decomposing the question through `NextGen`, how do you provide the context when querying an individual agent? Do you provide the ground truth context without distractors? Or do you train some retriever (like in *Models with Fact Supervision*) to retrieve the context? comments,_suggestions_and_typos - Some case study and more systematic error analysis can probably help the readers to understand in which cases the proposed method works and how.
[ [ 1245, 1304 ], [ 1305, 1392 ], [ 1393, 1467 ], [ 1492, 1586 ], [ 1587, 3108 ], [ 3111, 3244 ], [ 3245, 3858 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2" ]
72
paper_summary The authors proposed a Locally Aggregated Feature Attribution method and claimed that this is a novel gradient-based feature attribution method for NLP models. summary_of_strengths The authors proposed a Locally Aggregated Feature Attribution method summary_of_weaknesses Results are varying so much on two different datasets comments,_suggestions_and_typos Did you use an attention mechanism? If yes, what are the significant changes you observed between the two approaches? If not, could you please check a performance comparison with any attention mechanism? Why are the results varying so much on two different datasets? Is your model biased on a particular data? Did you check the disparity and fairness of data?
[ [ 288, 341 ] ]
[ "Eval_neg_1" ]
73
This paper presents a corpus study of coreferences comparing different genres (news, blogs, conversations) and media (written, transcribed speech, microblogging) based on the Ontonotes and Switchboard corpora and a dataset from Twitter sub-threads. The analysed factors include the use of pronouns and noun phrases, the characteristics of such NP mentions (syntactic complexity) and various distances measured between mentions of the same entity. This is an interesting study, and could be potentially useful for models trying to do domain adaptation, as coreferecen systems for written text perform poorly on conversations and microblogging. Overall it seems the contributions are only moderately significant however, for the following reasons: (1) the paper builds on two papers: (Aktas et al., 2018), where the twitter data was collected and described, and (Aktas et al., 2019) which described coreferences in Ontonotes sub-corpora/genres in what I assume is a similar manner (the paper is not freely available, only the abstract). It is not clear how the present paper adds to these papers, and should be made more explicit. (2) the interest for coreference model is rather vaguely described, and it would have been interesting to have a more detailed descriptions of how the knowledge derived from the study could be used in such models. The paper mentions experiments using models trained on written texts applied to other genres/media, how hard would it have been to experiment training on other data, or to combine them ? This seems too preliminary to assess the real interest for automated models. More minor points: -the introduction is rather strangely constructed, and almost reads as a post-introduction section/a related work section already. The context should be made clearer and a few examples wouldn't hurt. -i'm not sure I understand the term "coreference strategies", which seem to imply an intentionality in the way coreferences are produced in different contexts. A lot of what is shown in the paper could be attributed to more general aspects of the genres/media (longer sentences for purely written text, more context available, etc) and some of the properties of coreferences could just be by-product of that. The use of specific personal pronouns (1st/2nd/3rd) is another example. -there is zero descriptions of the statistical tests used, and of the assumptions made, if a parametric model was used. This should be addressed. Also some conclusions are based on multiple-testing, which should include some kind of correction (it might have been done, but again, there is zero details about this). -some technical details are presented a little vaguely, which could be understood given size constraints, but sometimes it is a bit too much: for instance, instead of explaining what hierarchical clustering method was applied, the paper only mentions using some R implementation with default settings, which is rather uninformative. -about the clustering, why not cluster on all the dimensions at the same time ? ( with some normalization of features of course) Details: -Tables/figures have rather cursory captions. For instance table 1 coud recall the meanings of abbreviations for all sub-corpora, especially from Ontonotes. It is also not a good idea to have ontonotes as a whole *and* all the subcorpora without making it clear. -section 3.1, the paper mentions the use of a sentence splitter from (Proisl and Uhrig, 2016) which is a German sentence splitter ? -table 2: why not give relative (to corpus size) frequency instead of absolute frequency ? this would make it easier to interpret.
[ [ 448, 552 ], [ 553, 644 ], [ 645, 719 ], [ 752, 1038 ], [ 1039, 1133 ], [ 1138, 1202 ], [ 1206, 1347 ], [ 1536, 1613 ], [ 1634, 1763 ], [ 1764, 1832 ], [ 1834, 1992 ], [ 1993, 2313 ], [ 2316, 2434 ], [ 2460, 2558 ], [ 2633, 2772 ], [ 2774, 2964 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Major_claim", "Jus_neg_1", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4", "Eval_neg_5", "Eval_neg_6", "Eval_neg_7", "Jus_neg_7" ]
74
paper_summary This paper is about determining the syntactic ability of two Dutch variants of transformer based language model BERT: BERTje and RobBERT. The authors use a Multiple Context Free Grammar (MCFG) formalism to model two patterns of Dutch syntax: control verb nesting and verb raising. These rule-based grammatical models are used to generate a test set which is limited by a bound on recursion depth and populated from a lexicon. For evaluation, each verb occurrence garners a prediction of which referential noun phrase is selected by it, and the resulting accuracy is reported. The authors show results that demonstrate drastically worse performance as recursive depth and number of noun phrases increase, and conclude that the models have not properly learned the underlying syntax of the linguistic phenomena they describe; ie discontinuous constituents/cross-serial dependencies. summary_of_strengths As someone unfamiliar with Dutch and with this area of research, I felt this paper did an excellent job of motivating their reasoning for their research and of describing the ways that Dutch syntax is different from English. Figures and examples were clear and well-done. The article was clearly and concisely written, and appears to be a valuable contribution that adds counter-evidence to claims about how much syntax BERT-based models actually “know”. Authors are careful not to exaggerate the consequences of their findings and make suggestions for how this work could be expanded with other languages or other tasks. summary_of_weaknesses I was unable to get the provided code to work. I tried both on my Macbook and on a Linux-based computing cluster. To be fair, I did not try for very long (< 15 minutes), and I also did not have access to a GPU so I tried to run it on a CPU. It’s possible that was the problem, but it wasn’t stated that that was a requirement. It seems that if I understood the instructions in the readme properly, there were a few __init__ files missing. However, even after changing those, I ran into a number of other errors. The readme was also a bit sparse, ie “Play around with the results as you see fit”. I commend the authors for including the code and data with the submission, but I would have liked to see a script included already (i.e. not just a snippet in the readme) along with a brief description of any dependencies required beyond the requirements.txt and what one might expect when running the script. Another weakness I felt, was a lack of description of previous/related work. They mention in the very beginning that "Assessing the ability of large-scale language models to automatically acquire aspects of linguistic theory has become a prominent theme in the literature ever since the inception of BERT", but didn't reference other work to provide similar counter evidence to the consensus they referenced in Rogers et al. (2020). As someone not familiar with this area of research, maybe there is not so much to cite here, but if that is the case, I feel it should be mentioned why there is no related work section. comments,_suggestions_and_typos Citation should be formatted like so: "The consensus points to BERT-like models having some capacity for syntactic understanding (Rogers et al., 2020)."
[ [ 982, 1141 ], [ 1142, 1188 ], [ 1189, 1235 ], [ 1240, 1277 ], [ 1279, 1371 ], [ 2468, 2544 ], [ 2545, 3087 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Jus_pos_4", "Eval_neg_1", "Jus_neg_1" ]
75
paper_summary This work introduces a new dataset, the Hindi Legal Documents Corpus (HLDC), a corpus with 900 thousand legal documents in Hindi. This corpus is collected from public data, and the authors intend to release (in addition to the corpus) the scripts necessary for its creation and processing, along with models and code for the experiments in the paper. The authors examine the task of predicting the verdict of bail applications (a binary task, which is to predict whether or not the application was denied or granted). A variety of models are explored for this task; while accuracy is better than the majority baseline, there is still much room for progress. The headroom in performance even for this simple task highlights the challenges in using natural language processing and machine learning systems for legal use cases. Overall, I believe the data and experiments introduced by this work would be interesting to many, and I recommend it's acceptance. summary_of_strengths 1. This work introduces a new, large-scale dataset containing legal documents in a low-resource language. This can be a valuable resource for many, and could help advance research in natural language processing for legal use cases. 2. Authors thoroughly describe the process of data collection and cleaning, and intend to open-source code for reproducing these steps. 3. Through experiments, authors demonstrate the challenges of current techniques in a simple (yet telling) task of predicting the outcome of bail applications. The authors report multiple baselines and will publicly release their code and models. 4. The authors take many steps to anonymize the dataset, removing names, gender information, titles, locations, times, etc. 5. This paper is clear and well written. summary_of_weaknesses Some minor considerations: 1. It would be informative to users if authors reported sensitivity of their experiments to hyper-parameters, along with standard deviations on their numbers. 2. The presented error analyses are anecdotal, and might not be reflective of the overall behavior of the system. It would strengthen this paper if authors further explored systematic biases in their datasets and models (e.g. how does accuracy/F1 vary by district?) comments,_suggestions_and_typos Footnote marks should come after punctuation.
[ [ 995, 1098 ], [ 1098, 1223 ], [ 1227, 1298 ], [ 1363, 1520 ], [ 1610, 1663 ], [ 1664, 1730 ], [ 1736, 1774 ], [ 1986, 2096 ], [ 2097, 2249 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Jus_pos_5", "Eval_pos_6", "Eval_neg_1", "Jus_neg_1" ]
76
paper_summary This paper proposed a confidence estimation method for neural machine translation (NMT) by jointly training the NMT model with a confidence network which learns to output a confidence score per example. The confidence score (a scalar between 0 and 1) is used to provide “hints” for the NMT model, that is interpolating the original prediction probabilities with the ground truth probability distribution. Higher confidence indicates less hints provided. The two models are trained jointly, where NMT learns the task and the confidence network learns to produce the correct confidence. Besides, the confidence is also utilized to smooth labels for preventing miscalibration. Experiments on several quality estimation tasks demonstrate the effectiveness of the proposed method in improving model performance and detecting noisy samples and out-of-domain data. summary_of_strengths 1. This paper focused on an important problem in estimating confidence for poorly calibrated NMT models. Different from previous work based on Monte Carlo dropout, the proposed method, learning confidence estimation during training, is more efficient and may be benefit for future research. 2. The paper is well-written and easy to follow. The experiments are sufficient and promising. summary_of_weaknesses 1. Since an additional confidence network has been involved in producing confidence score, how to ensure the confidence network would not be over-confident or under-confident? Would this be an endless loop if another network is needed to assess the uncertainty of the confidence network? 2. The improvement compared to other unsupervised methods is not impressive, while there is still a big gap with the strong QE model BERT-BiRNN. comments,_suggestions_and_typos N/A
[ [ 898, 1000 ], [ 1000, 1186 ], [ 1191, 1236 ], [ 1237, 1283 ], [ 1600, 1742 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_neg_1" ]
77
The paper explores the use of probabilistic models (gaussian processes) to regress on the target variable of post-editing time/rates for quality estimation of MT output. The paper is well structured with a clear introduction that highlights the problem of QE point estimates in real-world applications. I especially liked the description of the different asymmetric risk scenarios and how they entail different estimators. For readers familiar with GPs the paper spends quite some space to reflect them, but I think it is worth the effort to introduce these concepts to the reader. The GP approach and the choices for kernels and using warping are explained very clearly and are easy to follow. In general the research questions that are to be answered by this paper are interesting and well phrased. However, I do have some questions/suggestions about the Results and Discussion sections for Intrinsic Uncertainty Evaluation: -Why were post-editing rates chosen over prediction (H)TER? TER is a common value to predict in QE research and it would have been nice to justify the choice made in the paper. -Section 3.2: I don't understand the first paragraph at all: What exactly is the trend you see for fr-en & en-de that you do not see for en-es? NLL and NLPD 'drastically' decrease with warped GPs for all three datasets. -The paper indeed states that it does not want to advance state-of-the-art (given that they use only the standard 17 baseline features), but it would have been nice to show another point estimate model from existing work in the result tables, to get a sense of the overall quality of the models. -Related to this, it is hard to interpret NLL and NLPD values, so one is always tempted to look at MAE in the tables to get a sense of 'how different the predictions are'. Since the whole point of the paper is to say that this is not the right thing to do, it would be great provide some notion of what is a drastic reduce in NLL/NLPD worth: A qualitative analysis with actual examples. Section 4 is very nicely written and explains results very intuitively! Overall, I like the paper since it points out the problematic use of point estimates in QE. A difficult task in general where additional information such as confidence arguably are very important. The submission does not advance state-of-the-art and does not provide a lot of novelty in terms of modeling (since GPs have been used before), but its research questions and goals are clearly stated and nicely executed. Minor problems: -Section 4: "over and underestimates" -> "over- and underestimates" -Figure 1 caption: Lines are actually blue and green, not blue and red as stated in the caption. -If a certain toolkit was used for GP modeling, it would be great to refer to this in the final paper.
[ [ 171, 225 ], [ 226, 303 ], [ 304, 423 ], [ 585, 697 ], [ 698, 803 ], [ 1108, 1166 ], [ 1168, 1326 ], [ 1641, 1792 ], [ 1795, 2009 ], [ 2010, 2081 ], [ 2082, 2173 ], [ 2279, 2386 ], [ 2387, 2420 ], [ 2426, 2498 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_neg_1", "Jus_neg_1", "Eval_pos_5", "Jus_pos_5", "Eval_pos_6", "Major_claim", "Eval_neg_2", "Jus_neg_2", "Eval_pos_7" ]
78
This is a highly satisfying paper. It is a report of various NLP efforts for several Indigenous languages of Canada It goes deeply enough into the technical details of the projects to show that the efforts are viable and successful, without getting bogged down in numbers or linguistic details that are unimportant to people external to the projects. Where the paper does get technical is in a discussion of the differing difficulties of speech recognition for different languages, providing a useful case study to demonstrate that one-size technology approaches are not necessarily universal stand-alone solutions. The paper understates two points that could be further investigated. 1 "Rule based approaches may seem outdated in contrast to statistical or neural methods. However, with most Indigenous languages, existing corpora are not large enough to produce accurate statistical models." Why apologize for using a better approach? Rules may be "outdated" because they are inefficient for certain languages with reams of available data and scads of phenomena that don't fit. For polysynthetic languages, though, one could posit that a fairly small set of rules might be highly predictive - humans invoke algorithms to construct patterned speech that would otherwise be incomprehensible for the listener to deconstruct, and those same algorithms can be encoded for use by machines. At the least, it would be worth proposing that the languages in this study can offer a test of rule-based vs. inference-based processes, and propose performing such comparisons when the data for the study languages is sufficiently mature. 2. This paper shows remarkable achievement for minority languages as a result of a $6 million grant. This is a crucial scientific finding: money works! Important research can make great strides regarding languages that are usually neglected, if and only if funding is available for people to take the time to do the work. The billions that have been pumped into languages like English have in fact resulted in technologies that can be applied at much lower cost to languages like Kanyen’kéha, but there are still costs. The paper could make more of an advocacy point for what relatively modest funding could do for languages in places where leaders have not yet had the same impetuses as witnessed in Canada, including India and Africa where "minority" language is often a misnomer. The paper nicely shows what can be done for languages well outside of the research mainstream, particularly in collaboration between the researchers and the communities. Without a doubt, this paper should be part of the program.
[ [ 0, 34 ], [ 116, 350 ], [ 1629, 1726 ], [ 1727, 1777 ], [ 1778, 2408 ], [ 2409, 2578 ], [ 2579, 2638 ] ]
[ "Major_claim", "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_pos_3", "Eval_pos_4", "Major_claim" ]
79
The aim of this paper is to show that distributional information stored in word vector models contain information about POS labels. They use a version of the BNC annotated with UD POS and in which words have been replaced by lemmas. They train word embeddings on this corpus, then use the resulting vectors to train a logistic classifier to predict the word POS. Evaluations are performed on the same corpus (using cross-validation) as well as on other corpora. Results are clearly presented and discussed and analyzed at length. The paper is clear and well-written. The main issue with this paper is that it does not contain anything new in terms of NLP or ML. It describe a set of straightforward experiments without any new NLP or ML ideas or methods. Results are interesting indeed, in so far that they provide an empirical grounding to the notion of POS. In that regard, it is certainly worth being published in a (quantitative/emprirical) linguistic venue. On another note, the literature on POS tagging and POS induction using word embeddings should be cited more extensively (cf. for instance Lin, Ammar, Duer and Levin 2015; Ling et al. 2015 [EMNLP]; Plank, Søgaard and Goldberg 2016...).
[ [ 530, 566 ], [ 567, 661 ], [ 662, 754 ], [ 755, 785 ], [ 787, 859 ], [ 860, 962 ], [ 980, 1082 ], [ 1083, 1197 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Jus_neg_1", "Eval_pos_2", "Jus_pos_2", "Major_claim", "Eval_neg_2", "Jus_neg_2" ]
80
- Strengths: This paper reports on an interesting project to enable people to design their own language for interacting with a computer program, in place of using a programming language. The specific construction that the authors focus on is the ability for people to make definitions. Very nicely, they can make recursive definitions to arrive at a very general way of giving a command. The example showing how the user could generate definitions to create a palm tree was motivating. The approach using learning of grammars to capture new cases seems like a good one. - Weaknesses: This seems to be an extension of the ACL 2016 paper on a similar topic. It would be helpful to be more explicit about what is new in this paper over the old one. There was not much comparison with previous work: no related work section. The features for learning are interesting but it's not always clear how they would come into play. For example, it would be good to see an example of how the social features influenced the outcome. I did not otherwise see how people work together to create a language. - General Discussion:
[ [ 286, 387 ], [ 388, 485 ], [ 486, 569 ], [ 585, 656 ], [ 657, 746 ], [ 748, 796 ], [ 798, 821 ], [ 824, 922 ], [ 923, 1092 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3" ]
81
paper_summary This work proposes to explicitly model sentence-level representations of both the source and target side of unsupervised machine translation. The authors utilize normalizing flows to model the sentence representations in a flexible space as transformed from a (shared between languages) simple base distribution. At translation time the invertibility of normalizing flows can be used to map between sentence representations in different languages. In experiments the authors test the methods' viability on many language pairs and show competitive performance across the board. summary_of_strengths - The proposed method seems sound and novel. - The authors run extensive experiments on unsupervised machine translation and show moderate improvements across the board. Applying the method on top of XLM seems to result in good improvements over existing techniques, except for MASS. - The paper is mostly well-written except for one crucial point mentioned below in the weaknesses. summary_of_weaknesses - The unsupervised translation tasks are all quite superficial, taking existing datasets of similar languages (e.g. En-De Multi30k, En-Fr WMT) and editing them to an unsupervised MT corpus. - Improvements on Multi30k are quite small (< 1 BLEU) and reported over single runs and measuring BLEU scores alone. It would be good to report averages over multiple runs and report some more modern metrics as well like COMET or BLEURT. - It is initially quite unclear from the writing where the sentence-level representations come from. As they are explicitly modeled, they need supervision from somewhere. The constant comparison to latent variable models and calling these sentence representations latent codes does not add to the clarity of the paper. I hope this will be improved in a revision of the paper. comments,_suggestions_and_typos Some typos: -001: "The latent variables" -> "Latent variables" -154: "efficiently to compute" -> "efficient to compute" -299: "We denote the encoder and decoder for encoding and generating source-language sentences as the source encoder and decoder" - unclear -403: "langauge" -> "language"
[ [ 615, 657 ], [ 660, 782 ], [ 783, 896 ], [ 899, 996 ], [ 1021, 1081 ], [ 1083, 1208 ], [ 1211, 1251 ], [ 1252, 1262 ], [ 1449, 1547 ], [ 1548, 1823 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3" ]
82
paper_summary The authors present an approach for knowledge enhanced counseling reflection generation. It uses dialogue context as well as commonsense and domain knowledge for generating responses in counseling conversations. Two methods for knowledge integration are proposed: a retrieval-based method and a generative method. Experimental results show that both methods for knowledge incorporation improve the system's performance. CONTRIBUTIONS: (1) The authors propose a pipeline that collects domain knowledge (medical) through web mining and apply it to build up a counseling knowledge base. (2) The authors use the domain knowledge they collected along with commonsense knowledge bases for the task of reflection generation. (3) The authors analyze different types of commonsense and domain knowledge, as well as their effect on the generation task. summary_of_strengths - Overall, the paper is clear in its objectives and methodology followed. The work is well structured, easy to read and follow. -The authors show empirical success of their approach. -The overall story is convincing. The proposed approach is tested with reasonable models and appropriate experiments. The experimental results are promising, demonstrating the effectiveness of the proposed method. Thus, the paper makes valuable contributions to the field. -The approach is well motivated and addresses a problem that is relevant to the community. summary_of_weaknesses - Lack of illustrative examples regarding the model outputs. -Some details regarding the knowledge collection process have been omitted (see "Questions" below). comments,_suggestions_and_typos QUESTIONS: -Fig. 2: Why did you discard the "anatomy" category? -l. 221: How many query templates did you specify in total? -l. 227: What's the size of the set of knowledge candidates? -l. 550: Did you calculate the agreement between the annotators? Were the annotators authors of the paper? MINOR: -Try to better align the figures with the text. -fix punctuation: l. 336, l. 433, l. 445, l. 534 -Table 2: The highlighting of the numbers does not correspond to the caption ("highest scores are in bold, second highest scores in italic")
[ [ 883, 954 ], [ 955, 1008 ], [ 1010, 1063 ], [ 1065, 1097 ], [ 1098, 1181 ], [ 1182, 1277 ], [ 1284, 1336 ], [ 1338, 1428 ], [ 1453, 1511 ], [ 1513, 1612 ], [ 1645, 1936 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_pos_6", "Eval_pos_7", "Eval_pos_8", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2" ]
84
paper_summary This paper presents an interesting finding, i.e., fine-tuning only the bias terms of pre-trained language models is competitive with fine-tuning the entire model. The authors compared the proposed method Bias-terms Fine-tuning (BitFit) with other parameter-efficient fine-tuning methods (e.g., Adapters, Diff-Pruning). The experimental results on GLUE benchmark show that BitFit can achieve strong performance with less trainable parameters. summary_of_strengths - The paper is well written and easy to understand. -The proposed method (BitFit) is neat and novel. -The authors show strong empirical results on GLUE benchmark. summary_of_weaknesses I do not have any concerns about this paper. comments,_suggestions_and_typos It would be helpful to compare BitFit with Adapter and Diff-Pruning base on other language models (e.g.,RoBERTa, T5). But current version is good enough for a short paper.
[ [ 480, 529 ], [ 531, 578 ], [ 580, 641 ], [ 664, 709 ], [ 864, 914 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Major_claim", "Major_claim" ]
85
paper_summary This paper proposes a novel method to explore the search space of neural text generation models. The proposed method includes two key components of a modified best-first search and a path recombination mechanism. The authors conduct experiments on text summarization and machine translation tasks. The experiment results show that the proposed method generates massive-scale candidate sentences and obtain comparable or even better metric scores. summary_of_strengths - The description of the proposed approach is clear and easy to follow. - The paper presents a well-rounded set of experiments on text summarization and machine translation. - The authors provide a lot of details in the appendix, which helps the reproducibility. summary_of_weaknesses - Although BFS is briefly introduced in Section 3, it's still uneasy to understand for people who have not studied the problem. More explanation is preferable. comments,_suggestions_and_typos - Algorithm 1, line 11: the function s(·) should accept a single argument according to line 198. - Figure 6: the font size is a little bit small.
[ [ 485, 554 ], [ 557, 656 ], [ 659, 746 ], [ 771, 896 ], [ 897, 929 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_neg_1", "Eval_neg_1" ]
86
The paper proposes a convolutional neural network approach to model the coherence of texts. The model is based on the well-known entity grid representation for coherence, but puts a CNN on top of it. The approach is well motivated and described, I especially appreciate the clear discussion of the intuitions behind certain design decisions (e.g. why CNN and the section titled 'Why it works'). There is an extensive evaluation on several tasks, which shows that the proposed approach beats previous methods. It is however strange that one previous result could not be reproduced: the results on Li/Hovy (2014) suggest an implementation or modelling error that should be addressed. Still, the model is a relatively simple 'neuralization' of the entity grid model. I didn't understand why 100-dimensional vectors are necessary to represent a four-dimensional grid entry (or a few more in the case of the extended grid). How does this help? I can see that optimizing directly for coherence ranking would help learn a better model, but the difference of transition chains for up to k=3 sentences vs. k=6 might not make such a big difference, especially since many WSJ articles may be very short. The writing seemed a bit lengthy, the paper repeats certain parts in several places, for example the introduction to entity grids. In particular, section 2 also presents related work, thus the first 2/3 of section 6 are a repetition and should be deleted (or worked into section 2 where necessary). The rest of section 6 should probably be added in section 2 under a subsection (then rename section 2 as related work). Overall this seems like a solid implementation of applying a neural network model to entity-grid-based coherence. But considering the proposed consolidation of the previous work, I would expect a bit more from a full paper, such as innovations in the representations (other features?) or tasks. minor points: - this paper may benefit from proof-reading by a native speaker: there are articles missing in many places, e.g. '_the_ WSJ corpus' (2x), '_the_ Brown ... toolkit' (2x), etc. - p.1 bottom left column: 'Figure 2' -> 'Figure 1' - p.1 Firstly/Secondly -> First, Second - p.1 'limits the model to' -> 'prevents the model from considering ...' ? - Consider removing the 'standard' final paragraph in section 1, since it is not necessary to follow such a short paper.
[ [ 201, 245 ], [ 247, 394 ], [ 396, 509 ], [ 765, 918 ], [ 920, 1193 ], [ 1194, 1277 ], [ 1279, 1612 ], [ 1613, 1727 ], [ 1792, 1835 ], [ 1924, 1985 ], [ 1987, 2096 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_pos_3", "Major_claim", "Eval_neg_3", "Jus_neg_3" ]
88
paper_summary The authors present a method of calibrating learned multiclass classification models during training, that is, improving model calibration (in other words, pushing accuracy-versus-model-confidence graphs towards the identity line---well-calibrated models have confidence perfectly reflecting prediction accuracy). The proposed method is done during training, rather than being a post-hoc model which recalibrates a learned model to maximize performance on a held-out calibration set. This has the benefit of allowing the model to use the whole train set (i.e. not requiring a held-out calibration set), in addition to ideally leveraging the entire train set to do calibration. In this sense it follows a small number of published methods. Overall the core writeup of the method (sec 3) was difficult to follow. The core design decisions of the method were difficult to find the motivation of. I believe the decisions pivot around an increased sample-efficiency claim (line 316), but this claim was not stated precisely (the free variable $\epsilon$ bounds "calibration error," which I do not believe is defined), and the claim does not have a proof, so the relationship between the system setup decisions and sample efficiency claims is not at all clear. Generally, the calibrated systems' plots do not seem obviously notably superior to the baseline calibrations (Fig 2, Fig 1(top)), though the methods do indeed achieve superior ECE (expected calibration error) performance across some of the tasks (xSLUE, a suite of multiclass classification NLP tasks, Table 1) and, somewhat surprisingly, superior top-level accuracy/F1 (table 1). Given the importance of proper calibration to so many applications of NLP, I suspect these methods would be of interest to practitioners in the field even if they do not provide an across-the-board calibration improvement, but it is not clear from the writeup when they work and when they do not. That is, this writeup does not really address the question "when should a practitioner use this very complex method of calibration, rather than the much conceptually simpler post-hoc Platt scaling, given that the curves in Fig 2 and Fig 1 seem to hint that the calibration benefits of this method seem to be very noisy." Is there a property of the dataset or predictor where we can expect this method to really help? Generally, I think with notable revisions this paper would be of interest to practitioners, but it is somewhat difficult to follow the method exactly and know how we should expect it to affect performance on a given novel task. summary_of_strengths - The authors present a novel calibration technique in multiclass classification setups that leverages the entire train set (rather than a held-out calibration set). This method learns both a Platt-scaling transform (an affine transform on logits, basically), in addition to iteratively adaptively binning the train set based on this learned transform, I believe so that we can calculate the ECE without a held-out set. -The proposed methods perform superior to baselines with respect to calibration error on more tasks than they perform worse. -The proposed modifications improve against the baseline MLE systems in terms of accuracy/F1 across many tasks; that is, the calibration term appears to act as an effective model regularizer in some setups. summary_of_weaknesses - The technical description of the method (sec 3) was quite difficult to follow. The central methodological decisions (adaptive binning, discretization thereafter by doing what appears to be essentially isotonic regression) were difficult to find the motivation of. I believe these decisions center around being able to make a sample efficiency claim (line 317), but I'm not sure why this method gives that sample efficiency property, and no proof of the claim is presented. More concrete questions/concerns are given in "comments" below, written as I read the presentation. -There are a few properties of the evaluation/results which are a bit confusing and call conclusions into question. For example: - PosPS (post-hoc Platt Scaling) is lower than MLE in accuracy on a few tasks---If I understand correctly, this should not be true in principle, since the max predictions are supposed to be invariant. Is this just model retrain noise from starting off at different initial model params? Or do I misunderstand? - The reliability diagrams (Fig 1, top), that is the accuracy vs confidence plots, certainly don't make the proposed methods look better (a perfect system will have the identify function here), and in fact exhibit some pretty notable pathologies (high-accuracy spikes in low confidence regimes, e.g.). Is this a property of dataset pathologies or does it reflect variance/unpredictability in the proposed methods? - In the ECE-versus-number-of-bins plots in Figure 2, the two calibrated systems (PosCal and the proposed) all have ECEs (calibration performance) very close to the MLE for most values. This hints at the fact that the methods may be very sensitive to bin size and often provide no actual expected calibration improvement, is that right? How was this bin-size hyperparam selected? Was its selected value chosen from the held-out dev set without looking at test at all? -Difficult to see how this can extend to the structured prediction encoder/decoder setups used quite often across NLP. Does this only work for relatively few-class multiclass classification? This is not a fatal flaw. comments,_suggestions_and_typos - "These observations prove the efficacy of our method in maintaining a perfect balance between model performance and model uncertainity-a testimony of an ideal calibrator" this is a pretty over-the-top claim! " Perfect balance?" This doesn seem like isn't an "ideal calibrator," it has nontrivial ECE still, right? I'd scale this paragraph's claims back to things that are empirically supportable -tab 2 is somewhat confusing. Can you turn P1 and P2 into one confusion matrix per row and present it that way? These different columns are all just very different sort of things, strange that they're presented next to each other -What is Fig 2's dataset? the average across all the tasks? Just one task? -The definition of perfect calibration in the info is perhaps a bit confusing as-is (namely, $P$ has to be a joint over the covariates $x$ and the predictions $f$ right so you need some sort of metric over both, is that right? Or perhaps you just require $f$ to be Borel-measurable and deterministic or something. ( Unrelatedly, I also realize that if $f$ is itself nondeterministic, then you probably need an expectation operator right?). Anyways, it might be helpful to add a short sentence to the text here explaining this eq in words, no need to be too precise. -Is the output of step (3) differentiable? Do you need subgradients or something to get the loss? I guess this is why it's helpful to know exactly what $\beta$ is. -A nit but i might use something other than "distance" to describe $d$ in line 231, since it's not in general a valid distance metric, maybe "calibration mismatch" or something. -Also maybe change $q$ to $\hat q$ in the eq on line 232? Since the thing you're minimizing isn't the true value $q$, which we don't have access to even in principle. -I'm a bit perplexed by the discrepancy between the matrix $Q$, which is essentially a function of the predictor network, and the $q$ in 232, which is not. Does $Q$ as given suffice to allow us to calculate arbitrary distance functionals between $p$ and $q$, as described in line 231? -You haven't defined calibration error but instantiate it via $\epsilon$ in line 254. It's abs(p - q) to use the terminology of line (232), is that right? ECE is usually given with the expectation randomness integrating over the simplex $D_K$ right, do we have to do that here? Not sure if this is essential to get all the details precise here but probably it would be good to define what "calibration error" is at the least. -Don't think you define what the Platt-scaling set $G$ is ranged over by the argmin in step 1. -What is the set $\beta$ in line 311? -It's not clear to me why step (3) of the algorithm should be necessary at all. It seems like this wouldn't change the calibration but allows us to estimate the ECE? Is that right? Or does this adaptive binning + discretization (essentially isotonic regression, right?) actually affect calibration in expectation? -Line (7) of alg 1's pseucode is referenced in the text but the fig doesn't have line numbers, is it possible to add them "the achieved reduction in ECE as compared to all baselines is significant" what does this mean? Paired t-test? p < 0.05? Either describe the test or remove this significance claim -do you have a proof of the sample-efficiency claim in line 317? would be good to put this into an appendix -Is it possible to compare to MCDropout? It would be really nice to see that comparison, but this isn't crucial.
[ [ 753, 824 ], [ 825, 1268 ], [ 1280, 1377 ], [ 1378, 1397 ], [ 1650, 1724 ], [ 1735, 1871 ], [ 1877, 1946 ], [ 1948, 2364 ], [ 2376, 2455 ], [ 2461, 2593 ], [ 2617, 2702 ], [ 2703, 2779 ], [ 3161, 3270 ], [ 3272, 3367 ], [ 3392, 3470 ], [ 3471, 3964 ], [ 3966, 4081 ], [ 4081, 4406 ], [ 6248, 6330 ], [ 6331, 6976 ], [ 8167, 8245 ], [ 8246, 8479 ] ]
[ "Eval_neg_1", "Jus_neg_1", "Eval_neg_3", "Jus_neg_3", "Jus_pos_1", "Eval_pos_1", "Eval_neg_4", "Jus_neg_4", "Major_claim", "Eval_neg_7", "Eval_pos_2", "Jus_pos_2", "Eval_pos_3", "Jus_pos_3", "Eval_neg_8", "Jus_neg_8", "Eval_neg_2", "Jus_neg_2", "Eval_neg_10", "Jus_neg_10", "Eval_neg_9", "Jus_neg_9" ]
90
- Strengths: This paper tries to use the information from arguments, which is usually ignored yet actually quite important, to improve the performance of event detection. The framework is clear and simple. With the help of the supervised attention mechanism, an important method that has been used in many tasks such as machine translation, the performance of their system outperforms the baseline significantly. - Weaknesses: The attention vector is simply the summation of two attention vectors of each part. Maybe the attention vector could be calculated in a more appropriate approach. For the supervised attention mechanism, two strategies are proposed. Both of them are quite straightforward. Some more complicated strategies can work better and can be tried. - General Discussion: Although there are some places that can be improved, this paper proposed a quite effective framework, and the performance is good. The experiment is solid. It can be considered to be accepted.
[ [ 171, 205 ], [ 206, 412 ], [ 512, 590 ], [ 591, 767 ], [ 844, 892 ], [ 897, 921 ], [ 922, 946 ], [ 947, 984 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Major_claim" ]
91
paper_summary This paper empirically studied CLIP models as few-shot learners for two vision-language understanding tasks: VQA and Visual entailment. In the VQA task, the paper proposed a two-step method to mitigate the gap between natural language description and question answering. In addition, the paper used only a very small set of parameters in CLIP models during fine-tuning, including bias and normalization terms. summary_of_strengths It studied how to transfer CLIP zero-shot capabilities into VLU tasks and confirms CLIP models can be good few-shot learners. The paper proposed a two-step prompt generation method to apply CLIP on VQA. The paper identified only a small number of parameters are enough to fine-tune the CLIP few-shot learner. summary_of_weaknesses The way of using T5 for template generation is unclear, and lack of evaluation of the template generation quality. In model fine-tuning experiments, it is unclear about the learning rates and epochs for different parameter settings, which could largely affect the results. comments,_suggestions_and_typos See the weakness.
[ [ 778, 833 ], [ 838, 892 ], [ 893, 1051 ] ]
[ "Eval_neg_1", "Eval_neg_2", "Eval_neg_3" ]
92
Summary: This paper proposes a novel solution for abstractive dialogue summarization, which is a challenging task because it requires modeling discourse structure and long dependencies between utterances in the dialogue. The proposed approach consists of several parts: (1) Representing the input dialogue as a graph, using topic words and utterance embeddings to represent nodes, and topic word overlap to create edges. ( Topic words also include the names of participants in the dialogue.) (2) Graph encoder with masked self-attention to encode the graph, while focusing on the most important utterances (3) Standard sequence-to-sequence model to encode the entire dialogue context as a sequence. (4) A topic-word-aware decoding method that uses the topic words in two ways: with a coverage mechanism to ensure coverage/prevent repetition, and with a pointer mechanism to allow expressing topic words. Overall, I found the proposed method in this paper very convincing. The various parts of the solution (in particular the graph representation) are well designed, and the incorporation of topic words is intuitively a good approach. At the same time though, I think the paper misses out on some crucial analysis and evaluation, so as it stands right now, the results are insufficient to convincingly declare that the proposed solution works really really well (although the results presented in the paper are definitely a very positive sign that this approach does work really well). Despite this weakness, I think the proposed solution is novel enough and (especially if addressed) the results positive enough that the community would gain value from this paper. Strengths: -Proposed method is intuitive and well motivated, and explained clearly for the most part. There are several moving parts, but they are all tied together well. -Experimental results are very promising, and the provided examples illuminate the advantages of the proposed solution well. -In addition to automatic evaluation, the authors also took the effort to perform human evaluation and other attention-based analyses of the model. -Paper is well-structured and easy to follow. Weaknesses: 1) There's very little dataset analysis in this paper, which makes it hard to know exactly how challenging the problems posed by these datasets really are. The original SAMSum dataset paper doesn't have any analysis either, which makes it all the more important to have some kind of analysis here in this paper. The other dataset used here (the Automobile Master Corpus) has neither an analysis nor a citation, and there are also no examples whatsoever of this dataset. Some questions that would be important to answer, at the very least, are: a) What are the length distributions of the summaries and the dialogues respectively, and what's the relationship between the length of a dialogue and the length of its summary? b) How many topic words does each dialogue have? How many of those actually occur in the final summary? c) What are some interesting discourse phenomena that need to be correctly modeled in order to generate an accurate summary? 2) There's no analysis of examples that this model performs poorly on, or other gaps that need to be addressed. 3) While it's great that human evaluation was performed, it's lacking in a couple of different ways: a) The human evaluation metrics should be described in further detail. What exactly do "relevance" and "readability" encompass? What were the guidelines given to the evaluators to rate these? b) Were the examples double/triple reviewed in any way? c) Related to (a) - another important aspect of a good summary is its completeness. Did either of the human evaluation metrics encompass completeness? d) Human evaluation wasn't conducted on the gold summaries, which is unfortunate because it would provide a sense of an appropriate ceiling for the human evaluation numbers. 4) Some of the claims in the paper don't really seem to follow from the results of the experiments. For example, the paper makes the following claim from the human evaluation results: "As we can see, Pointer Generator suffers from repetition and generates many trivial facts. For Fast Abs RL Enhanced model, it successfully concentrates on the salient information, however, the dialogue structure is not well constructed. By introducing the topic word information and coverage mechanism, our TGDGA model avoids repetitive problems and better extracts the core information in the dialogue." However, I don't know if any of these claims can be directly inferred from the results of the human evaluation (unless there are aspects of the human evaluation that were not described). 5) For a model as complex as the one proposed here, I would have liked to have seen some kind of ablation analysis that shows the importance of each of the moving parts. While the proposed approach makes sense intuitively, there's not enough convincing experimental evidence to show that each of its parts is crucial (even though the results section seems to claim this). Other comments/suggestions: 1) in Section 4.2 - why are stop words filtered from the vocab? Does that mean that they can only be predicted through the pointer mechanism? That seems like too strict a restriction. 2) How is temporal information represented in the graph representation? Or does the model rely entirely on the seq2seq to learn temporal information, while the graph just captures structural relationships? 3) In section 5.1 - what is the Separator? It was not introduced before this section, but maybe it should be introduced in Sec 4.3. 4) Another claim (in Sec 5.1) that doesn't seem to be supported by the results: "Besides, the TGDGA model outperformsthe Transformer model based on fully connected relationships, which demonstrates that our dialogue graph structures effectively prune unnecessary connections between utterances"
[ [ 907, 974 ], [ 975, 1137 ], [ 1163, 1231 ], [ 1260, 1487 ], [ 1489, 1668 ], [ 1681, 1770 ], [ 1771, 1839 ], [ 1841, 1881 ], [ 1886, 1964 ], [ 2114, 2158 ], [ 2174, 2326 ], [ 2327, 3131 ], [ 3135, 3244 ], [ 3248, 3345 ], [ 3349, 3753 ], [ 3936, 4032 ], [ 4033, 4718 ], [ 4889, 5090 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_neg_1", "Jus_neg_1", "Major_claim", "Eval_pos_2", "Jus_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Eval_neg_4", "Jus_neg_4", "Eval_neg_5", "Jus_neg_5", "Eval_neg_5" ]
93
The paper is clearly written, and the claims are well-supported. The Related Work in particular is very thorough, and clearly establishes where the proposed work fits in the field. I had two main questions about the method: (1) phrases are mentioned in section 3.1, but only word representations are discussed. How are phrase representations derived? (2) There is no explicit connection between M^+ and M^- in the model, but they are indirectly connected through the tanh scoring function. How do the learned matrices compare to one another (e.g., is M^- like -1*M^+?)? Furthermore, what would be the benefits/drawbacks of linking the two together directly, by enforcing some measure of dissimilarity? Additionally, statistical significance of the observed improvements would be valuable. Typographical comments: -Line 220: "word/phase pair" should be "word/phrase pair" -Line 245: I propose an alternate wording: instead of "entities are translated to," say "entities are mapped to". At first, I read that as a translation operation in the vector space, which I think isn't exactly what's being described. -Line 587: "slightly improvement in F-measure" should be "slight improvement in F-measure" -Line 636: extraneous commas in citation -Line 646: "The most case" should be "The most likely case" (I'm guessing) -Line 727: extraneous period and comma in citation
[ [ 0, 64 ], [ 66, 181 ] ]
[ "Eval_pos_1", "Eval_pos_2" ]
96
paper_summary The paper investigates the problem of identifying unanswerable questions in multiple choice MRC. It proposes two ways of tackling this problem: Firstly, by explicitly augmenting training data with unanswerable examples and secondly, by thresholding on (estimated) prediction uncertainty. The paper goes on to show that this can help to identify and abstain from (falsely) predicting hard examples and to identify unanswerable questions on a constructed dataset. summary_of_strengths The stregth of the paper is that it touches upon a topic that appears under-explored in the literature. The paper is written well and the results are presented clearly. Evaluation metrics are well motivated and discussed in detail, which is important as they deviate from the usual F1/Accuracy measures used for evaluating MC-MRC. summary_of_weaknesses I have identified some weaknesses in the paper: - How general are the obtained results? From what I can tell, most of the analysis is performed on the results of one optimised model (ensemble), on one dataset (ReClor). I believe the methodology is general enough to be applied to other MC-MRC datasets. Why was ReClor and only ReClor chosen? It would be interesting to see, whether the reported results pertain across different datasets. For example CosmosQA (https://arxiv.org/pdf/1909.00277.pdf) comes with built-in unanswerable questions. This work should be mentioned. - While most of the results are clear, I had difficulties interpreting the figures. What makes it difficult is that they all have different axes and threshold over different values. I believe compacting section 2 that discusses the high-level overview of the MC-MRC task to create more room for more thorough explanations of the results would be beneficial. This could be done by providing more informative captions of the figures or linking back to the equations and introduced names (e.g. beta). comments,_suggestions_and_typos What follows are some minor remarks, questions and comments: - The hyperparameter section is rather terse. It is not clear which hyper-parameters are selected from. Appropriate information should be added, at least in the appendix. -What is "Fraction unanswerable" in Figure 5 and how is it obtained? -I don't believe it is fair to make the comparison in Table 4. While the manuscript acknowledges that, it does not mention the precise nature of the unfairness. The point is that for MAP, the %UNANS is based on model predictions, it's not a parameter that can be freely chosen, unlike the Implicit method. Here, the dev-mixed set was used both for reporting the final accuracy comparison and to select the best %UNANS threshhold for implicit. It would be more fair to either estimate %UNANS from the training data (i.e. 25%) or to split the DEV-mixed in half, use one half to estimate the best %UNANS threshhold and the other half to compare Implicit with the chosen threshhold to MAP. This wouldn't change much of the argument as from what I can tell from Figure 5, Implicit is still better than MAP with %UNANS of 25.
[ [ 498, 601 ], [ 602, 666 ], [ 667, 829 ], [ 852, 898 ], [ 903, 940 ], [ 941, 1426 ], [ 1430, 1511 ], [ 1512, 1926 ], [ 2022, 2065 ], [ 2066, 2190 ], [ 2261, 2322 ], [ 2323, 3080 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_pos_1", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4" ]
97
paper_summary The paper presents AcTune, an active learning framework that combines self training on high-confident samples and data annotation on low-confident samples. The paper also proposes two new methods: (1) region-based sampling and (2) momentum-based memory bank, to improve the sampling strategy in active learning and to reduce label noise in self-training. The paper provides extensive experiments to show the advantage of the proposed method (both performance and label efficiency) and and offered ablation studies to analyse inner-workings of this method. The paper is overall a solid contribution for combining active learning and self-training in NLP and I recommend acceptance. summary_of_strengths The paper proposes a novel and effective framework to combine active learning + self training in NLP. The additional two strategies designed in the paper (region-based sampling and momentum-based memory bank) are well-motivated. The experiments are thorough with convincing ablation studies. summary_of_weaknesses No major concerns. comments,_suggestions_and_typos n/a
[ [ 570, 695 ], [ 717, 818 ], [ 819, 945 ], [ 946, 1009 ], [ 1032, 1051 ] ]
[ "Major_claim", "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Major_claim" ]
100
- strengths This is a novel approach to modeling the compositional structure of complex categories that maintains a set theoretic interpretation of common nouns and modifiers, while also permitting a distributional interpretation of head modification. The approach is well motivated and clearly defined and the experiments show that show that this decomposed representation can improve upon the Hearst-pattern derived IsA relations upon which it is trained in terms of coverage. - weaknesses The experiments are encouraging. However, it would be nice to see ROC curves for the new approach alone, not in an ensemble with Hearst patterns. Table 5 tells us that Mods_I increases coverage at the cost of precision and Figure 2 tells us that Mods_I matches Hearst pattern precision for the high precision region of the data. However, neither of these tell us whether the model can distinguish between the high and low precision regions, and the ROC curves (which would tell us this) are only available for ensembled models. I believe that Eqn. 7 has an unnecessary $w$ since it is already the case that $w=D(\rangle e, p, o \langle)$. - discussion Overall, this is a nice idea that is well described and evaluated. I think this paper would be a good addition to ACL.
[ [ 12, 98 ], [ 99, 251 ], [ 252, 302 ], [ 1144, 1263 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Major_claim" ]
101
This paper develops an LSTM-based model for classifying connective uses for whether they indicate that a causal relation was intended. The guiding idea is that the expression of causal relations is extremely diverse and thus not amenable to syntactic treatment, and that the more abstract representations delivered by neural models are therefore more suitable as the basis for making these decisions. The experiments are on the AltLex corpus developed by Hidley and McKeown. The results offer modest but consistent support for the general idea, and they provide some initial insights into how best to translate this idea into a model. The paper distribution includes the TensorFlow-based models used for the experiments. Some critical comments and questions: - The introduction is unusual in that it is more like a literature review than a full overview of what the paper contains. This leads to some redundancy with the related work section that follows it. I guess I am open to a non-standard sort of intro, but this one really doesn't work: despite reviewing a lot of ideas, it doesn't take a stand on what causation is or how it is expressed, but rather only makes a negative point (it's not reducible to syntax). We aren't really told what the positive contribution will be except for the very general final paragraph of the section. - Extending the above, I found it disappointing that the paper isn't really clear about the theory of causation being assumed. The authors seem to default to a counterfactual view that is broadly like that of David Lewis, where causation is a modal sufficiency claim with some other counterfactual conditions added to it. See line 238 and following; that arrow needs to be a very special kind of implication for this to work at all, and there are well-known problems with Lewis's theory (see http://bcopley.com/wp-content/uploads/CopleyWolff2014.pdf). There are comments elsewhere in the paper that the authors don't endorse the counterfactual view, but then what is the theory being assumed? It can't just be the temporal constraint mentioned on page 3! - I don't understand the comments regarding the example on line 256. The authors seem to be saying that they regard the sentence as false. If it's true, then there should be some causal link between the argument and the breakage. There are remaining issues about how to divide events into sub-events, and these impact causal theories, but those are not being discussed here, leaving me confused. - The caption for Figure 1 is misleading, since the diagram is supposed to depict only the "Pair_LSTM" variant of the model. My bigger complaint is that this diagram is needlessly imprecise. I suppose it's okay to leave parts of the standard model definition out of the prose, but then these diagrams should have a clear and consistent semantics. What are all the empty circles between input and the "LSTM" boxes? The prose seems to say that the model has a look-up layer, a Glove layer, and then ... what? How many layers of representation are there? The diagram is precise about the pooling tanh layers pre-softmax, but not about this. I'm also not clear on what the "LSTM" boxes represent. It seems like it's just the leftmost/final representation that is directly connected to the layers above. I suggest depicting that connection clearly. - I don't understand the sentence beginning on line 480. The models under discussion do not intrinsically require any padding. I'm guessing this is a requirement of TensorFlow and/or efficient training. That's fine. If that's correct, please say that. I don't understand the final clause, though. How is this issue even related to the question of what is "the most convenient way to encode the causal meaning"? I don't see how convenience is an issue or how this relates directly to causal meaning. - The authors find that having two independent LSTMs ("Stated_LSTM") is somewhat better than one where the first feeds into the second. This issue is reminiscent of discussions in the literature on natural language entailment, where the question is whether to represent premise and hypothesis independently or have the first feed into the second. I regard this as an open question for entailment, and I bet it needs further investigation for causal relations too. So I can't really endorse the sentence beginning on line 587: "This behaviour means that our assumption about the relation between the meanings of the two input events does not hold, so it is better to encode each argument independently and then to measure the relation between the arguments by using dense layers." This is very surprising since we are talking about subparts of a sentence that might share a lot of information. - It's hard to make sense of the hyperparameters that led to the best performance across tasks. Compare line 578 with line 636, for example. Should we interpret this or just attribute it to the unpredictability of how these models interact with data? - Section 4.3 concludes by saying, of the connective 'which then', that the system can "correctly disambiguate its causal meaning", whereas that of Hidey and McKeown does not. That might be correct, but one example doesn't suffice to show it. To substantiate this point, I suggest making up a wide range of examples that manifest the ambiguity and seeing how often the system delivers the right verdict. This will help address the question of whether it got lucky with the example from table 8.
[ [ 475, 543 ], [ 761, 880 ], [ 882, 1338 ], [ 1362, 1465 ], [ 1466, 2093 ], [ 2096, 2162 ], [ 2163, 2490 ], [ 2616, 2681 ], [ 2682, 3334 ], [ 3337, 3391 ], [ 3392, 3833 ], [ 3834, 4298 ], [ 4299, 4615 ], [ 4730, 4823 ], [ 4824, 4978 ], [ 5155, 5221 ], [ 5222, 5474 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4", "Eval_neg_5", "Jus_neg_5", "Jus_neg_6", "Eval_neg_6", "Eval_neg_7", "Jus_neg_7", "Eval_neg_8", "Jus_neg_8" ]
102
paper_summary This paper is about the design of an automatic phoneme transcription system for transcription assistance. In particular, it targets endangered languages with only one speaker data, and the goal is to reduce the cost of transcription such languages. With all due respect to previous research, the authors note that each of them uses a different system, and that each of them has been tested on a different language. The authors also propose a model that is retrained from pre-trained models in multiple languages, and hypothesize that this will be effective for speech recognition with small amounts of data, such as for endangered languages. The authors designed a unified experimental setup, the STP test bed, and used it to compare 4 different models under 11 different languages. The experiments showed that the system with the multilingual pre-trained model performed better in many languages. The authors also estimated that there is a boundary where the recognition rate drops significantly around 90 minutes of training data. summary_of_strengths The strength of this paper is that it uses a unified experimental setup to conduct experiments on endangered language automatic transcriptions, which have traditionally been conducted with different models and different languages. The discussion of the experiment, for example, whether it is suitable for fieldwork or not, is given from a humanistic perspective as well, so that it can be considered as a contribution not only to technology but also to the humanities. In addition, the authors have published a container for reproducing some of the experiments, which is expected to have a significant impact not only on the paper itself but also on future research in this field. summary_of_weaknesses The motivation, purpose, and selection of models for the study are appropriate. However, I have some concerns about the experiment. 1. Due to the characteristics of endangered language evaluation, I feel that the test data is very limited. The authors split the data 9:1 between training and testing, but for example, with 90 minutes of data, there are only 10 minutes of test data. Of course I understand that this is unavoidable due to data limitations, but I think some approach or support for this is needed. For example, cross-validation can be considered. ( Note that I am not referring to the cross-validation set commonly employed in neural net training.) Other possibilities include showing that the distribution of phonemes in the test data is not significantly different from the overall distribution, or calculating perplexity. Averaging the languages with the same weights is also anxious in terms of the reliability of the test set mentioned above. For experiments with such a small amount of data, I think a confidence interval should be shown. 2. As the authors describe at the end of their discussion, the number of speakers is very small. Therefore, I feel that it is not possible to distinguish whether the experimental results are speaker-dependent or language-dependent. Of course I understand the difficulty of the experiments with endangered languages, but this is not a reason to relax the experimental conditions for generalization. For example, I think the authors could conduct a quasi-limited experiment using a European language for which a large amount of data is available. If it is inappropriate in those languages, the authors should explain why. I am very sympathetic to the philosophy of this paper and understand its importance. However, I believe that the experimental setup should be very carefully designed, as this study could be a baseline for future work in this field. comments,_suggestions_and_typos Even if I take into account the convenience of fieldwork, there seems to be little need to compare training times. And if it only takes 24 hours, it seems acceptable. If you want to describe the training time, I encourage you to discuss it more.
[ [ 1075, 1306 ], [ 1307, 1445 ], [ 1455, 1543 ], [ 1546, 1758 ], [ 1781, 1861 ], [ 1862, 1913 ], [ 1917, 2022 ], [ 2023, 2847 ], [ 2851, 2945 ], [ 2946, 3471 ], [ 3472, 3557 ], [ 3738, 3852 ] ]
[ "Eval_pos_2", "Jus_pos_1", "Eval_pos_1", "Eval_pos_3", "Eval_pos_4", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_pos_5", "Eval_neg_3" ]
103
paper_summary The paper presents a framework for abstractive summarization of long documents that repeatedly segments text, summarizes each segment and then feeds the concatenated summaries as an input to the next iteration. When the input is below a predefined number of tokens, the final summary is generated. This design has the advantages of being able to use powerful pretrained models (e.g. BART) that are designed for shorter text, and avoiding expensive attention computation over long spans. summary_of_strengths - The approach is simple, powerful and flexible - a good idea for applying powerful pretrained models to longer text without truncation. -A very comprehensive evaluation is provided with a good selection of datasets and both automatic and human evaluation which show the model’s clear advantage over baselines. -The paper is well-written and easy to understand. summary_of_weaknesses - **Inefficient design of target segmentation**: If I understand correctly, all input segments are assigned a target segment. This probably causes unnecessary summarization of irrelevant input segments - if the text is massive and the gold summary is very short, why not ignore most of the input segments and only select sufficient input segments to cover the targets? Also, this allows targets to be duplicated over different input segments. -**Missing analysis of model behavior**: It would be useful and interesting to know what exactly is happening at each stage, e.g. how much text is produced at each stage, and how noisy or detailed intermediate summaries look like. -**Missing analysis of empirical running time** comments,_suggestions_and_typos **Suggestions** -Line 249: “Huge time costs” sounds very informal, maybe change to “considerable running time” or similar -Please indicate if f-score, precision or recall is used when ROUGE is mentioned -Follow-up on weakness 1): ignoring input segments could be implemented by assigning “empty” targets, e.g. using a single special token that when predicted would trigger a segment to be discarded in the next step. **Questions** -Table 9 in Appendix: what does “keyword emerged in gold summary” mean? -“Separate model for each stage” is mentioned, but I am still not sure whether this means 2 (fine and coarse) or N models? -Does the number of input segments at each coarse stage always stay the same as it was in stage 1, before reaching the fine-grained stage?
[ [ 525, 659 ], [ 661, 833 ], [ 835, 885 ], [ 910, 955 ], [ 957, 1582 ], [ 1584, 1631 ], [ 1691, 1729 ], [ 1731, 1785 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Eval_neg_3", "Jus_neg_3" ]
105
This work proposes a self-learning bootstrapping approach to learning bilingual word embeddings, which achieves competitive results in tasks of bilingual lexicon induction and cross-lingual word similarity although it requires a minimal amount of bilingual supervision: the method leads to competitive performance even when the seed dictionary is extremely small (25 dictionary items!) or is constructed without any language pair specific information (e.g., relying on numerals shared between languages). The paper is very well-written, admirably even so. I find this work 'eclectic' in a sense that its original contribution is not a breakthrough finding (it is more a 'short paper idea' in my opinion), but it connects the dots from prior work drawing inspiration and modelling components from a variety of previous papers on the subject, including the pre-embedding work on self-learning/bootstrapping (which is not fully recognized in the current version of the paper). I liked the paper in general, but there are few other research questions that could/should have been pursued in this work. These, along with only a partial recognition of related work and a lack of comparisons with several other relevant baselines, are my main concern regarding this paper, and they should be fixed in the updated version(s). *Self-learning/bootstrapping of bilingual vector spaces: While this work is one of the first to tackle this very limited setup for learning cross-lingual embeddings (although not the first one, see Miceli Barone and more works below), this is the first truly bootstrapping/self-learning approach to learning cross-lingual embeddings. However, this idea of bootstrapping bilingual vector spaces is not new at all (it is just reapplied to learning embeddings), and there is a body of work which used exactly the same idea with traditional 'count-based' bilingual vector spaces. I suggest the authors to check the work of Peirsman and Pado (NAACL 2010) or Vulic and Moens (EMNLP 2013), and recognize the fact that their proposed bootstrapping approach is not so novel in this domain. There is also related work of Ellen Riloff's group on bootstrapping semantic lexicons in monolingual settings. *Relation to Artetxe et al.: I might be missing something here, but it seems that the proposed bootstrapping algorithm is in fact only an iterative approach which repeatedly utilises the previously proposed model/formulation of Artetxe et al. The only difference is the reparametrization (line 296-305). It is not clear to me whether the bootstrapping approach draws its performance from this reparametrization (and whether it would work with the previous parametrization), or the performance is a product of both the algorithm and this new parametrization. Perhaps a more explicit statement in the text is needed to fully understand what is going on here. *Comparison with prior work: Several very relevant papers have not been mentioned nor discussed in the current version of the paper. For instance, the recent work of Duong et al. (EMNLP 2016) on 'learning crosslingual word embeddings without bilingual corpora' seems very related to this work (as the basic word overlap between the two titles reveals!), and should be at least discussed if not compared to. Another work which also relies on mappings with seed lexicons and also partially analyzes the setting with only a few hundred seed lexicon pairs is the work of Vulic and Korhonen (ACL 2016) 'on the role of seed lexicons in learning bilingual word embeddings': these two papers might also help the authors to provide more details for the future work section (e.g., the selection of reliable translation pairs might boost the performance further during the iterative process). Another very relevant work has appeared only recently: Smith et al. (ICLR 2017) discuss 'offline bilingual word vectors, orthogonal transformations and the inverted softmax'. This paper also discusses learning bilingual embeddings in very limited settings (e.g., by relying only on shared words and cognates between two languages in a pair). As a side note, it would be interesting to report results obtained using only shared words between the languages (such words definitely exist for all three language pairs used in the experiments). This would also enable a direct comparison with the work of Smith et al. (ICLR 2017) which rely on this setup. *Seed dictionary size and bilingual lexicon induction: It seems that the proposed algorithm (as discussed in Section 5) is almost invariant to the starting seed lexicon, yielding very similar final BLI scores regardless of the starting point. While a very intriguing finding per se, this also seems to suggest an utter limitation of the current 'offline' approaches: they seem to have hit the ceiling with the setup discussed in the paper; Vulic and Korhonen (ACL 2016) showed that we cannot really improve the results by simply collecting more seed lexicon pairs, and this work suggests that any number of starting pairs (from 25 to 5k) is good enough to reach this near-optimal performance, which is also very similar to the numbers reported by Dinu et al. (arXiv 2015) or Lazaridou et al. (ACL 2015). I would like to see more discussion on how to break this ceiling and further improve BLI results with such 'offline' methods. Smith et al. (ICLR 2017) seem to report higher numbers on the same dataset, so again it would be very interesting to link this work to the work of Smith et al. In other words, the authors state that in future work they plan to fine-tune the method so that it can learn without any bilingual evidence. This is an admirable 'philosophically-driven' feat, but from a more pragmatic point of view, it seems more pragmatic to detect how we can go over the plateau/ceiling which seems to be hit with these linear mapping approaches regardless of the number of used seed lexicon pairs (Figure 2). *Convergence criterion/training efficiency: The convergence criterion, although crucial for the entire algorithm, both in terms of efficiency and efficacy, is mentioned only as a side note, and it is not entirely clear how the whole procedure terminates. I suspect that the authors use the vanishing variation in crosslingual word similarity performance as the criterion to stop the procedure, but that makes the method applicable only to languages which have a cross-lingual word similarity dataset. I might be missing here given the current description in the paper, but I do not fully understand how the procedure stops for Finnish, given that there is no crosslingual word similarity dataset for English-Finnish. *Minor: -There is a Finnish 'Web as a Corpus' (WaC) corpus (lines 414-416): https://www.clarin.si/repository/xmlui/handle/11356/1074 -Since the authors claim that the method could work with a seed dictionary containing only shared numerals, it would be very interesting to include an additional language pair which does not share the alphabet (e.g., English-Russian, English-Bulgarian or even something more distant such as Arabic and/or Hindi). *After the response: I would like to thank the authors for investing their time into their response which helped me clarify some doubts and points raised in my initial review. I hope that they would indeed clarify these points in the final version, if given the opportunity.
[ [ 506, 556 ], [ 557, 584 ], [ 585, 973 ], [ 975, 1003 ], [ 1005, 1096 ], [ 1116, 1158 ], [ 1163, 1223 ], [ 1660, 1729 ], [ 1730, 2209 ], [ 2896, 2999 ], [ 3000, 4398 ], [ 5963, 6173 ], [ 6174, 6635 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_pos_2", "Major_claim", "Eval_neg_1", "Eval_neg_2", "Eval_neg_3", "Eval_neg_1", "Jus_neg_1", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4" ]
106
paper_summary The authors introduce a new annotated event detection dataset that is focused on annotation suicidal events. The annotation focuses on the following event types: suicide-related actions, thoughts/ideation, risk factors related to life, relationship, health, and other, in addition to protective factors related to positive activities such as taking medication. The author apply state-of-the-art event detection models on their dataset, where the performance is poorer in comparison to more established event detection domains. As a result, the authors are calling the community to build models that can improve the performance. summary_of_strengths - The developed dataset is interesting and the task is very important to NLPers who work on mental health. I see its impact going beyond the event detection task to other tasks in the same domain that are related to understanding what kind of triggers affect suicidality. This, hopefully, would yield better performance of the suicide risk assessment and other mental-health related classifiers. -The fact that the annotators are mental health domain experts makes of a more reliable data. -I find the discussion of the challenges of annotations very valuable and it resonates with many previous research where cases of someone talking about a friend trying to committing suicide should be differentiated than the person who is posting attempting suicide. -The baseline models that the authors use are strong and recent. -Thorough reproducibility check list (e.g model parameters, annotation examples) summary_of_weaknesses - Although it is not uncommon to see the "OTHER" label in annotation schema as a candidate label when the annotators cannot find a better mapping with the rest of the labels, I think OTHER label makes the annotations weaker. The annotators would default to it in many cases and as a result the number of annotations for that label become much higher in comparison to the other labels (in your case 15343 for OTHER vs. 21635 for the rest of the labels). This would generate problems such as data imbalance, potential noisy labels where the model might over predict the OTHER label. -The call to the community by the authors for better performance models is a valid one, but we cannot eliminate putting some burden of the poor performance on the dataset itself. This might be due to embedded annotations' issues that can be simply a result of the domain itself. Did the authors do a thorough analysis to confirm that hypothesis? -I think it is very important to see the performance of the models on each label separately. For instance, a valuable analysis would be to compare the performance per label with the annotation challenges. comments,_suggestions_and_typos - Please check the weaknesses section for suggestions on how to improve the work. For instance, I would emphasize again that the readers need to see the performance on each event type to better understand the challenges and how to improve the models. For RF-HEALTH event type, you mention that it is about mentions that directly affect the subject's health. This can be ambiguous, for instance you might have a transitive relationship between life events and health, thus the event types are not exclusive (as you mention this was part of your annotation design). Could you please elaborate more on that? Is your dataset intersected with CLPsych 2019 dataset (Zirikly et al. 2019)? It would be very interesting and valuable if we have the ED annotations on that dataset to further push the performance of the risk assessment models with the use of the triggers. Typos: Preposition is missing after models on line 276
[ [ 668, 704 ], [ 709, 771 ], [ 773, 937 ], [ 938, 1061 ], [ 1064, 1156 ], [ 1159, 1227 ], [ 1232, 1423 ], [ 1426, 1489 ], [ 1492, 1572 ], [ 1597, 1819 ], [ 1820, 2175 ], [ 2178, 2355 ], [ 2356, 2522 ], [ 3013, 3142 ], [ 3143, 3367 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_pos_3", "Eval_pos_4", "Eval_pos_5", "Jus_pos_5", "Eval_pos_6", "Eval_pos_7", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3" ]
109
This paper introduces Neural Symbolic Machines (NSMs) --- a deep neural model equipped with discrete memory to facilitate symbolic execution. An NSM includes three components: (1) a manager that provides weak supervision for learning, (2) a differentiable programmer based on neural sequence to sequence model, which encodes input instructions and predicts simplified Lisp programs using partial execution results stored in external discrete memories. ( 3) a symbolic computer that executes programs and provide code assistance to the programmer to prune search space. The authors conduct experiments on a semantic parsing task (WebQuestionsSP), and show that (1) NSM is able to model language compositionality by saving and reusing intermediate execution results, (2) Augmented REINFORCE is superior than vanilla REINFROCE for sequence prediction problems, and (3) NSM trained end-to-end with weak supervision is able to outperform existing sate-of-the-art method (STAGG). - Strengths - The idea of using discrete, symbolic memories for neural execution models is novel. Although in implementation it may simply reduce to copying previously executed variable tokens from an extra buffer, this approach is still impressive since it works well for a large-scale semantic parsing task. - The proposed revised REINFORCE training schema using imperfect hypotheses derived from maximum likelihood training is interesting and effective, and could inspire future exploration in mixing ML/RL training for neural sequence-to-sequence models. - The scale of experiments is larger than any previous works in modeling neural execution and program induction. The results are impressive. - The paper is generally clear and well-written, although there are some points which might require further clarification (e.g., how do the keys ($v_i$'s in Fig. 2) of variable tokens involved in computing action probabilities? Conflicting notations: $v$ is used to refer to variables in Tab. 1 and memory keys in Fig 1.). Overall, I like this paper and would like to see it in the conference. - Weaknesses - [Choice of Dataset] The authors use WebQuestionsSP as the testbed. Why not using the most popular WebQuestions (Berant et al., 2013) benchmark set? Since NSM only requires weak supervision, using WebQuestions would be more intuitive and straightforward, plus it could facilitate direct comparison with main-stream QA research. - [Analysis of Compositionality] One of the contribution of this work is the usage of symbolic intermediate execution results to facilitate modeling language compositionality. One interesting question is how well questions with various compositional depth are handled. Simple one-hop questions are the easiest to solve, while complex multi-hop ones that require filtering and superlative operations (argmax/min) would be highly non-trivial. The authors should present detailed analysis regarding the performance on question sets with different compositional depth. - [Missing References] I find some relevant papers in this field missing. For example, the authors should cite previous RL-based methods for knowledge-based semantic parsing (e.g., Berant and Liang., 2015), the sequence level REINFORCE training method of (Ranzato et al., 2016) which is closely related to augmented REINFORCE, and the neural enquirer work (Yin et al., 2016) which uses continuous differentiable memories for modeling neural execution. - Misc. - Why is the REINFORCE algorithm randomly initialized (Algo. 1) instead of using parameters pre-trained with iterative ML? - What is KG server in Figure 5?
[ [ 988, 1072 ], [ 1091, 1241 ], [ 1242, 1301 ], [ 1305, 1551 ], [ 1554, 1664 ], [ 1665, 1692 ], [ 1695, 1741 ], [ 1751, 1814 ], [ 1815, 2015 ], [ 2017, 2087 ], [ 3018, 3068 ], [ 3069, 3446 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_pos_6", "Eval_neg_1", "Jus_neg_1", "Major_claim", "Eval_neg_2", "Jus_neg_2" ]
110
paper_summary This paper presents a new dataset - DISAPERE, which includes discourse related annotations over scientific peer reviews and rebuttals. Each review is paired with the first rebuttal text. The authors develop four levels for review annotation: (1) review-action, (2) aspect, (3) polarity, and (4) fine-review action; and two levels for rebuttals: (1) argumentative and (2) non-argumentative ones. This dataset could help better characterize the intentions and interactions between reviewers and authors, which in turn can assist decision making for area chair. The authors also tested two machine learning tasks: (1) sentence classification for the proposed schema, and (2) sentence ranking to determine the context mapping from rebuttal to review. Preliminary results show that pre-trained transformer models achieve moderate performance, therefore leaving room for future work. summary_of_strengths 1. This work releases a new dataset of 506 review-rebuttal pairs with sentence level annotation for discourse related aspects. 1. The proposed taxonomy is comprehensive and captures various aspects of peer review text. 1. The authors framed practical machine learning tasks over the dataset, and benchmarked performance of baseline transformer models. 1. In the updated draft, the authors have accounted for the majority of the comments in the previous review. Overall the paper is more clear, and certain minor mistakes have been rectified. summary_of_weaknesses N/A comments,_suggestions_and_typos - The hyperlink for footnote 3 and 4 do not seem to work. -Line 172: an argument level -> on argument level
[ [ 1045, 1134 ], [ 1378, 1459 ] ]
[ "Eval_pos_1", "Eval_pos_2" ]
111
The authors proposed an unsupervised algorithm for Universal Dependencies that does not require training. The tagging is based on PageRank for the words and a small amount of hard-coded rules. The article is well written, very detailed and the intuition behind all prior information being added to the model is explained clearly. I think that the contribution is substantial to the field of unsupervised parsing, and the possibilities for future work presented by the authors give rise to additional research.
[ [ 194, 331 ], [ 332, 414 ], [ 419, 512 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3" ]
113
paper_summary This paper presentes a multi task learning approach for automatic grading of English essays, by considering a holistic score as well as scores on individual essay traits. The authors proposed an LSTM based model and compared single task and multi task settings to show that Multi-task learning based system gives better performance, and is also much faster than the single task setup. They also present a comparison with a BERT model, and report a series of ablation tests to understand the relationship between traits and holistic scores. summary_of_strengths In Automatic Essay Scoring research, it is more common to develop models for a holistic scoring, although in general, there is an agreement that a score may have many dimensions (e.g., content, spelling/grammar, organization etc). This paper is among the few papers in the direction of modeling multiple dimensions of essay scoring. They performed ablation tests in the multi task learning setup, to understand what traits are useful for each set of essays - I think this is an interesting experiment I did not see before in this task's context. They use a popular dataset which is publicly available and uploaded their code along with the paper. summary_of_weaknesses - The paper does not seem to have any comparison with previous work on this topic at all. They directly do different experiments using their own architecture. Simple baselines (e.g., document length) that are commonly used for this problem can be used as a comparison point, in a STL setup, where the predicted variable can be different (holistic score, individual trait scores), keeping the text representation constant. - The paper misses a discussion on the limitations of the current approach. For example, the authors commented in the response pdf to one of the reviewers that performance difference across traits is due to topical variation. The modeling process does not have any specific component to account for such topical variations resulting in different scores. It could be a potential limitation. I am not saying a paper is bad because it has a limitation. I think acknowledging limitations gives a more holistic perspective for the reader about the approach. comments,_suggestions_and_typos I reviewed the previous version of this paper, and most of the minor comments I mentioned have been addressed in this version. One other comment: -How are the trait scores obtained for the prompts that did not have them in the original dataset? The authors claim they took from another source, but understanding how they are created is relevant for this paper. - I think having a few points of comparison to your approach will give a better perspective for us as readers. Comparison between LSTM and BERT is good, but not enough, as this topic still has a dominant approach of combining some linguistic features with neural models, for modeling overall score. For individual traits too, predicting the trait score instead of individual score, keeping rest of the set up same, may give you a quick comparison point, and make the paper more complete.
[ [ 909, 1032 ], [ 1035, 1121 ], [ 1249, 1336 ], [ 1337, 1668 ], [ 1671, 1744 ], [ 1745, 2222 ], [ 2622, 2730 ], [ 2731, 3109 ] ]
[ "Jus_pos_1", "Eval_pos_1", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3" ]
115
paper_summary This paper investigates the degree of knowledge that pre-trained LM, with only access to a vocabulary of subword tokens, have about the character composition of these tokens, and if enriching these models with orthographic information about the tokens can improve them. It proposes to use a probe they name SpellingBee: a generative character-based LM that takes as input an uncontextualized word embedding from a model, and tries to predict the correct character sequence. It is trained on part of the model's vocabulary, and tested on the other: if it manages to succesfully generalize, the embedding must contain orthographic information. The probe is tested on 4 models: Roberta-base, and 3 others showing change in a particular aspect: Roberta-large for size, AraBert for language, and GPT2-Medium for an autoregressive model. The probe's capacity to predict the character sequence is evaluated by counting the exact matches, and with a finer-grained metric measuring overlap. Compared to a control experiment where the probe is not trained and only randomly initialized, the probe is able to better rebuild character sequences when fed with embeddings from the LMs (up to 30-40% from 0 for exact matches); however, it's performance is weakened when the training part of the vocabulary is filtered (removing token too similar to those in the testing part, or with the same lemma). However, using a probe trained on the full vocabulary as a way to initialize a LM does not seem to be useful, as the LM reaches the same training loss as a control one rather quickly. summary_of_strengths - This paper poses a research question of great interest, and answers it while carefully considering many possible factors (models, filtering the training vocabulary). -It investigates a potential application of this answer. -The paper is very clearly written, and easy to follow. summary_of_weaknesses - The results given by the probe are a little difficult to interpret; as, while there is a control experiment where the probe has no information, it would be very useful to have an idea of what the probe can do when fed with embeddings that we know contain orthographic information. If testing the probe on static embeddings (word2vec, glove), fasttext embeddings could work; in this setting, I believe uncontextualized embeddings from CharacterBERT (El Boukkouri et al, 2020) could work. comments,_suggestions_and_typos - The samples of errors shown in Table 3 seem to often have the first, or few first characters right. Did you at some point try to filter by prefix rather than lemmas ?
[ [ 1608, 1662 ], [ 1668, 1728 ], [ 1729, 1772 ], [ 1832, 1887 ], [ 1912, 1979 ], [ 1980, 2400 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1" ]
118
paper_summary This paper proposes to learn discriminative representations for open relation extraction. In specific, the authors first introduce three data augmentation strategies to generate positive, hard negative, and semi-hard negative samples. Then, the proposed model not only uses instance ranking to optimize each instance's relation representations but also learns relation representations by grouping them together. Experimental results demonstrate the effectiveness of the proposed method. So, I recommend accepting the paper as a short paper. summary_of_strengths 1. The presentation is clear. 2. The experiments are convincing as compared with six SOTA baselines and three variants of the proposed model. 3. The method is simple yet effective. summary_of_weaknesses Does the proposed method heavily depends on the data augmentation quality? It is better to give further discussion. comments,_suggestions_and_typos NA
[ [ 426, 500 ], [ 501, 554 ], [ 580, 606 ], [ 610, 640 ], [ 641, 718 ], [ 722, 757 ], [ 856, 896 ] ]
[ "Eval_pos_4", "Major_claim", "Eval_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_pos_3", "Eval_neg_1" ]
120
This paper introduces a joint decoder model that generates both transcript and translation, conditioned on some speech utterance as input. The decoder model, on an intuitive level, decodes transcript and translation tokens jointly with separately parameterized decoders which are conditioned not only on the source speech, but also on the (partial) hidden representations of each other. While there exists a related prior paper, this paper introduces a tighter coupling, alongside a more comprehensive evaluation with stronger baselines and comparison across a large number of setting. The conclusions are convincing. The description of both high-level intuitions and low-level details is excellent and makes this paper very interesting to readers. A couple of years into research on end-to-end models on speech translation, with much focus on direct models that do not create transcripts at all, turning toward end-to-end models that do create both transcripts and translations has been a recent trend, making this paper timely and relevant. While the paper is strong as-is, there are some weaknesses which if addressed would lead to an even stronger camera-ready version: -The paper does not discuss linguistic aspects in detail. In particular, I'd appreciate some more discussion on why decisions on transcripts should lead to improved translations. This paper gives empirical evidence, and justifies the approach from an engineer's perspective only. To make this suggestion more precise: In the intro, please consider elaborating further on this sentence: "We believe that these are two edge design choices and that a tighter coupling of ASR and MT is desirable for future end-to-end ST applications." -The focus seems to be on improving translations only (and probably lead to the choice of alpha=0.3 for the training objective). This is a reasonable choice, but should be stated more explicitly, given that one could also target improvements in both BLEU score *and* WER. In fact, the proposed model seems to experience a trade-off between translation accuracy and transcription accuracy, which in itself is a very interesting observation. It might be worth citing a highly relevant, concurrent work that goes the other direction, assuming that both translation and transcript are of equal importance: https://arxiv.org/pdf/2007.12741.pdf . Another related work to cite might be https://ieeexplore.ieee.org/document/5947637 who have reported on BLEU/WER tradeoff quite a few years ago. -On "chained decoders": are these conceptually related / identical to http://arxiv.org/abs/1802.06655 ? -Evaluation: please do not say "significant" unless you have actually formally verified statistical significance, in which case it would be necessary to report the details of the stat. significance check. Also, the standard nowadays is to use sacreBLEU to compute comparable BLEU scores (please see https://www.aclweb.org/anthology/W18-6319/ on why it is impossible to compare BLEU scores when the tokenization details are not known or not consistent). -typo: "weekly tight" -> "weakly tied"
[ [ 387, 585 ], [ 586, 617 ], [ 618, 748 ], [ 749, 1003 ], [ 1004, 1041 ], [ 1043, 1173 ], [ 1175, 1231 ], [ 1232, 1352 ], [ 1353, 1453 ], [ 1454, 1705 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_pos_4", "Eval_pos_4", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2" ]
121
Related to a recent notion of Visual Dialog tasks, the paper introduces and evaluates a sophisticated neural architecture to answer queries related to images through a dialog (a sequence of several queries and answers). There are several components in the system but the paper focuses on two of them, namely VTA to map visual features to textual features found in the dialog history and current query; and VGAT to build graphs from these visual-textual pairs An evaluation is done on the VisDial dataset for 5 metrics and with comparison with several SOTA systems covering different approaches. The proposed system performs best for all metrics. Ablation seems to confirm the interest of both components (VTA and VGAT), even if the results are maybe not so significative. A few exemples are provided for illustration. The task and the proposed architecture are interesting. However, it is difficult to follow the details of the model, also because of (maybe) some errors. There are also a lot a parameters and some hypothesis that may be discussed. For instance, it is unclear why a graph representation is really needed instead of some ranking of the visual-textual pairs and how exactly the graph is exploited, a priori (from Fig. 4) the top-5 strongest connections in the graph More specific remarks: - in 3.1, is the number k of visual features a fixed parameter (and then which value is used for the experiments) or a parameter depending on the image being processed ? -In eq. (1), (2), and maybe (3) and (4), I wonder if i should range from 1 to h rather than from 1 to k ? -in 3.3, I don't understand what you mean by "homogenous information" -in 3.3, what do you mean by two textual operations (with different colors) -the construction of the sequence of graphs in 3.3 is really unclear; in eq (10), do you mean G(i>0) rather than G(i) ? If I understand correctly, G(i=0) and G(i>0) are a kind of serialization of the graphs but it is unclear if e(i>0)=e(i) ? In eq (6), are you using new multiheads or are they related to those defined in 3.2 ? Why k iteration steps ? Is it to identify at each step (i) the most interesting neighbor nodes for node (i) ? -in 3.1: how if build the list of 100 answers for each query ? Are some false answers randomly added to the right answer, or a specific set of 100 answers is provided for each query ? It is a single set of 100 answers for each dialog ?
[ [ 820, 875 ], [ 876, 973 ], [ 975, 1050 ], [ 1051, 1282 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2" ]
122
paper_summary The paper proposes a new approach named BEEP to combine information from clinical notes and relevant medical literature to enhance the prediction of patient outcomes (prolonged mechanical ventilation, in-hospital mortality and length of stay). The medical literature is retrieved from PubMed and then ranked per relevance. The embeddings of clinical notes and top relevant literature are then combined to predict the patient outcomes on MIMIC-III datasets. Experiments show improved accuracy on 2 out of 3 tasks. summary_of_strengths - Overall well-written narratives with clear descriptions of methodology and experiments, though a lot of information is in the appendix which makes it a bit difficult to switch between main content and appendix. - The idea of retrieving medical literature to enhance and provide evidence to patient outcome prediction is attractive. The proposed methods of retrieval and reranking are reasonable. - Experiments are well established and clear. The proposed method BEEP shows advantages on PMV and MOR tasks. summary_of_weaknesses - The proposed method heavily relies on BERT-based encoders and BERT has a word limit of 512 tokens. But most discharge summaries in MIMIC-III have much more than 512 tokens. This may mean a lot of information in discharge summaries is truncated and the model may not be able to build a comprehensive representation of patient condition. - The reliability and interoperability of the proposed method are in doubt based on Figure 3 which shows a high percentage of unhelpful literature is retrieved especially for the LOS task. How will such unhelpful literature impact patient outcomes? How can this be improved? - The performance on LOS is not convincing and the paper does not provide much insight on why. - The experiments do not seem to consider structured features at all (e.g. 17 clinical features from [1] based on MIMIC-III) which however are critical for patient outcome prediction from both clinical and ML perspectives [2]. The experiments may need a baseline that leverages structured features to show the advantage of using clinical notes and interpret BEEP's performance. [1] https://www.nature.com/articles/s41597-019-0103-9 [2] https://arxiv.org/abs/2107.11665 comments,_suggestions_and_typos - In the abstract and experiment section, expressions like "5 points" are confusing. " 5% increase" or "0.05 increase" would be clearer. - In the abstract, what is "increasing F1 by up to t points and precision @Top-K by a large margin of over 25%" based on? The paper may make the statement clearer by mentioning the specific setup that achieves the largest improvement margin. - Based on Appendix B, the bi-encoder is trained on TREC 2016 with 30 EHRs. The paper may discuss how representative these 30 EHRs are to MIMIC-III EHRs. Also as 30 is rather small, the paper may discuss whether it is enough empirically. - Line 559, the paper may discuss why the model does not perform well on LOS, why a high percentage of unhelpful literature are retrieved (even for correct predictions) and how such a high percentage of unhelpful literature impact the reliability of the model. - The paper may discuss why use MeSH rather than other ontologies like UMLS, SNOMED, HPO etc. - Are those 8 categories in Appendix I mutually exclusive?
[ [ 551, 761 ], [ 764, 882 ], [ 883, 946 ], [ 949, 992 ], [ 993, 1056 ], [ 1420, 1492 ], [ 1493, 1605 ], [ 1695, 1787 ], [ 1790, 1856 ], [ 1856, 1912 ], [ 1913, 2014 ], [ 2292, 2374 ], [ 2375, 2426 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_3", "Eval_neg_4", "Jus_neg_4" ]
123
paper_summary The paper deals with multi-task learning. The authors find that having separate networks to learn separate tasks would lead to good performance, but requires a large memory. Using MT-DNN would save memory, but the results are not satisfying. The authors thus propose an approach that saves memory and at the same time achieves good results on GLUE. Figure 1 gives a good summary. First, train task-specific models (but for each model, only finetune the top n layers). Second, do knowledge distillation, so that we can compress the n layers into a smaller number of layers. Third, merge all the models together as shown in Figure 1(c). The GLUE performance is comparable to full fine-tuning (i.e., tuning k models separately where k is the number of tasks), and the proposed approach saves 2/3 of memory. summary_of_strengths - Writing is clear. -Many baseline experiments are performed, including DistillBERT, BERT-of-Theseus, MT-DNN, and many others. -The observation about MT-DNN’s degradation on QQP (line 237) is interesting. This observation reminds me of the intermediate training empirical paper, where in Table 1 they find that QQP is a very different task compared to others: https://arxiv.org/pdf/2005.00628.pdf summary_of_weaknesses Other models -Another simple baseline is to have two separate models: one for tasks that lead to “task interference” like QQP, another for other tasks. I wonder if this baseline will perform better than the authors’ approaches (both in terms of memory and performance). There could be other ways of clustering the tasks. -Adapter is a widely used framework that performs well for MTL. Although not directly connected to the authors' approach, I think that the authors should at least discuss it a bit more in the paper. Given the adaptor framework, would the authors’ approaches perform better? Are the authors’ approaches complementary? Experimental details -Given that this is an empirical paper, I believe that much more detailed descriptions on hyperparameters (for example, on each task) are necessary. More tasks -Relatively minor: Efficient BERT related papers recently often report SQuAD results as well, given that it’s a different format (span selection instead of multiple choice) and the skillset may be different from GLUE. Do authors think that their approach would adapt to SQuAD? Motivation -Relatively minor: If there is a much larger number of tasks, the authors’ approach may not be as efficient as shown in table 2 (i.e., two thirds of memory), given that the authors’ approach is still O(k) where k is the number of task. comments,_suggestions_and_typos Abstract: “overhead” -> it’ll be great to elaborate what overhead you’re referring to (especially because this is the abstract). Minor: commas should follow “i.e.” Line 136: “We find that as long as the student model is properly initialized, the vanilla KD can be as performant as those more sophisticated methods” <- it'll be great if the authors can elaborate.
[ [ 842, 859 ], [ 861, 901 ], [ 902, 964 ], [ 969, 1045 ], [ 1046, 1238 ], [ 1646, 1781 ], [ 1782, 1899 ], [ 1923, 2027 ], [ 2028, 2055 ], [ 2056, 2070 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_pos_3", "Jus_pos_3", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_2" ]
124
paper_summary The authors aim to improve interpretability for structure and style control in knowledge-grounded conversational models. They propose to use two sequential latent variables for structure and style respectively. 1) m - binary indicator for segment boundaries within a sentence 2) z - style controller attribute to switch between content, knowledge, and style decoders. They use a variational framework for training using an evidence lower bound of the likelihood. Overall their encoder-decoder model outperforms the baselines in automatic and human metrics on two knowledge grounded dialog datasets WizardsOfWikipedia and CMU_DoG. Their models is also more robust and generalizable as it consistently outperforms the baselines with 10% or lower amount of in-domain data. While adding adapters to their decoders they can adjust the style (sentiment) of their responses while maintaining decent performance in automatic evaluation. Their metrics pklg and lklg suggest that their models can easily adapt the latent variables' distribution for different datasets. summary_of_strengths - Interpretable model which predicts segment boundaries and is able to switch style decoders based on the context. Potentially useful for many applications. -Robust and generalizable model which can easily adapt to new styles with limited training data. summary_of_weaknesses - The writing structure and flow can be improved. Many of the crucial details regarding automatic labeling, baselines, human evaluation results etc. are moved to appendix which disrupts the reading flow. comments,_suggestions_and_typos - Their model shows improvement in low resource setting. But, it will be interesting to see what are the overall gains compared to the baselines with 100% of the in-domain data. -missing section in line 492
[ [ 645, 695 ], [ 696, 783 ], [ 1098, 1209 ], [ 1211, 1252 ], [ 1254, 1348 ], [ 1375, 1422 ], [ 1423, 1576 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_1", "Eval_pos_2", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1" ]
126
paper_summary Prior work has used interpretation to improve inference, while ignoring “using inference logic to enhance interpretation.” This work deals with “mutual promotion” where they “promote” in both directions. Specifically, the “mutual promotion” is done using stepwise integration mechanism (SIM; Section 2.2). Additionally, adversarial fidelity regularization is used to further “improve the fidelity of inference and interpretation” (Section 2.3). Essentially, during training, the model generates explanation first, and at the last time-step, generates the prediction. Therefore, the explanation and prediction are using largely the same parameters, and gradient updates would influence both the explanation and prediction. The authors experiment on NLI and conversational QA tasks. The explanation is scored against the gold-standard human-provided evaluation results in e-SNLI and CoS-E datasets. summary_of_strengths Interpretation is an important problem. The figures are well-designed and help readers understand the algorithms. The method performs well on out-of-domain datasets (MNLI and SICK-E). The mutual information discussion is well-motivated. summary_of_weaknesses I’m not convinced that AFiRe (the adversarial regularization) brings significant improvement, especially because -BLEU improvements are small (e.g., 27.93->28.64; would humans be able to identify the differences?) -Hyperparameter details are missing. -Human evaluation protocols, payment, etc. are all missing. Who are the raters? How are they "educated" and how do the authors ensure the raters provide good-faith annotations? What is the agreeement? Other baselines are not compared against. For example, what if we just treat the explanation as a latent variable as in Zhou et al. (2021)? https://arxiv.org/pdf/2011.05268.pdf A few other points that are not fatal: -Gold-standard human explanation datasets are necessary, given the objective in line 307. -Does it mean that inference gets slowed down drastically, and there’s no way to only do inference (i.e., predict the label)? I don’t think this is fatal though. What’s the coefficient of the p(L, E | X) term in line 307? Why is it 1? Hyperparamter details are missing, so it’s not clear whether baselines are well-tuned, and whether ablation studies provide confident results. The writing is not careful, and often impedes understanding. -Line 229: What’s t? -Line 230: What’s n? -Line 273: having X in the equation without defining it is a bit weird; should there be an expectation over X? -Sometimes, the X is not bolded and not italicized (line 262). Sometimes, the X is not bolded but italicized (line 273). Sometimes, the X is bolded but not italicized (line 156). -Line 296: L and E should be defined in the immediate vicinity. Again, sometimes L, E are italicized (line 296) and sometimes not (line 302). -Line 187: It’s best to treat Emb as a function. Having l’ and e’ as superscripts is confusing. -In Table 4, why sometimes there are punctuations and sometimes there are no punctuations? comments,_suggestions_and_typos - Perplexity does not necessarily measure fluency. For example, an overly small perplexity may correspond to repeating common n-grams. But it’s okay to use it as a coarse approximation of fluency. -Line 191: \cdot should be used instead of regular dot Section 2.1: It would be best to define the dimensionalities of everything. -Line 182: A bit confusing what the superscript p means. -Line 229: What’s t? -Line 230: What’s n? -Line 255: Comma should not start the line.
[ [ 935, 974 ], [ 976, 1049 ], [ 1050, 1100 ], [ 1101, 1118 ], [ 1120, 1172 ], [ 1196, 1289 ], [ 1290, 1409 ], [ 1411, 1446 ], [ 1448, 1506 ], [ 1507, 1647 ], [ 1648, 1689 ], [ 1690, 1824 ], [ 2192, 2226 ], [ 2227, 2334 ], [ 2336, 2396 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_pos_3", "Eval_pos_4", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4", "Jus_neg_5", "Eval_neg_5", "Eval_neg_6" ]
128
The paper describes a model for morphological segmentation. The model is a neural network that takes as input a representation of the word to segment and a representation of the context. The model is trained and tested on mongolian data and reaches good performances (98% f measure). The work is sound and well conducted but lacks a well fromulated scientific question. This question could be "is context important for morphological analysis" and the answer will be yes, with a quanfication of the role of context. But such a question is not really novel. In theory, the model could be applied on other languages, it would have been interesting to see the performances on different languages offering different morphological systems. the paper is generally well written, there are some typos and some formulations are not very natural some of the typos and/or questionable formulations: p1 close related -> closely related p1 several segmentation results -> different segmentation results ? p2 artificial linguistics ?? what is this ?? p2 LTMS -> LSTM p3 For Self-attention -> Self-attention p5 following Vaswani -> follows Vaswani p6 we used has annotated -> we used has been annotated p7 but there is the principle a confound -> not clear what that means p9 our-word -> ??
[ [ 189, 268 ], [ 269, 284 ], [ 287, 372 ], [ 374, 738 ], [ 739, 775 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_neg_1", "Jus_neg_1", "Eval_pos_2" ]
129
paper_summary This paper addresses the question of whether the spelling of words has been retained / encoded by large language models. First, it introduces a probe to discover the spelling of a word based on the embeddings in the model input. This probe, spelling bee, is essentially a character level language model which is conditioned on the word embedding. They find that a significant amount of information about spelling is retained by the embedding, and that explicitly providing information about spelling during training by using spelling bee does not provide additional advantage to the model. The authors conclude that this indicates language models “can quickly acquire all the character level information they need without directly observing the composition of each token”. summary_of_strengths I liked the fact that the training and test data splits considered the possibility that words in the test set might benefit too much from training spelling bee on a train set with similarly spelled words. I think that the question of whether the model has implicitly learned character composition is valuable, and there should be some significant interest in this paper as a result. summary_of_weaknesses I want to see a baseline that tests spelling bee on representations specifically optimized for morphology or spelling, so that we can see what the performance of this probe would be on complete information about spelling. As is, it's not clear what a true upper-bound performance would be for such a probe. I felt that the conclusions drawn from the attempt to train with additional spelling information were not well justified. After all, if training with the additional information actually damaged performance, the authors would not have concluded that the model doesn’t use the information or that the information was somehow harmful or misleading. Instead, they would rightly assume that the particular procedure they were using to add that information was not providing it in a structure that the model could easily use. However, because it doesn’t change performance at all, the authors conclude that the model is able to acquire everything it needs without direct observation of the spelling. It’s not clear to me that these results contradict the idea that some information about character composition might be able to help a model. There needs to be more detail on the implementation of spelling bee. comments,_suggestions_and_typos I felt that this paper could do with more citations to the existing literature on morphological/character composition in neural models (e.g., https://aclanthology.org/P17-1184/)
[ [ 1015, 1192 ], [ 1216, 1437 ], [ 1438, 1522 ], [ 1524, 1645 ], [ 1646, 2359 ], [ 2360, 2428 ], [ 2462, 2596 ], [ 2597, 2639 ] ]
[ "Eval_pos_1", "Jus_neg_1", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Eval_neg_4", "Jus_neg_4" ]
130
- Strengths: Introduces a new document clustering approach and compares it to several established methods, showing that it improves results in most cases. The analysis is very detailed and thorough--quite dense in many places and requires careful reading. The presentation is organized and clear, and I am impressed by the range of comparisons and influential factors that were considered. Argument is convincing and the work should influence future approaches. - Weaknesses: The paper does not provide any information on the availability of the software described. - General Discussion: Needs some (minor) editing for English and typos--here are just a few: Line 124: regardless the size > regardless of the size Line 126: resources. Because > resources, because Line 205: consist- ing mk > consisting of mk Line 360: versionand > version and
[ [ 13, 155 ], [ 157, 257 ], [ 258, 298 ], [ 303, 391 ], [ 392, 414 ], [ 419, 463 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_pos_6" ]