id
int64
0
549
review
stringlengths
314
12.7k
spans
sequence
labels
sequence
0
This paper describes a state-of-the-art CCG parsing model that decomposes into tagging and dependency scores, and has an efficient A* decoding algorithm. Interestingly, the paper slightly outperforms Lee et al. (2016)'s more expressive global parsing model, presumably because this factorization makes learning easier. It's great that they also report results on another language, showing large improvements over existing work on Japanese CCG parsing. One surprising original result is that modeling the first word of a constituent as the head substantially outperforms linguistically motivated head rules. Overall this is a good paper that makes a nice contribution. I only have a few suggestions: -I liked the way that the dependency and supertagging models interact, but it would be good to include baseline results for simpler variations (e.g. not conditioning the tag on the head dependency). -The paper achieves new state-of-the-art results on Japanese by a large margin. However, there has been a lot less work on this data - would it also be possible to train the Lee et al. parser on this data for comparison? -Lewis, He and Zettlemoyer (2015) explore combined dependency and supertagging models for CCG and SRL, and may be worth citing.
[ [ 320, 381 ], [ 382, 453 ], [ 609, 669 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Major_claim" ]
1
The paper considers a synergistic combination of two non-HMM based speech recognition techniques: CTC and attention-based seq2seq networks. The combination is two-fold: 1. first, similarly to Kim et al. 2016 multitask learning is used to train a model with a joint CTC and seq2seq cost. 2. second (novel contribution), the scores of the CTC model and seq2seq model are ensembled during decoding (results of beam search over the seq2seq model are rescored with the CTC model). The main novelty of the paper is in using the CTC model not only as an auxiliary training objective (originally proposed by Kim et al. 2016), but also during decoding. - Strengths: The paper identifies several problems stemming from the flexibility offered by the attention mechanism and shows that by combining the seq2seq network with CTC the problems are mitigated. - Weaknesses: The paper is an incremental improvement over Kim et al. 2016 (since two models are trained, their outputs can just as well be ensembled). However, it is nice to see that such a simple change offers important performance improvements of ASR systems. - General Discussion: A lot of the paper is spent on explaining the well-known, classical ASR systems. A description of the core improvement of the paper (better decoding algorithm) starts to appear only on p. 5. The description of CTC is nonstandard and maybe should either be presented in a more standard way, or the explanation should be expanded. Typically, the relation p(C|Z) (eq. 5) is deterministic - there is one and only one character sequence that corresponds to the blank-expanded form Z. I am also unsure about the last transformation of the eq. 5.
[ [ 658, 845 ], [ 860, 920 ], [ 922, 995 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Jus_neg_1" ]
2
The authors propose ‘morph-fitting’, a method that retrofits any given set of trained word embeddings based on a morphologically-driven objective that (1) pulls inflectional forms of the same word together (as in ‘slow’ and ‘slowing’) and (2) pushes derivational antonyms apart (as in ‘expensive’ and ‘inexpensive’). With this, the authors aim to improve the representation of low-frequency inflections of words as well as mitigate the tendency of corpus-based word embeddings to assign similar representations to antonyms. The method is based on relatively simple manually-constructed morphological rules and is demonstrated on both English, German, Italian and Russian. The experiments include intrinsic word similarity benchmarks, showing notable performance improvements achieved by applying morph-fitting to several different corpus-based embeddings. Performance improvement yielding new state-of-the-art results is also demonstrated for German and Italian on an extrinsic task - dialog state tracking. Strengths: - The proposed method is simple and shows nice performance improvements across a number of evaluations and in several languages. Compared to previous knowledge-based retrofitting approaches (Faruqui et al., 2015), it relies on a few manually-constructed rules, instead of a large-scale knowledge base, such as an ontology. - Like previous retrofitting approaches, this method is easy to apply to existing sets of embeddings and therefore it seems like the software that the authors intend to release could be useful to the community. - The method and experiments are clearly described. 
 Weaknesses: - I was hoping to see some analysis of why the morph-fitted embeddings worked better in the evaluation, and how well that corresponds with the intuitive motivation of the authors. - The authors introduce a synthetic word similarity evaluation dataset, Morph-SimLex. They create it by applying their presumably semantic-meaning-preserving morphological rules to SimLex999 to generate many more pairs with morphological variability. They do not manually annotate these new pairs, but rather use the original similarity judgements from SimLex999. The obvious caveat with this dataset is that the similarity scores are presumed and therefore less reliable. Furthermore, the fact that this dataset was generated by the very same rules that are used in this work to morph-fit word embeddings, means that the results reported on this dataset in this work should be taken with a grain of salt. The authors should clearly state this in their paper. - (Soricut and Och, 2015) is mentioned as a future source for morphological knowledge, but in fact it is also an alternative approach to the one proposed in this paper for generating morphologically-aware word representations. The authors should present it as such and differentiate their work. - The evaluation does not include strong morphologically-informed embedding baselines. General Discussion: With the few exceptions noted, I like this work and I think it represents a nice contribution to the community. The authors presented a simple approach and showed that it can yield nice improvements using various common embeddings on several evaluations and four different languages. I’d be happy to see it in the conference. Minor comments: - Line 200: I found this phrasing unclear: “We then query … of linguistic constraints”. - Section 2.1: I suggest to elaborate a little more on what the delta is between the model used in this paper and the one it is based on in Wieting 2015. It seemed to me that this was mostly the addition of the REPEL part. - Line 217: “The method’s cost function consists of three terms” - I suggest to spell this out in an equation. - Line 223: x and t in this equation (and following ones) are the vector representations of the words. I suggest to denote that somehow. Also, are the vectors L2-normalized before this process? Also, when computing ‘nearest neighbor’ examples do you use cosine or dot-product? Please share these details. - Line 297-299: I suggest to move this text to Section 3, and make the note that you did not fine-tune the params in the main text and not in a footnote. - Line 327: (create, creates) seems like a wrong example for that rule. 
 - I have read the author response
[ [ 1021, 1051 ], [ 1057, 1147 ], [ 1345, 1443 ], [ 1448, 1552 ], [ 1556, 1607 ], [ 2288, 2407 ], [ 2420, 2507 ], [ 2996, 3075 ], [ 3249, 3290 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_pos_3", "Eval_pos_3", "Eval_pos_4", "Jus_neg_1", "Eval_neg_1", "Major_claim", "Major_claim" ]
3
This paper outlines a method to learn sense embeddings from unannotated corpora using a modular sense selection and representation process. The learning is achieved by a message passing scheme between the two modules that is cast as a reinforcement learning problem by the authors. - Strengths: The paper is generally well written, presents most of its ideas clearly and makes apt comparisons to related work where required. The experiments are well structured and the results are overall good, though not outstanding. However, there are several problems with the paper that prevent me from endorsing it completely. - Weaknesses: My main concern with the paper is the magnification of its central claims, beyond their actual worth. 1) The authors use the term "deep" in their title and then several times in the paper. But they use a skip-gram architecture (which is not deep). This is misrepresentation. 2) Also reinforcement learning is one of the central claims of this paper. However, to the best of my understanding, the motivation and implementation lacks clarity. Section 3.2 tries to cast the task as a reinforcement learning problem but goes on to say that there are 2 major drawbacks, due to which a Q-learning algorithm is used. This algorithm does not relate to the originally claimed policy. Furthermore, it remains unclear how novel their modular approach is. Their work seems to be very similar to EM learning approaches, where an optimal sense is selected in the E step and an objective is optimized in the M step to yield better sense representations. The authors do not properly distinguish their approach, nor motivative why RL should be preferred over EM in the first place. 3) The authors make use of the term pure-sense representations multiple times, and claim this as a central contribution of their paper. I am not sure what this means, or why it is beneficial. 4) They claim linear-time sense selection in their model. Again, it is not clear to me how this is the case. A highlighting of this fact in the relevant part of the paper would be helpful. 5) Finally, the authors claim state-of-the-art results. However, this is only on a single MaxSimC metric. Other work has achieved overall better results using the AvgSimC metric. So, while state-of-the-art isn't everything about a paper, the claim that this paper achieves it - in the abstract and intro - is at least a little misleading.
[ [ 295, 330 ], [ 332, 424 ], [ 425, 460 ], [ 465, 518 ], [ 519, 615 ], [ 630, 731 ], [ 732, 2417 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Eval_pos_3", "Major_claim", "Eval_neg_1", "Jus_neg_1" ]
4
- Strengths: This is a well written paper. The paper is very clear for the most part. The experimental comparisons are very well done. The experiments are well designed and executed. The idea of using KD for zero-resource NMT is impressive. - Weaknesses: There were many sentences in the abstract and in other places in the paper where the authors stuff too much information into a single sentence. This could be avoided. One can always use an extra sentence to be more clear. There could have been a section where the actual method used could be explained in a more detailed. This explanation is glossed over in the paper. It's non-trivial to guess the idea from reading the sections alone. During test time, you need the source-pivot corpus as well. This is a major disadvantage of this approach. This is played down - in fact it's not mentioned at all. I could strongly encourage the authors to mention this and comment on it. - General Discussion: This paper uses knowledge distillation to improve zero-resource translation. The techniques used in this paper are very similar to the one proposed in Yoon Kim et. al. The innovative part is that they use it for doing zero-resource translation. They compare against other prominent works in the field. Their approach also eliminates the need to do double decoding. Detailed comments: -Line 21-27 - the authors could have avoided this complicated structure for two simple sentences. Line 41 - Johnson et. al has SOTA on English-French and German-English. Line 77-79 there is no evidence provided as to why combination of multiple languages increases complexity. Please retract this statement or provide more evidence. Evidence in literature seems to suggest the opposite. Line 416-420 - The two lines here are repeated again. They were first mentioned in the previous paragraph. Line 577 - Figure 2 not 3!
[ [ 13, 44 ], [ 45, 88 ], [ 89, 138 ], [ 139, 187 ], [ 188, 245 ], [ 260, 404 ], [ 699, 862 ], [ 863, 936 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2" ]
5
- Strengths: * Elaborate evaluation data creation and evaluation scheme. * Range of compared techniques: baseline/simple/complex - Weaknesses: * No in-depth analysis beyond overall evaluation results. - General Discussion: This paper compares several techniques for robust HPSG parsing. Since the main contribution of the paper is not a novel parsing technique but the empirical evaluation, I would like to see a more in-depth analysis of the results summarized in Table 1 and 2. It would be nice to show some representative example sentences and sketches of its analyses, on which the compared methods behaved differently. Please add EDM precision and recall figures to Table 2. The EDM F1 score is a result of a mixed effects of (overall and partial) coverage, parse ranking, efficiency of search, etc. The overall coverage figures in Table 1 are helpful but addition of EDM recall to Table 2 would make the situations clearer. Minor comment: -Is 'pacnv+ut' in Table 1 and 2 the same as 'pacnv' described in 3.4.3?
[ [ 16, 74 ], [ 149, 204 ] ]
[ "Eval_pos_1", "Eval_neg_1" ]
7
This paper presents a corpus of annotated essay revisions. It includes two examples of application for the corpus: 1) Student Revision Behavior Analysis and 2) Automatic Revision Identification The latter is essentially a text classification task using an SVM classifier and a variety of features. The authors state that the corpus will be freely available for research purposes. The paper is well-written and clear. A detailed annotation scheme was used by two annotators to annotate the corpus which added value to it. I believe the resource might be interesting to researcher working on writing process research and related topics. I also liked that you provided two very clear usage scenarios for the corpus. I have two major criticisms. The first could be easily corrected in case the paper is accepted, but the second requires more work. 1) There are no statistics about the corpus in this paper. This is absolutely paramount. When you describe a corpus, there are some information that should be there. I am talking about number of documents (I assume the corpus has 180 documents (60 essays x 3 drafts), is that correct?), number of tokens (around 400 words each essay?), number of sentences, etc. I assume we are talking about 60 unique essays x 400 words, so about 24,000 words in total. Is that correct? If we take the 3 drafts we end up with about 72,000 words but probably with substantial overlap between drafts. A table with this information should be included in the paper. 2) If the aforementioned figures are correct, we are talking about a very small corpus. I understand the difficulty of producing hand-annotated data, and I think this is one of the strengths of your work, but I am not sure about how helpful this resource is for the NLP community as a whole. Perhaps such a resource would be better presented in a specialised workshop such as BEA or a specialised conference on language resources like LREC instead of a general NLP conference like ACL. You mentioned in the last paragraph that you would like to augment the corpus with more annotation. Are you also willing to include more essays? Comments/Minor: - As you have essays by native and non-native speakers, one further potential application of this corpus is native language identification (NLI). - p. 7: "where the unigram feature was used as the baseline" - "word unigram". Be more specific. - p. 7: "and the SVM classifier was used as the classifier." - redundant.
[ [ 381, 417 ], [ 522, 635 ], [ 636, 714 ], [ 715, 845 ], [ 849, 935 ], [ 935, 1493 ], [ 1498, 1581 ], [ 1582, 1786 ], [ 1786, 1979 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Jus_neg_2", "Eval_pos_2", "Major_claim" ]
8
This paper presents several weakly supervised methods for developing NERs. The methods rely on some form of projection from English into another language. The overall approach is not new and the individual methods proposed are improvements of existing methods. For an ACL paper I would have expected more novel approaches. One of the contributions of the paper is the data selection scheme. The formula used to calculate the quality score is quite straightforward and this is not a bad thing. However, it is unclear how the thresholds were calculated for Table 2. The paper says only that different thresholds were tried. Was this done on a development set? There is no mention of this in the paper. The evaluation results show clearly that data selection is very important, but one may not know how to tune the parameters for a new data set or a new language pair. Another contribution of the paper is the combination of the outputs of the two systems developed in the paper. I tried hard to understand how it works, but the description provided is not clear. The paper presents a number of variants for each of the methods proposed. Does it make sense to combine more than two weakly supervised systems? Did the authors try anything in this direction. It would be good to know a bit more about the types of texts that are in the "in-house" dataset.
[ [ 155, 322 ], [ 323, 563 ], [ 564, 699 ], [ 867, 1061 ] ]
[ "Eval_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3" ]
9
The paper proposes a model for the Stanford Natural Language Inference (SNLI) dataset, that builds on top of sentence encoding models and the decomposable word level alignment model by Parikh et al. (2016). The proposed improvements include performing decomposable attention on the output of a BiLSTM and feeding the attention output to another BiLSTM, and augmenting this network with a parallel tree variant. - Strengths: This approach outperforms several strong models previously proposed for the task. The authors have tried a large number of experiments, and clearly report the ones that did not work, and the hyperparameter settings of the ones that did. This paper serves as a useful empirical study for a popular problem. - Weaknesses: Unfortunately, there are not many new ideas in this work that seem useful beyond the scope the particular dataset used. While the authors claim that the proposed network architecture is simpler than many previous models, it is worth noting that the model complexity (in terms of the number of parameters) is fairly high. Due to this reason, it would help to see if the empirical gains extend to other datasets as well. In terms of ablation studies, it would help to see 1) how well the tree-variant of the model does on its own and 2) the effect of removing inference composition from the model. Other minor issues: 1) The method used to enhance local inference (equations 14 and 15) seem very similar to the heuristic matching function used by Mou et al., 2015 (Natural Language Inference by Tree-Based Convolution and Heuristic Matching). You may want to cite them. 2) The first sentence in section 3.2 is an unsupported claim. This either needs a citation, or needs to be stated as a hypothesis. While the work is not very novel, the the empirical study is rigorous for the most part, and could be useful for researchers working on similar problems. Given these strengths, I am changing my recommendation score to 3. I have read the authors' responses.
[ [ 424, 506 ], [ 506, 660 ], [ 661, 729 ], [ 744, 864 ], [ 864, 1339 ], [ 1615, 1673 ], [ 1674, 1742 ], [ 1743, 1964 ] ]
[ "Eval_pos_1", "Jus_pos_2", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Major_claim" ]
10
- Strengths: This paper contributes to the field of knowledge base-based question answering (KB-QA), which is to tackle the problem of retrieving results from a structured KB based on a natural language question. KB-QA is an important and challenging task. The authors clearly identify the contributions and the novelty of their work, provide a good overview of the previous work and performance comparison of their approach to the related methods. Previous approaches to NN-based KB-QA represent questions and answers as fixed length vectors, merely as a bag of words, which limits the expressiveness of the models. And previous work also don’t leverage unsupervised training over KG, which potentially can help a trained model to generalize. This paper makes two major innovative points on the Question Answering problem. 1) The backbone of the architecture of the proposed approach is a cross-attention based neural network, where attention is used for capture different parts of questions and answer aspects. The cross-attention model contains two parts, benefiting each other. The A-Q attention part tries to dynamically capture different aspects of the question, thus leading to different embedding representations of the question. And the Q-A attention part also offer different attention weight of the question towards the answer aspects when computing their Q-A similarity score. 2) Answer embeddings are not only learnt on the QA task but also modeled using TransE which allows to integrate more prior knowledge on the KB side. Experimental results are obtained on Web questions and the proposed approach exhibits better behavior than state-of-the-art end-to-end methods. The two contributions were made particularly clear by ablation experiment. Both the cross-attention mechanism and global information improve QA performance by large margins. The paper contains a lot of contents. The proposed framework is quite impressive and novel compared with the previous works. - Weaknesses: The paper is well-structured, the language is clear and correct. Some minor typos are provided below. 1. Page 5, column 1, line 421: re-read  reread 2. Page 5, column 2, line 454: pairs be  pairs to be - General Discussion: In Equation 2: the four aspects of candidate answer aspects share the same W and b. How about using separate W and b for each aspect? I would suggest considering giving a name to your approach instead of "our approach", something like ANN or CA-LSTM…(yet something different from Table 2). In general, I think it is a good idea to capture the different aspects for question answer similarity, and cross-attention based NN model is a novel solution for the above task. The experimental results also demonstrate the effectiveness of the authors’ approach. Although the overall performance is weaker than SP-based methods or some other integrated systems, I think this paper is a good attempt in end-to-end KB-QA area and should be encouraged.
[ [ 257, 333 ], [ 335, 448 ], [ 745, 824 ], [ 825, 1858 ], [ 1859, 1897 ], [ 1897, 1983 ], [ 1998, 2062 ], [ 2941, 3029 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_pos_6", "Major_claim" ]
11
paper_summary Since only minor revisions have been made to the paper, my views of the paper have not changed. For details, please see my previous review comments. The author’s response has answered my previous questions very well and added relevant analysis to the revised draft. In my opinion, the analysis of the negative phenomenon on NLU corpora in this paper is comprehensive. But as its contribution is incremental, it is unlikely to be improved through minor modifications. In summary, I think it is a borderline paper of ACL, or as a Findings paper. summary_of_strengths How to deal with negation semantic is one of the most fundamental and important issues in NLU, which is especially often ignored by existing models. This paper verifies the significance of the problem on multiple datasets, and in particular, proposes to divide the negations into important and unimportant types and analyzes them (Table 2). The work of the paper is comprehensive and solid. summary_of_weaknesses However, I think the innovation of this paper is general. The influence of negation expressions on NLP/NLU tasks has been widely proposed in many specialized studies, as well as in the case/error analysis of many NLP/NLU tasks. In my opinion, this paper is the only integration of these points of view and does not provide deeper insights to inspire audiences in related fields. comments,_suggestions_and_typos NA
[ [ 280, 381 ], [ 389, 420 ], [ 422, 479 ], [ 481, 557 ], [ 729, 920 ], [ 921, 971 ], [ 994, 1051 ], [ 1052, 1221 ], [ 1237, 1295 ], [ 1300, 1371 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Eval_neg_2", "Major_claim", "Jus_pos_2", "Eval_pos_2", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Eval_neg_3" ]
12
paper_summary The paper defines a CBMI metric over the NMT source and a target word (given the target history) and then uses it to re-weight the NMT training loss. The definition is simplified to the quotient of NMT probability and the LM probability. Experiments shows that the training strategy improves the translation quality, over two training datasets, outperforming previous works. The paper further shows the method also improves the human evaluation. summary_of_strengths - The proposed method appears to be simple, but works; -Paper appears to be well written; -Experiments comparison and analysis, human evaluation; Overall, paper did a good job in presenting and examining the effectiveness of a simple idea. summary_of_weaknesses I think the paper (and related works) presented the works in a way that they presented a hypothesis (eg, importance of token reweighing), then conduct experiments and analysis showing the effectiveness of the method, then saying re-weighing the token importance works. After finishing reading, I felt the need to go back go re-examine the hypothesis to understand more and realized that I still don't understand the problem in a machine learning sense. The authors are encouraged to (at least) post some "aha" examples showing re-weighting this way indeed is the one that matters. Also, discussing and revealing the reason why NMT still needs this re-weighting even though the NMT model can in principle implicitly capture them would be really helpful. comments,_suggestions_and_typos Please see the weakness section.
[ [ 482, 536 ], [ 537, 571 ], [ 628, 721 ], [ 1014, 1197 ], [ 1198, 1498 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1" ]
13
paper_summary This paper describes the development of a data set that can be used to develop a system that can generate automated feedback to learners' responses to short-answer questions. The data set includes questions, their answers, and their feedback in the domain of computer networking, mostly in English but with a sizable German subset as well. The paper describes the construction of the data set, including agreement and encountered challenges, as well as experimental results that can serve as a baseline for future work. summary_of_strengths Although the domain is niche, since the authors do an extremely thorough job of thoughtfully constructing their data set with expert annotators and guidance, agreement-measurement, and validity evidence, this paper should serve as a model to the community with respect to how to compile similar data sets. While the authors mention that the data set is small -- 4,519 response/feedback pairs covering 22 English and 8 German questions -- it's actually quite large for something that is completely human-compiled and human-reviewed. This paper is very clear, easy to follow, and well-organized. summary_of_weaknesses Unfortunately, the final data set contains imbalanced classes, something the authors aim to address in future versions of the data set. I wouldn't use this as a reason to reject this paper, however. Some in our community may find this work, and its domain, rather niche; this paper would be a great fit for the BEA workshop. comments,_suggestions_and_typos Can the authors mention the dates during which the data was collected? Since this was such a big manual effort, I wouldn't be surprised if the bulk of the work was done in 2021 on data collected in 2020, for instance. This is also important since the domain is computer networking which changes fairly rapidly. On line 005, insert "many" between "by" and "Automatic". On line 040, change "interesting" to "useful". On line 054, "in the last decades" should read "over the past decades". On line 154, "detrimental for" should be "detrimental to". The last sentence of section 2.2, beginning with "Lastly, structured collections..." seems out-of-place here. Should this be a separate paragraph? Or can you do more to tie it in with the preceding sentences? On line 395, "refined for multiple years" should be "refined over multiple years". In this field, it's typical to refer to learners' responses to questions as "responses" rather than "submissions". Just a minor thing you may want to consider :)
[ [ 558, 761 ], [ 762, 862 ], [ 1090, 1152 ], [ 1175, 1310 ], [ 1375, 1446 ], [ 1447, 1501 ] ]
[ "Jus_pos_1", "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Eval_neg_2", "Major_claim" ]
14
paper_summary This paper presents a cross-lingual information retrieval approach using knowledge distillation. The underlying model is ColBERT with XLM-R as the pretained language model. The approach makes use of a teacher model based on query translation and monolingual IR in English. The student model is trained with two objectives. One is an IR objective to match the teacher model's query-passage relevance predictions. The second objective is to learn a representation of the non-english text that most closely matches the teacher's representation at the token level. This relies on a cross lingual token alignment based on greedily aligning tokens with the highest cosine similarity. The authors do abalations of their two objectives and find they are both useful and also compare against fine-tuning ColBERT directly on cross lingual data. On the XOR-TyDi leaderboard, one of this paper's models is the current best. summary_of_strengths - Novel approach that does cross lingual IR where the resulting model does not use MT -New cross lingual token alignment based on multilingual pretrained langauge model -Good abalations and comparisons with fine-tuning on cross lingual data -Strong performance on zero-shot settings as well -The paper has best performance on XOR-TyDi summary_of_weaknesses No major weaknesses comments,_suggestions_and_typos line 62-64 asks whether a high performance CLIR model can be trained that can be operate without having to rely on MT. But the training process still relies on MT, so this approach does still rely on MT, right? I guess the point is that it only relies on MT at training time and not at evaluation / inference. It might be possible to try to make this clearer.
[ [ 950, 1033 ], [ 1036, 1117 ], [ 1119, 1189 ], [ 1191, 1239 ], [ 1307, 1326 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Major_claim" ]
15
paper_summary The paper investigates methods to automatically generate morally framed arguments (relying on a specific stance, on the given topic focusing on the given morals), and analyses the effect of these arguments on different audiences (namely, as liberals and conservatives). summary_of_strengths - The topic of the paper is potentially interesting to the ACL audience in general, and extremely interesting in particular to the Argument Mining (and debating technology) research community. Investigating methods to inject morals into argument generation systems to make arguments more effective and convincing is a very valuable step in the field (opening at the same time ethical issues). -The paper is clear, well written and nicely structured -The experimental setting is well described and the applied methods are technically sound. It relies on the solid framework of the IBM Debater technology. summary_of_weaknesses - very limited size of the user study (6 people in total, 3 liberals and 3 conservatives). Moreover, a "stereotypical" hypothesis of their political vision is somehow assumed) -the Cohen’s κ agreement was 0.32 on the moral assignment -> while the authors claim that this value is in line with other subjective argument-related annotations, I still think it is pretty low and I wonder about the reliability of such annotation. comments,_suggestions_and_typos [line 734] Ioana Hulpu? - > check reference
[ [ 308, 498 ], [ 499, 698 ], [ 700, 754 ], [ 756, 845 ], [ 846, 910 ], [ 935, 970 ], [ 972, 1023 ], [ 1025, 1109 ], [ 1274, 1358 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Jus_pos_4", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Eval_neg_3" ]
17
paper_summary This paper is about improving the prosody of neural text-to-speech (NTTS) systems using the surrounding context of a given input text. The study introduced an extension to a well known NTTS system i.e., FastSpeech-2. The extension is a phoneme level conditional VAE. As cited in the current paper both FastSpeech-2 and conditional VAE are already proposed in the literature. The main novelty of this paper is representation of surrounding utterances using a pre-trained BERT model and generation of prosodically varied samples with the help of learned contextual information. Authors followed standard TTS evaluation protocols to evaluate their proposed architecture, and evaluation results are in favor of the proposed architecture. summary_of_strengths - This paper introduced a new component to FastSpeech-2, a well known non-autoregressive NTTS architecture, called as cross utterance conditional VAE (CUC-VAE). -The CUC-VAE contains two main components 1) cross utterance (CU) embedding and 2) CU enhanced conditional VAE. summary_of_weaknesses - As a reviewer, I found the paper slightly difficult to read -- some long sentences can be rewritten to improve the clarity of the paper reading. -The subjective results are derived on a small set of utterances (11 audios) using a small number of listeners (23 subjects), this may not be substantial enough for statistical significance of the results published in the paper. -It is not clear why CUC-VAE TTS system with L=1 performed worse than baseline system -- an appropriate reason or further analysis may be required to validate this. -In general, there are quite a few things missing -- details provided in comments section. comments,_suggestions_and_typos **Typos:** -Background section: "...high fidelity thank to…" -> "...high fidelity thanks to…" -Background section: " … Fang et al., 2019).Many…" -> " … Fang et al., 2019). Many…" -Figure-1: "...which integrated to into…" -> "...which integrated into…" **Comments:** -Author did not mention how the initial durations of phonemes are obtained. -Are durations of phonemes predicted in frames or seconds? -Figure-1 did not mention how the proposed CUC-VAE TTS system works in the inference time. Moreover, it is hard to understand the color schema followed in the Figure-1, there is no legend. -There is no mentioning of train, valid and test set splits in the dataset section. -In Table-2 the baseline system received a better MOS score than the baseline + fine-grained VAE and baseline + CVAE, why is it? Whereas in Table-4 the baseline system show high MCD and FFE error than the baseline + fine-grained VAE and baseline + CVAE systems, why is it? -How do you represent the reference mel-spectrogram at phoneme level? -Did you use pre-trained HiFi-GAN to synthesize speech from the predicted mel-spectrograms?
[ [ 1070, 1129 ], [ 1130, 1214 ], [ 1216, 1340 ], [ 1342, 1444 ], [ 1446, 1530 ], [ 1531, 1609 ], [ 1612, 1660 ], [ 1661, 1701 ] ]
[ "Eval_neg_1", "Jus_neg_1", "Jus_neg_2", "Eval_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4" ]
18
paper_summary This paper proposes a novel refinement method to synchronously refine the previously generated words and generate the next word for language generation models. The authors accomplish this goal with an interesting implementation without introducing additional parameters. Specifically, the authors reuses the context vectors at previous decoding steps (i.e., c_1, c_2, ..., c_{i-2}) to calculate the refined probabilities in a similar way to the standard generation probabilities (the only difference is that using c_{0<n<i-1} instead of c_{i-1}). A refinement operation will be conducted at a previous position, where the refinement probability is greater than the generation probability. To reduce the computational cost and potential risk of "over-refinement", the authors design a local constraint that narrow the refinement span to the N nearest tokens. In model training, the authors randomly select future target words not greater than N to cover a variety of different future contexts as bleu parts. summary_of_strengths 1. A novel approach to accomplish the modeling of future context. 2. Comprehensive experiments to validate the effectiveness of the proposed approach across different tasks (e.g., standard and simultaneous machine translation, storytelling, and text summarization). 3. Detailed analyses to show how each component (e.g., the hyper parameter N, local constraints and refinement mask) works. summary_of_weaknesses The main concern is the measure of the inference speed. The authors claimed that "the search complexity of decoding with refinement as consistent as that of the original decoding with beam search" (line 202), and empirically validated that in Table 1 (i.e., #Speed2.). Even with local constraint, the model would conduct 5 (N=5) more softmax operations over the whole vocabulary (which is most time-consuming part in inference) to calculate the distribution of refinement probabilities for each target position. Why does such operations only marginally decrease the inference speed (e.g., form 3.7k to 3.5k tokens/sec for Transformer-base model)? How do we measure the inference speed? Do you follow Kasai et al., (2021) to measure inference speed when translating in mini-batches as large as the hardware allows. I guess you report the batch decoding speed since the number is relatively high. Please clarify the details and try to explain why the refinement model hardly affect the inference speed. The score will be increased if the authors can address the concern. [1] Jungo Kasai, Nikolaos Pappas, Hao Peng, James Cross, and Noah Smith. Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation. ICLR 2021. comments,_suggestions_and_typos 1. Line118: SelfAtt_c => Cross Attention, the attention network over the encoder representations is generally called as cross attention. 2. Ablation study in Section 4.1.3 should be conducted on validation sets instead of test sets (similar to Section 4.1.2). In addition, does the refinement mask in Table 2 denote that randomly selecting future target words no greater than N in model training (i.e., Line 254)? 3. Is PPL a commonly-used metric for storytelling?
[ [ 1046, 1108 ], [ 1113, 1216 ], [ 1218, 1308 ], [ 1314, 1358 ], [ 1359, 1427 ], [ 1428, 1433 ], [ 1458, 1513 ], [ 1514, 2458 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_pos_3", "Jus_pos_3", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1" ]
19
paper_summary The paper proposes 6 test corpora for vision and language captioning systems that target specific competency. For each competency, examples are generated semi-automatically from existing language + vision tasks, such QA in V7W, and are created in a FOIL style, where one example correctly describes the image, while another example makes a minimal change to caption and does not describe the image. Systems are then challenged to prefer captions that correctly identify the image. The competencies tested include existence, plurality/counting, spatial reasoning (via prepositions), situational knowledge (via imSitu data), and coreference. The paper evaluates several recent pre-training based models, finding that many fail at their challenges, and that the multi-task model 12-in-1, works best. summary_of_strengths Proposes a fairly diverse set of challenges that could be a useful diagnostic going forward. The paper evaluates currently relevant model on the diagnostic, establishing clear baselines for their dataset moving forward. Because the paper encompasses essentially 5 independent datasets, it a very substantial body of work. It seems larger than a standard paper. summary_of_weaknesses (being a previous reviewer R BWRg, I will respond to previously identified weakness) I still find the argument of what is and is not included in the diagnostic unclear. In many ways, this seems like a case of a subset of competencies that we have enough visual annotations to semi-automatically create data for. In my opinion, the paper should steer away from making arguments that these examples are deeply linguistic, beyond, involving nouns, counting, verbs, and coreference. As such, I find the title and some of the introduction over-claiming, but, this is really a matter of opinion, resting on what exactly 'linguistic' means. The main body of the paper still lacks examples but I appreciate their inclusion in the appendix. It's very hard to imagine the foils from the descriptions alone. This may be asking a lot, but the paper would be significantly improved if the last page were almost entirely made of examples from the appendix. This is a CVPR style of presentation, and would require significant text trimming. The examples were good overall, but the co-ref part of the benchmark stands out. It is essentially a QA task, which isn't really compatible with just caption based training that most of the evaluated most are setup to do (with the exception of 12-1). This isn't an issue, because its not really the benchmark's problem, but I am not sure the format of the foil is that sensible. I suspect this will be the least used of the new foils, but I don't have a concrete proposal how it could be improved to really be a captioning task. comments,_suggestions_and_typos -
[ [ 833, 925 ], [ 1054, 1118 ], [ 1120, 1195 ], [ 1303, 1385 ], [ 1386, 1851 ], [ 1852, 1949 ], [ 1950, 2243 ], [ 2245, 2325 ], [ 2326, 2774 ] ]
[ "Eval_pos_1", "Jus_pos_2", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3" ]
20
paper_summary Motivated by empirical findings that training models with Prompt Tuning can achieve the same performance as fully fine-tuning a model, but the training takes much longer to reach the same performance, they explore ways to exploit knowledge from already trained prompts. They explore using already trained prompts to transfer knowledge between tasks (using the same frozen model) and also the transfer of prompt between _different_ frozen models. For between task transfer, they either directly re-use a prompt from a source task on the target task or they use the prompt learned from the source task as the initialization point for the target task. For between model transfer, they uses these same methods but include a learnable `projector` (a small, 2 layer neural-network) that maps the prompts from one frozen model to another be using the projected prompt in one of the methods mentioned above. They have two methods for learning this `projector`. In the first method, which they call _Distance Minimization_, they minimize the $L_2$ distance between a projected source prompt (trained on the source frozen model) and a target prompt (a prompt trained on the same task using the target model). In the second method (_Task Tuning_) they learn the `projector` via backpropagation. In this case they take a prompt trained on a source task $P_s$, project it ($Proj(P_s)$) and then use that when prompt tuning the target model. Gradient updates are only applied to the projector. They also look at several methods of prompt similarity and use them to predict prompt transferability. They main methods are Cosine and Euclidean distances between prompt tokens and their novel model activation similarity where prompts are fed into frozen models and the activations of the feed-forward layers are recorded. The call this method _ON_. ### Results Their first results look at the performance of directly re-using a prompt trained on a source task for a downstream task. They find that this can produce strong performance (measured in relative performance, the direct source to target prompt transfer performance divided by the performance researched from directly training on the target task) within clusters of similar tasks. Their second results look at the performance of using a prompt learned on a source task to initialize the prompt for a target task and then doing Prompt Tuning. They find that this method can give consistent gains in terms of task performance as well as speed of convergence. Their third set results examine transfer across models. They find that direct re-use of a prompt projected by the `projector` learned via the _Distance Minimization_ method results in poor performance, especially within the Sentiment tasks. They find that direct reuse of a prompt projected by a `projector` learned with their _Task Tuning_ method does better especially when the tasks are within the same cluster. They also look at how using a _Task Tuning_ prompt to initialize training of a new prompt performs and finds that it can lead to some improvements in task performance and small improvements in convergence speed. Their final set of results examine use prompt similarity methods to predict prompt transferablity (in the context of direct prompt reuse). They find that all methods are able to distinguish between multiple prompts (created by training with different random seeds) trained for the same task from prompts trained for other tasks. They also find that _ON_ produces a ranking of similar prompts that best correlate with direct reuse performance (using Spearman's rank correlation scores). They also find that the correlation decreases as the size of the frozen model grows. summary_of_strengths The strengths of the paper include: * Experiments on many different and diverse datasets, 17 with a good mixture of sentiment, NLI, EJ, Paraphrase detection, and Question answers. * Experiments across many model sizes and architectures, including encoder-only models like RoBERTa instead of just the encoder-decoder and decoder-only models we see else where. * The inclusion of small motivating experiments like the convergence speed are a great way to establish the importance of the work and the impact it would have. * The use of the same methods (direct reuse of prompts and using prompts as initialization) in different settings (cross task transfer with the same model and cross model transfer with the same task) and similar results in each demonstrate the robustness of the method. * Not only does their novel prompt similarity method (_ON_ based on model activations when processing the prompt) work great at predicting direct use similarity, it also captures the non-linear way the model interacts with the prompt in a way that simple methods like token similarity can. summary_of_weaknesses The majority of the weaknesses in the paper seem to stem from confusion and inconsistencies between some of the prose and the results. 1. Figure 2, as it is, isn't totally convincing there is a gap in convergence times. The x-axis of the graph is time, when it would have been more convincing using steps. Without an efficient, factored attention for prompting implementation a la [He et al. (2022)](https://arxiv.org/abs/2110.04366) prompt tuning can cause slow downs from the increased sequence length. With time on the x-axis it is unclear if prompt tuning requires more steps or if each step just takes more time. Similarly, this work uses $0.001$ for the learning rate. This is a lot smaller than the suggested learning rate of $0.3$ in [Lester et al (2021)](https://aclanthology.org/2021.emnlp-main.243/), it would have been better to see if a larger learning rate would have closed this gap. Finally, this gap with finetuning is used as a motivating examples but the faster convergence times of things like their initialization strategy is never compared to finetuning. 2. Confusion around output space and label extraction. In the prose (and Appendix A.3) it is stated that labels are based on the predictions at `[MASK]` for RoBERTa Models and the T5 Decoder for generation. Scores in the paper, for example the random vector baseline for T5 in Table 2 suggest that the output space is restricted to only valid labels as a random vector of T5 generally produces nothing. Using this rank classification approach should be stated plainly as direct prompt reuse is unlikely to work for actual T5 generation. 3. The `laptop` and `restaurant` datasets don't seem to match their descriptions in the appendix. It is stated that they have 3 labels but their random vector performance is about 20% suggesting they actually have 5 labels? 4. Some relative performance numbers in Figure 3 are really surprising, things like $1$ for `MRPC` to `resturant` transfer seem far too low, `laptop` source to `laptop` target on T5 doesn't get 100, Are there errors in the figure or is where something going wrong with the datasets or implementation? 5. Prompt similarities are evaluated based on correlation with zero-shot performance for direct prompt transfer. Given that very few direct prompt transfers yield gain in performance, what is actually important when it comes to prompt transferability is how well the prompt works as an initialization and does that boost performance. Prompt similarity tracking zero-shot performance will be a good metric if that is in turn correlated with transfer performance. The numbers from Table 1 generally support that this as a good proxy method as 76% of datasets show small improvements when using the best zero-shot performing prompt as initialization when using T5 (although only 54% of datasets show improvement for RoBERTa). However Table 2 suggests that this zero-shot performance isn't well correlated with transfer performance. In only 38% of datasets does the best zero-shot prompt match the best prompt to use for transfer (And of these 5 successes 3 of them are based on using MNLI, a dataset well known for giving strong transfer results [(Phang et al., 2017)](https://arxiv.org/abs/1811.01088)). Given that zero-shot performance doesn't seem to be correlated with transfer performance (and that zero-shot transfer is relatively easy to compute) it seems like _ON_'s strong correlation would not be very useful in practice. 6. While recent enough that it is totally fair to call [Vu et al., (2021)](https://arxiv.org/abs/2110.07904) concurrent work, given the similarity of several approaches there should be a deeper discussion comparing the two works. Both the prompt transfer via initialization and the prompt similarity as a proxy for transferability are present in that work. Given the numerous differences (Vu et al transfer mostly focuses on large mixtures transferring to tasks and performance while this work focuses on task to task transfer with an eye towards speed. _ ON_ as an improvement over the Cosine similarities which are also present in Vu et al) it seems this section should be expanded considering how much overlap there is. 7. The majority of Model transfer results seem difficult to leverage. Compared to cross-task transfer, the gains are minimal and the convergence speed ups are small. Coupled with the extra time it takes to train the projector for _Task Tuning_ (which back propagation with the target model) it seems hard to imagine situations where this method is worth doing (that knowledge is useful). Similarly, the claim on line 109 that model transfer can significantly accelerate prompt tuning seems lie an over-claim. 8. Line 118 claims `embedding distances of prompts do not well indicate prompt transferability` but Table 4 shows that C$_{\text{average}}$ is not far behind _ON_. This claim seems over-reaching and should instead be something like "our novel method of measuring prompt similarity via model activations is better correlated with transfer performance than embedding distance based measures" comments,_suggestions_and_typos 1. Line 038: They state that GPT-3 showed extremely large LM can give remarkable improvements. I think it would be correct to have one of their later citations on continually developed LM as the one that showed that. GPT-3 mostly showed promise for Few-Shot evaluation, not that it get really good performance on downstream tasks. 2. Line 148: I think it would make sense to make a distinction between hard prompt work updates the frozen model (Schick and Schütez, etc) from ones that don't. 3. Line 153: I think it makes sense to include [_Learning How to Ask: Querying LMs with Mixtures of Soft Prompts_ (Qin and Eisner, 2021)](https://aclanthology.org/2021.naacl-main.410.pdf) in the citation list for work on soft prompts. 4. Figure 3: The coloring of the PI group makes the text very hard to read in Black and White. 5. Table 1: Including the fact that the prompt used for initialization is the one that performed best in direct transfer in the caption as well as the prose would make the table more self contained. 6. Table 2: Mentioning that the prompt used as cross model initialization is from _Task Tuning_ in the caption would make the table more self contained. 7. Line 512: It is mentioned that _ON_ has a drop when applied to T$5_{\text{XXL}}$ and it is suggested this has to do with redundancy as the models grow. I think this section could be improved by highlighting that the Cosine based metrics have a similar drop (suggesting this is a fact of the model rather than the fault of the _ON_ method). Similarly, Figure 4 shows the dropping correlation as the model grows. Pointing out the that the _ON_ correlation for RoBERTA$_{\text{large}}$ would fit the tend of correlation vs model size (being between T5 Base and T5 Large) also strengths the argument but showing it isn't an artifact of _ON_ working poorly on encoder-decoder models. I think this section should also be reordered to show that this drop is correlated with model size. Then the section can be ended with hypothesizing and limited exploration of model redundancy. 8. Figure 6. It would have been interesting to see how the unified label space worked for T5 rather than RoBERTAa as the generative nature of T5's decoding is probably more vulnerable to issue stemming from different labels. 9. _ ON_ could be pushed farther. An advantage of prompt tuning is that the prompt is transformed by the models attention based on the value of the prompt. Without having an input to the model, the prompts activations are most likely dissimilar to the kind of activations one would expect when actually using the prompt. 10. Line 074: This sentence is confusing. Perhaps something like "Thus" over "Hence only"? 11. Line 165: Remove "remedy,"
[ [ 3769, 3821 ], [ 3823, 3911 ], [ 3918, 3972 ], [ 3972, 4095 ], [ 4850, 4984 ], [ 4985, 9951 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_neg_1", "Jus_neg_1" ]
21
paper_summary This paper focuses on using bandit learning to learn from user feedback for Extractive QA (EQA), the binary supervisory signals from user feedback serve as rewards pushing QA systems to evolve. The learning algorithm aims to maximise the rewards of all QA examples, which consists of online learning and offline learning, the online learning receives user feedback and updates model parameters after seeing one QA example, whereas offline learning updates model parameters after seeing all QA examples. The experimental results on QA datasets from MRQA support the effectiveness of the proposed bandit learning approach, proving that the proposed approach can consistently improve model’s performance on SQuAD, HotpotQA and NQ in in-domain experiments under online learning especially when there are extremely little QA examples available for SQuAD. Besides, a set of experiments are conducted to investigate the difference between online learning and offline learning, and the importance of model initialisation in the proposed bandit learning approach. summary_of_strengths 1. The proposed bandit learning approach that learns from user feedback for EQA is novel, which simulates real deployment environment and provides insights for further exploration in bridging the gap between QA model training and deployment. 2. Empirical results show the effectiveness of the proposed approach, especially the in-domain experimental results for online learning. 3. Conducting extensive experiments studying the effect of domain transfer and model initialisation. summary_of_weaknesses 1. The binary reward from user feedback is weak due to the large search space for EQA, resulting in the incapability of providing precise supervisory signals. Need to design a more sophisticated reward. 2. The proposed approach heavily relies on how accurate the initial model is, which means it is highly sensitive to model initialisation, limiting its usefullness. 3. In in-domain experiments of online and offline learning, bandit learning approach hurts model’s performance under some scenarios especially for TriviaQA and SearchQA. 4. Some other papers of learning from feedback for QA should be compared, such as Learning by Asking Questions, Misra et al. CVPR 2017. comments,_suggestions_and_typos Questions: 1. Why only use single-pass in online learning?
[ [ 1095, 1334 ], [ 1338, 1471 ], [ 1476, 1574 ], [ 1600, 1755 ], [ 1756, 1799 ], [ 1804, 1938 ], [ 1939, 1963 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_neg_1", "Eval_neg_1", "Jus_neg_2", "Eval_neg_2" ]
22
paper_summary This paper works on the problem of personalization in knowledge grounded conversation (KGC). To develop a benchmark, the authors collected a new KGC dataset based on Reddit containing personalized information (e.g. user profile and dialogue history). The authors propose a probabilistic model for utterance generation conditioned on both personalized profile (personal memory) and personal knowledge. Dual learning is employed to better learn the unconstrained relation between personal memory $Z^m$ and knowledge $Z^k$ , and variational method is proposed to approximately marginalize out $Z^m$ and $Z^k$ during inference. The results with automatic evaluation show promising improvement and human evaluation also validates this. Finally, various ablation studies are conducted to reveal the contribution of each model component. summary_of_strengths - The problem of personalization in KGC is a relatively overlooked yet important problem. The authors developed a promising method and benchmark for this new challenge. - The idea of incorporating dual learning to link personalized sources (e.g. personal memory and knowledge) is very interesting and convincing. I’d like to see follow-up works comparing the ideas against this paper’s. - The improvement in automatic evaluation is significant (though not fully reliable, as the author’s acknowledge in line 522). Human evaluation also corroborates the proposed model’s superiority, though the improvement becomes less significant. summary_of_weaknesses - The paper is generally well-written and easy to follow, but the definition of personal memory was quite ambiguous and not fully defined. For instance, does this concept include false beliefs (incorrect knowledge), subjective opinions (unsupported knowledge) or inferential knowledge? What would be the unit of personal memory in the context of visually grounded dialogues (line 134)? How can we extend the idea to inter-personal knowledge, i.e. common ground? - I understand the space is limited, but I think more information/explanation on the collected dataset should be added (e.g. data collecting procedure and reviewing process). comments,_suggestions_and_typos - In lines 198-220, explanation of $\phi$, $\psi$ and $\pi$ is not clear. Can they be better explained or incorporated in Figure 2? - In Figure 2, should the distilled distribution of $Z^p$ not be conditioned on $Z^k$? In the text, $q_\phi (Z^p | C, R)$ is not conditioned on $Z^k$ (lines 199, 207) - Typo: “the the” in line 278 - For Table 3, did you also evaluate against human answers (e.g. original response)? If available, it may be better to be incorporated. - What exactly is personal memory? How is this defined, esp. in other domains? I’d like to see more discussion on this in the updated paper.
[ [ 957, 1034 ], [ 1038, 1179 ], [ 1256, 1498 ], [ 1584, 1659 ], [ 1661, 1983 ], [ 2025, 2102 ], [ 2103, 2157 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2" ]
24
paper_summary This paper proposes a solution for "Contrastive Conflicts". What exactly are “Contrastive Conflicts”? They occur when multiple questions are derived from a passage, each with different semantics. The questions are going to be close to the passage in representation space and by transitivity they are going to be close among themselves even though they are semantically different (Transitivity Conflict). In addition to this, if multiple questions derived from the same passage are in the same training batch, then the questions will see that passage as both positive and negative (In-Batch Conflict). The solution proposed by the paper is to use smaller granularity units, i.e. contextualized sentences. Per sentence representations are computed by using per sentence special indicator tokens, then a similar approach to DPR is used to finetune sentence representations. Because different questions have answers in different sentences the contrastive conflict is generally resolved. Improvements are reported on NQ, TriviaQA and SQuAD, especially on SQuAD where conflicts are reported to be severe (i.e. often multiple different questions are extracted from the same passage). Extensive experiments show that the method does well even in transfer learning. summary_of_strengths Strengths: -The paper obtains small but convincing improvements on NQ and TriviaQA, and large but a bit puzzling results on SQuAD (considering that one of the baselines does not match the DPR paper and that SQuAD can benefit dramatically from combining DPR with BM25, but it is not done in this paper). -The paper presents many interesting ablations and transfer learning experiments that help further convince the reader of the efficacy of the method. summary_of_weaknesses Weaknesses: -Retrieving (avg # sentences) * 100 sentences (see section 3.3) instead of just 100 sentences seems to be a bit of a cheat. For a strict comparison to DPR, Top-20 and Top-100 performance should be reported with exactly those numbers of retrieved elements and without post-processing on larger sets of retrieved passages. One could argue that allowing for more expensive passage retrieval is what is giving the improvements in this paper, other than for SQuAD where the lower granularity does seem to be helping, except it doesn’t help as much as BM25. -The idea of having more granular representations for passage retrieval is far from new. The authors do cite DensePhrases (Lee et al. 2021), but don’t mention that it’s already at lower granularity than passage level. They could also cite for example ColBERT (Khattab et al. 2021). -The big improvement reported in Table 2 for SQuAD “Single” is a bit confusing since it relies on a Top-20 number that is much lower that what is reported on the DPR paper (although this seems to be a common problem). On the positive side, the number reported for SQuAD “Multi” matches the DPR paper. comments,_suggestions_and_typos Suggestions: -Line 91: the authors claim that contrastive conflicts are *the* cause for bad performance on SQuAD, but the statement seems unjustified at that point. It might make sense to refer to later results in the paper.
[ [ 1305, 1422 ], [ 1597, 1746 ], [ 1781, 1904 ], [ 1905, 2332 ], [ 2334, 2421 ], [ 2422, 2614 ], [ 2617, 2693 ], [ 2694, 2832 ], [ 2963, 3113 ], [ 3114, 3174 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4" ]
26
paper_summary The paper presents a novel approach to understanding math problems from their textual formulations. The approach builds on those from related work, choosing syntactic representations. The key novelties are (1) an internal graph representation of the operators and (2) a novel pretraining setting. The model achieves vast improvements over prior art. summary_of_strengths The new model addresses several key problems of previous work and appears to contribute a very logically motivated extension, modeling the structure of the required mathematical operations. The model description is clear and the experimental setup and results are reasonably clear and allow for an easy comparison with related work. There is also an ablation study to analyze the contribution of the individual components of the model. The paper is easy to read. summary_of_weaknesses The model section seems to lack comparison with prior work. It is not entirely clear what is novel here and what is taken from prior work. It is also not entirely clear to me if pretraining is performed with data from all tasks and whether the same setup had been used previously. If this is different from prior work, that would be unfair and a major flaw. comments,_suggestions_and_typos I'd like to see my doubt about the pretraining cleared up.
[ [ 311, 364 ], [ 386, 575 ], [ 576, 606 ], [ 611, 719 ], [ 823, 848 ], [ 873, 932 ], [ 933, 1231 ] ]
[ "Eval_pos_8", "Eval_pos_7", "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1" ]
29
paper_summary See the prior review for a summary. Based upon the author response I do raise my score slightly from 2.5 to 3.0 to reflect that the definitions referenced in the author response might be sufficient for a target audience that is intimately familiar with WSD. On the other hand, it remains open as to what the impact of the proposed approach would be on any of the noted downstream applications, or beyond English. While WSD can be considered part of the traditional NLP preprocessing pipeline, it's impact on modern end-to-end solution is likely small. Nevertheless there might be high-impact cases such as token-based retrieval (which is used widely), and investigating the impact of the proposed approach on such applications might provide a convincing data point that can provide evidence for the impact of the proposed work. summary_of_strengths See the prior review. summary_of_weaknesses See the prior review. comments,_suggestions_and_typos See the prior review.
[]
[]
31
paper_summary The paper describes a new approach towards MeSH label prediction, utilizing the title abstract journal relative information. The proposed model combines BiLSTMs, Dilated CNNs and GCNNs to extract features from abstracts, titles and the mesh term hierarchy respectively. Limiting the search MeSH space with information extraction from metadata (such as other articles published in that journal) allows for a boost in performance by building dynamic attention masks. The final model shows good performance compared to related approaches, one of which uses the full article. summary_of_strengths - Utilized information past the document itself to limit the MeSH search space -Introduces novel end-to-end architecture that can be used in other tasks involving scholarly articles -Achieves good performance compared to related approaches. summary_of_weaknesses - Threshold is said to have a very big impact but is not discussed in detail with different ablations. How does threshold affect computational complexity (outside of performance)? -Some of the design choices are not explained well (e.g. why IDF-weighting) -Training time (epochs) and computational complexity of the kNN and GCNN component is not discussed. comments,_suggestions_and_typos - Equations 10 & 11 should be H_{abstract} instead of D_{abstract}? If not, when is H_{abstract} used? -There is a significant drop in performance for MeSH terms when metadata are not available, leading to a worse performance than other methods (Ablations-d). In case of new journals or preprints, is this the expected performance? -With the tuned threshold, how many MeSH terms are not selected during the dynamic masking on average in the different data splits? What is the hierarchical level of these terms? -A few minor typos, proof reading should fix them. Nothing major.
[ [ 688, 789 ], [ 791, 847 ], [ 875, 974 ], [ 974, 1052 ], [ 1054, 1103 ], [ 1104, 1230 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2" ]
34
paper_summary The performance of structured prediction models can be greatly improved by scaling to larger state spaces, yet the inference complexity of these models scales poorly w.r.t. the size of the state space. The goal of this work is to reduce the inference complexity of structured models by factorizing the clique potentials using low-rank tensor decompositions and performing message passing in an induced rank space instead of the original state space. This work makes three contributions: 1. Using the language of factor graph grammars, this work unifies previous low-rank tensor decomposition works such as Yang et al 2021b and Chiu et al 2021. This work shows that those works are essentially performing message passing on a factor graph with two types of nodes: the original state nodes and auxiliary rank nodes induced by the low-rank tensor decomposition. 2. On a sub-family of factor graph grammars which subsume most commonly-used structured prediction models such as HMMs, HSMMs, and PCFGs, this work proposes to marginalize the state nodes first and only perform inference in the induced rank nodes, which reduces the complexity by replacing a factor of the state size by a factor of the rank size which is usually smaller. 3. Empirically this work scales HMMs and PCFGs to very large state spaces and achieves strong performance. summary_of_strengths 1. This work is insightful in pointing out that by performing message passing only in the rank space after marginalizing the original state nodes (which is a one-time cost), a factor of the number of states in the total complexity can be replaced by a factor of the rank size. This idea is generally applicable to a large family of factor graph grammars that have one external node per hypergraph fragment, and it might enable scaling many structured prediction models. 2. This work gets strong empirical performance by scaling to very large state spaces when compared to previous structured prediction works. In particular, this work trains the largest-ever PCFG in the task of unsupervised parsing on PTB (to my knowledge) and establishes a new state-of-the-art performance in this particular task. 3. This work confirms findings of previous works such as Chiu and Rush 2020 that scaling structured prediction models can improve performance. For example, Figure 6 (b) suggests that scaling PCFGs to beyond 10k pre-terminals might further improve modeling performance. summary_of_weaknesses By showing that there is an equivalent graph in the rank space on which message passing is equivalent to message passing in the original joint state and rank space, this work exposes the fact that these large structured prediction models with fully decomposable clique potentials (Chiu et al 2021 being an exception) are equivalent to a smaller structured prediction model (albeit with over-parameterized clique potentials). For example, looking at Figure 5 (c), the original HMM is equivalent to a smaller MRF with state size being the rank size (which is the reason why inference complexity does not depend on the original number of states at all after calculating the equivalent transition and emission matrices). One naturally wonders why not simply train a smaller HMM, and where does the performance gain of this paper come from in Table 3. As another example, looking at Figure 4 (a), the original PCFG is equivalent to a smaller PCFG (with fully decomposable potentials) with state size being the rank size. This smaller PCFG is over-parameterized though, e.g., its potential $H\in \mathcal{R}^{r \times r}$ is parameterized as $V U^T$ where $U,V\in \mathcal{R}^{r \times m}$ and $r < m$, instead of directly being parameterized as a learned matrix of $\mathcal{R}^{r \times r}$. That being said, I don't consider this a problem introduced by this paper since this should be a problem of many previous works as well, and it seems an intriguing question why large state spaces help despite the existence of these equivalent small models. Is it similar to why overparameterizing in neural models help? Is there an equivalent form of the lottery ticket hypothesis here? comments,_suggestions_and_typos In regard to weakness #1, I think this work would be strengthened by adding the following baselines: 1. For each PCFG with rank r, add a baseline smaller PCFG with state size being r, but where $H, I, J, K, L$ are directly parameterized as learned matrices of $\mathcal{R}^{r \times r}$, $\mathcal{R}^{r \times o}$, $\mathcal{R}^{r}$, etc. Under this setting, parsing F-1 might not be directly comparable, but perplexity can still be compared. 2. For each HMM with rank r, add a baseline smaller HMM with state size being r.
[ [ 1380, 1403 ], [ 1404, 1653 ], [ 1851, 1987 ], [ 1988, 2179 ], [ 4207, 4280 ], [ 4282, 4707 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_neg_1", "Jus_neg_1" ]
37
paper_summary In this work, the authors proposed a unified model of task-oriented dialogue understanding and response generation. The two major enhancements are adopting task-oriented dialogue pre-training on a data collection, and introducing the prompt-based learning for the multi-task capability via one model. From the experimental results, the pre-training strategy proved useful to improve the performance on the benchmark MultiWOZ. summary_of_strengths While the idea of task-specific pre-training is not new, it is still interesting, and the proposed method proved effective in leveraging the language backbone T5, and can be potentially applied to other models and tasks. summary_of_weaknesses 1. There are some other contemporary state-of-the-art models, the authors can consider citing and including them for an extensive comparison. 2. It will be good to see some analysis and insights on different combinations of pre-training datasets introduced in Table 1. comments,_suggestions_and_typos Here are some questions: 1. Since some of the sub-tasks, like dialogue state tracking, require a fixed format of the output, if the model generation is incomplete or in an incorrect format, how can we tackle this issue? 2. The dialogue multi-task pre-training introduced in this work is quite different from the original language modeling (LM) pre-training scheme of backbones like T5. Thus I was curious about why not pre-train the language backbone on the dialogue samples first with the LM scheme, then conduct the multi-task pre-training? Will this bring some further improvement? 3. It will be good to see some results and analysis on the lengthy dialogue samples. For instance, will the performance drop on the lengthy dialogues?
[ [ 462, 542 ], [ 548, 681 ], [ 709, 848 ], [ 851, 976 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Eval_neg_2" ]
38
paper_summary The paper studies the benefits of introducing a Bayesian perspective to abstractive summarization. The authors run the MC dropout method on two pre-trained summarization models, sampling different summarization texts according to specific dropout filters. They use BLEUVarN as a metric of uncertainty for a possible summarization, showing the variations across the summary samples. The authors conduct experiments on three datasets on the correlation between the uncertainty and summarization performances, and show that the performance of the summarization can slightly improve by selecting the "median" summary across the pool of sampled ones. summary_of_strengths - To the extent of my knowledge, it is the first work that study model uncertainty (in the particular form of the variability of generated summaries) in abstractive summarization. - The paper provides an analyses on three collections, showing the (cor)relations between the metric of summarization uncertainty (or in fact summarization variability) and ROUGH. They observe that in general the higher the uncertainty score of a summary, the lower its ROUGH score. - The work shows that the performance of summarization can be slightly improved by selecting the summary that lays in the "centroid" of the pool of generated summaries. summary_of_weaknesses My main concerns are the lack of novelty and proper comparison with a previous study. - As correctly mentioned in the paper, the work of Xu et al. is not based on MC dropout. However, that work still provide a metric of uncertainty over a generated summary. In fact, the metric of Xu et al. (namely the entropy of the generation distributions) comes with no or little extra computational costs, while the MC dropout of 10 or 20 introduces considerably large feedforward overheads. I believe the method of Xu et al. can be compared against in the experiments of 5.1. This can let the reader know whether the extra cost of MC dropout method comes with considerable benefits. - There is no specific novelty in the method. The observation regarding the correlation between uncertainty and performance is in fact an expected one, and has already observed in several previous studies (also in the context of language generation), like: Not All Relevance Scores are Equal: Efficient Uncertainty and Calibration Modeling for Deep Retrieval Models Daniel Cohen, Bhaskar Mitra, Oleg Lesota, Navid Rekabsaz, Carsten Eickhoff In proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021)- - The reported improvement is marginal, while achieved with the large overhead of MC sampling. My guess is that the improvement is only due to the effect of ensembling, inherent in MC droupout. comments,_suggestions_and_typos As mentioned above: - I believe the method of Xu et al. can be compared against in the experiments of 5.1. This can let the reader know whether the extra cost of MC dropout method comes with considerable benefits. - More evidences regarding the performance improvement, showing that it is not only due to the effect of ensembling. - Studying more efficient and recent Bayesian approaches, such as: Agustinus Kristiadi, Matthias Hein, and Philipp Hennig. 2020. Being Bayesian, Even Just a Bit, Fixes Overconfidence in ReLU Networks. In Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 119), Hal Daumé III and Aarti Singh (Eds.). PMLR
[ [ 684, 861 ], [ 1338, 1423 ], [ 1424, 2010 ], [ 2011, 2057 ], [ 2057, 2579 ], [ 2581, 2675 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3" ]
39
paper_summary This work has created a benchmark dataset for multi-task learning for biomedical datasets. Based on this new benchmark dataset, this work has proposed instruction learning based multi-task learning, which has shown to outperform single-task learning as well as vallina multi-task learning. summary_of_strengths 1. This work has newly aggregated more than 20 biomedical datasets in 9 categories into a new multi-task paradigm and formalize them into a text to text format so that we can build one unified model for all different tasks. 2. This work has proposed using manually created instructions for multi-task learning so that the model can be instructed to perform each task without confusion. And this method has been shown to outperform a lot the vanilla multi-task learning and also outperform single-task learning in some cases. summary_of_weaknesses 1. In the proposed method, the BI would be concatenated with instances as the input to the BART model, and in the BI, examples are provided. Actually these examples are extracted from those instances, then why should we still have examples in BI? How about just having those instructions in the BI? 2. One important baseline is missing: in those methods proposed for DecaNLP and UnifiedQA, etc., other types of tokens or phrases are used to indicate which task/dataset each input instance belongs to, which is very important to let the model know what the input instance it is. However, in the baseline of vanilla multi-task learning (V-BB), no such kinds of special tokens are used at all, which forms a very unfair baseline to be compared with. The model are fed by so many instances from various kinds of tasks without any differentiation, which for sure would lead to deteriorate performance. For this reason, the effectiveness or the necessity of BI is questionable. 3. More deep analysis over the impacts of different kinds of designs of the BI is needed, since such designs can vary a lot among different designers or writers. If so, the performance would be very unstable due to the variance of BI, which makes this type of method not applicable to real-world problems. 4. Only Rouge-L is used for evaluation, which makes the evaluation not that reliable. Especially for some classification tasks, Rouge-L is not sensitive enough. comments,_suggestions_and_typos 1. In lines 382-384, it is mentioned that "We have discarded long samples (>1024 token length) from validation and testing data as well.". I think it is not appropriate to throw any examples from the test set.
[ [ 1178, 1212 ], [ 1213, 1622 ], [ 1623, 1772 ], [ 1773, 1848 ], [ 1852, 1938 ], [ 1939, 2010 ], [ 2159, 2242 ], [ 2242, 2317 ] ]
[ "Eval_neg_1", "Eval_neg_1", "Jus_neg_2", "Eval_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4" ]
41
paper_summary This paper propose prefix-based models for controllable text generation. Similar to [1], prefixes are token embeddings of language models (e.g., GPT-2) used for learning attribute-specific information and steering the generation of the fixed language models. The authors further add a contrastive loss to enhance the models' controllability. In addition, an unsupervised learning method is introduced to handle scenarios where labels are not available. The authors evaluated the proposed models on multiple controllable text generation tasks, such as controlling sentiment and topics. The experimental results show that comparing to baselines like PPLM and GeDi, the proposed model can achieve a good balance between fluency and controllability. [1] Prefix-tuning: Optimizing continuous prompts for generation. ACL 2021 summary_of_strengths - The proposed lightweight model achieved strong performance in multiple controllable text generation tasks. -The idea of controlling language models in unsupervised way is interesting and new. summary_of_weaknesses - Missing human evaluation for the proposed unsupervised learning method. The major technical contribution (novelty) of the paper is controlling language models in unsupervised manner. Unfortunately, human evaluation is absent (in table 4) to demonstrate its effectiveness. -For the multi-aspect controlling experiments, CTRL[1] and PPLM[2] should be good baselines. [1] CTRL: A conditional transformer language model for controllable generation. [2] Plug and play language models: A simple approach to controlled text generation. ICLR 2020 comments,_suggestions_and_typos Please consider adding new human evaluation results and baselines as mentioned in weaknesses.
[ [ 858, 964 ], [ 965, 1050 ], [ 1075, 1146 ], [ 1147, 1258 ], [ 1258, 1346 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_1" ]
42
paper_summary The paper provides a benchmark dataset that can be used for training & evaluation of automated fact checking systems. The major contribution of this paper is that they provide a large collection of 33,697 claims with associated review articles and premise articles. In the experiment, this work presents a two-stage detection framework, including evidence sentence extraction and claim veracity inference. LSTM-based baselines and RoBERTa-based baselines are included and compared. summary_of_strengths 1. The idea of using premises articles for claim inference in automated fact checking is interesting. 2. The paper is overall well-structured and the methods are explained clearly. summary_of_weaknesses 1. The methods are not novel as they are largely borrowing from existing work. 2. It would be nice to have more detailed descriptions of the data collection process, e.g., label mapping, and data statistics, (how many articles per claims? how many sentences per articles? sentence length?) If not enough space on the main text, these information could be added on appendix. 3. It would be better if the authors evaluate more state-of-the-art methods on this benchmark dataset. 4. In section 3.3, the authors claim that the disadvantage of using web search is indirect data leak. Can we eliminate the data leak through filtering with publishing time. comments,_suggestions_and_typos 1. The prequential evaluation is well-written. It would be interesting to see more such analysis and discussion of the datasets. 2. Did you try the combination of TF-IDF and dense retrieval for evidence sentence extraction? 3. As your dataset is imbalanced, it would be better to see some analysis of the outputs.
[ [ 522, 621 ], [ 625, 701 ], [ 727, 803 ], [ 807, 891 ], [ 891, 1099 ], [ 1413, 1457 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_pos_3" ]
43
paper_summary *Note: I reviewed this paper in an earlier ARR cycle. There are no changes in the updated version that warrant a change in my score or the review. I’ve updated a summary of weaknesses to reflect the updates, and have listed a few suggestions on grammar.* This work presents a method (X-GEAR) for zero-shot, cross-lingual event argument extraction. X-GEAR takes as input i) a passage, ii) a trigger word (a predicate, e.g., "killed"), and iii) a template indicating the desired roles (e.g., <Victim>NONE</Victim>). The output is the template filled with event arguments extracted from the passage (e.g., NONE might be replaced with civilian). X-GEAR is built using the standard Seq2seq framework with a copy mechanism, where the input is composed of the triplet (passage, template, trigger word) flattened as a sequence, and the output is the template filled with desired roles. The method relies on recent advances in large, multilingual pre-trained language models (PTLM) such as MT5, which have been shown to perform robust cross-lingual reasoning. The key insight of the method is to use language-agnostic special tokens (e.g., <Victim>) for the template. Fine-tuning on the source language helps learn meaningful representations for templates, which allows their approach to work across target languages supported by the PTLM. summary_of_strengths - The paper presents a simple but intuitive method for solving an important problem. The simplicity of the proposed method is a significant strength of this work. As the authors note, existing systems that perform structured extraction often rely on a pipeline of sub-modules. X-GEAR replaces that with a simple Seq2seq framework which is considerably easier to use and maintain. - The proposed method is clearly defined, the experiments are thorough and show considerable gains over the baselines. - The analysis provides several insights into the strengths and weaknesses of the proposed approach. summary_of_weaknesses The authors have addressed some of the weaknesses highlighted in the previous review. However, it would be great if the weakness of the proposed approach is also highlighted in the future version. Specifically, the method is not *truly* zero-shot as it can only work in cases where a PLTM for the target languages is available. I believe that this is an important point and should be highlighted in conclusion or related work. comments,_suggestions_and_typos - L100 “Zero-shot cross-lingual learning *is an” -L104: Various structured prediction tasks have *been studied, -The footnote markers should be placed after the punctuation mark (e.g., L557).
[ [ 1369, 1530 ], [ 1530, 1746 ], [ 1748, 1787 ], [ 1789, 1865 ], [ 1868, 1967 ], [ 2085, 2186 ], [ 2187, 2317 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_neg_1", "Jus_neg_1" ]
45
In the present paper, the authors describe the results of a quantitative analysis of various genres in terms of coreference. They analyse a number of coreference-related features and compare the genres from the point of view of their distribution. The aim is to find the differences between spoken and written genres. The work is interesting and useful, as the number of studies of coreference in spoken texts is limited. The paper address a number of important issues in both coreference analysis (e.g. how should the distance be measured) and the analysis of spoken language. As the authors use a number of existing resources, they also assure the comparability of the categories used. Here, it would be interesting to know, if there were any problems, e.g. if there were still some incompatible categories that the authors came across. Specific comments I like the discussion about the variation in distance measured by different means at the beginning of section 2. Specifically, in a cross-lingual task, token-based measure is a problem. However, there could be differences across studies using various metrics. If measured in sentences or clauses, the distance may vary depending on a genre, if there is a variation in sentence length in terms of words (in spoken texts, there could be shorter sentences, etc.). The question is, if the distance should be measured in characters, but I believe that the decision depends on the conceptual background on what one wants to find out. Another point in Section 2 in the discussion of the diverging results could be variation within spoken and written texts various authors use in their analysis. There could be further dimensions that have an impact on the choice of referring expressions in a language, e.g. narrativeness, if there are dialogues or monologues, etc. Concerning related work, Kunz et al. (2017) point out that coreference devices (especially personal pronouns) and some features of coreference chains (chain length and number of chains) contribute to the distinction between written and spoken registers. There are several works concerned with lexicogrammar suggesting that distinction between written vs. spoken, and also between formal vs. colloquial are weak in English (Mair 2006: 183). Table 1: the statistics on different parts of OntoNotes and the total number in OntoNotes are given in one table in the same column formatting, which is slightly misleading. 4.1: large NP spans vs. shot NP spans – sometimes only heads of nouns or full NPs are considered. References to examples: 1→ (1), etc. Personal pronouns: 1st and 2nd person pronouns are not considered in the analysis of coreference in some frameworks. The authors should verify which cases they include into their analysis. The finding about NPs being more dominant is not surprising (and was also expected by the authors) and has also something to do with the fact that spoken texts reveal a reduced information density if compared to the written ones. The discussion about the results on spoken vs. written is good and important. Even within written text, there could be a continuum, e.g. political speeches, which are written to be spoken or fictional texts that contain dialogues (as the authors point out themselves), could be closer to further spoken texts. At the same time, academic speeches or TED talks that contain less interaction with the audience (depending on a speaker’s style) could be closer to written texts, also in terms of referring expressions – we would expect them to contain more NPs, and probably complex Nps describing some notions. Overall, it is interesting to know if there are more dimensions than just the difference between spoken and written in the data, e.g. narrativeness (narrative vs. non-narrative) or dialogicity (dialogic vs. monologic), etc. In fact, genre classification can and should be sometimes more fine-grained than just drawing a rigid line between texts that are considered to be spoken and those that are considered to be written. Textual problems: Page 2, Section 2: interfering mentions. - These → There are some typographical problems in the text. Page 3, Section 3: I am not sure if the abbreviation Sct. is allowed by the Coling style. In the reference list, the authors should check spelling of the some entries, e.g. english→ English in Berfin et al. (2019), Zeldes (2018). There is an empty space in Godfrey et al. (1992). Cited references: Kunz, Kerstin and Degaetano-Ortlieb, Stefania and Lapshinova-Koltunski, Ekaterina and Menzel, Katrin and Steiner, Erich (2017). GECCo -- an empirically-based comparison of English-German cohesion. In De Sutter, Gert and Lefer, Marie-Aude and Delaere, Isabelle (eds.), Empirical Translation Studies: New Methodological and Theoretical Traditions. Mouton de Gruyter, pages 265–312. @BOOK{Mair2006, title = {Twentieth-Century English: History, Variation and Standardization}, publisher = {Cambridge University Press}, year = {2006}, author = {Mair, Christian}, address = {Cambridge} }
[ [ 318, 352 ], [ 354, 421 ], [ 857, 969 ], [ 970, 1043 ], [ 2257, 2400 ], [ 2401, 2431 ], [ 2989, 3067 ], [ 3067, 3595 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Jus_pos_2", "Jus_neg_1", "Eval_neg_1", "Eval_pos_3", "Jus_pos_3" ]
46
Summary - The paper studies the problem of under-translation common in auto-regressive neural machine translation. - Two main pieces are introduced in this research work, random noise to the length constraint, output length prediction using BERT. - The English-Japanese ASPEC dataset is used to evaluate the contribution of the two proposed improvements. - A stronger or similar performance is shown for all the 4 length language groups using the proposed approach. Especially in the shorted range, the authors show more than 3 points improvement over the vanilla transformer. - An interesting insight I got for the long sentences, vanilla transformer tend to produce shorter length sentences. The proposed approach generated translation close to the gold reference length, atleast for the dataset in use. Strenghts - Ablation is performed for both the new component, random noise and BERT based output length prediction. - Strong BLEU score for short sentence range and relatively close to gold reference length compared to the vanilla transformer. Concerns - The work of Lakew et al, uses English-Italian and English-German datasets for evaluation. These datasets should be used to have a consistent evaluation with the past work. - Following from the last one, any specific reason why the English-Japanese dataset is a better choice for your proposed methods? Perhaps you can **motivate on linguistic grounds** why the language Japanese is a better testing ground for your method. - Including an extra BERT-based-output-length prediction can incur additional computational overhead. The overhead of this computation should be stated in the work. - In the introduction, you mention __However, the input sentence length is not a good estimator of the output length.__. I'm not sure why is this the case.
[]
[]
47
Overviews: This paper focuses on Abusive Language Detection (ALD) and proposes a generic ALD model MACAS with multi-aspect embeddings for generalised characteristics of several types of ALD tasks across some domains. Strengths: The motivation of this paper is clear, i.e., to solve the problem that "What would be the best generic ALD ...", as described at the beginning of paragraph 2, section 1. The generic abusive language typology is categorised as two aspects, i.e., target aspect and content aspect, and multi-aspect embedding layer considers embeddings of both target and content, followed by cross-attention gate flow to refine four types of embeddings. The proposed model outperforms baselines on all the seven datasets. Detailed ablation studies have been given in section 5 as well. Weaknesses: For the structure of the paper, section 4 can be integrated into section 5 as a sub-section. the description of baselines in section 4 is too detailed, which should be refined and shortened appropriately. The paragraph 3, section 5.1 can be titled as a sub-section called "case study" since this paragraph analyzes some prediction examples in Table 3.
[ [ 227, 266 ], [ 267, 397 ], [ 900, 958 ], [ 959, 1011 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_neg_1", "Jus_neg_1" ]
48
This work built a fake news prediction model using both news and user representation from user-generated texts. Experimental results showed that the user text information contributed to predicting the fake. Moreover, the paper showed linguistic analysis to show typical expressions by users in real and fake news. Cosine similarities between users are calculated using proposed user vectors to confirm the echo chamber effect. Introducing vectors of news spreading users sounds an interesting idea. The paper's investigation, the user vector made from linguistic features contribute, is interesting and important. The results of active topics by users for both real and fake is also impressive. There are some ways to build user vectors not only from their timeline and profiles but also from their tweets itself (e.g., Persona chat model). Does the proposed method have a clear advantage to such models?
[ [ 430, 501 ], [ 502, 616 ], [ 617, 698 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3" ]
49
In this paper, the authors argue that using the topmost encoder output alone is problematic or suboptimal to neural machine translation. They propose multi-view learning, where the topmost encoding layer is regarded as the primary view and one intermediate encoding layer is used as an auxiliary view. Both views, as encoder outputs, are transferred to corresponding decoder steams, with shared model parameters except for the encoder-decoder attention. Prediction consistency loss is used to constrains these two streams. The authors claim that this method can improve the robustness of the encoding representations. Experiments on five translation tasks show better performance compared to vanilla baselines, and generalization to other neural architectures. On one hand, the experiments conducted in this paper are rich, including five translation tasks, two NMT architectures, shallow and deep models, and many ablations and analysis. On the other hand, I have several concerns regarding motivation, claims and experiments: -The authors pointed out two problems for using the topmost encoder output alone: 1) overfitting; 2) “It cannot make full use of representations extracted from lower encoder layers,”. I’m not convinced by the second one especially. For example, in PreNorm-based Transformer, the final encoder output is actually a direct addition of all previous encoding layers. Although there is a layer normalization, I believe this output carries critical information from lower layers. -The authors claim that “circumventing the necessity to change the model structure.”, but the proposed method requires to change the decoder, and manipulate the parameter sharing pattern. In my opinion, the method still requires structure modification. -The major ablations and analysis are performed on IWSLT De-En task, which is actually a low-resource task, where regularization is the main bottleneck. From Table 1, it seems like the proposed approach yields much smaller gains on large-scale WMT En-De task compared to low-resource tasks. Thus, it’s still questionable whether the conclusion from experiments on low-resource task can generalize to high-resource tasks. -Which WMT En-De test set did you use? WMT14 or WMT16? It seems like the authors used WMT16 for test, but the baseline (33.06 tokenized BLEU) is below standard (~34 BLEU). -Besides, some experiment has mixed results and is hard to draw convincing conclusions. For example, in Table 5, MV-3-6 (shared) achieves the best performance on De->En while MV-3-6 is the best on Ro->En. It seems like different tasks have different preferences (share or separate). In the paper, the author only highlights the superiority of separate settings on Ro->En task. Overall, I'm not convinced by the motivation and the analysis on low-resource tasks (In particular, this paper doesn't target at low-resource translation. Note that the authors claim that "our method has a good generalization for the scale of data size."). I think the score of this paper is around 3.5 with several unclear questions to be solved. Since we don't have this option, I prefer to give the score of 3.
[ [ 773, 823 ], [ 823, 938 ], [ 959, 1027 ], [ 1029, 1260 ], [ 1261, 1502 ], [ 1504, 1690 ], [ 1691, 1755 ], [ 1757, 2047 ], [ 2047, 2176 ], [ 2350, 2436 ], [ 2437, 2725 ], [ 2726, 2809 ], [ 2810, 2982 ], [ 2983, 3140 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Jus_neg_2", "Eval_neg_2", "Jus_neg_3", "Eval_neg_3", "Eval_neg_4", "Jus_neg_4", "Eval_neg_5", "Jus_neg_5", "Major_claim" ]
50
This paper is about characters in narrative texts, and it claims to contribute a) an operational definition of characters that is „narratologically grounded“, b) an annotated corpus (which will be released) and c) classification experiments on the automatic distinction between characters and non-characters. This paper is well written and good to read. The topic is interesting and clearly relevant. I have some concerns, however: 1. The definition of a ‚character‘ is based on the concept ‚plot‘. While this is naturally following from the narratological literature, it begs the question what a plot is. And of course, this also presumes that there is ‚the plot‘ — what if there are more than one, or if it is highly subjective? Another term that is used for defining a ‚character’ is animacy. In factual texts, there is a pretty clear distinction between animate and inanimate beings, but in fictional texts, this boundary might become blurry quickly, because it is entirely conceivable that objects have properties that are usually reserved for animate beings. Thus, this term would need to be defined more concretely. The definition thus rests on other, not defined terms. 2. The annotation experiments yields high agreement, so maybe this is not so relevant in practice. But the agreement has been measured on only one of the three sub corpora, and presumably on the easiest one: Fairy tales, which have a pretty clear plot. It would be much more convincing if the annotation comparison would have been done on a portion from each corpus, and I do not see a reason why this was not done. 3. The annotation procedure description contains the sentence „First, we read the story and find the events important to the plot.“ I am not sure what this means exactly — was there an agreement across the annotators what the events important to the plot are, before the annotation? This of course would make the annotation task much easier. 4. One of the corpora the authors use consists of broadcast news transcripts from OntoNotes. I would need a lot more arguing about this in the paper, in order to believe the authors that news broadcast is a narrative. While it clearly has narrative elements, they have very different goals and textual properties. Firstly, the ‚plot‘ (understood as a sequence of events in the real world) is only partially represented in a news text, while you have a full plot in many narrative texts. 5. From the third corpus, the authors annotated only one chapter from each novel. This also seems questionable to me, in particular because length of a coreference chain later is such an important feature. In a full novel, the picture might be very different than in a single chapter. Concretely: The evaluation of an event being relevant to a plot could be very different if the full plot is known. 6. What I feel is missing from the paper is a quantitative data analysis independent of the classification experiments. What is the distribution of character- and non-character-chains? How long are they in comparison? This would make it much easier to interpret and evaluate the results properly. 7. The length of a coreference chain has been used as „an integer feature“ (4.2.1). Should this not be normalized in some way, given the very different text lengths? 8. Why is there no baseline for the OntoNotes and CEN corpora? To sum up: While I think this is an interesting task, and the paper is very well written, it makes several assumptions that do not hold general and has a somewhat weak theoretical basis. The classification experiments are pretty straightforward (as the title suggests), and — given the assumptions and restrictions introduced earlier — deliver not very surprising results.
[ [ 309, 353 ], [ 354, 400 ], [ 401, 430 ], [ 2508, 2542 ], [ 2544, 2826 ], [ 2830, 2946 ], [ 2947, 3124 ], [ 3372, 3407 ], [ 3413, 3443 ], [ 3445, 3540 ], [ 3542, 3728 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Major_claim", "Eval_neg_2", "Jus_neg_2", "Eval_neg_1", "Jus_neg_1", "Eval_pos_3", "Eval_pos_4", "Eval_neg_3", "Eval_neg_4" ]
52
paper_summary The paper presents QuALITY, a new benchmark for question answering over long passages. All the questions are multiple-choice, composed by professional writers and validated by MTurk annotators to be answerable and unambiguous. A subset of especially challenging questions is also selected in a task where annotators answer the question under strict time constraints. The paper presents a detailed analysis of the dataset and a thorough evaluation of long-context and extractive QA models on the presented data, demonstrating that all the models are far behind human performance. summary_of_strengths - Long-passage QA datasets are harder to collect and relatively scarce, so the new dataset would be a valuable addition to the field. -The data collection and annotation process is very well thought out and includes multiple validation steps. The data is further validated in qualitative and quantitative analysis. -The experimental part is thorough: both long-context models and extractive models are evaluated, and there are additional experiments with supplementary training data and no-context baselines. The choice of the QA baselines seems reasonable to me (although my expertise in QA is limited). -The paper is clearly written and easy to follow, and both the data collection and the experimental evaluation are documented in detail. summary_of_weaknesses My only (very minor) concern: the qualitative analysis is somewhat hard to understand without reading the Appendix (see comment below). That can easily be addressed given an extra page. comments,_suggestions_and_typos - Without looking at the Appendix, I found it difficult to interpret the different reasoning strategies mentioned in Section 3.6 and Table 5. This section might read more smoothly if you include an example question or a very short explanation for a few most popular types, such as "Description" or "Symbolism". It was also not clear to me how the questions were annotated for reasoning strategy without reading the passages: was it just by looking at the question, or with the Ctrl+F type keyword search in the passage? -This is perhaps too much to ask, but I am very curious about the 4% where the annotator-voted gold label does not match the writer’s label. If the authors have done any analysis on why the annotators might disagree with the writer, I would love to see it! -L275: this inclusion criteria -> these inclusion criteria -L441: perhaps you meant Table 6, not Table 9? Not having to go to the Appendix for the results would make things easier for the reader.
[ [ 617, 685 ], [ 690, 747 ], [ 750, 857 ], [ 931, 964 ], [ 966, 1123 ], [ 1124, 1178 ], [ 1222, 1269 ], [ 1275, 1356 ], [ 1381, 1516 ], [ 1600, 2119 ] ]
[ "Jus_pos_1", "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_pos_6", "Eval_neg_1", "Jus_neg_1" ]
53
This paper presents a comparison of several vector combination techniques on the task of relation classification. - Strengths: The paper is clearly written and easy to understand. - Weaknesses: My main complaint about the paper is the significance of its contributions. I believe it might be suitable as a short paper, but certainly not a full-length paper. Unfortunately, there is little original thought and no significantly strong experimental results to back it up. The only contribution of this paper is an 'in-out' similarity metric, which is itself adapted from previous work. The results seem to be sensitive to the choice of clusters and only majorly outperforms a very naive baseline when the number of clusters is set to the exact value in the data beforehand. I think that relation classification or clustering from semantic vector space models is a very interesting and challenging problem. This work might be useful as an experimental nugget for future reference on vector combination and comparison techniques, as a short paper. Unfortunately, it does not have the substance to merit a full-length paper.
[ [ 127, 179 ], [ 194, 357 ], [ 358, 469 ], [ 470, 771 ], [ 904, 1044 ], [ 1044, 1120 ] ]
[ "Eval_pos_1", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Eval_pos_1", "Major_claim" ]
54
paper_summary The paper has not changed materially from the previous version. Please refer to my previous detailed summary. The new version addresses a few weaknesses I had pointed out previously, such as to include important results that were initially deferred to the appendix and to drop a misleading comparison. It also adds more comparisons to BitFit in table 2. I do appreciate that these changes improve the clarity of the paper, however, the present version still lacks an in-depth comparison to other related work on parameter efficient models as criticized in my previous review. Likewise, experimentation on only GLUE provides an inherently limited picture on the performance of the proposed approach and can draw an overly positive conclusion (refer to Figure 2 in [1] from the previous review). ** I am increasing my score due to improved clarity to 3, but underscore that a more in-depth comparison on other datasets and with other parameter-efficient approaches is still missing.** Currently, the paper could be interesting to a narrow audience that is knowledgeable in the area, i.e., being able to assess the proposed solutions amid the limited experimental setup. [1] He et al. (ICLR 2022) "Towards a Unified View of Parameter-Efficient Transfer Learning." https://arxiv.org/pdf/2110.04366.pdf summary_of_strengths The paper has not changed materially. Please refer to previous summary. summary_of_weaknesses A few weaknesses have been addressed, especially as to the lack of information and to remove misleading information. Some major points of criticism, however still stand: More comparisons would be necessary to get a better sense of whether AdapterBias performs universally well. This concerns both datasets and models/methods. 1) Experimentation on only the GLUE datasets is limited in that it often draws an overly positive picture. Please refer to [1] from the summary above and other references from my prior review. This raises the question in which setups the proposed approach would be usable. 2) Various baselines are missing. A comparison to other adapter architectures would be reasonable and a few other approaches such as LoRA [2], prefix tuning [3], parallel adapter [4], and Compacter [5]. [1] He et al. (ICLR 2022) "Towards a Unified View of Parameter-Efficient Transfer Learning." https://arxiv.org/pdf/2110.04366.pdf [2] Hu et al. (ArXiv 2021). " LoRA: Low-rank adaptation of large language models." https://arxiv.org/abs/2106.09685 [3] Li et al. (ACL 2021). " Prefix-tuning: Optimizing continuous prompts for generation." https://arxiv.org/abs/2101.00190 [4] Zhu et al. (ArXiv 2021). " Serial or Parallel? Plug-able Adapter for multilingual machine translation." https://arxiv.org/abs/2104.08154v1 [5] Mahabadi et al. (NeurIPS 2021). " Compacter: Efficient Low-Rank Hypercomplex Adapter Layers." https://arxiv.org/pdf/2106.04647.pdf comments,_suggestions_and_typos no further comments
[ [ 369, 436 ], [ 447, 590 ], [ 591, 808 ], [ 812, 865 ], [ 867, 995 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Eval_neg_2", "Major_claim", "Eval_neg_3" ]
55
paper_summary **Note**: *This is only a slight revision of my previous review for a previous version of this paper. I did not re-check all the details of the paper carefully, I mostly focused on checking the parts where I had reservations towards the previous version; I simply hope that the parts which I already found good in the previous version stayed good or were improved in this version. But I found already the previous version of the paper to be very good.* The paper describes a model called AlephBERT, which is a BERT language model for Hebrew that surpasses previous such models thanks to being trained on larger data and with better handling of the morphological richness of Hebrew. The paper also compiles together an evaluation toolkit for evaluating Hebrew language models, based on pre-existing tasks and datasets. The model and all code is planned to be released with the camera-ready version of the paper. The paper is definitely mostly a resource paper: most of the stuff is laborious but mostly straightforward, gathering data from available sources, training a model using existing approaches, compiling a benchmarking toolkit from existing tasks and datasets, and evaluating the trained model with this toolkit. The only part which is more research-heavy is handling the rich morphology of Hebrew, where the authors experiment with introducing a morphological segmentation component into the neural setup (a task which is highly non-trivial for Hebrew). The authors evaluate all of their contributions and prove that each of them brings improvements over the previous state of the art. summary_of_strengths The resources created by the authors seem to be extremely useful for nearly anyone dealing with Hebrew in NLP, as large pretrained language models are the core of most current approaches. The approach used for handling complex Hebrew morphology is novel and potentially inspirative for other morphologically complex languages. While I have a feeling that ACL does not prefer publishing pure resource papers, I believe that in case where the created resource is very useful, these papers should have their place at ACL. Besides, there is also a research component to the paper (although the research component itself would not suffice for a long paper). The paper is very well written and very nice to read and easy to understand. summary_of_weaknesses I found several minor problems and uncertainties in the previous version of the paper, but the authors managed to address practically all of these in their revised version. My only remaining reservation thus is towards the claimed but not demonstrated language-agnosticity of the presented approach, which I find to be too strong a claim (or maybe I have a different understanding of what "language agnostic" means). comments,_suggestions_and_typos In their response to the previous reviews, the authors list the following improvement: "We describe the Twitter data acquisition and cleanup process.", but I have not found this improvement in the current version (but I admit I might have simply overlooked it; all I am saying is I did not find it at the places where I would expect it).
[ [ 1631, 1818 ], [ 1819, 1957 ], [ 2038, 2150 ], [ 2159, 2282 ], [ 2284, 2361 ], [ 2557, 2682 ], [ 2684, 2801 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Major_claim", "Eval_pos_3", "Eval_pos_4", "Eval_neg_1", "Jus_neg_1" ]
56
paper_summary The paper introduces a pre-trained vision language model (FewVLM) for prompt-based few-shot vision language tasks such as image captioning and vision question answering. The model is pre-trained with a combined objective of masked language modeling and prefix language modeling. Compared to giant pre-trained vision language models, FewVLM is relatively smaller, but it achieves significantly better zero-shot and few-shot performances, as reported. The authors also conducted a fine-grained analysis understanding the effect of different prompts, data sizes, and pre-training objectives. Their findings include that 1) zero-shot tasks are more sensitive to prompt crafting than few-shot tasks. 2) low-quality prompt also learn fast when increasing data size 3) the masked language modeling objective helps vqa more while the prefix language modeling objective boosts captioning performance. summary_of_strengths - The idea is straightforward and the results presented are solid and strong. It shows that with the proper objective for pre-training, the pre-trained models could be more performant on zero-shot and few-shot tasks even when the model size is much smaller than those giant pre-trained vision language models -The analysis is comprehensive and interesting, and some of the conclusions align well with the findings in NLP tasks. For example, prompt crafting is essential for zero-shot prediction, which inspires better prompt searching. summary_of_weaknesses - The baselines are not very well explained in the paper, making it hard to understand the difference between the proposed model and the baselines. It would be much better if the authors could add some brief introductions for each baseline model. -The paper also lacks analysis or an intuitive explanation as to why the proposed model outperforms large pre-trained models like Frozen. The numbers look strong, but the analysis focus on how different factors affect FewVLM instead of why FewVLM outperforms baselines. comments,_suggestions_and_typos - I also wonder why some numbers are missing from table 2-5? Is it because these numbers are not reported in the original papers?
[ [ 930, 1005 ], [ 1006, 1236 ], [ 1238, 1355 ], [ 1356, 1464 ], [ 1489, 1634 ], [ 1635, 1734 ], [ 1736, 1872 ], [ 1873, 2005 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2" ]
57
paper_summary The paper presents a method for representing the relevance of a linguistic dataset to the corresponding language and its speakers. As a proxy for speakers of a certain language the authors use geographical entities, particularly countries. The representation they aim to build relies on entity linking, so the authors explore this problem on several multilingual datasets, and draw conclusions regarding the cross-lingual consistency of NER and EL systems. summary_of_strengths The paper addresses an important problem, that gives a new way of assessing the representativeness of a dataset for a specific language. Since such text collections are at the basis of every other language task, and provide language models on which much of the higher level processing is based, it is important to have collections that are representative for the language (and speakers) that are targeted. summary_of_weaknesses While the main idea of the paper is valuable and interesting, and thoroughly explored, it is based on some assumptions whose soundness is debatable. Details are in the comments section. -there is a disconnect between the visualizations and the rest of the processing. -the preprocessing of the datasets (many for low-resource languages) needs resources that are themselves scarce, incomplete, or borrowed from other languages (that may use other scripts, and hence there is a transliteration problem on top of others). This makes the kind of processing presented here a bit unrealistic, in the sense that it could not be deployed on any collected dataset, and give an objective view of the representativeness of that dataset for the corresponding language (this is linked to the first point, and explanations are below) -some information in the data is discarded (topical adjectives, historical entities), and it is not clear what impact using it would have on the final geographical mapping. comments,_suggestions_and_typos With regards to the disconnect between the visualizations and the rest of the processing: the visualizations are based on geographical statistics for entities in a text, but these entities are already marked. It would have been useful to see how an end-to-end process performs: apply NER on the NER and QA datasets, and build the same visualizations as in section 3. How do the visualization using imperfect NER/EL resources and processing tools compare to the visualizations obtained on the annotated data? Are they very far apart, or the underlying "character" of the dataset is still retrievable even in such imperfect conditions? This links to the second potential weakness, regarding the applicability of this method to newly collected datasets (which is the aim, right?). The geographical mapping presented is left to the subjective inspection of a human judge. Which is not necessarily bad in itself, but as the more detailed maps in the appendix show, the characteristics of some datasets are very very similar (e.g. for European countries for example, or other geographically close countries). It may be useful to have a more rigorous evaluation of the geographical mapping, by showing that from the geographical distribution of entities, one can predict the country corresponding to the dataset's language. This could be done in an unsupervised manner, or using a linear regression model, or something similarly simple -- maybe by deducting an "average" entity geographical distribution model, such that local characteristics become more prominent, or by computing (in an unsupervised manner) some weights that would downplay the contribution of entities from countries that are always represented (like a "country tfidf" maybe?). Some geographical indicators are disregarded, and that may have an impact on the visualizations. Annotating topical adjectives that indicate countries seems doable, based on the anchor texts of links pointing to countries, which are easy to obtain (for some languages). The same for some of the historical entities that no longer exist, but some of which have corresponding GPS coordinates that could be used. The point is that both the resources and the process used to build the geographical maps of the datasets are incomplete. Some are by necessity (because the available resources are incomplete), some by choice (the adjectives and historical figures). We need to know the impact of such processing constraints. It is interesting to analyze the correlation between socio-economic factors, but how does that impact the construction or characteristics of the datasets? Some of these factors -- e.g. the GDP, -- could be (in this experiment) a proxy for the level of web presence of the population, and the level of information digitization of that particular population. Maybe some parameters that measure these issues more explicitly -- which seem more closely relevant to the process of textual collection building -- would provide better insights into data characteristics. Using a country as a proxy for language is useful, but it may skew the data representation, as the authors themselves recognize. What happens with languages that occupy the same country-level geographical space, but are distinct, as happens with multi-lingual countries? The same with languages that cross many borders. A bit more insight into how these are reflected in dataset characteristics and how they impact the usefulness of the dataset would be very useful. Why does the cross-language consistency matter here? Each dataset (for the geographical mapping) is analyzed separately, so while cross-lingual consistency is indeed a problem, it is not clear how it is related to the problem of dataset mapping. Is the cross-lingual consistency a signal of something other than the general performance of NER/EL systems? Some little typos: were => where (almost everywhere "were" appears) then => than (line 319)
[ [ 493, 629 ], [ 630, 899 ], [ 1009, 1107 ], [ 1109, 1189 ], [ 1441, 1508 ], [ 1509, 1741 ], [ 1743, 1915 ], [ 1948, 2581 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_neg_1", "Eval_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_2" ]
58
- Strengths: The approach described in the manuscript outperformed the previous approaches and achieved the state-of-the-art result. Regarding data, the method used the combination of market and text data. The approach used word embeddings to define the weight of each lexicon term by extending it to the similar terms in the document. - Weaknesses: Deep-learning based methods were known to be able to achieve relatively good performances without much feature engineering in sentimental analysis. More literature search is needed to compare with the related works would be better. The approach generally improved performance by feature-based methods without much novelty in model or proposal of new features. - General Discussion: The manuscript described an approach in sentimental analysis. The method used a relatively new method of using word embeddings to define the weight of each lexicon term. However, the novelty is not significant enough.
[ [ 13, 132 ], [ 498, 581 ], [ 582, 709 ], [ 911, 948 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Eval_neg_2", "Eval_neg_3" ]
59
paper_summary This paper investigates the effectiveness of entity representations in multilingual language models. The proposed mLUKE model exhibits strong empirical results with the word inputs (mLUKE-W), it also also shows even better performance with the entity representations (mLUKE-E) in cross-lingual transfer tasks. The authors' analysis reveals that entity representations provide more language-agnostic features to solve the downstream tasks. Extensive experimental results suggest a promising direction to pursue further on how to leverage entity representations in multilingual tasks. summary_of_strengths 1. The authors explore the effectiveness of leveraging entity representations for downstream cross-lingual tasks. They train a multilingual language model with 24 languages with entity representations and show mLUKE model consistently outperforms word-based pretrained models in various cross-lingual transfer tasks. 2. The authors show that a cloze-prompt-style fact completion task can effectively be solved with the query and answer space in the entity vocabulary. 3. The results show that entity-based prompt elicits correct factual knowledge more likely than using only word representations. summary_of_weaknesses Most of languages in LAMA are rich-resourced languages indeed, the authors may need to test mLUKE on some low-resourced languages. comments,_suggestions_and_typos This paper has done a solid work on Multilingual Pretrained Language Models. This paper is well written and easy to read.
[ [ 1241, 1371 ], [ 1405, 1482 ], [ 1483, 1528 ] ]
[ "Eval_neg_1", "Eval_pos_1", "Eval_pos_2" ]
60
This paper describes (1) new corpus resources for the under-resourced Kinyarwanda and Kirundi languages, (2) preliminary experiments on genre classification using these corpora. The resources are described thoroughly, and a useful survey of related work on these languages is presented. A variety of models are used in the experiments, and strong baseline results on this task are achieved, including experiments on transfer learning from the better-resourced Kinyarwanda to Kirundi; an approach likely to play an important role in scaling NLP to the Bantu language family, which has a small number of reasonably-resourced languages, e.g. Swahili, Lingala, Chichewa. Overall the paper should be of interest to COLING attendees. General comments: Abstract: "datasets... for multi-class classification". It would be good to note here and in the introductions that this is specifically a genre or subject classification task. Introduction: "has made access to information more easily" => "has made access to information easier" Introduction, p.2 "In this family, they are..." => "In this family, there are..." Introduction: "fourteen classes... twelve classes". Again, as in the abstract, should make clear what these classes are! Last line of p. 2 "who have not been" => "which have not been" Related work. You might also note Jackson Muhirwe's PhD work at Makerere; some of which was published here: Muhirwe J. (2010) Morphological Analysis of Tone Marked Kinyarwanda Text. In: Yli-Jyrä A., Kornai A., Sakarovitch J., Watson B. (eds) Finite-State Methods and Natural Language Processing. FSMNLP 2009. Lecture Notes in Computer Science, vol 6062. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14684-8_6 3.3 Dataset cleaning. I know it's just a change in perspective, but I'd prefer viewing the cleaning and stopword removal as standard pre-processing steps; suggesting distributing these as tools vs. distributing the corpora with these steps applied. Classifiers should work on un-preprocessed text in any case. 3.4 I don't understand how the cleaning steps you described could reduce the vocabulary from 370K to 300K. Please clarify. 4.1 In training the word embeddings, you say "removing stopwords". Does that mean removed from the corpus before training? I'm not sure I see the value in doing so, and wonder if it negatively impacts the quality of the embeddings. 4.1 Given the morphological complexity of these languages, I wonder whether results might be improved by working at the subword level (syllables, or morphemes... cf. Muhirwe's work above). This could conceivably help is the cross-lingual training as well. You do have Char-CNN experiments but there may not be enough data to get competitive results at the character level. 4.3.2 "different epochs and number of features... different train sets"; this is fine, but you should refer to the table where these choices are actually laid out 4.4.1 Had the Char-CNN converged at 20 epochs?
[ [ 178, 217 ], [ 222, 285 ], [ 287, 334 ], [ 340, 390 ], [ 391, 665 ], [ 667, 727 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Jus_pos_4", "Major_claim" ]
61
- Strengths: This paper proposes the use of HowNet to enrich embedings. The idea is interesting and gives good results. - Weaknesses: The paper is interesting, but I am not sure the contibution is important enough for a long paper. Also, the comparision with other works may not be fair: authors should compare to other systems that use manually developed resources. The paper is understandable, but it would help some improvement on the English. - General Discussion:
[ [ 72, 119 ], [ 164, 230 ], [ 238, 286 ], [ 288, 366 ], [ 400, 446 ] ]
[ "Eval_pos_1", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2" ]
62
[update after reading author response: the alignment of the hidden units does not match with my intuition and experience, but I'm willing to believe I'm wrong in this case. Discussing the alignment in the paper is important (and maybe just sanity-checking that the alignment goes away if you initialize with a different seed). If what you're saying about how the new model is very different but only a little better performing -- a 10% error reduction -- then I wonder about an ensemble of the new model and the old one. Seems like ensembling would provide a nice boost if the failures across models are distinct, right? Anyhow this is a solid paper and I appreciate the author response, I raise my review score to a 4.] - Strengths: 1) Evidence of the attention-MTL connection is interesting 2) Methods are appropriate, models perform well relative to state-of-the-art - Weaknesses: 1) Critical detail is not provided in the paper 2) Models are not particularly novel - General Discussion: This paper presents a new method for historical text normalization. The model performs well, but the primary contribution of the paper ends up being a hypothesis that attention mechanisms in the task can be learned via multi-task learning, where the auxiliary task is a pronunciation task. This connection between attention and MTL is interesting. There are two major areas for improvement in this paper. The first is that we are given almost no explanation as to why the pronunciation task would somehow require an attention mechanism similar to that used for the normalization task. Why the two tasks (normalization and pronunciation) are related is mentioned in the paper: spelling variation often stems from variation in pronunciation. But why would doing MTL on both tasks result in an implicit attention mechanism (and in fact, one that is then only hampered by the inclusion of an explicit attention mechanism?). This remains a mystery. The paper can leave some questions unanswered, but at least a suggestion of an answer to this one would strengthen the paper. The other concern is clarity. While the writing in this paper is clear, a number of details are omitted. The most important one is the description of the attention mechanism itself. Given the central role that method plays, it should be described in detail in the paper rather than referring to previous work. I did not understand the paragraph about this in Sec 3.4. Other questions included why you can compare the output vectors of two models (Figure 4), while the output dimensions are the same I don't understand why the hidden layer dimensions of two models would ever be comparable. Usually how the hidden states are "organized" is completely different for every model, at the very least it is permuted. So I really did not understand Figure 4. The Kappa statistic for attention vs. MTL needs to be compared to the same statistic for each of those models vs. the base model. At the end of Sec 5, is that row < 0.21 an upper bound across all data sets? Lastly, the paper's analysis (Sec 5) seems to imply that the attention and MTL approaches make large changes to the model (comparing e.g. Fig 5) but the experimental improvements in accuracy for either model are quite small (2%), which seems like a bit of a contradiction.
[ [ 39, 120 ], [ 174, 328 ], [ 625, 724 ], [ 744, 799 ], [ 806, 880 ], [ 900, 944 ], [ 951, 984 ], [ 1299, 1355 ], [ 1357, 1414 ], [ 1415, 2492 ], [ 2493, 2836 ], [ 2856, 2896 ] ]
[ "Eval_neg_1", "Jus_neg_1", "Major_claim", "Eval_pos_1", "Eval_pos_2", "Eval_neg_2", "Eval_neg_3", "Eval_pos_3", "Eval_neg_4", "Jus_neg_4", "Jus_neg_5", "Eval_neg_5" ]
63
paper_summary This paper proposes a unified representation model Prix-LM for multilingual knowledge base (KB) construction and completion. Specifically, they leverage monolingual triples and cross-lingual links from existing multilingual KBs DBpedia, and formulate them as the autoregressive language modeling training objective via starting from XLM-R’s pretrained model. They conduct experiments on four tasks including Link Prediction (LP), Knowledge probing from LMs (LM-KP), Cross-lingual entity linking (XEL), and Bilingual lexicon induction (BLI). The results demonstrate the effectiveness of the proposed approach. summary_of_strengths 1. They propose a novel approach Prix-LM that can be insightful to the community about how to integrate structural knowledge from multilingual KBs into the pretrained language model. 2. They conduct comprehensive experiments on four different tasks and 17 diverse languages with significant performance gains which demonstrate the effectiveness of their approach. summary_of_weaknesses Though this paper has conducted comprehensive experiments on knowledge related tasks, it would be even stronger if they demonstrate there also exists improvement on the multilingual knowledge-intensive benchmark, like KILT. comments,_suggestions_and_typos N/A
[ [ 648, 828 ], [ 832, 1010 ] ]
[ "Eval_pos_1", "Eval_pos_2" ]
64
paper_summary *(minor edits from previous review XYZ)* Text style transfer is the task of rewriting a sentence into a target style while approximately preserving its content. Modern style transfer research operates in an "unsupervised" setting, where no parallel training data (pairs of sentences differing in style) is available, but assume access to a large unpaired corpus in each style. This paper argues that a large unpaired corpus to train style transfer systems might be hard to obtain in practice, especially in certain domains. To tackle this issue, the authors present a new meta-learning approach (DAML) which trains a style transfer system that can quickly adapt to unseen domains during inference (with a few unpaired examples). The authors build their style transfer system using a discriminative learning objective (via a style classifier) while fine-tuning T5, which they call ST5. The authors approach DAML-ST5 outperforms several baselines on sentiment transfer and Shakespeare author imitiation, and ablation studies confirm the design decisions. summary_of_strengths *(identical to my previous review GAJd, see "Weaknesses" for my response to the revised manuscript)* 1. This paper tackles a practically relevant problem. While current style transfer research does not leverage supervised data, it requires a large amount of unpaired data which may not be practical to obtain in low-resource languages or domains. Hence, building style transfer systems which can quickly adapt in low-resource settings is important, since it eliminates the expensive requirement of hand-curating unpaired datasets for each low-resource domain / language. 2. The paper presents an interesting method based on model-agnostic meta learning [1] (with modifications to make it suitable for domain adaptation) to learn a good initialization which works well across domains. During inference, the model can quickly adapt to a new domain, with decent performance with just 1% of the target domain data. Experimental results confirm the proposed approach outperforms several strong baselines. The paper also has ablation studies to justify the various design decisions used in the approach. [1] - https://arxiv.org/abs/1703.03400 summary_of_weaknesses The authors presented an excellent response and addressed all the concerns in my previous review GAJd in their revised manuscript. In particular, the authors added experiments on the new Shakespeare dataset, used extra automatic metrics to evaluate their approach and found consistent trends, clarified some questions I had about the modeling, added comparisons to recent few-shot style transfer approaches. I have increased my score to 4. It would be nice to move some of the new results into the main body of the paper with the extra 9th page, especially the experiments on the Shakespeare dataset. comments,_suggestions_and_typos Several references are missing their venues / journals / arXiv identifiers, you can get the correct bib entries for papers from https://aclanthology.org, Google Scholar or arXiv.
[ [ 1192, 1243 ], [ 1244, 1659 ], [ 1663, 1872 ], [ 1873, 2186 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Jus_pos_2" ]
65
paper_summary This paper proposes a simple but powerful approach that uses a single Transformer architecture to tackle KG link prediction and question answering treated as sequence-to-sequence tasks. This approach can reduce the model size up to 90% compared to conventional Knowledge graph embedding (KGE) models, and the performance of this approach is best among small-sized baseline models. summary_of_strengths 1. This paper uses the Transformer structure for KG link prediction and question answering tasks, and this simple approach seems powerful. 2. This paper conducts a large number of experiments on multiple datasets and analyzes the experimental results. summary_of_weaknesses Minor: The paper only contains a high-level description of the proposed approach that benefits the performance of KGQA. It would be better if the authors provide some explicit cases or discussions to explain how pre-training on KG link prediction can improve performance on KGQA compared with the previous representative works later. comments,_suggestions_and_typos Specific comments for improving the work: 1. The authors may provide some explicit cases or discussions to explain how pre-training on KG link prediction can improve performance on KGQA. 2. This paper shows KG link prediction performance from the proposed model trained on Wikidata5M in section 4.4. it would be better to show the KG link prediction performance from KGT5 after finetuning for QA, and showing performance on KG link prediction and KGQA with multi-task setting is also a good choice.
[ [ 520, 555 ], [ 562, 672 ], [ 702, 814 ], [ 815, 1029 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1" ]
66
paper_summary This paper introduces a new method for MeSH indexing, combining multiple methods including, but not limited to, dilated CNNs, masked attention, and graph CNNs. Overall, the proposed approach makes substantial improvements over prior state-of-the-art methods. For example, Micro F1 improves over BERTMeSH from 0.685 to 0.745. Similar improvements are also found for the example-based measures (e.g., example-based F1). Furthermore, a comprehensive ablation study was performed, showing that the label feature model has the largest impact on model performance, yet, other parts of the method still impact performance substantially. summary_of_strengths Overall, the paper is well-written and easy to read. Furthermore, the improvement over prior work is substantial. It is neither easy nor trivial to make such considerable performance improvements for MeSH indexing, especially for Micro F1. For instance, BERTMeSH [1] only improves DeepMeSH [2] by only 2% in Micro F1 [1] after five years of work. Hence, seeing a Micro F1 near 0.75 is a huge breakthrough. References: [1] Peng, Shengwen, et al. "DeepMeSH: deep semantic representation for improving large-scale MeSH indexing." Bioinformatics 32.12 (2016): i70-i79. [2] You, Ronghui, et al. "BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. summary_of_weaknesses Overall, there are three major weaknesses in this paper. First, the paper uses a custom training and validation dataset pulled from PubMed, making comparisons difficult. Using the data from the yearly BioASQ shared tasks would be better to use their data so new methods are more easily comparable. I understand this is common in similar studies (e.g., by BERTMeSH [3]), but a standardized dataset seems possible and useful. Second, while the hyperparameters are discussed, it is not clear whether hyperparameters were optimized for the baseline models. What were the chosen parameters? Was the validation dataset used to optimize them similarly to the proposed method? If so, why is the standard deviation not reported for the baseline models (e.g., in Table 1)? Given the substantial performance differences between the proposed model and prior work, this additional information must be reported to ensure fair comparisons. Third, while this may be the first paper to use GCNNs for MeSH indexing, it is widely used for similar biomedical text classification tasks (e.g., ICD Coding). For instance, [1] directly combines BiLSTMs with GCNNs and label features in a very similar manner to the method proposed in this paper, albeit with exceptions such as [1] does not use dilated CNNs. Furthermore, that work has been expanded on to better understand the impact of the GCNNs and whether they are needed [2]. Hence, the paper would substantially help if the related work sections were expanded to include citations with similar methodologies. In my opinion, the "Dynamic Knowledge-enhanced Mask Attention Module" is one of the most innovative parts of the paper and should be highlighted more in the introduction. References: [1] Chalkidis, Ilias, et al. "Large-Scale Multi-Label Text Classification on EU Legislation." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. [2] Chalkidis, Ilias, et al. "An Empirical Study on Large-Scale Multi-Label Text Classification Including Few and Zero-Shot Labels." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020. [3] You, Ronghui, et al. "BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. comments,_suggestions_and_typos Page 3, Line 240-252: There are a few variations of LSTMs [1]. Is the one used in this paper the same as the 1997 paper? Page 2, Line 098-100: The phrase "latent semantics" is unclear. It may help the paper if that phrase is expanded, e.g., does this mean the contextual information from combining multiple layers of neural networks? Page 4, Line 287: I believe "edges are implement MeSH hierarchies" should be "edges represent relationships in the MeSH hierarchy" Page 6, Line 416-417: I believe the phrase ", and we converted all words are lowercased" Should be ", and we converted all words to lowercase" References: [1] Graves,A. et al. (2012) Supervised Sequence Labelling with Recurrent Neural Networks. Vol. 385. Springer, Berlin.
[ [ 666, 718 ], [ 719, 779 ], [ 780, 905 ], [ 906, 1071 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_pos_3" ]
67
paper_summary This paper describes a contrastive learning approach to automatically solving math word problems (MWP) and investigates multilingual approaches to the problem. Additionally, it provides further evidence that the top layers of BERT will learn task-specific patterns, as shown in prior works. This paper treats MWP solving as a text-to-equation-tree translation problem using an encoder-decoder architecture. To motivate the use of contrastive learning, the paper opens with an analysis of the effect of training epoch and encoder layer on the clustering of MWPs by prototype equation. t-SNE plots show expected clustering effects as layer/epoch increases. Analysis of raw high dimensional representations show that problems with similar lexical semantics or topic are given different representations when the prototype equation differs, especially in layer 12, while problems with the same prototype equation are embedded closer together. Moreover, it is shown that MPWs that are represented closer to the center of the cluster of problems with the same prototype equation are more likely to be correctly solved. The contrastive learning approach proposed here involves finding difficult negative examples, which is done by choosing structurally similar equation trees with different operations in the intermediate nodes. Additional positive examples come from either trees or subtrees which consist of the same structure and operations as the target equation. For the multilingual approach, mBERT is substituted as the encoder. Results show that the contrastive learning method improves MWP solving in both the monolingual and multilingual settings compared to recent baselines. Ablations show the value of choosing difficult negative examples and other design decisions. Analysis shows that the contrastive learning objective results in well defined clusters. Accuracy is especially improved for examples farther from the cluster center. summary_of_strengths The contrastive learning for MWP solving seems to improve performance summary_of_weaknesses Technique is limited to problems that can be modeled by equation trees. A lot of paper real estate is given to an analysis that basically shows:
 -undertrained models don’t work -only using part of the encoding function (the bottom N layers) doesn’t work I don’t think this analysis will be of much use to the ACL community. It seems like the cosine similarity of lower layers in figure 3 are relatively high, while the t-SNE visualizations in Figure 2 are more mixed. Do you think t-SNE is accurately representing the latent space? comments,_suggestions_and_typos The paper would benefit from connections to prior work on BERTology. An intro to this line of research can be found at https://huggingface.co/docs/transformers/bertology
[ [ 1977, 2047 ], [ 2070, 2141 ], [ 2143, 2325 ], [ 2326, 2395 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Jus_neg_2", "Eval_neg_2" ]
68
paper_summary Existing self-explaining models mostly generate the short rationales with the assumption that short rationales are more intuitive to humans, while this work discusses the question that whether the shortest rationale is the most understandable for humans. In this work, the authors design a self-explaining model, LIMITEDINK, that can take controls on rationale length by incorporating contextual information and supporting flexibly extracting rationales at any target length. By generating rationales at different length levels, the authors study how much rationale would be sufficient for humans to confidently make predictions. Experiments on various tasks demonstrate that the proposed method outperforms most prior works, and meanwhile show that the shortest rationales are not the best for human understanding. summary_of_strengths 1. The method proposed in this work is effective and can outperform several strong baselines on the performance of both label predictions and rationale predictions. 2. The problem, the effect of the rationales at different length levels, discussed in this work is meaningful and the conclusions may serve as good guidance for further research in this field. summary_of_weaknesses 1. Although this work points out that shortest rationales are largely not the best for human understanding, the appropriate lengths are still subject to the datasets or even the instances. The length of meaningful rationales may largely depend on the density of the information related to the task. As pointed in Section 5, a more rigorous evaluation is needed to better understand what is a good rationale explanation. 2. This work does not report how "short" the rationales generated by prior works are. As shown in Section 1, recent works agree that good rationales should be "shortest yet sufficient", while this work seems to simply focus more on "shortest". This brings out the concern that whether the main question discussed in this work can really stand for the trend of current works on this task. (a). I think one potential solution to handle this concern is that - by extending or shortening the golden rationales and see whether such perturbations outperform or underperform the original one. comments,_suggestions_and_typos 1. I would like to see some examples of the generated rationales at different length levels from the proposed methods, as well as the rationales generated by the baselines. Such examples can help the readers to better understand the influence of rationale lengths.
[ [ 855, 1016 ], [ 1020, 1210 ], [ 1236, 1531 ], [ 1532, 1652 ], [ 1897, 2040 ], [ 2056, 2249 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_neg_1", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2" ]
69
paper_summary This paper compares two figures, Firth and Harris, who are often cited as foundational in modern computational linguistics but who are rarely actually read, perhaps even not by the people who cite them. It does a deep dive into their work and takes an opinionated stance that Harris was “introverted”, focused on language as a system in isolation and Firth was extroverted, focusing on language as it exists in the world. summary_of_strengths This is an interesting paper, of a type rarely seen at ACL venues: intellectual history with an opinionated thesis. I genuinely enjoyed reading it and learned from it. I imagine the same would be true for many folks in the ACL community. There is some real scholarship here, as it investigates the works of these mid-20th c scholars, derives an opinionated synthesis, and applies it to modern NLP. Given that NLP as a field is extremely forward-looking, often considering something even a year or two old to be ancient history, this is a valuable perspective. summary_of_weaknesses The paper points out that Firth’s work is somewhat scattered and hard to get a clear grip on. Yet it ends up coming down at times in a way that feels to me a bit too much “Harris bad, Firth good”. The claim is that Firth’s views are well aligned with a strand of thought, currently popular in NLP (and well articulated in Bender & Koller’s position piece and Bisk et al.) that “you can’t learn language on the radio” and that language meaning needs to be in embedded context in a way that is heavily socially mediated. The argument is that, by contrast, Harris misses the boat on this. I wasn’t quite convinced on this point. It makes for an interesting contrast for the two thinkers, but it also seems to me to be a bit unfair to Harris since it’s hard to counterfactually reason about how Harris would have reacted to the current state of NLP. And I could imagine a variety of other arguments about the relevance of his work in NLP today. Firth’s positions are, according to the paper, admittedly sometimes murky and not always spelled out, which means it is easy to attribute a wider variety of perspectives to him. So I think there should be some caution in that framing. It also seems possible that a “radically distributional”, like the kind attributed to Harris, could in fact capture a wide range of rich social contexts and individual variation. For instance, GPT-3 which is trained as if it’s trained on a single monolithic dialect, can be a quite effective code-switcher when prompted with different registers. I’ll mention one other thing, which isn’t really a weakness but is more of a meta-concern: One potential pitfall of submitting and publishing this kind of work in an ACL venue is that the reviewers (like me) and audience are not necessarily going to be experts in this methodology and so care should be taken to make sure it is well reviewed by people who have the relevant expertise. An example of the way in which ACL is not necessarily set up for this kind of work is that I have to select whether the work is reproducible: I picked "1 = They would not be able to reproduce the results here no matter how hard they tried." since it's hard to imagine some other set of authors deciding to read Harris and Firth and writing the same paper :-). But the broader point is that some of this veers methodologically into intellectual history, which I’m certainly not an expert in, and the ACL reviewing process is not necessarily set up to review a paper with this method. That's not a reason not to publish it! In fact, it's all the more reason to give it serious consdieration. But I think there should be some thought given to make sure the work is well evaluated. comments,_suggestions_and_typos -The paper says that computational linguists routinely cite Harris and Firth. This is true of textbooks and big review papers. But my impression is that many in the ACL community do not engage with them at all.
[ [ 458, 486 ], [ 488, 573 ], [ 574, 695 ], [ 696, 731 ], [ 733, 855 ], [ 856, 986 ], [ 986, 1018 ], [ 1135, 1238 ], [ 1238, 2562 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Major_claim", "Eval_pos_2", "Jus_pos_2", "Jus_pos_3", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1" ]
70
paper_summary Note - I reviewed this paper in the past and had a positive criticism about it. The authors also addressed my previous comments and I keep my positive review from before. This paper discusses methods for improving multi-domain training for dialog response generation. The authors experiment with several approaches to improve multi-domain models, namely (1) "Interleaved Learning", when data from multiple domains/corpora is concatenated and used for training, (2) "Labeled Learning" where each example is encoded using an additional corpus-specific embedding/label that guides the model, (3) "Multi-Task Labeled Learning" where the model has an additional classification head that determines the domain/corpora label based on the given context, and (4) "Weighted Learning" where the authors propose a weighted loss function that give more weight on words that are especially salient in a given domain. The authors run experiments that evaluate the different approaches using 4 dialog datasets (PersonaChat, OpenSubtitles, Ubuntu and Twitter) where they show the effect of each approach on the resulting model as measured using BLEU, perplexity and F1. While the experiments show that there is no single best approach on all metrics, the proposed approaches improve the results over simple corpora concatenation or single-corpora training. A human evaluation showed that the proposed "Weighted Learning" approach was favorable in comparison to the other methods. summary_of_strengths The main strengths of the paper are as follows: The highlighted task of multi-domain dialog generation is important, practical and relatively understudied. To the best of my knowledge, the proposed "Weighted Learning" approach is novel The experiments are thorough and convincing, especially as they include a human evaluation summary_of_weaknesses The main weakness of the paper is that some of the proposed approaches lack novelty - "interleaved learning", "labeled learning", "multi-task labeled learning" were studied extensively in the MT community. Having said that, I am not aware of works applying those approaches to open-domain dialog generation. comments,_suggestions_and_typos line 230 - "learning material" --> "training data"
[ [ 1547, 1655 ], [ 1656, 1827 ], [ 1850, 1933 ], [ 1936, 2056 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_neg_1", "Eval_neg_1" ]
71
paper_summary This paper proposes a new task formulation to solve complex tasks. In this new formulation, there are multiple agents, each of which is capable of solving some specific types of tasks. For example, there can be a QA agent that answers natural language (NL) questions and an instruction following agent that could execute actions to accomplish an NL intent. Given a complex task described in NL, the model is asked to communicate with each agent for their task-specific knowledge and use the returned answers to proceed with the task. In this work, they instantiate the complex task as multi-hop QA and the agents as a TextQA agent that is able to reason over large text corpus, TableQA agents that could answer questions given structured data like tables, and MathQA agent that could perform numerical reasoning. Each agent also has their own auxiliary data like (NL, answer) supervision and their independent KBs. They design a model that is able to decompose a multi-hop question to simple questions that could be answered by one agent. They compare this model with other black-box models that do not perform communication with agents and show significant improvements on a synthetic dataset they create. summary_of_strengths - The proposed new task formulation is novel and interesting. Intuitively, it is a promising way to resolve the complex tasks people encounter daily. The paper also provides a detailed and clear definition of this new task. summary_of_weaknesses - The instantiation of the task could not fully justify the benefit of the new task formulation. In this new proposed setting, an ideal criterion for designing individual agents is that each has mutually exclusive functionalities, and it is challenging to develop a unified model. For example, the google search agent and the Alexa shopping agent described in the introduction make such a case. However, this work design a synthetic dataset, and the agents are separated by the different forms of knowledge (text vs table) and the different proportions of knowledge in the KB. This separation is OK as long as it could reveal the true distribution in reality -- there is some knowledge that is more accessible through text than structured data and vice versa. However, the data construction process did not consider this and did a random split. A more realistic setting will bring up some interesting questions like "how does the decomposer know which agent is more suitable to answer the current question?", " how can we curate such annotations?" etc, which are not explicitly touched by the current work. To me, my main takeaway is that question decomposition is helpful, which has been studied in previous works like BREAK (Wolfson el at + 2020). Related to this concern, I also have a question regarding training the question decomposition component. According to F3, the NL questions to the text agent and the table agent look pretty similar (e.g. [table] What movies has #1 written? vs. [text] #1 produces which materials?), what are the supervision signals that hint the model to predict one agent over another? - Some descriptions of the experiment setting are somewhat vague, and therefore it is not super clear whether the comparisons are fair. My main question is how factual knowledge is provided to each model? * In *Models with Access to Agent Knowledge*, how do you construct the context? Do you randomly sample some context from the *possible world* of the question? * Do you somehow distinguish the source (e.g., knowledge of TextQA, knowledge of TableQA)? * After decomposing the question through `NextGen`, how do you provide the context when querying an individual agent? Do you provide the ground truth context without distractors? Or do you train some retriever (like in *Models with Fact Supervision*) to retrieve the context? comments,_suggestions_and_typos - Some case study and more systematic error analysis can probably help the readers to understand in which cases the proposed method works and how.
[ [ 1245, 1304 ], [ 1305, 1392 ], [ 1393, 1467 ], [ 1492, 1586 ], [ 1587, 3108 ], [ 3111, 3244 ], [ 3245, 3858 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2" ]
72
paper_summary The authors proposed a Locally Aggregated Feature Attribution method and claimed that this is a novel gradient-based feature attribution method for NLP models. summary_of_strengths The authors proposed a Locally Aggregated Feature Attribution method summary_of_weaknesses Results are varying so much on two different datasets comments,_suggestions_and_typos Did you use an attention mechanism? If yes, what are the significant changes you observed between the two approaches? If not, could you please check a performance comparison with any attention mechanism? Why are the results varying so much on two different datasets? Is your model biased on a particular data? Did you check the disparity and fairness of data?
[ [ 288, 341 ] ]
[ "Eval_neg_1" ]
73
This paper presents a corpus study of coreferences comparing different genres (news, blogs, conversations) and media (written, transcribed speech, microblogging) based on the Ontonotes and Switchboard corpora and a dataset from Twitter sub-threads. The analysed factors include the use of pronouns and noun phrases, the characteristics of such NP mentions (syntactic complexity) and various distances measured between mentions of the same entity. This is an interesting study, and could be potentially useful for models trying to do domain adaptation, as coreferecen systems for written text perform poorly on conversations and microblogging. Overall it seems the contributions are only moderately significant however, for the following reasons: (1) the paper builds on two papers: (Aktas et al., 2018), where the twitter data was collected and described, and (Aktas et al., 2019) which described coreferences in Ontonotes sub-corpora/genres in what I assume is a similar manner (the paper is not freely available, only the abstract). It is not clear how the present paper adds to these papers, and should be made more explicit. (2) the interest for coreference model is rather vaguely described, and it would have been interesting to have a more detailed descriptions of how the knowledge derived from the study could be used in such models. The paper mentions experiments using models trained on written texts applied to other genres/media, how hard would it have been to experiment training on other data, or to combine them ? This seems too preliminary to assess the real interest for automated models. More minor points: -the introduction is rather strangely constructed, and almost reads as a post-introduction section/a related work section already. The context should be made clearer and a few examples wouldn't hurt. -i'm not sure I understand the term "coreference strategies", which seem to imply an intentionality in the way coreferences are produced in different contexts. A lot of what is shown in the paper could be attributed to more general aspects of the genres/media (longer sentences for purely written text, more context available, etc) and some of the properties of coreferences could just be by-product of that. The use of specific personal pronouns (1st/2nd/3rd) is another example. -there is zero descriptions of the statistical tests used, and of the assumptions made, if a parametric model was used. This should be addressed. Also some conclusions are based on multiple-testing, which should include some kind of correction (it might have been done, but again, there is zero details about this). -some technical details are presented a little vaguely, which could be understood given size constraints, but sometimes it is a bit too much: for instance, instead of explaining what hierarchical clustering method was applied, the paper only mentions using some R implementation with default settings, which is rather uninformative. -about the clustering, why not cluster on all the dimensions at the same time ? ( with some normalization of features of course) Details: -Tables/figures have rather cursory captions. For instance table 1 coud recall the meanings of abbreviations for all sub-corpora, especially from Ontonotes. It is also not a good idea to have ontonotes as a whole *and* all the subcorpora without making it clear. -section 3.1, the paper mentions the use of a sentence splitter from (Proisl and Uhrig, 2016) which is a German sentence splitter ? -table 2: why not give relative (to corpus size) frequency instead of absolute frequency ? this would make it easier to interpret.
[ [ 448, 552 ], [ 553, 644 ], [ 645, 719 ], [ 752, 1038 ], [ 1039, 1133 ], [ 1138, 1202 ], [ 1206, 1347 ], [ 1536, 1613 ], [ 1634, 1763 ], [ 1764, 1832 ], [ 1834, 1992 ], [ 1993, 2313 ], [ 2316, 2434 ], [ 2460, 2558 ], [ 2633, 2772 ], [ 2774, 2964 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Major_claim", "Jus_neg_1", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4", "Eval_neg_5", "Eval_neg_6", "Eval_neg_7", "Jus_neg_7" ]
74
paper_summary This paper is about determining the syntactic ability of two Dutch variants of transformer based language model BERT: BERTje and RobBERT. The authors use a Multiple Context Free Grammar (MCFG) formalism to model two patterns of Dutch syntax: control verb nesting and verb raising. These rule-based grammatical models are used to generate a test set which is limited by a bound on recursion depth and populated from a lexicon. For evaluation, each verb occurrence garners a prediction of which referential noun phrase is selected by it, and the resulting accuracy is reported. The authors show results that demonstrate drastically worse performance as recursive depth and number of noun phrases increase, and conclude that the models have not properly learned the underlying syntax of the linguistic phenomena they describe; ie discontinuous constituents/cross-serial dependencies. summary_of_strengths As someone unfamiliar with Dutch and with this area of research, I felt this paper did an excellent job of motivating their reasoning for their research and of describing the ways that Dutch syntax is different from English. Figures and examples were clear and well-done. The article was clearly and concisely written, and appears to be a valuable contribution that adds counter-evidence to claims about how much syntax BERT-based models actually “know”. Authors are careful not to exaggerate the consequences of their findings and make suggestions for how this work could be expanded with other languages or other tasks. summary_of_weaknesses I was unable to get the provided code to work. I tried both on my Macbook and on a Linux-based computing cluster. To be fair, I did not try for very long (< 15 minutes), and I also did not have access to a GPU so I tried to run it on a CPU. It’s possible that was the problem, but it wasn’t stated that that was a requirement. It seems that if I understood the instructions in the readme properly, there were a few __init__ files missing. However, even after changing those, I ran into a number of other errors. The readme was also a bit sparse, ie “Play around with the results as you see fit”. I commend the authors for including the code and data with the submission, but I would have liked to see a script included already (i.e. not just a snippet in the readme) along with a brief description of any dependencies required beyond the requirements.txt and what one might expect when running the script. Another weakness I felt, was a lack of description of previous/related work. They mention in the very beginning that "Assessing the ability of large-scale language models to automatically acquire aspects of linguistic theory has become a prominent theme in the literature ever since the inception of BERT", but didn't reference other work to provide similar counter evidence to the consensus they referenced in Rogers et al. (2020). As someone not familiar with this area of research, maybe there is not so much to cite here, but if that is the case, I feel it should be mentioned why there is no related work section. comments,_suggestions_and_typos Citation should be formatted like so: "The consensus points to BERT-like models having some capacity for syntactic understanding (Rogers et al., 2020)."
[ [ 982, 1141 ], [ 1142, 1188 ], [ 1189, 1235 ], [ 1240, 1277 ], [ 1279, 1371 ], [ 2468, 2544 ], [ 2545, 3087 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Jus_pos_4", "Eval_neg_1", "Jus_neg_1" ]
75
paper_summary This work introduces a new dataset, the Hindi Legal Documents Corpus (HLDC), a corpus with 900 thousand legal documents in Hindi. This corpus is collected from public data, and the authors intend to release (in addition to the corpus) the scripts necessary for its creation and processing, along with models and code for the experiments in the paper. The authors examine the task of predicting the verdict of bail applications (a binary task, which is to predict whether or not the application was denied or granted). A variety of models are explored for this task; while accuracy is better than the majority baseline, there is still much room for progress. The headroom in performance even for this simple task highlights the challenges in using natural language processing and machine learning systems for legal use cases. Overall, I believe the data and experiments introduced by this work would be interesting to many, and I recommend it's acceptance. summary_of_strengths 1. This work introduces a new, large-scale dataset containing legal documents in a low-resource language. This can be a valuable resource for many, and could help advance research in natural language processing for legal use cases. 2. Authors thoroughly describe the process of data collection and cleaning, and intend to open-source code for reproducing these steps. 3. Through experiments, authors demonstrate the challenges of current techniques in a simple (yet telling) task of predicting the outcome of bail applications. The authors report multiple baselines and will publicly release their code and models. 4. The authors take many steps to anonymize the dataset, removing names, gender information, titles, locations, times, etc. 5. This paper is clear and well written. summary_of_weaknesses Some minor considerations: 1. It would be informative to users if authors reported sensitivity of their experiments to hyper-parameters, along with standard deviations on their numbers. 2. The presented error analyses are anecdotal, and might not be reflective of the overall behavior of the system. It would strengthen this paper if authors further explored systematic biases in their datasets and models (e.g. how does accuracy/F1 vary by district?) comments,_suggestions_and_typos Footnote marks should come after punctuation.
[ [ 995, 1098 ], [ 1098, 1223 ], [ 1227, 1298 ], [ 1363, 1520 ], [ 1610, 1663 ], [ 1664, 1730 ], [ 1736, 1774 ], [ 1986, 2096 ], [ 2097, 2249 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Jus_pos_5", "Eval_pos_6", "Eval_neg_1", "Jus_neg_1" ]
76
paper_summary This paper proposed a confidence estimation method for neural machine translation (NMT) by jointly training the NMT model with a confidence network which learns to output a confidence score per example. The confidence score (a scalar between 0 and 1) is used to provide “hints” for the NMT model, that is interpolating the original prediction probabilities with the ground truth probability distribution. Higher confidence indicates less hints provided. The two models are trained jointly, where NMT learns the task and the confidence network learns to produce the correct confidence. Besides, the confidence is also utilized to smooth labels for preventing miscalibration. Experiments on several quality estimation tasks demonstrate the effectiveness of the proposed method in improving model performance and detecting noisy samples and out-of-domain data. summary_of_strengths 1. This paper focused on an important problem in estimating confidence for poorly calibrated NMT models. Different from previous work based on Monte Carlo dropout, the proposed method, learning confidence estimation during training, is more efficient and may be benefit for future research. 2. The paper is well-written and easy to follow. The experiments are sufficient and promising. summary_of_weaknesses 1. Since an additional confidence network has been involved in producing confidence score, how to ensure the confidence network would not be over-confident or under-confident? Would this be an endless loop if another network is needed to assess the uncertainty of the confidence network? 2. The improvement compared to other unsupervised methods is not impressive, while there is still a big gap with the strong QE model BERT-BiRNN. comments,_suggestions_and_typos N/A
[ [ 898, 1000 ], [ 1000, 1186 ], [ 1191, 1236 ], [ 1237, 1283 ], [ 1600, 1742 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_neg_1" ]
77
The paper explores the use of probabilistic models (gaussian processes) to regress on the target variable of post-editing time/rates for quality estimation of MT output. The paper is well structured with a clear introduction that highlights the problem of QE point estimates in real-world applications. I especially liked the description of the different asymmetric risk scenarios and how they entail different estimators. For readers familiar with GPs the paper spends quite some space to reflect them, but I think it is worth the effort to introduce these concepts to the reader. The GP approach and the choices for kernels and using warping are explained very clearly and are easy to follow. In general the research questions that are to be answered by this paper are interesting and well phrased. However, I do have some questions/suggestions about the Results and Discussion sections for Intrinsic Uncertainty Evaluation: -Why were post-editing rates chosen over prediction (H)TER? TER is a common value to predict in QE research and it would have been nice to justify the choice made in the paper. -Section 3.2: I don't understand the first paragraph at all: What exactly is the trend you see for fr-en & en-de that you do not see for en-es? NLL and NLPD 'drastically' decrease with warped GPs for all three datasets. -The paper indeed states that it does not want to advance state-of-the-art (given that they use only the standard 17 baseline features), but it would have been nice to show another point estimate model from existing work in the result tables, to get a sense of the overall quality of the models. -Related to this, it is hard to interpret NLL and NLPD values, so one is always tempted to look at MAE in the tables to get a sense of 'how different the predictions are'. Since the whole point of the paper is to say that this is not the right thing to do, it would be great provide some notion of what is a drastic reduce in NLL/NLPD worth: A qualitative analysis with actual examples. Section 4 is very nicely written and explains results very intuitively! Overall, I like the paper since it points out the problematic use of point estimates in QE. A difficult task in general where additional information such as confidence arguably are very important. The submission does not advance state-of-the-art and does not provide a lot of novelty in terms of modeling (since GPs have been used before), but its research questions and goals are clearly stated and nicely executed. Minor problems: -Section 4: "over and underestimates" -> "over- and underestimates" -Figure 1 caption: Lines are actually blue and green, not blue and red as stated in the caption. -If a certain toolkit was used for GP modeling, it would be great to refer to this in the final paper.
[ [ 171, 225 ], [ 226, 303 ], [ 304, 423 ], [ 585, 697 ], [ 698, 803 ], [ 1108, 1166 ], [ 1168, 1326 ], [ 1641, 1792 ], [ 1795, 2009 ], [ 2010, 2081 ], [ 2082, 2173 ], [ 2279, 2386 ], [ 2387, 2420 ], [ 2426, 2498 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_neg_1", "Jus_neg_1", "Eval_pos_5", "Jus_pos_5", "Eval_pos_6", "Major_claim", "Eval_neg_2", "Jus_neg_2", "Eval_pos_7" ]
78
This is a highly satisfying paper. It is a report of various NLP efforts for several Indigenous languages of Canada It goes deeply enough into the technical details of the projects to show that the efforts are viable and successful, without getting bogged down in numbers or linguistic details that are unimportant to people external to the projects. Where the paper does get technical is in a discussion of the differing difficulties of speech recognition for different languages, providing a useful case study to demonstrate that one-size technology approaches are not necessarily universal stand-alone solutions. The paper understates two points that could be further investigated. 1 "Rule based approaches may seem outdated in contrast to statistical or neural methods. However, with most Indigenous languages, existing corpora are not large enough to produce accurate statistical models." Why apologize for using a better approach? Rules may be "outdated" because they are inefficient for certain languages with reams of available data and scads of phenomena that don't fit. For polysynthetic languages, though, one could posit that a fairly small set of rules might be highly predictive - humans invoke algorithms to construct patterned speech that would otherwise be incomprehensible for the listener to deconstruct, and those same algorithms can be encoded for use by machines. At the least, it would be worth proposing that the languages in this study can offer a test of rule-based vs. inference-based processes, and propose performing such comparisons when the data for the study languages is sufficiently mature. 2. This paper shows remarkable achievement for minority languages as a result of a $6 million grant. This is a crucial scientific finding: money works! Important research can make great strides regarding languages that are usually neglected, if and only if funding is available for people to take the time to do the work. The billions that have been pumped into languages like English have in fact resulted in technologies that can be applied at much lower cost to languages like Kanyen’kéha, but there are still costs. The paper could make more of an advocacy point for what relatively modest funding could do for languages in places where leaders have not yet had the same impetuses as witnessed in Canada, including India and Africa where "minority" language is often a misnomer. The paper nicely shows what can be done for languages well outside of the research mainstream, particularly in collaboration between the researchers and the communities. Without a doubt, this paper should be part of the program.
[ [ 0, 34 ], [ 116, 350 ], [ 1629, 1726 ], [ 1727, 1777 ], [ 1778, 2408 ], [ 2409, 2578 ], [ 2579, 2638 ] ]
[ "Major_claim", "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_pos_3", "Eval_pos_4", "Major_claim" ]
79
The aim of this paper is to show that distributional information stored in word vector models contain information about POS labels. They use a version of the BNC annotated with UD POS and in which words have been replaced by lemmas. They train word embeddings on this corpus, then use the resulting vectors to train a logistic classifier to predict the word POS. Evaluations are performed on the same corpus (using cross-validation) as well as on other corpora. Results are clearly presented and discussed and analyzed at length. The paper is clear and well-written. The main issue with this paper is that it does not contain anything new in terms of NLP or ML. It describe a set of straightforward experiments without any new NLP or ML ideas or methods. Results are interesting indeed, in so far that they provide an empirical grounding to the notion of POS. In that regard, it is certainly worth being published in a (quantitative/emprirical) linguistic venue. On another note, the literature on POS tagging and POS induction using word embeddings should be cited more extensively (cf. for instance Lin, Ammar, Duer and Levin 2015; Ling et al. 2015 [EMNLP]; Plank, Søgaard and Goldberg 2016...).
[ [ 530, 566 ], [ 567, 661 ], [ 662, 754 ], [ 755, 785 ], [ 787, 859 ], [ 860, 962 ], [ 980, 1082 ], [ 1083, 1197 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Jus_neg_1", "Eval_pos_2", "Jus_pos_2", "Major_claim", "Eval_neg_2", "Jus_neg_2" ]
80
- Strengths: This paper reports on an interesting project to enable people to design their own language for interacting with a computer program, in place of using a programming language. The specific construction that the authors focus on is the ability for people to make definitions. Very nicely, they can make recursive definitions to arrive at a very general way of giving a command. The example showing how the user could generate definitions to create a palm tree was motivating. The approach using learning of grammars to capture new cases seems like a good one. - Weaknesses: This seems to be an extension of the ACL 2016 paper on a similar topic. It would be helpful to be more explicit about what is new in this paper over the old one. There was not much comparison with previous work: no related work section. The features for learning are interesting but it's not always clear how they would come into play. For example, it would be good to see an example of how the social features influenced the outcome. I did not otherwise see how people work together to create a language. - General Discussion:
[ [ 286, 387 ], [ 388, 485 ], [ 486, 569 ], [ 585, 656 ], [ 657, 746 ], [ 748, 796 ], [ 798, 821 ], [ 824, 922 ], [ 923, 1092 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3" ]
81
paper_summary This work proposes to explicitly model sentence-level representations of both the source and target side of unsupervised machine translation. The authors utilize normalizing flows to model the sentence representations in a flexible space as transformed from a (shared between languages) simple base distribution. At translation time the invertibility of normalizing flows can be used to map between sentence representations in different languages. In experiments the authors test the methods' viability on many language pairs and show competitive performance across the board. summary_of_strengths - The proposed method seems sound and novel. - The authors run extensive experiments on unsupervised machine translation and show moderate improvements across the board. Applying the method on top of XLM seems to result in good improvements over existing techniques, except for MASS. - The paper is mostly well-written except for one crucial point mentioned below in the weaknesses. summary_of_weaknesses - The unsupervised translation tasks are all quite superficial, taking existing datasets of similar languages (e.g. En-De Multi30k, En-Fr WMT) and editing them to an unsupervised MT corpus. - Improvements on Multi30k are quite small (< 1 BLEU) and reported over single runs and measuring BLEU scores alone. It would be good to report averages over multiple runs and report some more modern metrics as well like COMET or BLEURT. - It is initially quite unclear from the writing where the sentence-level representations come from. As they are explicitly modeled, they need supervision from somewhere. The constant comparison to latent variable models and calling these sentence representations latent codes does not add to the clarity of the paper. I hope this will be improved in a revision of the paper. comments,_suggestions_and_typos Some typos: -001: "The latent variables" -> "Latent variables" -154: "efficiently to compute" -> "efficient to compute" -299: "We denote the encoder and decoder for encoding and generating source-language sentences as the source encoder and decoder" - unclear -403: "langauge" -> "language"
[ [ 615, 657 ], [ 660, 782 ], [ 783, 896 ], [ 899, 996 ], [ 1021, 1081 ], [ 1083, 1208 ], [ 1211, 1251 ], [ 1252, 1262 ], [ 1449, 1547 ], [ 1548, 1823 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3" ]
82
paper_summary The authors present an approach for knowledge enhanced counseling reflection generation. It uses dialogue context as well as commonsense and domain knowledge for generating responses in counseling conversations. Two methods for knowledge integration are proposed: a retrieval-based method and a generative method. Experimental results show that both methods for knowledge incorporation improve the system's performance. CONTRIBUTIONS: (1) The authors propose a pipeline that collects domain knowledge (medical) through web mining and apply it to build up a counseling knowledge base. (2) The authors use the domain knowledge they collected along with commonsense knowledge bases for the task of reflection generation. (3) The authors analyze different types of commonsense and domain knowledge, as well as their effect on the generation task. summary_of_strengths - Overall, the paper is clear in its objectives and methodology followed. The work is well structured, easy to read and follow. -The authors show empirical success of their approach. -The overall story is convincing. The proposed approach is tested with reasonable models and appropriate experiments. The experimental results are promising, demonstrating the effectiveness of the proposed method. Thus, the paper makes valuable contributions to the field. -The approach is well motivated and addresses a problem that is relevant to the community. summary_of_weaknesses - Lack of illustrative examples regarding the model outputs. -Some details regarding the knowledge collection process have been omitted (see "Questions" below). comments,_suggestions_and_typos QUESTIONS: -Fig. 2: Why did you discard the "anatomy" category? -l. 221: How many query templates did you specify in total? -l. 227: What's the size of the set of knowledge candidates? -l. 550: Did you calculate the agreement between the annotators? Were the annotators authors of the paper? MINOR: -Try to better align the figures with the text. -fix punctuation: l. 336, l. 433, l. 445, l. 534 -Table 2: The highlighting of the numbers does not correspond to the caption ("highest scores are in bold, second highest scores in italic")
[ [ 883, 954 ], [ 955, 1008 ], [ 1010, 1063 ], [ 1065, 1097 ], [ 1098, 1181 ], [ 1182, 1277 ], [ 1284, 1336 ], [ 1338, 1428 ], [ 1453, 1511 ], [ 1513, 1612 ], [ 1645, 1936 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_pos_6", "Eval_pos_7", "Eval_pos_8", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2" ]
84
paper_summary This paper presents an interesting finding, i.e., fine-tuning only the bias terms of pre-trained language models is competitive with fine-tuning the entire model. The authors compared the proposed method Bias-terms Fine-tuning (BitFit) with other parameter-efficient fine-tuning methods (e.g., Adapters, Diff-Pruning). The experimental results on GLUE benchmark show that BitFit can achieve strong performance with less trainable parameters. summary_of_strengths - The paper is well written and easy to understand. -The proposed method (BitFit) is neat and novel. -The authors show strong empirical results on GLUE benchmark. summary_of_weaknesses I do not have any concerns about this paper. comments,_suggestions_and_typos It would be helpful to compare BitFit with Adapter and Diff-Pruning base on other language models (e.g.,RoBERTa, T5). But current version is good enough for a short paper.
[ [ 480, 529 ], [ 531, 578 ], [ 580, 641 ], [ 664, 709 ], [ 864, 914 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Major_claim", "Major_claim" ]
85
paper_summary This paper proposes a novel method to explore the search space of neural text generation models. The proposed method includes two key components of a modified best-first search and a path recombination mechanism. The authors conduct experiments on text summarization and machine translation tasks. The experiment results show that the proposed method generates massive-scale candidate sentences and obtain comparable or even better metric scores. summary_of_strengths - The description of the proposed approach is clear and easy to follow. - The paper presents a well-rounded set of experiments on text summarization and machine translation. - The authors provide a lot of details in the appendix, which helps the reproducibility. summary_of_weaknesses - Although BFS is briefly introduced in Section 3, it's still uneasy to understand for people who have not studied the problem. More explanation is preferable. comments,_suggestions_and_typos - Algorithm 1, line 11: the function s(·) should accept a single argument according to line 198. - Figure 6: the font size is a little bit small.
[ [ 485, 554 ], [ 557, 656 ], [ 659, 746 ], [ 771, 896 ], [ 897, 929 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Jus_neg_1", "Eval_neg_1" ]
86
The paper proposes a convolutional neural network approach to model the coherence of texts. The model is based on the well-known entity grid representation for coherence, but puts a CNN on top of it. The approach is well motivated and described, I especially appreciate the clear discussion of the intuitions behind certain design decisions (e.g. why CNN and the section titled 'Why it works'). There is an extensive evaluation on several tasks, which shows that the proposed approach beats previous methods. It is however strange that one previous result could not be reproduced: the results on Li/Hovy (2014) suggest an implementation or modelling error that should be addressed. Still, the model is a relatively simple 'neuralization' of the entity grid model. I didn't understand why 100-dimensional vectors are necessary to represent a four-dimensional grid entry (or a few more in the case of the extended grid). How does this help? I can see that optimizing directly for coherence ranking would help learn a better model, but the difference of transition chains for up to k=3 sentences vs. k=6 might not make such a big difference, especially since many WSJ articles may be very short. The writing seemed a bit lengthy, the paper repeats certain parts in several places, for example the introduction to entity grids. In particular, section 2 also presents related work, thus the first 2/3 of section 6 are a repetition and should be deleted (or worked into section 2 where necessary). The rest of section 6 should probably be added in section 2 under a subsection (then rename section 2 as related work). Overall this seems like a solid implementation of applying a neural network model to entity-grid-based coherence. But considering the proposed consolidation of the previous work, I would expect a bit more from a full paper, such as innovations in the representations (other features?) or tasks. minor points: - this paper may benefit from proof-reading by a native speaker: there are articles missing in many places, e.g. '_the_ WSJ corpus' (2x), '_the_ Brown ... toolkit' (2x), etc. - p.1 bottom left column: 'Figure 2' -> 'Figure 1' - p.1 Firstly/Secondly -> First, Second - p.1 'limits the model to' -> 'prevents the model from considering ...' ? - Consider removing the 'standard' final paragraph in section 1, since it is not necessary to follow such a short paper.
[ [ 201, 245 ], [ 247, 394 ], [ 396, 509 ], [ 765, 918 ], [ 920, 1193 ], [ 1194, 1277 ], [ 1279, 1612 ], [ 1613, 1727 ], [ 1792, 1835 ], [ 1924, 1985 ], [ 1987, 2096 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_pos_3", "Major_claim", "Eval_neg_3", "Jus_neg_3" ]

This is a slightly reformatted (split spans and labels) version of the SubstanReview dataset, the original can be found at https://github.com/YanzhuGuo/SubstanReview.

Downloads last month
0
Edit dataset card