id
int64
0
549
review
stringlengths
314
12.7k
spans
sequence
labels
sequence
503
- Strengths: A new encoder-decoder model is proposed that explicitly takes into account monotonicity. - Weaknesses: Maybe the model is just an ordinary BiRNN with alignments de-coupled. Only evaluated on morphology, no other monotone Seq2Seq tasks. - General Discussion: The authors propose a novel encoder-decoder neural network architecture with "hard monotonic attention". They evaluate it on three morphology datasets. This paper is a tough one. One the one hand it is well-written, mostly very clear and also presents a novel idea, namely including monotonicity in morphology tasks. The reason for including such monotonicity is pretty obvious: Unlike machine translation, many seq2seq tasks are monotone, and therefore general encoder-decoder models should not be used in the first place. That they still perform reasonably well should be considered a strong argument for neural techniques, in general. The idea of this paper is now to explicity enforce a monotonic output character generation. They do this by decoupling alignment and transduction and first aligning input-output sequences monotonically and then training to generate outputs in agreement with the monotone alignments. However, the authors are unclear on this point. I have a few questions: 1) How do your alignments look like? On the one hand, the alignments seem to be of the kind 1-to-many (as in the running example, Fig.1), that is, 1 input character can be aligned with zero, 1, or several output characters. However, this seems to contrast with the description given in lines 311-312 where the authors speak of several input characters aligned to 1 output character. That is, do you use 1-to-many, many-to-1 or many-to-many alignments? 2) Actually, there is a quite simple approach to monotone Seq2Seq. In a first stage, align input and output characters monotonically with a 1-to-many constraint (one can use any monotone aligner, such as the toolkit of Jiampojamarn and Kondrak). Then one trains a standard sequence tagger(!) to predict exactly these 1-to-many alignments. For example, flog->fliege (your example on l.613): First align as in "f-l-o-g / f-l-ie-ge". Now use any tagger (could use an LSTM, if you like) to predict "f-l-ie-ge" (sequence of length 4) from "f-l-o-g" (sequence of length 4). Such an approach may have been suggested in multiple papers, one reference could be [*, Section 4.2] below. My two questions here are: 2a) How does your approach differ from this rather simple idea? 2b) Why did you not include it as a baseline? Further issues: 3) It's really a pitty that you only tested on morphology, because there are many other interesting monotonic seq2seq tasks, and you could have shown your system's superiority by evaluating on these, given that you explicitly model monotonicity (cf. also [*]). 4) You perform "on par or better" (l.791). There seems to be a general cognitive bias among NLP researchers to map instances where they perform worse to "on par" and all the rest to "better". I think this wording should be corrected, but otherwise I'm fine with the experimental results. 5) You say little about your linguistic features: From Fig. 1, I infer that they include POS, etc. 5a) Where did you take these features from? 5b) Is it possible that these are responsible for your better performance in some cases, rather than the monotonicity constraints? Minor points: 6) Equation (3): please re-write $NN$ as $\text{NN}$ or similar 7) l.231 "Where" should be lower case 8) l.237 and many more: $x_1\ldots x_n$. As far as I know, the math community recommends to write $x_1,\ldots,x_n$ but $x_1\cdots x_n$. That is, dots should be on the same level as surrounding symbols. 9) Figure 1: is it really necessary to use cyrillic font? I can't even address your example here, because I don't have your fonts. 10) l.437: should be "these" [*] @InProceedings{schnober-EtAl:2016:COLING, author = {Schnober, Carsten and Eger, Steffen and Do Dinh, Erik-L\^{a}n and Gurevych, Iryna}, title = {Still not there? Comparing Traditional Sequence-to-Sequence Models to Encoder-Decoder Neural Networks on Monotone String Translation Tasks}, booktitle = {Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers}, month = {December}, year = {2016}, address = {Osaka, Japan}, publisher = {The COLING 2016 Organizing Committee}, pages = {1703--1714}, url = {http://aclweb.org/anthology/C16-1160} } AFTER AUTHOR RESPONSE Thanks for the clarifications. I think your alignments got mixed up in the response somehow (maybe a coding issue), but I think you're aligning 1-0, 0-1, 1-1, and later make many-to-many alignments from these. I know that you compare to Nicolai, Cherry and Kondrak (2015) but my question would have rather been: why not use 1-x (x in 0,1,2) alignments as in Schnober et al. and then train a neural tagger on these (e.g. BiLSTM). I wonder how much your results would have differed from such a rather simple baseline. ( A tagger is a monotone model to start with and given the monotone alignments, everything stays monotone. In contrast, you start out with a more general model and then put hard monotonicity constraints on this ...) NOTES FROM AC Also quite relevant is Cohn et al. (2016), http://www.aclweb.org/anthology/N16-1102 . Isn't your architecture also related to methods like the Stack LSTM, which similarly predicts a sequence of actions that modify or annotate an input? Do you think you lose anything by using a greedy alignment, in contrast to Rastogi et al. (2016), which also has hard monotonic attention but sums over all alignments?
[ [ 188, 250 ], [ 425, 451 ], [ 452, 589 ], [ 592, 1195 ], [ 1196, 1243 ], [ 1244, 2534 ], [ 2554, 2608 ], [ 2610, 2811 ], [ 3103, 3148 ], [ 3150, 3243 ] ]
[ "Eval_neg_1", "Eval_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4" ]
504
Update after author response: 1. My major concern about the optimization of model's hyperparameter (which are numerous) has not been addressed. This is very important, considering that you report results from folded cross-validation. 2. The explanation that benefits of their method are experimentally confirmed with 2% difference -- while evaluating via 5-fold CV on 200 examples -- is quite unconvincing. ======================================================================== Summary: In this paper authors present a complex neural model for detecting factuality of event mentions in text. The authors combine the following in their complex model: (1) a set of traditional classifiers for detecting event mentions, factuality sources, and source introducing predicates (SIPs), (2) A bidirectional attention-based LSTM model that learns latent representations for elements on different dependency paths used as input, (2) A CNN that uses representations from the LSTM and performs two output predictions (one to detect specific from underspecified cases and another to predict the actual factuality class). From the methodological point of view, the authors are combining a reasonably familiar methods (att-BiLSTM and CNN) into a fairly complex model. However, this model does not take raw text (sequence of word embeddings) as input, but rather hand-crafted features (e.g., different dependency paths combining factuality concepts, e.g., sources, SIPs, and clues). The usage of hand-crafted features is somewhat surprising if coupled with complex deep model. The evaluation seems a bit tainted as the authors report the results from folded cross-validation but do not report how they optimized the hyperparameters of the model. Finally, the results are not too convincing -- considering the complexity of the model and the amount of preprocessing required (extraction of event mentions, SIPs, and clues), a 2% macro-average gain over the rule-based baseline and overall 44% performance seems modest, at best (looking at Micro-average, the proposed model doesn't outperform simple MaxEnt classifier). The paper is generally well-written and fairly easy to understand. Altogether, I find this paper to be informative to an extent, but in it's current form not a great read for a top-tier conference. Remarks: 1. You keep mentioning that the LSTM and CNN in your model are combined "properly" -- what does that actually mean? How does this "properness" manifest? What would be the improper way to combine the models? 2. I find the motivation/justification for the two output design rather weak: - the first argument that it allows for later addition of cues (i.e manually-designed features) kind of beats the "learning representations" advantage of using deep models. - the second argument about this design tackling the imbalance in the training set is kind of hand-wavy as there is no experimental support for this claim. 3. You first motivate the usage of your complex DL architecture with learning latent representations and avoiding manual design and feature computation. And then you define a set of manually designed features (several dependency paths and lexical features) as input for the model. Do you notice the discrepancy? 4. The LSTMs (bidirectional, and also with attention) have by now already become a standard model for various NLP tasks. Thus I find the detailed description of the attention-based bidirectional LSTM unnecessary. 5. What you present as a baseline in Section 3 is also part of your model (as it generates input to your model). Thus, I think that calling it a baseline undermines the understandability of the paper. 6. The results reported originate from a 5-fold CV. However, the model contains numerous hyperparameters that need to be optimized (e.g., number of filters and filter sizes for CNNs). How do you optimize these values? Reporting results from a folded cross-validation doesn't allow for a fair optimization of the hypeparameters: either you're not optimizing the model's hyperparameters at all, or you're optimizing their values on the test set (which is unfair). 7. " Notice that some values are non-application (NA) grammatically, e.g., PRu, PSu, U+/-" -- why is underspecification in ony one dimension (polarity or certainty) not an option? I can easily think of a case where it is clear the event is negative, but it is not specified whether the absence of an event is certain, probable, or possible. Language & style: 1. " to a great degree" -> "great degree" is an unusual construct, use either "great extent" or "large degree" 2. " events that can not" -> "cannot" or "do not" 3. " describes out networks...in details shown in Figure 3." - > "...shown in Figure 3 in details."
[ [ 1591, 1625 ], [ 1626, 1759 ], [ 1760, 1803 ], [ 1807, 2131 ], [ 2132, 2198 ], [ 2199, 2330 ], [ 2552, 2626 ], [ 2631, 2969 ], [ 3288, 3405 ], [ 3406, 3498 ], [ 3502, 3611 ], [ 3612, 3699 ] ]
[ "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_pos_1", "Major_claim", "Eval_neg_3", "Jus_neg_3", "Jus_neg_4", "Eval_neg_4", "Jus_neg_5", "Eval_neg_5" ]
505
- Strengths: - technique for creating dataset for evaluation of out-of-coverage items, that could possibly be used to evaluation other grammars as well. -the writing in this paper is engaging, and clear (a pleasant surprise, as compared to the typical ACL publication.) - Weaknesses: -The evaluation datasets used are small and hence results are not very convincing (particularly wrt to the alchemy45 dataset on which the best results have been obtained) -It is disappointing to see only F1 scores and coverage scores, but virtually no deeper analysis of the results. For instance, a breakdown by type of error/type of grammatical construction would be interesting. -it is still not clear to this reviewer what is the proportion of out of coverage items due to various factors (running out of resources, lack of coverage for "genuine" grammatical constructions in the long tail, lack of coverage due to extra-grammatical factors like interjections, disfluencies, lack of lexical coverage, etc. - General Discussion: This paper address the problem of "robustness" or lack of coverage for a hand-written HPSG grammar (English Resource Grammar). The paper compares several approaches for increasing coverage, and also presents two creative ways of obtaining evaluation datasets (a non-trivial issue due to the fact that gold standard evaluation data is by definition available only for in-coverage inputs). Although hand-written precision grammars have been very much out of fashion for a long time now and have been superseded by statistical treebank-based grammars, it is important to continue research on these in my opinion. The advantages of high precision and deep semantic analysis provided by these grammars has not been reproduced by non-handwritten grammars as yet. For this reason, I am giving this paper a score of 4, despite the shortcomings mentioned above.
[ [ 155, 270 ], [ 286, 324 ], [ 335, 366 ], [ 368, 454 ], [ 457, 568 ], [ 569, 666 ], [ 669, 778 ], [ 780, 996 ], [ 1778, 1874 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4", "Major_claim" ]
506
- Strengths: This paper presents a 2 x 2 x 3 x 10 array of accuracy results based on systematically changing the parameters of embeddings models: (context type, position sensitive, embedding model, task), accuracy - context type ∈ {Linear, Syntactic} -position sensitive ∈ {True, False} -embedding model ∈ {Skip Gram, BOW, GLOVE} -task ∈ {Word Similarity, Analogies, POS, NER, Chunking, 5 text classific. tasks} The aim of these experiments was to investigate the variation in performance as these parameters are changed. The goal of the study itself is interesting for the ACL community and similar papers have appeared before as workshop papers and have been well cited, such as Nayak et al.'s paper mentioned below. - Weaknesses: Since this paper essentially presents the effect of systematically changing the context types and position sensitivity, I will focus on the execution of the investigation and the analysis of the results, which I am afraid is not satisfactory. A) The lack of hyper-parameter tuning is worrisome. E.g. - 395 Unless otherwise notes, the number of word embedding dimension is set to 500. - 232 It still enlarges the context vocabulary about 5 times in practice. - 385 Most hyper-parameters are the same as Levy et al' best configuration. This is worrisome because lack of hyperparameter tuning makes it difficult to make statements like method A is better than method B. E.g. bound methods may perform better with a lower dimensionality than unbound models, since their effective context vocabulary size is larger. B) The paper sometimes presents strange explanations for its results. E.g. - 115 "Experimental results suggest that although it's hard to find any universal insight, the characteristics of different contexts on different models are concluded according to specific tasks." What does this sentence even mean? - 580 Sequence labeling tasks tend to classify words with the same syntax to the same category. The ignorance of syntax for word embeddings which are learned by bound representation becomes beneficial. These two sentences are contradictory, if a sequence labeling task classified words with "same syntax" to same category then syntx becomes a ver valuable feature. Bound representation's ignorance of syntax should cause a drop in performance just like other tasks which does not happen. C) It is not enough to merely mention Lai et. al. 2016 who have also done a systematic study of the word embeddings, and similarly the paper "Evaluating Word Embeddings Using a Representative Suite of Practical Tasks", Nayak, Angeli, Manning. appeared at the repeval workshop at ACL 2016. should have been cited. I understand that the focus of Nayak et al's paper is not exactly the same as this paper, however they provide recommendations about hyperparameter tuning and experiment design and even provide a web interface for automatically running tagging experiments using neural networks instead of the "simple linear classifiers" used in the current paper. D) The paper uses a neural BOW words classifier for the text classification tasks but a simple linear classifier for the sequence labeling tasks. What is the justification for this choice of classifiers? Why not use a simple neural classifier for the tagging tasks as well? I raise this point, since the tagging task seems to be the only task where bound representations are consistently beating the unbound representations, which makes this task the odd one out. - General Discussion: Finally, I will make one speculative suggestion to the authors regarding the analysis of the data. As I said earlier, this paper's main contribution is an analysis of the following table. (context type, position sensitive, embedding model, task, accuracy) So essentially there are 120 accuracy values that we want to explain in terms of the aspects of the model. It may be beneficial to perform factor analysis or some other pattern mining technique on this 120 sample data.
[ [ 523, 588 ], [ 734, 977 ], [ 981, 1029 ], [ 1030, 1558 ], [ 1562, 1628 ], [ 1629, 2382 ] ]
[ "Eval_pos_1", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2" ]
508
paper_summary To alleviate the problem of error propagation due to Automatic Speech Recognition (ASR) system for multi-modal sentiment analysis task the paper provides a refinement approach by detecting the positions of the sentiment words in the text and dynamically refine the word embeddings in the detected positions by incorporating multimodal clues such as low voice, sad face and textual context. Their approach outperforms the baselines. summary_of_strengths The approach does try to address the real-world problem of error propagation due to ASR. The architecture is novel. Good benchmarking and ablation. summary_of_weaknesses It would be difficult to detect the positions of sentiment words if an overall sentence (more than one word) has become noisy due to ASR. The same would be true if there are inserts and deletes in the resulting noisy sentence. This approach may fail in such cases. What is the solution for the same? Some qualitative analysis of the results and/or error analysis will be useful to understand these type of cases better. The case study section can be replaced with qualitative and /or error analysis. comments,_suggestions_and_typos The last sentence of the abstract “Furthermore, our approach can be adapted for other multimodal feature fusion models easily.” needs more explanation somewhere in the paper. This is a very vague sentence without any proper basis. There is no mention of making the built datasets publicly available. Though the dataset can be built to reproduce the results it would be better if authors can share the same publicly.
[ [ 557, 584 ], [ 641, 866 ], [ 867, 904 ], [ 905, 1141 ] ]
[ "Eval_pos_1", "Jus_neg_1", "Eval_neg_1", "Jus_neg_1" ]
509
- Strengths: The authors have nice coverage of a different range of language settings to isolate the way that relatedness and amount of morphology interact (i.e., translating between closely related morphologically rich languages vs distant ones) in affecting what the system learns about morphology. They include an illuminating analysis of what parts of the architecture end up being responsible for learning morphology, particularly in examining how the attention mechanism leads to more impoverished target side representations. Their findings are of high interest and practical usefulness for other users of NMT. - Weaknesses: They gloss over the details of their character-based encoder. There are many different ways to learn character-based representations, and omitting a discussion of how they do this leaves open questions about the generality of their findings. Also, their analysis could've been made more interesting had they chosen languages with richer and more challenging morphology such as Turkish or Finnish, accompanied by finer-grained morphology prediction and analysis. - General Discussion: This paper brings insight into what NMT models learn about morphology by training NMT systems and using the encoder or decoder representations, respectively, as input feature representations to a POS- or morphology-tagging classification task. This paper is a straightforward extension of "Does String-Based Neural MT Learn Source Syntax?," using the same methodology but this time applied to morphology. Their findings offer useful insights into what NMT systems learn.
[ [ 301, 421 ], [ 423, 533 ], [ 534, 618 ], [ 634, 696 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Eval_neg_1" ]
510
paper_summary This paper proposes a novel pyramid-BERT to achieve the sequence length reduction across different encoder layers, which benefits the memory and decoding time reduction for downstream classification and ranking tasks. This method is based on the core-set based token selection method which is justified by theoretical results. Experiments on GLUE benchmarks and Long Range Arena datasets demonstrate the effectiveness of the proposed method. summary_of_strengths - This paper designs a novel speedup method with core-set based token selection which is justified by theoretical results. -Nice experiment results on GLUE benchmarks and Long Range Arena datasets. summary_of_weaknesses - This paper lacks experiment comparisons with some very similar approaches, such as centroid transformers and representation pooling, although the authors claim that a thorough comparison is not required. -The proposed method seems not to improve the training speed of traditional BERT in the pre-training stage. It means that we only apply this method in the downstream tasks. I wonder about the effectiveness of combining this method with other speedup ways, such as DeeBERT (dynamic early exiting). comments,_suggestions_and_typos This paper is well written and easy to follow. The authors explore to speed up BERT by selecting core-set tokens and reducing the sequence length across different encoder layers. This idea is interesting, and the motivation is reasonable. My main concern is the experiment comparisons with similar approaches, such as centroid transformers and representation pooling. Although these methods do not release the code, I think this comparison could strengthen the effectiveness of the proposed method. Besides, this method seems to only fit downstream classification and ranking tasks. Thus, I wonder about the performance of combining this method with other speedup ways, such as DeeBERT (dynamic early exiting). If we adopt the way of dynamic early exiting, the sequence length reduction across different layers seems to be marginal somehow. The token selection algorithm used in the paper also reminds me of another way that combines the selective encoding method and Gumbel-Softmax for BERT. Each token predicts the probability of surviving at each layer and adopt the Gumbel-Softmax to obtain the final token that feeds to the next layer. This method may enhance the robustness of the proposed method, since it traverses more different combinations.
[ [ 480, 600 ], [ 602, 676 ], [ 701, 774 ], [ 775, 904 ], [ 906, 1012 ], [ 1013, 1202 ], [ 1235, 1281 ], [ 1414, 1438 ], [ 1444, 1472 ], [ 1475, 1544 ], [ 1546, 1602 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_neg_3", "Jus_neg_3" ]
511
- Strengths: The paper is thoroughly written and discusses its approach compared to other approaches. The authors are aware that their findings are somewhat limited regarding the mean F values. - Weaknesses: Some minor orthographical mistakes and some repetive clauses. In general the paper would benefit if the sections 1 and 2 would be shortened to allow the extension of sections 3 and 4. The main goal is not laid out clearly enough, which may be a result of the ambivalence of the paper's goals. - General Discussion: Table 1 should only be one column wide, while the figures, especially 3, 5, and 6 would greatly benefit from a two column width. The paper was not very easy to understand during first read. Major improvements could be achieved by straightening up the content.
[ [ 13, 101 ], [ 393, 501 ], [ 654, 715 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Eval_neg_2" ]
512
paper_summary The paper proposed boundary smoothing as a regularization technique for span-based neural NER models. The boundary smoothing can mitigate the over-confidence issue by reassigning the entity probabilities from annotated spans to the surrounding ones. The approach is simple, and mainly extends the Yu et al (2020) by adding the boundary smoothing. The experiments show the proposed approach can improve the performance of Yu et al (2020). summary_of_strengths The idea of the boundary smoothing is simple, and can improve the performance of the existing NER approach. The in-depth analysis for the over-confidence issue is attractive. The paper is well written. summary_of_weaknesses The idea of using boundary information is not completely innovative. For example, the idea of effectively using boundary information has been proposed in Shen et al.(2021). The experiments don’t prove the claims. Firstly, the paper lacks adequate verification for the proposed idea, and evaluated it with only one baseline. Secondly, the paper used RoBERTa while the previous methods used BERT. Thirdly, Yu et al. (2020) conducted the experiments on GENIA, but the paper doesn’t. comments,_suggestions_and_typos The paper lacks adequate verification for the idea. The paper claims that the boundary smoothing can be easily integrated into any span-based neural NER systems, but the paper only integrated it into the Yu et al. (2020). The paper should further evaluate the boundary smoothing idea in several SOTA approaches. In addition, the idea of effectively using boundary information has been proposed in Shen et al.(2021), which constructs soft examples for partially matched spans, and trains a boundary regressor with boundary-level Smooth L1 loss. In the experiments, I have some concerns: (1) The paper uses RoBERTa for the experiments, while the baselines used BERT. It may be unfair. (2) Why the Baseline outperforms Yu et al. (2020) on ACE 04 and ACE 05 greatly, but has comparable performance on OntoNotes 5 and CoNLL 2003? Yu et al. (2020) also conducted experiments on the GENIA dataset, but the paper doesn’t report the results on this dataset. (3) According to the Table 4, the improvement of BS against LS is marginal. Some statements are not very accurate. For example, Line 313, the submission says “this work is among the first to introduce a span-based approach to Chinese NER tasks and establish SOTA results.”, however, Shen et al. 2021 proposed a span-based approach previously, and conducted the NER experiment on Chinese Weibo dataset.
[ [ 474, 581 ], [ 582, 648 ], [ 649, 676 ], [ 699, 767 ], [ 768, 871 ], [ 872, 911 ], [ 912, 1179 ], [ 1212, 1263 ], [ 1264, 1523 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3" ]
513
paper_summary The paper proposes a pipeline for zero-shot data-to-text generation. The framework follows traditional data-to-text diagram. It contains 4 steps in general. 1) Template verbalization with 2) ordering module, then the 3) aggregation and 4) sentence compression. To enable training the pipeline model, the paper contributes a large dataset named Wikifluent corpus. The data-text pairs are collected from Wikipedia dump. It also applies some split-and-repharase and coreference models on corresponding texts. The framework proposed in this paper is similar to rule-based data-to-text generation. The main difference of the proposed method compared to traditional methods, each module is trained using neural networks by leveraging recent pre-trained generation models like BART. The novelty of this paper is limited. The neural pipeline method not entirely new, some of previous papers combine important modules such as planning and ordering of traditional data-to-text generation into an end-to-end neural network. [ 1,2]. The paper show improvements of the proposed method on two public datasets with COPY baseline in automatic evaluation. In the manual evaluation, the paper fails to compare with existing baseline method, leaving the evaluation incomplete. The overall paper structure is messy. A lot of detailed model descriptions are described in experiments, which is quite confusing. [1] Neural data-to-text generation: A comparison between pipeline and end-to-end architectures. Ferreira, et al., 2019. [2] Data-to-text with content content selection and planning. Puduppully, et al., 2019. summary_of_strengths 1. The paper contributes a large scale data-to-text generation dataset WikiFluent. 2. The paper proposes a neural pipeline method leveraging recent pre-trained language models. summary_of_weaknesses 1. The idea of neural pipeline method for data-to-text generation is not entirely new. The novelty is limited in this paper. 2. In the experiments, the paper only compare with a weak baseline COPY. There are a lot potential baselines are missing in the experiments. Some existing methods leveraging pre-trained language models (PLMs) should be compared as a baseline. For example, training the PLMs on WIKIFLUENT datasets and then test on the evaluation dataset is a straight forward baseline. 3. The human evaluation part is quite confusing. No baseline methods are compared (for example COPY). 4. The paper writing is really confusing. First is the paper structure. A lot of model descriptions are describe in the experiment part. And the section 5.4 is the description of ablation methods, which would be more clear by listing together with other baseline methods. comments,_suggestions_and_typos In sentence aggregation model, the details are not quite clear. Given the input sentences, the supervision is comming from the delimiter term. What is the standard of defining the delimiter term? Is it only using strict sentence string matching or semantic matching? There are some missing references in this paper. [1] Neural data-to-text generation: A comparison between pipeline and end-to-end architectures. Ferreira, et al., 2019. [2] Data-to-text with content content selection and planning. Puduppully, et al., 2019. [3] An architecture for data-to-text systems. Reiter et al., 2007. Typos Line 409 we adopt BART-base for -> we adopt BART-base model for
[ [ 794, 831 ], [ 832, 876 ], [ 877, 1039 ], [ 1277, 1314 ], [ 1315, 1407 ], [ 1844, 1927 ], [ 1928, 1966 ], [ 1970, 2039 ], [ 2040, 2336 ], [ 2340, 2385 ], [ 2386, 2439 ], [ 2443, 2481 ], [ 2482, 2712 ], [ 2745, 2808 ], [ 2809, 3011 ], [ 3013, 3062 ], [ 3063, 3411 ] ]
[ "Eval_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Eval_neg_5", "Eval_neg_6", "Jus_neg_6", "Eval_neg_7", "Jus_neg_7", "Eval_neg_8", "Jus_neg_8", "Eval_neg_9", "Jus_neg_9", "Eval_neg_10", "Jus_neg_10" ]
514
This paper proposes two dictionary-based methods for estimating multilingual word embeddings, one motivated in clustering (MultiCluster) and another in canonical correlation analysis (MultiCCA). In addition, a supersense similarity measure is proposed that improves on QVEC by substituting its correlation component with CCA, and by taking into account multilingual evaluation. The evaluation is performed on a wide range of tasks using the web portal developed by the authors; it is shown that in some cases the proposed representation methods outperform two other baselines. I think the paper is very well written, and represents a substantial amount of work done. The presented representation-learning and evaluation methods are certainly timely. I also applaud the authors for the meticulous documentation. My general feel about this paper, however, is that it goes (perhaps) in too much breadth at the expense of some depth. I'd prefer to see a thorougher discussion of results (e.g. regarding the conflicting outcome for MultiCluster between 59- and 12-language set-up; regarding the effect of estimation parameters and decisions in MultiCluster/CCA). So, while I think the paper is of high practical value to me and the research community (improved QVEC measure, web portal), I frankly haven't learned that much from reading it, i.e. in terms of research questions addressed and answered. Below are some more concrete remarks. It would make sense to include the correlation results (Table 1) for monolingual QVEC and QVEC-CCA as well. After all, it is stated in l.326--328 that the proposed QVEC-CCA is an improvement over QVEC. Minor: l. 304: "a combination of several cross-lingual word similarity datasets" -> this sounds as though they are of different nature, whereas they are really of the same kind, just different languages, right? p. 3: two equations exceed the column margin Lines 121 and 147 only mention Coulmance et al and Guo et al when referring to the MultiSkip baseline, but section 2.3 then only mentions Luong et al. So, what's the correspondence between these works? While I think the paper does reasonable justice in citing the related works, there are more that are relevant and could be included: Multilingual embeddings and clustering: Chandar A P, S., Lauly, S., Larochelle, H., Khapra, M. M., Ravindran, B., Raykar, V. C., and Saha, A. (2014). An autoencoder approach to learning bilingual word representations. In NIPS. Hill, F., Cho, K., Jean, S., Devin, C., and Bengio, Y. (2014). Embedding word similarity with neural machine translation. arXiv preprint arXiv:1412.6448. Lu, A., Wang, W., Bansal, M., Gimpel, K., & Livescu, K. (2015). Deep multilingual correlation for improved word embeddings. In NAACL. Faruqui, M., & Dyer, C. (2013). An Information Theoretic Approach to Bilingual Word Clustering. In ACL. Multilingual training of embeddings for the sake of better source-language embeddings: Suster, S., Titov, I., and van Noord, G. (2016). Bilingual learning of multi-sense embeddings with discrete autoencoders. In NAACL-HLT. Guo, J., Che, W., Wang, H., and Liu, T. (2014). Learning sense-specific word embeddings by exploiting bilingual resources. In COLING. More broadly, translational context has been explored e.g. in Diab, M., & Resnik, P. (2002). An unsupervised method for word sense tagging using parallel corpora. In ACL.
[ [ 580, 669 ], [ 670, 752 ], [ 753, 813 ], [ 814, 932 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Major_claim" ]
516
paper_summary This paper takes advantage of data (utterances, annotations) that is produced as part of language documentation to use for automatic linguistic segmentation, mainly at the word level. The authors worked with two languages that are still being documented Mboshi and Japhug. Both languages have data available but at different levels of granularity. They used Bayesian non-parametric approaches with different variations of the n-gram models and data used in addition to an incremental data training approach. The evaluation is reported in terms of F1 on three different segmentation levels (morphological boundary level, token level, and type-level). The discussion points out that the weakly supervised performs better than fully supervised and of course better than unsupervised. On the other hand, morpheme-based segmentation benefits more from underserviced training. summary_of_strengths - The paper is well written and easy to follow. -The languages that were targeted are still being documented and they resemble a real low-resource setting. -The core approach is easy to follow and replicate. It lends itself to being easily explainable in terms of behavior and performance. -This effort can be helpful as an enabling technology for the language documentation process. summary_of_weaknesses There are no major weaknesses in this version of the paper. The weaknesses mentioned in the previous review have been addressed in the current version for the most part. comments,_suggestions_and_typos I appreciate the authors taking the time to improve upon the paper. I believe this work would definitely facilitate the documentation process whether in progress or as a post-processing step. It would be really useful for a future version of this work to include a use case study on using this process and reporting how useful it was. This would definitely increase the value of this work among both NLP and language documentation communities.
[ [ 909, 954 ], [ 1064, 1114 ], [ 1115, 1196 ], [ 1198, 1291 ], [ 1314, 1374 ], [ 1587, 1962 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_pos_3", "Major_claim", "Major_claim" ]
518
In this paper the authors present a method for training a zero-resource NMT system by using training data from a pivot language. Unlike other approaches (mostly inspired in SMT), the author’s approach doesn’t do two-step decoding. Instead, they use a teacher/student framework, where the teacher network is trained using the pivot-target language pairs, and the student network is trained using the source-pivot data and the teacher network predictions of the target language. - Strengths: The results the authors present, show that their idea is promising. Also, the authors present several sets of results that validate their assumptions. - Weaknesses: However, there are many points that need to be address before this paper is ready for publication. 1) Crucial information is missing Can you flesh out more clearly how training and decoding happen in your training framework? I found out that the equations do not completely describe the approach. It might be useful to use a couple of examples to make your approach clearer. Also, how is the montecarlo sampling done? 2) Organization The paper is not very well organized. For example, results are broken into several subsections, while they’d better be presented together.  The organization of the tables is very confusing. Table 7 is referred before table 6. This made it difficult to read the results. 3) Inconclusive results: After reading the results section, it’s difficult to draw conclusions when, as the authors point out in their comparisons, this can be explained by the total size of the corpus involved in their methods (621  ). 4) Not so useful information: While I appreciate the fleshing out of the assumptions, I find that dedicating a whole section of the paper plus experimental results is a lot of space. - General Discussion: Other: 578:  We observe that word-level models tend to have lower valid loss compared with sentence- level methods…. Is it valid to compare the loss from two different loss functions? Sec 3.2, the notations are not clear. What does script(Y) means? How do we get p(y|x)? this is never explained Eq 7 deserves some explanation, or better removed. 320: What approach did you use? You should talk about that here 392 : Do you mean 2016? Nitty-gritty: 742  : import => important 772  : inline citation style 778: can significantly outperform 275: Assumption 2 needs to be rewritten … a target sentence y from x should be close to that from its counterpart z.
[ [ 490, 557 ], [ 655, 753 ], [ 768, 798 ], [ 799, 1040 ], [ 1112, 1149 ], [ 1150, 1251 ], [ 1252, 1301 ], [ 1302, 1381 ], [ 1645, 1670 ], [ 1672, 1824 ] ]
[ "Eval_pos_1", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4" ]
519
- Strengths: The paper addresses a long standing problem concerning automatic evaluation of the output of generation/translation systems. The analysis of all the available metrics is thorough and comprehensive. The authors demonstrate a new metric with a higher correlation with human judgements The bibliography will help new entrants into the field. - Weaknesses: The paper is written as a numerical analysis paper, with very little insights to linguistic issues in generation, the method of generation, the differences in the output from a different systems and human generated reference. It is unclear if the crowd source generated references serve well in the context of an application that needs language generation. - General Discussion: Overall, the paper could use some linguistic examples (and a description of the different systems) at the risk of dropping a few tables to help the reader with intuitions.
[ [ 138, 210 ], [ 592, 722 ] ]
[ "Eval_pos_1", "Eval_neg_1" ]
520
paper_summary This paper proposes a smoothing regularization technique for NER. The technique proposed targets boundary smoothing where the authors claim that inconsistent boundaries are often seen as a problem in annotated NLP NER datasets. The results after applying the method shows less over-confidence, better model calibration, flatter neural minima and more smoothed loss landscapes which plausibly explain performance improvement rather than directly/only addressing the inconsistent boundary span labeling problem in NER. The result attributes are empirically verified. summary_of_strengths 1. After 95% performance ranges, bringing improvements to any problem systematically is certainly a challenging dilemma. In this regard, the work proposed in this paper is already notable. 2. The problem of inconsistent span annotations identified in this paper indeed can also be intuited as an NLP annotation problem hence is a valid one that needs to be addressed. The solution to regularize this problem with smoothing technique for span-based NER method is sound and well justified throughout the paper. The reader would benefit from reading the theory and methods in this paper. 3. The code is publicly released. 4. The empirical analysis is well performed. I especially appreciate section 5.2 which directly tries to shed light on the question that formed the problem premise of this work and I quote "How does boundary smoothing improve the model performance?" It offers a slight indication of a negative result as in not being able to quantitatively verify it owing to irregularities in distribution between synthesized boundary noise and actual noise in the datasets. Nevertheless, it shows another promising result with smoothed loss landscapes indicating sound machine learning settings. 5. The fact that boundary smoothing addresses the over-confidence issue of target span predictions is quite meaningful, hence again a plus for this paper. summary_of_weaknesses Not quite a weakness of this work, just a question I am wondering about. Considering that the boundary smoothing regularization distributes itself around the entity span, is it that the actual problem of whether such a function addresses the boundary irregularity problem in the NLP datasets can never be verified? It is indeed understood from the method explanation in the paper that it can to some extent. comments,_suggestions_and_typos Table 5. Note the caption. There are no results for LS in the table.
[ [ 604, 721 ], [ 722, 789 ], [ 969, 1109 ], [ 1110, 1185 ], [ 1223, 1264 ], [ 1804, 1956 ] ]
[ "Jus_pos_1", "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5" ]
521
paper_summary The paper proposes to trace an agenda for cross-cultural NLP, first by identifying four axes of variation for cultures (linguistic form and style, common ground, aboutness, objectives and values) and describing the challenges related to them; then by highlighting possible strategies for mitigating cultural biases in modern NLP in the areas of data collection, model training and translation. I am not an expert on the issues under discussion in this paper, so it is not easy for me to provide a balanced evaluation. I enjoyed reading it, especially for the attempt of brining awareness on cultural issues, which are often neglected in current NLP practices. I think the authors did a very extensive and comprehensive work on the bibliography, which could be useful for correcting biases. On the other hand, I feel that is often difficult to disentangle between linguistic and cultural dimension, and in many cases the problems/solutions are overlapping with the typical low-resource language scenarios (e.g. where a particular language/dialect/sociolect can be associated with a specific cultural group), and in such cases the general heuristic could be described as "the more cultural-specific data we can acquire, the better"; while in other cases it is difficult to propose any conclusive solution (e.g. see the case for model training in Section 6.2, where methods balancing for bias would require access to demographic attributes, but the problem would persist because one cannot really be sure that such attributes adequately reflect culture). I guess this is a general problematic when one has to deal with something like culture, which is notoriously difficult to define. At the end of my reading, I still feel that I did not get what are, specifically, the directions indicated by the authors (this might well be a limitation of mine) ... summary_of_strengths The topic is highly relevant for the special theme of this year, and the issue of cross-cultural NLP is a new and interesting perspective. The survey takes into account a wide body of references that might be useful for NLP researchers. summary_of_weaknesses The difficulty in defining culture and in disentangling between linguistic and cultural dimension make it also difficult, in many of the cases exemplified by the authors, to trace a clear roadmap for a cross-cultural NLP. comments,_suggestions_and_typos l. 141 varies --> vary l. 466 datasts --> datasets
[ [ 532, 552 ], [ 554, 673 ], [ 674, 803 ], [ 1958, 2028 ], [ 2029, 2127 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4" ]
522
paper_summary This paper contributes two experiments to investigate how annotator’s backgrounds and sociopolitical attitudes affect their perception of toxic language. The paper finds that having certain characteristics is correlated with perceptions towards racism, for example, believing in freedom of speech, predisposes one to be more lax in annotating anti-Black toxicity. The paper also finds that the widely used toxicity detection tool, Perspective API, mimics the conservative attitudinal profile when it comes anti-Black toxicity. summary_of_strengths - Important and valuable work -Going beyond demographic characteristics of annotators, but also including sociopolitical attitudes -Carefully thought-out experiments -Detailed description of some metrics and results -Well-described and contextualized implications of disregarding annotator background and beliefs when creating summary_of_weaknesses - Underdefined and conflation of concepts -Several important details missing -Lack of clarity in how datasets were curated prevents one from assessing their validity -Too many results which are not fully justified or explained comments,_suggestions_and_typos This is a very important, interesting, and valuable paper with many positives. First and foremost, annotators’ backgrounds are an important factor and should be taken into consideration when designing datasets for hate speech, toxicity, or related phenomena. The paper not only accounts for demographic variables as done in previous work but other attitudinal covariates like attitude towards free speech that are well-chosen. The paper presents two well-thought out experiments and presents results in a clear manner which contain several important findings. It is precisely because of the great potential and impact of this paper, I think the current manuscript requires more consideration and fine-tuning before it can reach its final stage. At this point, there seems to be a lack of important details that prevent me from fully gauging the paper’s findings and claims. Generally: - There were too many missing details (for example, what is the distribution of people with ‘free off speech’ attitudes? What is the correlation of the chosen scale item in the breadth-of-posts study?). On a minor note, many important points are relegated to the appendix. -Certain researcher choices and experiment design choices were not justified (for example, why were these particular scales used?) -The explanation of the creation of the breadth-of-posts was confusing. How accurate was the classification of AAE dialect and vulgarity? -The toxicity experiment was intriguing but there was too little space to be meaningful. More concretely, - With regard to terminology and concepts, toxicity and hate speech may be related but are not the same thing. The instructions to the annotators seem to conflate both. The paper also doesn’t present a concrete definition of either. While it might seem redundant or trivial, the wording to annotators plays an important role and can confound the results presented here. -Why were the particular scales chosen for obtaining attitudes? Particularly, for empathy there are several scale items [1], so why choose the Interpersonal Reactivity Index? -What was the distribution of the annotator’s background with respect to the attitudes? For example, if there are too few ‘free of speech’ annotators, then the results shown in Table 3, 4, etc are underpowered. -What were the correlations of the chosen attitudinal scale item for the breadth-of-posts study with the toxicity in the breadth-of-workers study? -How accurate are the automated classification in the breadth-of-posts experiment, i.e., how well does the states technique differentiate identity vs non-identity vulgarity or AAE language for that particular dataset. Particularly, how can it be ascertained whether the n-word was used as a reclaimed slur or not? -In that line, Section 6 discusses perceptions of vulgarity, but there are too many confounds here. Using b*tch in a sentence can be an indication of vulgarity and toxicity (due to sexism). -In my opinion, the perspective API experiment was interesting but rather shallow. My suggestion would be to follow up on it in more detail in a new paper rather than include it in this one. The newly created space could be used to enter the missing details mentioned in the review. -Finally, given that the paper notes that MTurk tends to be predominantly liberal and the authors (commendably) took several steps to ensure greater participation from conservatives, I was wondering if ‘typical’ hate speech datasets are annotated by more homogenous annotators compared to the sample in this paper. What could be the implications of this? Do this paper's findings then hold for existing hate speech datasets? Besides these, I also note some ethical issues in the ‘Ethical Concerns’ section. To conclude, while my rating might seem quite harsh, I believe this work has great potential and I hope to see it enriched with the required experimental details. References: [1] Gerdes, Karen E., Cynthia A. Lietz, and Elizabeth A. Segal. " Measuring empathy in the 21st century: Development of an empathy index rooted in social cognitive neuroscience and social justice." Social Work Research 35, no. 2 (2011): 83-93.
[ [ 565, 592 ], [ 594, 694 ], [ 695, 728 ], [ 730, 778 ], [ 780, 890 ], [ 915, 954 ], [ 956, 989 ], [ 991, 1078 ], [ 1080, 1140 ], [ 1173, 1251 ], [ 1252, 1732 ], [ 1733, 1917 ], [ 1918, 2046 ], [ 2060, 2095 ], [ 2096, 2330 ], [ 2332, 2407 ], [ 2409, 2460 ], [ 2463, 2533 ], [ 2534, 2599 ], [ 2602, 2689 ], [ 2690, 3076 ], [ 4117, 4198 ], [ 4199, 4398 ], [ 4908, 5070 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_neg_1", "Eval_neg_2", "Eval_neg_3", "Eval_neg_4", "Eval_pos_6", "Jus_pos_6", "Eval_neg_5", "Jus_neg_5", "Eval_neg_6", "Jus_neg_6", "Eval_neg_7", "Jus_neg_7", "Eval_neg_8", "Jus_neg_8", "Eval_neg_9", "Jus_neg_9", "Eval_neg_10", "Jus_neg_10", "Major_claim" ]
523
paper_summary This paper proposes a data-efficient end-to-end event extraction method by designing event-specific generation templates. These event-specific templates describe the event types in a natural language way. The proposed method can use pre-trained language models and exploit label semantics from event type definitions. Experimental results show that DEGREE has better performance on low resource event extraction and can be compared with SOTA under the full supervised setting. summary_of_strengths This paper designs an event-specific template for generative event extraction. Compared with the previous methods, the proposed method is more consistent with natural language generation and achieves better performance in the low-resource setting. summary_of_weaknesses - The proposed framework is based on the previous template-based information extraction method. The template-based method's main drawback is the significantly increased training/inference cost (corresponding to the number of event types). The increase in the category of events (such as MAVEN with 100+ types) will exacerbate the problem. - For extreme low-resource settings (1%), different data sampling may heavily affect the experimental results. It is better to conduct experiments multi-times on different subset samples. comments,_suggestions_and_typos Some detailed questions about model inference: -Multi-events: Can DEGREE and DEGREE(ED) deal with multiple events of the same type in the same sentence? Although we can predict the same input sentence for different types one-by-one if a sentence contains multiple events of the same type, the same input (template and sentence) will correspond to various outputs. For example, some sentences have two death events. This is not a problem for other generation methods because they are trigger-driven (TANL, BART-GEN), or generate all extracted events in one step (Text2Event). -Converting generated results to events: Are there some generated results that cannot be parsed into events? Generation methods are more uncontrollable than span extraction and span classification methods for event extraction. For example, the generated span maybe not appear in the input sentence. There are many template methods for information extraction. It is suggested that authors should cite these articles: -Template-Based Named Entity Recognition Using BART. Findings of ACL_IJCNLP 2021 -Reading the Manual: Event Extraction as Definition Comprehension. spnlp@EMNLP 2020
[]
[]
524
paper_summary Below is a copy from the previous review: >In this paper the authors proposed a new fairness metric, accumulated prediction sensitivity. The authors formulate the metric and establish its properties in relationship with group fairness and individual fairness. Interestingly, the authors measure the correlation of the proposed metric with a human judgment of fairness. Since the proposed metric requires a choice of the way of computing two hyperparameter vectors, the authors experiment with different choices and show that this choice matters quite a lot. summary_of_strengths Below is a copy from the previous review: > - Important direction of research > - Relatively good correlation with human judgment > - The authors evaluated several choices of the metric > - Intuitive formulation of the metric > - Clearly written and easy to follow paper summary_of_weaknesses Below is a copy from the previous review: > - Need of choice of the two hyperparameters vectors w and v > - This choice, as evident from the authors experiments, is very important > - Process of obtaining v can be quite involved > - The proposed metric works only or gradient-based models > - Although the formulation is intuitive, the metric values themselves can be hard to interpret. comments,_suggestions_and_typos The authors addressed most of the reviewers' comments made for the previous submission. This is an important direction of research and I believe in the current state the paper can be accepted for publication.
[ [ 640, 671 ], [ 728, 779 ], [ 784, 819 ], [ 824, 865 ], [ 1181, 1275 ], [ 1396, 1517 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_neg_1", "Major_claim" ]
525
paper_summary This paper deals with the temporal misalignment problem, which occurs when an NLP model is trained on a dataset created from data of a certain time period and tested/used for data of another time period. This paper discusses the temporal misalignment problem through experiments on a number of NLP tasks with temporal misalignment. For that, TD, a metric for temporal degradation (of the task performance), is defined and used. The paper presents some findings about the temporal misalignment problem, such as answers to "how does sensitivity to temporal misalignment vary with text domain and task?" The paper brings NLP community's attention to the temporal misalignment problem. summary_of_strengths Firstly, the paper is beneficial to the NLP community, since it will bring NLP community's attention to the temporal misalignment problem. This is particularly important because currently people in this community extensively use the pretraining-DAPT-finetuning paradigm. The arguments in the paper are supported by comprehensive experiments with a number of tasks. The paper is well-written and well-organized. summary_of_weaknesses The problem caused by difference in training data and test data has been studied, although they might not focus on "temporal" aspect. There are a number of papers, where terms like "covariate shift", labeling adaptation, and instance adaptation (e.g., Jiang and Zhai (2007) and Shimodaira (2000). See below). It would be interesting to see the connection between the arguments of the current paper and such papers. -Jiang and Zhai. Instance Weighting for Domain Adaptation in NLP. ACL. 2007. -Hidetoshi Shimodaira. 2000. Improving predictive inference under covariate shift by weighting the loglikelihood function. Journal of Statistical Planning and Inference, 90:227–244. TD score is defined in Section 2.3, but there is no intuition or justification about why it is defined like this, at least in the same section. Descriptions about intuition can be found in the later sections, so this is simply a matter of organization of the paper. It is not clear whether TD scores for different performance metrics can be compared with each other, across different tasks. There should be a discussion on the justification (or maybe limitation). Some specific examples of temporal misalignment would help readers understand the paper, if they (the examples) are presented in Introduction. comments,_suggestions_and_typos D in the text body should be in italic, consistently.
[ [ 721, 774 ], [ 776, 992 ], [ 993, 1087 ], [ 1088, 1134 ], [ 1832, 1976 ], [ 1976, 2097 ], [ 2100, 2223 ], [ 2225, 2296 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2" ]
526
- First line of page 2, this bit does not seem very fluent, maybe something is missing, please check and amend if/as needed: "... enabling us to easily create large number of coherent test examples..." - I am dubious about the orthodoxy of using the image with the pussycat's face in the title as well as to replace the name of the project/system in the running text of the paper: while it may look cute (to some), I suspect that other readers (like this innovation-averse reviewer) may find it annoying, if not outright inappropriate. If the paper is accepted (as incidentally I think it should be), I think that the author(s) would be well advised to consult with the conference chairs and proceedings editors to check that this rather unusual practice is deemed acceptable and technically feasible for the proceedings. - Similar to the previous point, I wonder if the deliberately eccentric heading of the very short and merely descriptive section 3 (i.e. "Do Androids Dream of Coreference Translation Pipelines?", which consists of merely 14 lines) is entirely justified and appropriate for a paper to be published in conference proceedings, although I accept that to some extent these are matters of personal taste. - Possible typo on page 2, please check and edit as required: "follows this work, but _create_ the challenge set in an automatic way." >>> should it be 'creates'? - The last paragraph of Section 3 on page 3 seems to include some typos such as the following, so please check and amend the passage as appropriate: "The coreference steps resembles" (one of the last two words is incorrect), "the rules-based approach" ('rule-based'?), " each of these phenomenon" ('phenomena'?). - The caption of Figure 1 on page 4 seems too (and unnecessarily) long: a caption is a caption; data analysis and comments should be given in the running text of the paper, referring to the figure whose data is being illustrated and discussed. - Line 3 on page 5: "This _filters_ our subset to 4,580 modified examples." >>> For clarity, should 'filters' be 'reduced' (or something similar) instead?
[ [ 2, 123 ], [ 127, 200 ], [ 204, 379 ], [ 381, 821 ], [ 823, 1220 ] ]
[ "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3" ]
527
This paper introduces the task of automatic "pull quote selection" from text. The task is to identify one or more spans of text pulled from an article and presented in a salient manner to the reader within the article, with the goal of engaging the reader and to provide emphasis on interesting aspects of the article. The authors introduce a new dataset for this research, and explore a variety of approaches to automatically perform this task. Using these approaches (ranging from hand-crafted feature based methods to mixture of experts), the authors provide interesting insights into properties of text that make for good pull pull-quotes. At a high level, there are two key aspects of the paper worth mentioning first. The approach taken by the paper to analyze the novel data/task, and provide insights is extremely well done. However, the the paper also is dealing with the challenge of a formal definition for this task, which it does not quite achieve eventually. The dataset is constructed from pull-quotes identified in existing well known publications (presumably, created by the editors of those publications). As such, there may or may not be a consistency among the strategy taken by these publications in selecting these pull quotes. Additionally, each may have a different goal/objective/motivation in selecting a particular span as a pull-quote. Because this paper only defines the task in terms of what is observed in these publications, and does not go beyond using these existing articles + pull-quotes in its definition of the task, it would be hard for someone to manually construct a dataset for this task by hand (with human annotators). This appears to be a fundamental weakness in this work. What would be human annotation guidelines for such data set creation? How would one assess agreement if one were to create such data with human experts? Therefore, it appears that the actual task taken on by this paper is that of learning the latent decisions behind the pull-quote identification of *these particular* publications. Having said that, the approach and analysis undertaken by the paper is very insightful. While the task can be construed as learning to extract pull-quotes in a manner similar to that of these selected publications, the methodical approach taken in the paper is commendable. It was enjoyable to see the paper build from hand-crafted features used in a traditional ML classifier to more recent deep learning models with character and word-based features, to cross-task of approaches used in similar tasks (headline, clickbait, summarization). The observations and conclusions from the experiments are perceptive, and readers of the paper would certainly learn about interesting linguistic characteristics that are useful in identifying noteworthy sentences in any given text. It was great to see the human evaluation in Section 5.5 of the paper. This really helped to see the impact of the pull-quotes on human readers. It would have been neat to see such an analysis of the data as part of the task definition early on... to perhaps help more clearly define what a human reader (or a human writer) is expecting to highlight as quotable. ( a.k.a., crowd-sourcing pull-quote extraction?)
[ [ 446, 643 ], [ 724, 832 ], [ 833, 972 ], [ 974, 2052 ], [ 2053, 2139 ], [ 2141, 2326 ], [ 2327, 2593 ], [ 2594, 2662 ], [ 2668, 2826 ], [ 2827, 2896 ], [ 2897, 3238 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Eval_pos_6", "Eval_pos_7", "Eval_pos_8", "Jus_pos_8" ]
528
This paper performs an overdue circling-back to the problem of joint semantic and syntactic dependency parsing, applying the recent insights from neural network models. Joint models are one of the most promising things about the success of transition-based neural network parsers. There are two contributions here. First, the authors present a new transition system, that seems better than the Hendersen (2008) system it is based on. The other contribution is to show that the neural network succeeds on this problem, where linear models had previously struggled. The authors attribute this success to the ability of the neural network to automatically learn which features to extract. However, I think there's another advantage to the neural network here, that might be worth mentioning. In a linear model, you need to learn a weight for each feature/class pair. This means that if you jointly learn two problems, you have to learn many more parameters. The neural network is much more economical in this respect. I suspect the transition-system would work just as well with a variety of other neural network models, e.g. the global beam-search model of Andor (2016). There are many other orthogonal improvements that could be made. I expect extensions to the authors' method to produce state-of-the-art results. It would be nice to see an attempt to derive a dynamic oracle for this transition system, even if it's only in an appendix or in follow-up work. At first glance, it seems similar to the arc-eager oracle. The M-S action excludes all semantic arcs between the word at the start of the buffer and the words on the semantic stack, and the M-D action excludes all semantic arcs between the word at the top of the stack and the words in the buffer. The L and R actions seem to each exclude the reverse arc, and no other.
[ [ 1015, 1116 ], [ 1118, 1169 ] ]
[ "Eval_neg_1", "Jus_neg_1" ]
530
The authors use self-training to train a seq2seq-based AMR parser using a small annotated corpus and large amounts of unlabeled data. They then train a similar, seq2seq-based AMR-to-text generator using the annotated corpus and automatic AMRs produced by their parser from the unlabeled data. They use careful delexicalization for named entities in both tasks to avoid data sparsity. This is the first sucessful application of seq2seq models to AMR parsing and generation, and for generation, it most probably improves upon state-of-the art. In general, I really liked the approach as well as the experiments and the final performance analysis. The methods used are not revolutionary, but they are cleverly combined to achieve practial results. The description of the approach is quite detailed, and I believe that it is possible to reproduce the experiments without significant problems. The approach still requires some handcrafting, but I believe that this can be overcome in the future and that the authors are taking a good direction. (RESOLVED BY AUTHORS' RESPONSE) However, I have been made aware by another reviewer of a data overlap in the Gigaword and the Semeval 2016 dataset. This is potentially a very serious problem -- if there is a significant overlap in the test set, this would invalidate the results for generation (which are the main achievemnt of the paper). Unless the authors made sure that no test set sentences made their way to training through Gigaword, I cannot accept their results. (RESOLVED BY AUTHORS' RESPONSE) Another question raised by another reviewer, which I fully agree with, is the 5.4 point claim when comparing to a system tested on an earlier version of the AMR dataset. The paper could probably still claim improvement over state-of-the art, but I am not sure I can accept the 5.4 points claim in a direct comparison to Pourdamghani et al. -- why haven't the authors also tested their system on the older dataset version (or obtained Pourdamghani et al.'s scores for the newer version)? Otherwise I just have two minor comments to experiments: - Statistical significance tests would be advisable (even if the performance difference is very big for generation). - The linearization order experiment should be repeated with several times with different random seeds to overcome the bias of the particular random order chosen. The form of the paper definitely could be improved. The paper is very dense at some points and proofreading by an independent person (preferably an English native speaker) would be advisable. The model (especially the improvements over Luong et al., 2015) could be explained in more detail; consider adding a figure. The experiment description is missing the vocabulary size used. Most importantly, I missed a formal conclusion very much -- the paper ends abruptly after qualitative results are described, and it doesn't give a final overview of the work or future work notes. Minor factual notes: - Make it clear that you use the JAMR aligner, not the whole parser (at 361-364). Also, do you not use the recorded mappings also when testing the parser (366-367)? - Your non-Gigaword model only improves on other seq2seq models by 3.5 F1 points, not 5.4 (at 578). - "voters" in Figure 1 should be "person :ARG0-of vote-01" in AMR. Minor writing notes: - Try rewording and simplifying text near 131-133, 188-190, 280-289, 382-385, 650-659, 683, 694-695. - Inter-sentitial punctuation is sometimes confusing and does not correspond to my experience with English syntax. There are lots of excessive as well as missing commas. - There are a few typos (e.g., 375, 615), some footnotes are missing full stops. - The linearization description is redundant at 429-433 and could just refer to Sect. 3.3. - When refering to the algorithm or figures (e.g., near 529, 538, 621-623), enclose the references in brackets rather than commas. - I think it would be nice to provide a reference for AMR itself and for the multi-BLEU script. - Also mention that you remove AMR variables in Footnote 3. - Consider renaming Sect. 7 to "Linearization Evaluation". - The order in Tables 1 and 2 seems a bit confusing to me, especially when your systems are not explicitly marked (I would expect your systems at the bottom). Also, Table 1 apparently lists development set scores even though its description says otherwise. - The labels in Table 3 are a bit confusing (when you read the table before reading the text). - In Figure 2, it's not entirely visible that you distinguish month names from month numbers, as you state at 376. - Bibliography lacks proper capitalization in paper titles, abbreviations and proper names should be capitalized (use curly braces to prevent BibTeX from lowercasing everything). - The "Peng and Xue, 2017" citation is listed improperly, there are actually four authors. *** Summary: The paper presents first competitive results for neural AMR parsing and probably new state-of-the-art for AMR generation, using seq2seq models with clever preprocessing and exploiting large a unlabelled corpus. Even though revisions to the text are advisable, I liked the paper and would like to see it at the conference. (RESOLVED BY AUTHORS' RESPONSE) However, I am not sure if the comparison with previous state-of-the-art on generation is entirely sound, and most importantly, whether the good results are not actually caused by data overlap of Gigaword (additional training set) with the test set. *** Comments after the authors' response: I thank the authors for addressing both of the major problems I had with the paper. I am happy with their explanation, and I raised my scores assuming that the authors will reflect our discussion in the final paper.
[ [ 542, 645 ], [ 646, 746 ], [ 747, 797 ], [ 801, 891 ], [ 892, 1042 ], [ 1075, 1190 ], [ 1191, 1234 ], [ 1237, 1514 ], [ 1548, 1617 ], [ 1619, 2035 ], [ 5079, 5189 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Eval_pos_5", "Jus_neg_1", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Major_claim" ]
531
- Strengths: The paper makes several novel contributions to (transition-based) dependency parsing by extending the notion of non-monotonic transition systems and dynamic oracles to unrestricted non-projective dependency parsing. The theoretical and algorithmic analysis is clear and insightful, and the paper is admirably clear. - Weaknesses: Given that the main motivation for using Covington's algorithm is to be able to recover non-projective arcs, an empirical error analysis focusing on non-projective structures would have further strengthened the paper. And even though the main contributions of the paper are on the theoretical side, it would have been relevant to include a comparison to the state of the art on the CoNLL data sets and not only to the monotonic baseline version of the same parser. - General Discussion: The paper extends the transition-based formulation of Covington's dependency parsing algorithm (for unrestricted non-projective structures) by allowing non-monotonicity in the sense that later transitions can change structure built by earlier transitions. In addition, it shows how approximate dynamic oracles can be formulated for the new system. Finally, it shows experimentally that the oracles provide a tight approximation and that the non-monotonic system leads to improved parsing accuracy over its monotonic counterpart for the majority of the languages included in the study. The theoretical contributions are in my view significant enough to merit publication, but I also think the paper could be strengthened on the empirical side. In particular, it would be relevant to investigate, in an error analysis, whether the non-monotonic system improves accuracy specifically on non-projective structures. Such an analysis can be motivated on two grounds: (i) the ability to recover non-projective structures is the main motivation for using Covington's algorithm in the first place; (ii) non-projective structures often involved long-distance dependencies that are hard to predict for a greedy transition-based parser, so it is plausible that the new system would improve the situation. Another point worth discussion is how the empirical results relate to the state of the art in light of recent improvements thanks to word embeddings and neural network techniques. For example, the non-monotonicity is claimed to mitigate the error propagation typical of classical greedy transition-based parsers. But another way of mitigating this problem is to use recurrent neural networks as preprocessors to the parser in order to capture more of the global sentence context in word representations. Are these two techniques competing or complementary? A full investigation of these issues is clearly outside the scope of the paper, but some discussion would be highly relevant. Specific questions: Why were only 9 out of the 13 data sets from the CoNLL-X shared task used? I am sure there is a legitimate reason and stating it explicitly may prevent readers from becoming suspicious. Do you have any hypothesis about why accuracy decreases for Basque with the non-monotonic system? Similar (but weaker) trends can be seen also for Turkish, Catalan, Hungarian and (perhaps) German. How do your results compare to the state of the art on these data sets? This is relevant for contextualising your results and allowing readers to estimate the significance of your improvements. Author response: I am satisfied with the author's response and see no reason to change my previous review.
[ [ 13, 97 ], [ 98, 228 ], [ 229, 294 ], [ 299, 328 ], [ 1415, 1572 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_pos_2", "Eval_pos_3", "Major_claim" ]
532
paper_summary The authors of this paper propose an empirical study for the zero-shot capabilities of CLIP models and demonstrate that CLIP models have strong few-shot capabilities. The authors propose the TAP-C method for evaluating VQA and demonstrate zero-shot cross-modal transfer capabilities on the visual entailment. summary_of_strengths 1. The paper conducts rich experiments and presents several interesting insights, which have a certain value to the community. 2. The paper is well-written and easy to follow. 3. The proposed two-step prompt generation method is interesting for studying the zero-shot performance. summary_of_weaknesses 1. There should be more VLU tasks to prove the argument of the paper, e.g., NLVR. comments,_suggestions_and_typos 1. It is best to use vector graphics in the paper.
[ [ 348, 472 ], [ 476, 522 ], [ 526, 628 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3" ]
533
Review: Multimodal Word Distributions - Strengths: Overall a very strong paper. - Weaknesses: The comparison against similar approaches could be extended. - General Discussion: The main focus of this paper is the introduction of a new model for learning multimodal word distributions formed from Gaussian mixtures for multiple word meanings. i. e. representing a word by a set of many Gaussian distributions. The approach, extend the model introduced by Vilnis and McCallum (2014) which represented word as unimodal Gaussian distribution. By using a multimodal, the current approach attain the problem of polysemy. Overall, a very strong paper, well structured and clear. The experimentation is correct and the qualitative analysis made in table 1 shows results as expected from the approach. There’s not much that can be faulted and all my comments below are meant to help the paper gain additional clarity. Some comments: _ It may be interesting to include a brief explanation of the differences between the approach from Tian et al. 2014 and the current one. Both split single word representation into multiple prototypes by using a mixture model. _ There are some missing citations that could me mentioned in related work as : Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space Neelakantan, A., Shankar. J. Passos, A., McCallum. EMNLP 2014 Do Multi-Sense Embeddings Improve Natural Language Understanding? Li and Jurafsky, EMNLP 2015 Topical Word Embeddings. Liu Y., Liu Z., Chua T.,Sun M. AAAI 2015 _ Also, the inclusion of the result from those approaches in tables 3 and 4 could be interesting. _ A question to the authors: What do you attribute the loss of performance of w2gm against w2g in the analysis of SWCS? I have read the response.
[ [ 52, 80 ], [ 617, 673 ] ]
[ "Major_claim", "Major_claim" ]
534
This paper presents an exploratory work made on the evaluation of justified (or not) disagreement in the task of annotations. As far as my skills on the subject allowed me to evaluate and understand, here are the positive, neutral and negative aspects I could identify. --- Positive aspects --- (1) The paper tackles a very interesting subject for NLP and especially for CL. It thus is a good match for COLING. (2) As far as I could tell, the paper respects all formal submission criteria (e.g. abstract size, anonymity). (3) The paper is written in a very pleasant English that allowing a smooth reading (even though I have some concern about the content structure). (3) I found the methodology and results sound and convincing. There was a number of choices made by the authors that felt very adequate (e.g. boolean crowdsourcing, Prolific etc.). (4) Even though the results are probably only preliminary, my impression is that the amount of work to obtain them seems substantial. --- Neutral aspects --- (1) the citations made inside parenthesis do not display inline as they should and have their own pair of parentheses. (2) in Section 2 the authors said "Despite the central nature of phenomena triggering disagreement in annotation tasks, we are not aware of evaluation methods that do not mainly rely on agreement". I don't know any too. Nonetheless, if the authors haven't already checked, I would suggest to give a look at the publications within the hcomp community => https://www.humancomputation.com/ (3) If I got that right, in Section 6.2 I suppose that " The unit-quality-score (uas)" should be changed as " The unit-quality-score (uqs)". (4) In section 7.2, something went wrong with this sentence ". In contrast, a simple majority vote achieves an f1-score of 0.78 and the unit-annotation-score. " (5) "Most importantly, it would be highly valuable if the existing metrics could be combined in such a way that we could use them for the identification of different types of disagreements. " => sounds like something a Machine Learning algorithm could be used for by using the metrics as features. --- Negative aspects --- (1) The paper has one major issue: whereas the overall subject is coherent and well defined, its focus is quite blurry and it has been hard to understand what is exactly the contribution of the authors. Indeed the authors tend to present things in a not-so-consistent fashion from one section to the other. Also, the paper seems to be focusing on studying disagreement between annotations, yet a consequent part of the contribution described is about a filtering method to discard incoherent annotations or annotators and thus improve aggregation. Likewise, diagnostic datasets are often mentioned but nothing is done about them in the results... Another example is Table 2 => why go into such details if they don't contribute directly to the results or argumentation presented in this paper? All this details tend to confuse the reader in the end. (2) While the paper tackles a very interesting subject for CL, I was under the impression that it is written in a very NLP-oriented fashion. I believe COLING try to bridge (as much as possible) NLP and Linguistics and the authors should consider that for their final version. (3) The authors should have provided examples in Section 3 and 6.3 to help the reader understanding. --- Conclusion --- I think that the work presented is a good piece of research and this paper is already nice (and can become even nicer). I would thus gladly recommend to accept it and hope that, if it gets accepted, the final version will have amended the shortcomings mentioned above.
[ [ 300, 375 ], [ 376, 411 ], [ 527, 668 ], [ 672, 730 ], [ 731, 849 ], [ 854, 983 ], [ 2145, 2343 ], [ 2344, 2990 ], [ 3389, 3660 ] ]
[ "Eval_pos_1", "Major_claim", "Eval_pos_2", "Eval_pos_3", "Jus_pos_3", "Eval_pos_4", "Eval_neg_1", "Jus_neg_1", "Major_claim" ]
535
This paper presents a text classification method based on pre-training technique using both labeled and unlabeled data. The authors reported experimental results with several benchmark data sets including TREC data, and showed that the method improved overall performance compared to other comparative methods. I think the approach using pre-training and fine-tuning itself is not a novel one, but the originality is the use of both labeled and unlabeled data in the pre-training step. The authors compare their results against three baselines, i.e. without pre-training and a deep learning with unsupervised pre-training using deep autoencoders, but I think that I would be interesting to compare the method against other methods presented in the introduction section.
[]
[]
536
paper_summary This paper explores the knowledge-grounded conversation generation task. Two latent variables are employed for the controlling of types and boundaries for segments during generation, after an auxiliary task to classify segments into knowledge-relevant or knowledge-irrelevant. Such a strategy is fused into a pretrained encoder-decoder architecture, and realizes superior performance than strong baselines on two benchmark datasets. Although this paper is well-motivated, I do not see many significant revisions for the previous comments. summary_of_strengths 1. The motivation of this paper is solid, I believe the segmentation based on the relation of text pieces between knowledge is reasonable to solve knowledge-grounded dialogue generation tasks. 2. The proposed model is basically complete and shows its superiority to strong baselines. It is based on a pre-trained BART model, and the proposed Module Indicator/ Boundary Indicator can help the base model to better combine information from both the context and knowledge in generated responses. summary_of_weaknesses 1. The writing still needs improvement. There are only some minor revisions compared to the last version, where they fix some typos, moving part of human evaluation results into the main part, and change few expressions to make them more clear. However, some major problems still exist. 1) The descriptions of the model are difficult for readers to follow, including complex notations and some unclear definitions (e.g., the different styles mentioned). 2) Although it adds more details about human evaluation, there is still no clear definition of metrics pklg and lklg. And I also have concerns about their justification. 2. An ablation study is still missing. Without such analyses, readers will have no idea about how each component contributes to the final performance. E.g., how the style adapter affects the generated responses, or how it performs if one kind of latent is removed. comments,_suggestions_and_typos Please check the weakness section. 1. A case study with only one sample is insufficient for a competent paper in the dialogue area. Consider adding more samples and putting more highlights on the samples to show your superiority. 2. Still missing some references that utilize pretrained model in an encoder-decoder architecture for auxiliary-information-grounded dialogue generation. Typo: L492: Sec ??
[ [ 579, 616 ], [ 618, 768 ], [ 1095, 1131 ], [ 1132, 1336 ], [ 1337, 1379 ], [ 1383, 1448 ], [ 1450, 1548 ], [ 1552, 1666 ], [ 1667, 1718 ], [ 1722, 1757 ], [ 1758, 1984 ], [ 2250, 2400 ] ]
[ "Eval_pos_1", "Jus_pos_1", "Eval_neg_1", "Jus_neg_1", "Eval_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Eval_neg_4", "Eval_neg_5", "Jus_neg_5", "Eval_neg_6" ]
537
## General comments: This paper presents an exploration of the connection between part-of-speech tags and word embeddings. Specifically the authors use word embeddings to draw some interesting (if not somewhat straightforward) conclusions about the consistency of PoS tags and the clear connection of word vector representations to PoS. The detailed error analysis (outliers of classification) is definitely a strong point of this paper. However, the paper seems to have missing one critical main point: the reason that corpora such as the BNC were PoS tagged in the first place. Unlike a purely linguistic exploration of morphosyntactic categories (which are underlined by a semantic prototype theory - e.g. see Croft, 1991), these corpora were created and tagged to facilitate further NLP tasks, mostly parsing. The whole discussion could then be reframed as whether the distinctions made by the distributional vectors are more beneficial to parsing as compared to the original tags (or UPOS for that matter). Also, this paper is missing a lot of related work in the context of distributional PoS induction. I recommend starting with the review Christodoulopoulos et al. 2010 and adding some more recent non-DNN work including Blunsom and Cohn (2011), Yatbaz et al. (2012), etc. In light of this body of work, the results of section 5 are barely novel (there are systems with more restrictions in terms of their external knowledge that achieve comparable results). ## Specific issues In the abstract one of the contributed results is that "distributional vectors do contain information about PoS affiliation". Unless I'm misunderstanding the sentence, this is hardly a new result, especially for English: every distributionally-based PoS induction system in the past 15 years that presents "many-to-one" or "cluster purity" numbers shows the same result. The assertion in lines 79-80 ("relations between... vectors... are mostly semantic") is not correct: the <MIKOLOV or COLOBERT> paper (and subsequent work) shows that there is a lot of syntactic information in these vectors. Also see previous comment about cluster purity scores. In fact you revert that statement in the beginning of section 2 (lines 107-108). Why move to UPOS? Surely the fine-grained distinctions of the original tagset are more interesting. I do not understand footnote 3. Were these failed attempts performed by you or other works? Under what criteria did they fail? What about Brown cluster vectors? They almost perfectly align with UPOS tags. Is the observation that "proper nouns are not much similar to common nouns" (lines 331-332) that interesting? Doesn't the existence of "the" (the most frequent function word) almost singlehandedly explain this difference? While I understand the practical reasons for analysing the most frequent word/tag pairs, it would be interesting to see what happens in the tail, both in terms of the vectors and also for the types of errors the classifier makes. You could then try to imagine alternatives to pure distributional (and morphological - since you're lemmatizing) features that would allow better generalizations of the PoS tags to these low-frequency words. ## Minor issues Change the sentential references to \newcite{}: e.g. "Mikolov et al. (2013b) showed"
[ [ 337, 437 ], [ 438, 502 ], [ 504, 1011 ], [ 1013, 1110 ], [ 1111, 1281 ], [ 1282, 1354 ], [ 1356, 1465 ], [ 1613, 1706 ], [ 1708, 1857 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4" ]
538
paper_summary This paper addresses the issue of automatically mining values behind arguments. Relying on a taxonomy of values based on literature and consistent with those used in social sciences, and collecting data across 4 cultural domain, the paper presents a robust study on the automatic detection of values with very encouraging results. summary_of_strengths The paper is exquisitely well-written and presented. The experimental design is sound, and the data collection follows good practice, leading to very encouraging results. The background literature review is excellent. Overall, I think this is an excellent paper which I would like to see at ACL. Addressing the weaknesses below would strengthen the paper even further. summary_of_weaknesses There is a slight imbalance between the background and the description and discussion of results in the paper. Given the very strong set up, the final part of the paper seems somewhat underwhelming. Further discussion on the results and their significance, with more space allocated to what they mean for further work and applications would have made for an even stronger paper. At alpha=.49 inter-annotator agreement is low, which is not unexpected given the complexity of the task. However, further discussion on how this could be addressed (e.g. with better annotation manuals or training) is missing. comments,_suggestions_and_typos Line 458: equally effected -> equally affected Overall, no other issues with the writing, which is excellent. I would advise the authors to dedicate a bit more space on discussing the results and their significance for further work and applications, perhaps shortening the background sections slighlty.
[ [ 367, 419 ], [ 420, 452 ], [ 458, 537 ], [ 539, 585 ], [ 587, 664 ], [ 761, 871 ], [ 872, 1139 ], [ 1140, 1244 ], [ 1245, 1366 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_pos_4", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Jus_neg_2", "Eval_neg_2" ]
539
paper_summary The authors proposed an improved version of the $\alpha$-entmax which does not require sorting during computation, which aims to reduce the latency of the operation. The proposed computation method is based on $\alpha$-RELU, which replaces the normalization variable in $\alpha$-entmax to a fixed value. Experiments show that the resultant operator is faster than 1.5-entmax on WMT14 En-De and WMT13 En-Ru translation tasks. summary_of_strengths - The proposed method shows that even with a potentially unnormalized distribution, it doesn't seem to harm the decoding performance of the generation model -Experiment results show that the proposed method is constantly faster than $\alpha$-entmax and the performance is not dropping summary_of_weaknesses The authors claim that the alpha entmax is slow because it requires sorting. I checked the code of $\alpha$-entmax implemented in https://github.com/deep-spin/entmax and find that the top-k approximation is the default option with K=100 as the default setting. In this case, `torch.topk` is called in the code, and only K numbers are sorted. Therefore, if the author position the proposed method as a fast approximation to the original $\alpha$-entmax, I think the authors shall perform more systematical comparison with the top-k approximation. In the paper, the only comparison with top-k approximation is Figure 2 (bottom, center). However, it's actually difficult to make a conclusion for comparing performance and speed based on this figure. It seems that the proposed 1.5-ReLU is still faster in the first 20 hours of training, then the gap closes. Also, as a key performance indicator of the proposed method, it is desirable to have the actual numbers. For example, averaged seconds per batch for training and averaged seconds per 1k decoding requests for inference. It will be even better to profile just the computation time of softmax / $\alpha$-entmax / $\alpha$-ReLU operators. I hope to see more detailed comparison with the top-k variant of $\alpha$-entmax on (1) translation performance (at least WMT14 En-De) and (2) execution speed (can be seconds per 1k batch, will be better if reporting the running time of just the $\alpha$-entmax/$\alpha$-ReLU layer). I might change my assessment according to the results. Another concern is on the distribution, in Figure 7, authors showed that the sums are still concentrating to a certain value. However, the deviation is not small according to this figure. Will this impact the correctness of the language model scores (log p)? Some applications are relying on the scores for reranking purposes. comments,_suggestions_and_typos - For evaluation, the datasets that the authors use are pretty old datasets. I'm not against using WMT14 En-De, however, WMT13 En-Ru is rare in the research community recently.
[]
[]
540
paper_summary In this paper, the authors proposed Structural Information-augmented Syntax Controlled Paraphrasing (Si_SCP) which is a syntax-controlled paraphrase generation technique. They tackled two problems of such generation (a) encoding the structural information -- by using a tree transformer to capture parent-child and sibling relationships and (b) retrieving syntactic structure to guide the generation - by introducing a synthetic template retriever. summary_of_strengths (a) The paper is well written and easy to follow. (b) Various experiments and ablation studies are done to establish the effectiveness of the proposed mechanism. (c) The authors worked on the previous weaknesses thoroughly. summary_of_weaknesses Though the paper is detailed, I recommend the following areas to be addressed (a) Retrieval of the similar templates: The authors should clarify on why the query text and the whole query parse is important to retrieve templates? Is it always the case that given a query, we can retrieve templates other than the own query template? Are all top K retrieved templates meaningful concerning the query or some thresholding on similarity is needed? (b) Human evaluation needs some more elaboration. The author should report the inter-annotator agreement along with the results. (c) Syntactic evaluation: At the time of evaluating TED, did the authors use the top most retrieved template? They should elaborate on the process (d) Table 1: The performance of (Si_SCP) is slightly better than guiG in ParaNMT-small where as for QQP-OS the gains are higher. Is there any specific reasons or observation regarding the same? (e) Though the paper is focused on syntax guided paraphrase generation, it would be nice if the authors can discuss Si_SCP’s gain in comparison to unsupervised paraphrase generations [Krishna, Kalpesh, John Wieting, and Mohit Iyyer. " Reformulating Unsupervised Style Transfer as Paraphrase Generation." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020.] comments,_suggestions_and_typos While I can see a detailed work in the paper, I would recommend the authors to address the points mentioned in the weaknesses.
[ [ 489, 535 ], [ 540, 648 ], [ 653, 711 ], [ 1184, 1229 ], [ 1230, 1308 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Jus_neg_1" ]
542
This paper presents a conversion from a "traditional" POS tag set to UD tagset for Thai, and presents POS tagging experiments with a variety of models that differ in two dimensions: (1) the type of sub-word units, and (2) and type of pre-training. The authors conclude that sub-word units are beneficial for the task as well as language-specific pre-training. The problem investigated is a standard problem, but the fact that the paper is on a relatively less-studied language increases the value of the paper. The conversion of a language-specific tagset to a multi-lingual, "universal", tagset is interesting, and the methods presented, although standard/straightforward, makes sense. However, the paper leaves many questions unanswered, it should probably be a long paper with further information, analyses and discussion. I will list some of the issues with the paper. The conversion of ORCHID to UD has multiple issues: - First, there is no clear motivation for this conversion. From Table 1, this seems this is a lossy conversion. Given that there is not strong motivation (e.g., creating a UD compliant treebank), the purpose of the conversion is also unclear, similar to what authors mean by "manageable tagset" (p.1, last paragraph). - There is little information on conversion process. It seems there wasn't enough attention put into the conversion process. In a proper conversion to UD, one should also map some of the sub-types to morphological features. - The conversion also makes the results incomparable to the earlier studies on this data set. Irrespective of the comparison to other studies, it would have been interesting to see the results of the systems tested with both tagsets. -It is also surprising to see no mention of the existing Thai UD treebank (Thai PUD). The treebank includes both coarse (UD) and fine (presumably ORCHID) POS tags. A comparison of conversion used in this treebank and in the present paper is needed for better understanding the value of the conversion presented. -Additional information, e.g., tag distribution (before and after conversion), on the data set would be beneficial for interpretation of the results discussed. Investigating the effects of use of sub-word units is interesting. However, I have difficulties interpreting the results for two reasons: - No discussion, or analysis, of the results. We are given a number of performance metrics, announcement of the best models, and no insight into why these differences should be observed, or are there anything contrary to expectations. For example, the failure of multi-lingual BERT (even in OoV words) needs some explanation discussion. Pre-trained model not only "does not benefit from the cross lingual transfer", but hurts the performance of the model compared much simpler ones. -The models presented typically have large variation due to effects like random initialization and test set split. As a result it is difficult get a good sense of which model is better without having an indication of how much the results vary. Furthermore, the authors seem to have missed the fact that in the results they present in Table 4, syllable-based (not pre-trained) model performs as well as BERT on in-vocabulary words. In general, given current presentation, I do not think results are conclusive. There are also frequent typos and language mistakes. A thorough proofreading is recommended. Here are a few examples: - title: "Pre-trained Language Model" -> "Pre-trained Language Models" -abstract: "syllables representations" -> "syllable representations" -Intro, paragraph 1: "Modern approaches include" -> "Modern approaches includes" -In general quite a few agreement mistakes, a through proofreading would be good. -Intro, paragraph 1: multiple citations should be placed in a single pair of parentheses, and citations should not be after the sentence final punctuation. -'related work', paragraph 1 (and other places): when using a citation as part of the sentence, it should not be in parentheses. "(Akbik et al., 2018) proposed" -> "Akbik et al. (2018) proposed" (see conference style/guidelines for more information) -Most tables are not referenced from the main text. The reader needs to know what to look at in those tables, and in what way they support the description or argumentation. -There are proper names or abbreviations in the references (likely BibTeX normalization issues): "thai", "bilstm", ...
[ [ 511, 610 ], [ 616, 685 ], [ 687, 826 ], [ 927, 983 ], [ 985, 1253 ], [ 1256, 1306 ], [ 1307, 1486 ], [ 1734, 1820 ], [ 1821, 2055 ], [ 2227, 2294 ], [ 2368, 2411 ], [ 2412, 2864 ], [ 2869, 2984 ], [ 2985, 3117 ], [ 3118, 3311 ], [ 3316, 3394 ], [ 3395, 3447 ], [ 3448, 3819 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Major_claim", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_pos_3", "Eval_neg_4", "Jus_neg_4", "Jus_neg_5", "Eval_neg_5", "Eval_neg_6", "Eval_neg_7", "Eval_neg_8", "Jus_neg_8" ]
545
- Strengths: Relatively clear description of context and structure of proposed approach. Relatively complete description of the math. Comparison to an extensive set of alternative systems. - Weaknesses: Weak results/summary of "side-by-side human" comparison in Section 5. Some disfluency/agrammaticality. - General Discussion: The article proposes a principled means of modeling utterance context, consisting of a sequence of previous utterances. Some minor issues: 1. Past turns in Table 1 could be numbered, making the text associated with this table (lines 095-103) less difficult to ingest. Currently, readers need to count turns from the top when identifying references in the authors' description, and may wonder whether "second", "third", and "last" imply a side-specific or global enumeration. 2. Some reader confusion may be eliminated by explicitly defining what "segment" means in "segment level", as occurring on line 269. Previously, on line 129, this seemingly same thing was referred to as "a sequence-sequence [similarity matrix]". The two terms appear to be used interchangeably, but it is not clear what they actually mean, despite the text in section 3.3. It seems the authors may mean "word subsequence" and "word subsequence to word subsequence", where "sub-" implies "not the whole utterance", but not sure. 3. Currently, the variable symbol "n" appears to be used to enumerate words in an utterance (line 306), as well as utterances in a dialogue (line 389). The authors may choose two different letters for these two different purposes, to avoid confusing readers going through their equations. 4. The statement "This indicates that a retrieval based chatbot with SMN can provide a better experience than the state-of-the-art generation model in practice." at the end of section 5 appears to be unsupported. The two approaches referred to are deemed comparable in 555 out of 1000 cases, with the baseline better than the proposed method in 238 our of the remaining 445 cases. The authors are encouraged to assess and present the statistical significance of this comparison. If it is weak, their comparison permits to at best claim that their proposed method is no worse (rather than "better") than the VHRED baseline. 5. The authors may choose to insert into Figure 1 the explicit "first layer", "second layer" and "third layer" labels they use in the accompanying text. 6. Their is a pervasive use of "to meet" as in "a response candidate can meet each utterace" on line 280 which is difficult to understand. 7. Spelling: "gated recurrent unites"; "respectively" on line 133 should be removed; punctuation on line 186 and 188 is exchanged; "baseline model over" -> "baseline model by"; "one cannot neglects".
[ [ 13, 89 ], [ 90, 134 ], [ 135, 189 ], [ 204, 273 ], [ 274, 306 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_pos_3", "Eval_neg_1", "Eval_neg_2" ]
546
paper_summary This paper proposes a neural pairwise ranking model to evaluate readability assessment. The model outperforms existing methods on two tasks - cross-corpus ranking and zero shot, cross-lingual ranking - based on ranking metrics. The paper also publish a new bilingual (English-French) readability dataset for cross-lingual evaluation. summary_of_strengths 1. The pairwise ranking method shows better robustness on various monolingual dataset and out-domain cross-lingual dataset than previous methods. 2. The paper creates a new cross-lingual dataset for readability assessment. summary_of_weaknesses 1. The paper does not clearly introduce the dataset. For example, what is average length of each text in NewsEla? If the text is long, the paper should also mention how the text is encoded by BERT when it exceeds the maximum length. 2. As the paper already pointed out, the nature of pairwise model restricts the inference for downstream applications, so that the contribution of this paper is not very strong. comments,_suggestions_and_typos - L400: citation for "for evaluating ranking or information-retrieval tasks in literature"? -L502: Is NPRM (0.999, 0.995, 0.990, 0.948) better than regBERT (0.999, 0.997, 0.994, 0.977)? -L613: "while" -> ""
[ [ 620, 669 ], [ 670, 850 ], [ 854, 968 ], [ 970, 1029 ] ]
[ "Eval_neg_1", "Jus_neg_1", "Jus_neg_2", "Eval_neg_2" ]
547
paper_summary This paper propose a new pre-training objective for train Bi-encoder based QA model. The main claim is that the proposed pre-training objectives helps in improving the model's performance on non-QA downstream tasks like paraphrase detection, sentiment analysis etc. without extensive finetuing. The main hypothesis is that the proposed way of pre-training helps the model in learning better token level representation needed for zero shot and few shot tasks. summary_of_strengths Synthetically generated questions have been nicely used for pre-training QA model to improve performance on tasks like Sentiment analysis, paraphrase detection in a zero shot, few shot setup. A good analysis of using QA as the pre-training objective rather than MLM. summary_of_weaknesses The paper has incremental or limited novelty as the question generation from passages using BART is already a known, Bi-encoder for QA already exists. Contributions of the paper are quite trivial. Bi-encoder+Unsupervised QA didn't perfrom great. So whether choosing QA as the pre-training is helping QA task or not is not very clear. what is the motivation behind independently encoding questions and passage is not very clear. isn't that a shared common encoder would help the model learning better question tokens to passage token matching? For QA task, It seems that the cross-encoder + MRQA is doing better than proposed QUIP. comments,_suggestions_and_typos Highlight the best numbers in table 7. -Have a strategy to filter out best quality question from the set of synthetically generated questions. -May be design a curriculum learning paradigm for pre-training so that the batch sampling is done in a manner that helps the pre-training learn from easy to complex questions.
[ [ 496, 687 ], [ 688, 763 ], [ 786, 830 ], [ 831, 936 ], [ 938, 984 ], [ 985, 1033 ], [ 1034, 1121 ], [ 1123, 1216 ], [ 1217, 1332 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_3", "Eval_neg_3", "Eval_neg_4", "Jus_neg_4" ]
548
The paper present a curriculum learning method for NMT which is shown to work relatively well for low-resource settings. The method is applied to the standard Transformer and it is evaluated on several language pairs and different data sizes. The method determines what data samples should be presented to the model based on their difficulty and the model competence. The experiments show that the approach provides for consistent improvements. The paper is interesting and relatively clear, but there are some points in the paper that need clarification. The approach is interesting. It is based on Platanios et al. (2019) where sample difficulty and model competence were used. This paper proposes new ways to compute these values, and more importantly, sample difficulty is reestimated during training. The improvements are consistent across different language pairs, although often relatively small. The highest improvements are when training on 50K WMT data. My main concern is that the improvements are rather small, on many test sets it is under 1 BLEU. Also, I am concerned if the model used for 50K WMT is overparametrized. This is the setting where the highest improvements are obtained, but it seems like this is a large model for such a small dataset. Dropout should probably be larger as well. Sennrich and Zhang (2019) (Revisiting Low-Resource Neural Machine Translation: A Case Study) provide details on how to better train low-resource models. I am concerned if the method would prove as effective in low-resource settings if the models are more optimized for the low-resource setting. I am not sure why are there some differences in model sizes across the different language pairs. These seem rather arbitrary. How were these hyperparameters determined? I do not see a connection between training dataset size and model size or number of layers. The analysis in Figure 2, 3 and 4 is interesting. However, I am curious if these effects would be noticeable in the other experimental settings, where in almost all cases, improvements are under 1 BLEU. The results in Koehn and Knowles (2017) with regards to low BLEU scores in low-resource settings have been addressed in Sennrich and Zhang (2019). I agree with the sentiment that performance in low-resource settings is often lacking, but using this reference may not be appropriate. Typos and writing style: including language model -> including language modeling mini-batch contains sentences -> mini-batch containing sentences Algorithm 1, line 1: Randomly initial -> Randomly initialize I would suggest normalizing all scores in Table 3 to 2 decimal points. In this paper, we propose a dynamic model competence (DMC) estimation method... - this paragraph is confusing. I did not understand what is the "prior hypothesis of the training process" nor how is BLEU superior if the loss is already the optimal method to estimate competence.
[ [ 446, 556 ], [ 558, 586 ], [ 587, 806 ], [ 968, 1025 ], [ 1027, 1063 ], [ 1065, 1136 ], [ 1137, 1606 ], [ 1607, 1732 ], [ 1733, 1867 ], [ 1868, 1917 ] ]
[ "Eval_pos_1", "Eval_pos_2", "Jus_pos_2", "Eval_neg_1", "Jus_neg_1", "Eval_neg_2", "Jus_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4" ]
549
paper_summary In this paper, the authors propose SWCC to improve event representation learning. The method mainly modifies the objective function, which can be broken down into a modified InfoNCE objective, a prototype-based clustering objective, and a masked language model objective. The experiment includes two intrinsic evaluations, based on event embedding similarity and one extrinsic evaluation, MCNC. The result shows that the proposed method outperforms baselines on all the evaluations. The ablation and visualization studies also identify the importance of each component (objective function). summary_of_strengths 1. The experiment results show strong performance of the proposed method. 2. To the best of my knowledge, this is the first work that leverages prototype-based clustering in event representation learning. summary_of_weaknesses 1. The write-up has many typos and some formulas/explanations are confusing. 2. The technical innovation of the proposed method is limited. The proposed objective function is basically a combination of two related works with tiny changes. 3. Reproductivity is not ideal, as some essential parts are not addressed in the paper, such as training data. 4. More strong baselines should be included/discussed in the experiments. comments,_suggestions_and_typos Comments 1. Line 18: it’s better to specify the exact tasks rather than stating several tasks in the abstract 2. Line 69: “current” -> “currently” 3. Line 112: margin-based loss can use one positive with MULTIPLE negatives. 4. Figure 1: the relation types are not explicitly modeled in this work so the figure is kind of confusing. 5. Eq. 1: what is z_p 6. Eq. 2: how do you convert BERT token embeddings into event embeddings? Concatenation? Any pooling? 7. Line 236-237: “conciser” -> “consider” 8. Line 209-211: I can understand what you mean but this part should be re-written. For example, “given an anchor event, we generate 3 positive samples with different dropout masks.” 9. Table 1: needs to include a random baseline to show the task difficulty 10. Experiments: what training data you use for pre-training? 11. Table 1: do the baselines and the proposed method use the same training data for pre-training? How did you get the results for the baselines? Did you have your own implementation, or directly use their released embeddings/code? 12. Line 450: L_{pc} or L_{cp}| in Eq. 7? 13. Table 2: what’s the difference between “SWCC w/o Prototype-based Clustering” and “BERT(InfoNCE)”? 14. Table 3: MCNC should have many strong baselines that are not compared here, such as the baselines in [1]. Can you justify the reason? 15. Can you provide an analysis on the impact of the number of augmented samples (e.g., z_{a1}, z_{a2}) here?
[ [ 705, 833 ], [ 859, 886 ], [ 891, 931 ], [ 937, 996 ], [ 997, 1096 ], [ 1100, 1127 ], [ 1129, 1208 ] ]
[ "Eval_pos_1", "Eval_neg_1", "Eval_neg_2", "Eval_neg_3", "Jus_neg_3", "Eval_neg_4", "Jus_neg_4" ]