Columns: text (string, lengths 63 to 12.6k characters), question (string, 4 distinct values), label (string, 2 distinct values: yes / no)
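A minimal sketch of how one record in this dataset might be represented and filtered in Python; the ReviewRecord class and helper below are illustrative assumptions about the schema described above, not part of the dataset or any particular loading library.

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    text: str       # full review text, roughly 63 to 12.6k characters
    question: str   # one of 4 question strings about the review's structure
    label: str      # "yes" or "no"

def answered_yes(records: list[ReviewRecord], question: str) -> list[ReviewRecord]:
    """Return the records where the given structure question is answered 'yes'."""
    return [r for r in records if r.question == question and r.label == "yes"]

# Example usage with one (truncated) record taken from the rows below.
records = [
    ReviewRecord(
        text="This paper proposes improving many-to-many NMT using a word-level contrastive loss. ...",
        question="Does the review include a summary of the weaknesses of the paper?",
        label="yes",
    ),
]
print(len(answered_yes(records, "Does the review include a summary of the weaknesses of the paper?")))  # -> 1
```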
This paper proposes improving many-to-many NMT using a word-level contrastive loss. The paper also shows that the proposed approach’s translation quality correlates with how well sentences can be retrieved using the encoder’s output. The experiment covers many different setups, with different number of language pairs, a wide variety of languages, and more than 1 neural architecture. The ablation is helpful for explaining where the improvement in NMT performance comes from. 1. The choice of the word-alignment baseline seems odd. The abstract claims that “Word alignment has proven to benefit many-to-many neural machine translation (NMT).” which is supported by (Lin et al., 2020). However, the method proposed by Lin et al was used as baseline. Instead, the paper compared to an older baseline proposed by (Garg et al., 2019). Besides, this baseline by Garg et al (+align) seems to contradict the claim in the abstract since it always performs worse than the baseline without word-alignment (Table 2). If for some practical reason, the baseline of (Lin et al., 2020) can’t be used, it needs to be explained clearly. 2. In Table 2, the proposed approaches only outperform the baselines in 1 setup (out of 3). In addition, there is no consistent trend in the result (i.e. it’s unclear which proposed method (+w2w) or (+FA) is better). Thus, the results presented are insufficient to prove the benefits of the proposed methods. To better justify the claims in this paper, additional experiments or more in-depth analysis seem necessary. 3. If the claim that better word-alignment improves many-to-many translation is true, why does the proposed method have no impact on the MLSC setup (Table 3)? Section 4 touches on this point but provides no explanation. 1. Please provide more details for the sentence retrieval setup (how sentences are retrieved, from what corpus, is it the same/different to the setup in (Artetxe and Schwenk, 2019) ? ). From the paper, “We found that for en-kk, numbers of extracted word pairs per sentence by word2word and FastAlign are 1.0 and 2.2, respectively. In contrast, the numbers are 4.2 and 20.7 for improved language pairs”. Is this because word2word and FastAlign fail for some language pairs or is this because there are few alignments between these language pairs? Would a better aligner improve result further? 2. For Table 3, are the non-highlighted cells not significant or not significantly better? If it’s the latter, please also highlight cells where the proposed approaches are significantly worse. For example, from Kk to En, +FA is significantly better than mBART (14.4 vs 14.1, difference of 0.3) and thus the cell is highlighted. However, from En to Kk, the difference between +FA and mBART is -0.5 (1.3 vs 1.8) but this cell is not highlighted.
Does the review include a summary of the weaknesses of the paper?
yes
This paper proposes improving many-to-many NMT using a word-level contrastive loss. The paper also shows that the proposed approach’s translation quality correlates with how well sentences can be retrieved using the encoder’s output. The experiment covers many different setups, with different number of language pairs, a wide variety of languages, and more than 1 neural architecture. The ablation is helpful for explaining where the improvement in NMT performance comes from. 1. Please provide more details for the sentence retrieval setup (how sentences are retrieved, from what corpus, is it the same/different to the setup in (Artetxe and Schwenk, 2019) ? ). From the paper, “We found that for en-kk, numbers of extracted word pairs per sentence by word2word and FastAlign are 1.0 and 2.2, respectively. In contrast, the numbers are 4.2 and 20.7 for improved language pairs”. Is this because word2word and FastAlign fail for some language pairs or is this because there are few alignments between these language pairs? Would a better aligner improve result further? 2. For Table 3, are the non-highlighted cells not significant or not significantly better? If it’s the latter, please also highlight cells where the proposed approaches are significantly worse. For example, from Kk to En, +FA is significantly better than mBART (14.4 vs 14.1, difference of 0.3) and thus the cell is highlighted. However, from En to Kk, the difference between +FA and mBART is -0.5 (1.3 vs 1.8) but this cell is not highlighted.
Does the review include a summary of the weaknesses of the paper?
no
This paper proposes improving many-to-many NMT using a word-level contrastive loss. The paper also shows that the proposed approach’s translation quality correlates with how well sentences can be retrieved using the encoder’s output. The experiment covers many different setups, with different number of language pairs, a wide variety of languages, and more than 1 neural architecture. The ablation is helpful for explaining where the improvement in NMT performance comes from. 1. The choice of the word-alignment baseline seems odd. The abstract claims that “Word alignment has proven to benefit many-to-many neural machine translation (NMT).” which is supported by (Lin et al., 2020). However, the method proposed by Lin et al was used as baseline. Instead, the paper compared to an older baseline proposed by (Garg et al., 2019). Besides, this baseline by Garg et al (+align) seems to contradict the claim in the abstract since it always performs worse than the baseline without word-alignment (Table 2). If for some practical reason, the baseline of (Lin et al., 2020) can’t be used, it needs to be explained clearly. 2. In Table 2, the proposed approaches only outperform the baselines in 1 setup (out of 3). In addition, there is no consistent trend in the result (i.e. it’s unclear which proposed method (+w2w) or (+FA) is better). Thus, the results presented are insufficient to prove the benefits of the proposed methods. To better justify the claims in this paper, additional experiments or more in-depth analysis seem necessary. 3. If the claim that better word-alignment improves many-to-many translation is true, why does the proposed method have no impact on the MLSC setup (Table 3)? Section 4 touches on this point but provides no explanation. 1. Please provide more details for the sentence retrieval setup (how sentences are retrieved, from what corpus, is it the same/different to the setup in (Artetxe and Schwenk, 2019) ? ). From the paper, “We found that for en-kk, numbers of extracted word pairs per sentence by word2word and FastAlign are 1.0 and 2.2, respectively. In contrast, the numbers are 4.2 and 20.7 for improved language pairs”. Is this because word2word and FastAlign fail for some language pairs or is this because there are few alignments between these language pairs? Would a better aligner improve result further? 2. For Table 3, are the non-highlighted cells not significant or not significantly better? If it’s the latter, please also highlight cells where the proposed approaches are significantly worse. For example, from Kk to En, +FA is significantly better than mBART (14.4 vs 14.1, difference of 0.3) and thus the cell is highlighted. However, from En to Kk, the difference between +FA and mBART is -0.5 (1.3 vs 1.8) but this cell is not highlighted.
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper proposes improving many-to-many NMT using a word-level contrastive loss. The paper also shows that the proposed approach’s translation quality correlates with how well sentences can be retrieved using the encoder’s output. The experiment covers many different setups, with different number of language pairs, a wide variety of languages, and more than 1 neural architecture. The ablation is helpful for explaining where the improvement in NMT performance comes from. 1. The choice of the word-alignment baseline seems odd. The abstract claims that “Word alignment has proven to benefit many-to-many neural machine translation (NMT).” which is supported by (Lin et al., 2020). However, the method proposed by Lin et al was used as baseline. Instead, the paper compared to an older baseline proposed by (Garg et al., 2019). Besides, this baseline by Garg et al (+align) seems to contradict the claim in the abstract since it always performs worse than the baseline without word-alignment (Table 2). If for some practical reason, the baseline of (Lin et al., 2020) can’t be used, it needs to be explained clearly. 2. In Table 2, the proposed approaches only outperform the baselines in 1 setup (out of 3). In addition, there is no consistent trend in the result (i.e. it’s unclear which proposed method (+w2w) or (+FA) is better). Thus, the results presented are insufficient to prove the benefits of the proposed methods. To better justify the claims in this paper, additional experiments or more in-depth analysis seem necessary. 3. If the claim that better word-alignment improves many-to-many translation is true, why does the proposed method have no impact on the MLSC setup (Table 3)? Section 4 touches on this point but provides no explanation.
Does the review mention any comments, suggestions or typos that the author should address?
no
This paper claims the Distinct’s bias that tends to pose higher penalties over longer sequences, and then fixes the bias by calculating the expectation of distinct tokens of a random text with the same length, and divide the origin Distinct value by it. They provide theoretical evidence to the formula, and do experiments on the dialog generation task to prove that the newDistinct correlates better with human evaluations. 1. As the paper mentioned, the idea is inspired by psychological linguistics, making it more convincing. 2. This paper gives math analysis and derivation of the formula, making it more solid. 3. The experiments prove the efficiency of the new Distinct. 1. In the results in Table 2, there is a large score gap between the newdistinct and original distinct for the system of AdaLab, please give more explanations. 2. In Page 7, line 517~523, C can’t be so large that the limitation method can get a good enough result. Perhaps you need more accurate math to show that when C is not so big, for example, when C is only 5x or 10x of V, the derivative of NewDistinct is still bigger than the original Distinct. Besides, the conclusion “the bigger C is, the slower the original Distinct increases” is also right for NewDistinct. This paper is well written and is elegant.
Does the review include a short summary of the paper?
yes
1. As the paper mentioned, the idea is inspired by psychological linguistics, making it more convincing. 2. This paper gives math analysis and derivation of the formula, making it more solid. 3. The experiments prove the efficiency of the new Distinct. 1. In the results in Table 2, there is a large score gap between the newdistinct and original distinct for the system of AdaLab, please give more explanations. 2. In Page 7, line 517~523, C can’t be so large that the limitation method can get a good enough result. Perhaps you need more accurate math to show that when C is not so big, for example, when C is only 5x or 10x of V, the derivative of NewDistinct is still bigger than the original Distinct. Besides, the conclusion “the bigger C is, the slower the original Distinct increases” is also right for NewDistinct. This paper is well written and is elegant.
Does the review include a short summary of the paper?
no
This paper claims the Distinct’s bias that tends to pose higher penalties over longer sequences, and then fixes the bias by calculating the expectation of distinct tokens of a random text with the same length, and divide the origin Distinct value by it. They provide theoretical evidence to the formula, and do experiments on the dialog generation task to prove that the newDistinct correlates better with human evaluations. 1. As the paper mentioned, the idea is inspired by psychological linguistics, making it more convincing. 2. This paper gives math analysis and derivation of the formula, making it more solid. 3. The experiments prove the efficiency of the new Distinct. 1. In the results in Table 2, there is a large score gap between the newdistinct and original distinct for the system of AdaLab, please give more explanations. 2. In Page 7, line 517~523, C can’t be so large that the limitation method can get a good enough result. Perhaps you need more accurate math to show that when C is not so big, for example, when C is only 5x or 10x of V, the derivative of NewDistinct is still bigger than the original Distinct. Besides, the conclusion “the bigger C is, the slower the original Distinct increases” is also right for NewDistinct. This paper is well written and is elegant.
Does the review include a summary of the strengths of the paper?
yes
This paper claims the Distinct’s bias that tends to pose higher penalties over longer sequences, and then fixes the bias by calculating the expectation of distinct tokens of a random text with the same length, and divide the origin Distinct value by it. They provide theoretical evidence to the formula, and do experiments on the dialog generation task to prove that the newDistinct correlates better with human evaluations. 1. In the results in Table 2, there is a large score gap between the newdistinct and original distinct for the system of AdaLab, please give more explanations. 2. In Page 7, line 517~523, C can’t be so large that the limitation method can get a good enough result. Perhaps you need more accurate math to show that when C is not so big, for example, when C is only 5x or 10x of V, the derivative of NewDistinct is still bigger than the original Distinct. Besides, the conclusion “the bigger C is, the slower the original Distinct increases” is also right for NewDistinct. This paper is well written and is elegant.
Does the review include a summary of the strengths of the paper?
no
This paper claims the Distinct’s bias that tends to pose higher penalties over longer sequences, and then fixes the bias by calculating the expectation of distinct tokens of a random text with the same length, and divide the origin Distinct value by it. They provide theoretical evidence to the formula, and do experiments on the dialog generation task to prove that the newDistinct correlates better with human evaluations. 1. As the paper mentioned, the idea is inspired by psychological linguistics, making it more convincing. 2. This paper gives math analysis and derivation of the formula, making it more solid. 3. The experiments prove the efficiency of the new Distinct. 1. In the results in Table 2, there is a large score gap between the newdistinct and original distinct for the system of AdaLab, please give more explanations. 2. In Page 7, line 517~523, C can’t be so large that the limitation method can get a good enough result. Perhaps you need more accurate math to show that when C is not so big, for example, when C is only 5x or 10x of V, the derivative of NewDistinct is still bigger than the original Distinct. Besides, the conclusion “the bigger C is, the slower the original Distinct increases” is also right for NewDistinct. This paper is well written and is elegant.
Does the review include a summary of the weaknesses of the paper?
yes
This paper claims the Distinct’s bias that tends to pose higher penalties over longer sequences, and then fixes the bias by calculating the expectation of distinct tokens of a random text with the same length, and divide the origin Distinct value by it. They provide theoretical evidence to the formula, and do experiments on the dialog generation task to prove that the newDistinct correlates better with human evaluations. 1. As the paper mentioned, the idea is inspired by psychological linguistics, making it more convincing. 2. This paper gives math analysis and derivation of the formula, making it more solid. 3. The experiments prove the efficiency of the new Distinct. This paper is well written and is elegant.
Does the review include a summary of the weaknesses of the paper?
no
This paper claims the Distinct’s bias that tends to pose higher penalties over longer sequences, and then fixes the bias by calculating the expectation of distinct tokens of a random text with the same length, and divide the origin Distinct value by it. They provide theoretical evidence to the formula, and do experiments on the dialog generation task to prove that the newDistinct correlates better with human evaluations. 1. As the paper mentioned, the idea is inspired by psychological linguistics, making it more convincing. 2. This paper gives math analysis and derivation of the formula, making it more solid. 3. The experiments prove the efficiency of the new Distinct. 1. In the results in Table 2, there is a large score gap between the newdistinct and original distinct for the system of AdaLab, please give more explanations. 2. In Page 7, line 517~523, C can’t be so large that the limitation method can get a good enough result. Perhaps you need more accurate math to show that when C is not so big, for example, when C is only 5x or 10x of V, the derivative of NewDistinct is still bigger than the original Distinct. Besides, the conclusion “the bigger C is, the slower the original Distinct increases” is also right for NewDistinct. This paper is well written and is elegant.
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper claims the Distinct’s bias that tends to pose higher penalties over longer sequences, and then fixes the bias by calculating the expectation of distinct tokens of a random text with the same length, and divide the origin Distinct value by it. They provide theoretical evidence to the formula, and do experiments on the dialog generation task to prove that the newDistinct correlates better with human evaluations. 1. As the paper mentioned, the idea is inspired by psychological linguistics, making it more convincing. 2. This paper gives math analysis and derivation of the formula, making it more solid. 3. The experiments prove the efficiency of the new Distinct. 1. In the results in Table 2, there is a large score gap between the newdistinct and original distinct for the system of AdaLab, please give more explanations. 2. In Page 7, line 517~523, C can’t be so large that the limitation method can get a good enough result. Perhaps you need more accurate math to show that when C is not so big, for example, when C is only 5x or 10x of V, the derivative of NewDistinct is still bigger than the original Distinct. Besides, the conclusion “the bigger C is, the slower the original Distinct increases” is also right for NewDistinct.
Does the review mention any comments, suggestions or typos that the author should address?
no
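The rows above describe normalizing Distinct by the expected number of distinct tokens in a random text of the same length and dividing the original Distinct value by it. A minimal sketch of that idea follows; the uniform-random-text expectation formula V * (1 - ((V - 1) / V) ** L) is my assumption about what "expectation of distinct tokens of a random text" means, not necessarily the paper's exact derivation.

```python
def expected_distinct(length: int, vocab_size: int) -> float:
    """Expected number of distinct tokens in a length-L text drawn uniformly from a vocab of size V."""
    v = float(vocab_size)
    return v * (1.0 - ((v - 1.0) / v) ** length)

def original_distinct(tokens: list[str]) -> float:
    # Classic Distinct-1: distinct unigrams divided by total unigrams.
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

def expectation_adjusted_distinct(tokens: list[str], vocab_size: int) -> float:
    # Divide the raw distinct count by its expectation instead of by text length,
    # removing the length penalty the review discusses.
    if not tokens:
        return 0.0
    return len(set(tokens)) / expected_distinct(len(tokens), vocab_size)
```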
The paper describes a new approach towards MeSH label prediction, utilizing the title abstract journal relative information. The proposed model combines BiLSTMs, Dilated CNNs and GCNNs to extract features from abstracts, titles and the mesh term hierarchy respectively. Limiting the search MeSH space with information extraction from metadata (such as other articles published in that journal) allows for a boost in performance by building dynamic attention masks. The final model shows good performance compared to related approaches, one of which uses the full article. - Utilized information past the document itself to limit the MeSH search space - Introduces novel end-to-end architecture that can be used in other tasks involving scholarly articles - Achieves good performance compared to related approaches. - Threshold is said to have a very big impact but is not discussed in detail with different ablations. How does threshold affect computational complexity (outside of performance)? - Some of the design choices are not explained well (e.g. why IDF-weighting) - Training time (epochs) and computational complexity of the kNN and GCNN component is not discussed. - Equations 10 & 11 should be H_{abstract} instead of D_{abstract}? If not, when is H_{abstract} used? - There is a significant drop in performance for MeSH terms when metadata are not available, leading to a worse performance than other methods (Ablations-d). In case of new journals or preprints, is this the expected performance? - With the tuned threshold, how many MeSH terms are not selected during the dynamic masking on average in the different data splits? What is the hierarchical level of these terms? - A few minor typos, proof reading should fix them. Nothing major.
Does the review include a short summary of the paper?
yes
- Utilized information past the document itself to limit the MeSH search space - Introduces novel end-to-end architecture that can be used in other tasks involving scholarly articles - Achieves good performance compared to related approaches. - Threshold is said to have a very big impact but is not discussed in detail with different ablations. How does threshold affect computational complexity (outside of performance)? - Some of the design choices are not explained well (e.g. why IDF-weighting) - Training time (epochs) and computational complexity of the kNN and GCNN component is not discussed. - Equations 10 & 11 should be H_{abstract} instead of D_{abstract}? If not, when is H_{abstract} used? - There is a significant drop in performance for MeSH terms when metadata are not available, leading to a worse performance than other methods (Ablations-d). In case of new journals or preprints, is this the expected performance? - With the tuned threshold, how many MeSH terms are not selected during the dynamic masking on average in the different data splits? What is the hierarchical level of these terms? - A few minor typos, proof reading should fix them. Nothing major.
Does the review include a short summary of the paper?
no
The paper describes a new approach towards MeSH label prediction, utilizing the title abstract journal relative information. The proposed model combines BiLSTMs, Dilated CNNs and GCNNs to extract features from abstracts, titles and the mesh term hierarchy respectively. Limiting the search MeSH space with information extraction from metadata (such as other articles published in that journal) allows for a boost in performance by building dynamic attention masks. The final model shows good performance compared to related approaches, one of which uses the full article. - Utilized information past the document itself to limit the MeSH search space - Introduces novel end-to-end architecture that can be used in other tasks involving scholarly articles - Achieves good performance compared to related approaches. - Threshold is said to have a very big impact but is not discussed in detail with different ablations. How does threshold affect computational complexity (outside of performance)? - Some of the design choices are not explained well (e.g. why IDF-weighting) - Training time (epochs) and computational complexity of the kNN and GCNN component is not discussed. - Equations 10 & 11 should be H_{abstract} instead of D_{abstract}? If not, when is H_{abstract} used? - There is a significant drop in performance for MeSH terms when metadata are not available, leading to a worse performance than other methods (Ablations-d). In case of new journals or preprints, is this the expected performance? - With the tuned threshold, how many MeSH terms are not selected during the dynamic masking on average in the different data splits? What is the hierarchical level of these terms? - A few minor typos, proof reading should fix them. Nothing major.
Does the review include a summary of the strengths of the paper?
yes
The paper describes a new approach towards MeSH label prediction, utilizing the title abstract journal relative information. The proposed model combines BiLSTMs, Dilated CNNs and GCNNs to extract features from abstracts, titles and the mesh term hierarchy respectively. Limiting the search MeSH space with information extraction from metadata (such as other articles published in that journal) allows for a boost in performance by building dynamic attention masks. The final model shows good performance compared to related approaches, one of which uses the full article. - Threshold is said to have a very big impact but is not discussed in detail with different ablations. How does threshold affect computational complexity (outside of performance)? - Some of the design choices are not explained well (e.g. why IDF-weighting) - Training time (epochs) and computational complexity of the kNN and GCNN component is not discussed. - Equations 10 & 11 should be H_{abstract} instead of D_{abstract}? If not, when is H_{abstract} used? - There is a significant drop in performance for MeSH terms when metadata are not available, leading to a worse performance than other methods (Ablations-d). In case of new journals or preprints, is this the expected performance? - With the tuned threshold, how many MeSH terms are not selected during the dynamic masking on average in the different data splits? What is the hierarchical level of these terms? - A few minor typos, proof reading should fix them. Nothing major.
Does the review include a summary of the strengths of the paper?
no
The paper describes a new approach towards MeSH label prediction, utilizing the title abstract journal relative information. The proposed model combines BiLSTMs, Dilated CNNs and GCNNs to extract features from abstracts, titles and the mesh term hierarchy respectively. Limiting the search MeSH space with information extraction from metadata (such as other articles published in that journal) allows for a boost in performance by building dynamic attention masks. The final model shows good performance compared to related approaches, one of which uses the full article. - Utilized information past the document itself to limit the MeSH search space - Introduces novel end-to-end architecture that can be used in other tasks involving scholarly articles - Achieves good performance compared to related approaches. - Threshold is said to have a very big impact but is not discussed in detail with different ablations. How does threshold affect computational complexity (outside of performance)? - Some of the design choices are not explained well (e.g. why IDF-weighting) - Training time (epochs) and computational complexity of the kNN and GCNN component is not discussed. - Equations 10 & 11 should be H_{abstract} instead of D_{abstract}? If not, when is H_{abstract} used? - There is a significant drop in performance for MeSH terms when metadata are not available, leading to a worse performance than other methods (Ablations-d). In case of new journals or preprints, is this the expected performance? - With the tuned threshold, how many MeSH terms are not selected during the dynamic masking on average in the different data splits? What is the hierarchical level of these terms? - A few minor typos, proof reading should fix them. Nothing major.
Does the review include a summary of the weaknesses of the paper?
yes
The paper describes a new approach towards MeSH label prediction, utilizing the title abstract journal relative information. The proposed model combines BiLSTMs, Dilated CNNs and GCNNs to extract features from abstracts, titles and the mesh term hierarchy respectively. Limiting the search MeSH space with information extraction from metadata (such as other articles published in that journal) allows for a boost in performance by building dynamic attention masks. The final model shows good performance compared to related approaches, one of which uses the full article. - Utilized information past the document itself to limit the MeSH search space - Introduces novel end-to-end architecture that can be used in other tasks involving scholarly articles - Achieves good performance compared to related approaches. - Equations 10 & 11 should be H_{abstract} instead of D_{abstract}? If not, when is H_{abstract} used? - There is a significant drop in performance for MeSH terms when metadata are not available, leading to a worse performance than other methods (Ablations-d). In case of new journals or preprints, is this the expected performance? - With the tuned threshold, how many MeSH terms are not selected during the dynamic masking on average in the different data splits? What is the hierarchical level of these terms? - A few minor typos, proof reading should fix them. Nothing major.
Does the review include a summary of the weaknesses of the paper?
no
The paper describes a new approach towards MeSH label prediction, utilizing the title abstract journal relative information. The proposed model combines BiLSTMs, Dilated CNNs and GCNNs to extract features from abstracts, titles and the mesh term hierarchy respectively. Limiting the search MeSH space with information extraction from metadata (such as other articles published in that journal) allows for a boost in performance by building dynamic attention masks. The final model shows good performance compared to related approaches, one of which uses the full article. - Utilized information past the document itself to limit the MeSH search space - Introduces novel end-to-end architecture that can be used in other tasks involving scholarly articles - Achieves good performance compared to related approaches. - Threshold is said to have a very big impact but is not discussed in detail with different ablations. How does threshold affect computational complexity (outside of performance)? - Some of the design choices are not explained well (e.g. why IDF-weighting) - Training time (epochs) and computational complexity of the kNN and GCNN component is not discussed. - Equations 10 & 11 should be H_{abstract} instead of D_{abstract}? If not, when is H_{abstract} used? - There is a significant drop in performance for MeSH terms when metadata are not available, leading to a worse performance than other methods (Ablations-d). In case of new journals or preprints, is this the expected performance? - With the tuned threshold, how many MeSH terms are not selected during the dynamic masking on average in the different data splits? What is the hierarchical level of these terms? - A few minor typos, proof reading should fix them. Nothing major.
Does the review mention any comments, suggestions or typos that the author should address?
yes
The paper describes a new approach towards MeSH label prediction, utilizing the title abstract journal relative information. The proposed model combines BiLSTMs, Dilated CNNs and GCNNs to extract features from abstracts, titles and the mesh term hierarchy respectively. Limiting the search MeSH space with information extraction from metadata (such as other articles published in that journal) allows for a boost in performance by building dynamic attention masks. The final model shows good performance compared to related approaches, one of which uses the full article. - Utilized information past the document itself to limit the MeSH search space - Introduces novel end-to-end architecture that can be used in other tasks involving scholarly articles - Achieves good performance compared to related approaches. - Threshold is said to have a very big impact but is not discussed in detail with different ablations. How does threshold affect computational complexity (outside of performance)? - Some of the design choices are not explained well (e.g. why IDF-weighting) - Training time (epochs) and computational complexity of the kNN and GCNN component is not discussed.
Does the review mention any comments, suggestions or typos that the author should address?
no
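The rows above describe limiting the MeSH label space with journal metadata by building dynamic attention masks. A minimal sketch of the general masking idea, using assumed names and shapes rather than the paper's actual module: candidate label indices (e.g. gathered from other articles in the same journal) keep their scores, while all other labels are pushed to negative infinity before normalization.

```python
import numpy as np

def masked_label_scores(scores: np.ndarray, candidate_ids: set[int]) -> np.ndarray:
    """Keep scores for candidate labels; push all other labels to -inf."""
    mask = np.full(scores.shape, -np.inf)
    mask[list(candidate_ids)] = 0.0
    return scores + mask

# Example: 5 MeSH labels, metadata suggests candidates {0, 2, 3}.
scores = np.array([1.2, 0.4, -0.3, 2.1, 0.9])
probs = np.exp(masked_label_scores(scores, {0, 2, 3}))
probs /= probs.sum()  # non-candidate labels receive probability ~0
```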
This paper introduces a new method for MeSH indexing, combining multiple methods including, but not limited to, dilated CNNs, masked attention, and graph CNNs. Overall, the proposed approach makes substantial improvements over prior state-of-the-art methods. For example, Micro F1 improves over BERTMeSH from 0.685 to 0.745. Similar improvements are also found for the example-based measures (e.g., example-based F1). Furthermore, a comprehensive ablation study was performed, showing that the label feature model has the largest impact on model performance, yet, other parts of the method still impact performance substantially. Overall, the paper is well-written and easy to read. Furthermore, the improvement over prior work is substantial. It is neither easy nor trivial to make such considerable performance improvements for MeSH indexing, especially for Micro F1. For instance, BERTMeSH [1] only improves DeepMeSH [2] by only 2% in Micro F1 [1] after five years of work. Hence, seeing a Micro F1 near 0.75 is a huge breakthrough. References: [1] Peng, Shengwen, et al. "DeepMeSH: deep semantic representation for improving large-scale MeSH indexing." Bioinformatics 32.12 (2016): i70-i79. [2] You, Ronghui, et al. "BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. Overall, there are three major weaknesses in this paper. First, the paper uses a custom training and validation dataset pulled from PubMed, making comparisons difficult. Using the data from the yearly BioASQ shared tasks would be better to use their data so new methods are more easily comparable. I understand this is common in similar studies (e.g., by BERTMeSH [3]), but a standardized dataset seems possible and useful. Second, while the hyperparameters are discussed, it is not clear whether hyperparameters were optimized for the baseline models. What were the chosen parameters? Was the validation dataset used to optimize them similarly to the proposed method? If so, why is the standard deviation not reported for the baseline models (e.g., in Table 1)? Given the substantial performance differences between the proposed model and prior work, this additional information must be reported to ensure fair comparisons. Third, while this may be the first paper to use GCNNs for MeSH indexing, it is widely used for similar biomedical text classification tasks (e.g., ICD Coding). For instance, [1] directly combines BiLSTMs with GCNNs and label features in a very similar manner to the method proposed in this paper, albeit with exceptions such as [1] does not use dilated CNNs. Furthermore, that work has been expanded on to better understand the impact of the GCNNs and whether they are needed [2]. Hence, the paper would substantially help if the related work sections were expanded to include citations with similar methodologies. In my opinion, the "Dynamic Knowledge-enhanced Mask Attention Module" is one of the most innovative parts of the paper and should be highlighted more in the introduction. References: [1] Chalkidis, Ilias, et al. "Large-Scale Multi-Label Text Classification on EU Legislation." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. [2] Chalkidis, Ilias, et al. "An Empirical Study on Large-Scale Multi-Label Text Classification Including Few and Zero-Shot Labels." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020. [3] You, Ronghui, et al. 
"BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. Page 3, Line 240-252: There are a few variations of LSTMs [1]. Is the one used in this paper the same as the 1997 paper? Page 2, Line 098-100: The phrase "latent semantics" is unclear. It may help the paper if that phrase is expanded, e.g., does this mean the contextual information from combining multiple layers of neural networks? Page 4, Line 287: I believe "edges are implement MeSH hierarchies" should be "edges represent relationships in the MeSH hierarchy" Page 6, Line 416-417: I believe the phrase ", and we converted all words are lowercased" Should be ", and we converted all words to lowercase" References: [1] Graves,A. et al. (2012) Supervised Sequence Labelling with Recurrent Neural Networks. Vol. 385. Springer, Berlin.
Does the review include a short summary of the paper?
yes
Overall, the paper is well-written and easy to read. Furthermore, the improvement over prior work is substantial. It is neither easy nor trivial to make such considerable performance improvements for MeSH indexing, especially for Micro F1. For instance, BERTMeSH [1] only improves DeepMeSH [2] by only 2% in Micro F1 [1] after five years of work. Hence, seeing a Micro F1 near 0.75 is a huge breakthrough. References: [1] Peng, Shengwen, et al. "DeepMeSH: deep semantic representation for improving large-scale MeSH indexing." Bioinformatics 32.12 (2016): i70-i79. [2] You, Ronghui, et al. "BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. Overall, there are three major weaknesses in this paper. First, the paper uses a custom training and validation dataset pulled from PubMed, making comparisons difficult. Using the data from the yearly BioASQ shared tasks would be better to use their data so new methods are more easily comparable. I understand this is common in similar studies (e.g., by BERTMeSH [3]), but a standardized dataset seems possible and useful. Second, while the hyperparameters are discussed, it is not clear whether hyperparameters were optimized for the baseline models. What were the chosen parameters? Was the validation dataset used to optimize them similarly to the proposed method? If so, why is the standard deviation not reported for the baseline models (e.g., in Table 1)? Given the substantial performance differences between the proposed model and prior work, this additional information must be reported to ensure fair comparisons. Third, while this may be the first paper to use GCNNs for MeSH indexing, it is widely used for similar biomedical text classification tasks (e.g., ICD Coding). For instance, [1] directly combines BiLSTMs with GCNNs and label features in a very similar manner to the method proposed in this paper, albeit with exceptions such as [1] does not use dilated CNNs. Furthermore, that work has been expanded on to better understand the impact of the GCNNs and whether they are needed [2]. Hence, the paper would substantially help if the related work sections were expanded to include citations with similar methodologies. In my opinion, the "Dynamic Knowledge-enhanced Mask Attention Module" is one of the most innovative parts of the paper and should be highlighted more in the introduction. References: [1] Chalkidis, Ilias, et al. "Large-Scale Multi-Label Text Classification on EU Legislation." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. [2] Chalkidis, Ilias, et al. "An Empirical Study on Large-Scale Multi-Label Text Classification Including Few and Zero-Shot Labels." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020. [3] You, Ronghui, et al. "BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. Page 3, Line 240-252: There are a few variations of LSTMs [1]. Is the one used in this paper the same as the 1997 paper? Page 2, Line 098-100: The phrase "latent semantics" is unclear. It may help the paper if that phrase is expanded, e.g., does this mean the contextual information from combining multiple layers of neural networks? 
Page 4, Line 287: I believe "edges are implement MeSH hierarchies" should be "edges represent relationships in the MeSH hierarchy" Page 6, Line 416-417: I believe the phrase ", and we converted all words are lowercased" Should be ", and we converted all words to lowercase" References: [1] Graves,A. et al. (2012) Supervised Sequence Labelling with Recurrent Neural Networks. Vol. 385. Springer, Berlin.
Does the review include a short summary of the paper?
no
This paper introduces a new method for MeSH indexing, combining multiple methods including, but not limited to, dilated CNNs, masked attention, and graph CNNs. Overall, the proposed approach makes substantial improvements over prior state-of-the-art methods. For example, Micro F1 improves over BERTMeSH from 0.685 to 0.745. Similar improvements are also found for the example-based measures (e.g., example-based F1). Furthermore, a comprehensive ablation study was performed, showing that the label feature model has the largest impact on model performance, yet, other parts of the method still impact performance substantially. Overall, the paper is well-written and easy to read. Furthermore, the improvement over prior work is substantial. It is neither easy nor trivial to make such considerable performance improvements for MeSH indexing, especially for Micro F1. For instance, BERTMeSH [1] only improves DeepMeSH [2] by only 2% in Micro F1 [1] after five years of work. Hence, seeing a Micro F1 near 0.75 is a huge breakthrough. References: [1] Peng, Shengwen, et al. "DeepMeSH: deep semantic representation for improving large-scale MeSH indexing." Bioinformatics 32.12 (2016): i70-i79. [2] You, Ronghui, et al. "BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. Overall, there are three major weaknesses in this paper. First, the paper uses a custom training and validation dataset pulled from PubMed, making comparisons difficult. Using the data from the yearly BioASQ shared tasks would be better to use their data so new methods are more easily comparable. I understand this is common in similar studies (e.g., by BERTMeSH [3]), but a standardized dataset seems possible and useful. Second, while the hyperparameters are discussed, it is not clear whether hyperparameters were optimized for the baseline models. What were the chosen parameters? Was the validation dataset used to optimize them similarly to the proposed method? If so, why is the standard deviation not reported for the baseline models (e.g., in Table 1)? Given the substantial performance differences between the proposed model and prior work, this additional information must be reported to ensure fair comparisons. Third, while this may be the first paper to use GCNNs for MeSH indexing, it is widely used for similar biomedical text classification tasks (e.g., ICD Coding). For instance, [1] directly combines BiLSTMs with GCNNs and label features in a very similar manner to the method proposed in this paper, albeit with exceptions such as [1] does not use dilated CNNs. Furthermore, that work has been expanded on to better understand the impact of the GCNNs and whether they are needed [2]. Hence, the paper would substantially help if the related work sections were expanded to include citations with similar methodologies. In my opinion, the "Dynamic Knowledge-enhanced Mask Attention Module" is one of the most innovative parts of the paper and should be highlighted more in the introduction. References: [1] Chalkidis, Ilias, et al. "Large-Scale Multi-Label Text Classification on EU Legislation." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. [2] Chalkidis, Ilias, et al. "An Empirical Study on Large-Scale Multi-Label Text Classification Including Few and Zero-Shot Labels." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020. [3] You, Ronghui, et al. 
"BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. Page 3, Line 240-252: There are a few variations of LSTMs [1]. Is the one used in this paper the same as the 1997 paper? Page 2, Line 098-100: The phrase "latent semantics" is unclear. It may help the paper if that phrase is expanded, e.g., does this mean the contextual information from combining multiple layers of neural networks? Page 4, Line 287: I believe "edges are implement MeSH hierarchies" should be "edges represent relationships in the MeSH hierarchy" Page 6, Line 416-417: I believe the phrase ", and we converted all words are lowercased" Should be ", and we converted all words to lowercase" References: [1] Graves,A. et al. (2012) Supervised Sequence Labelling with Recurrent Neural Networks. Vol. 385. Springer, Berlin.
Does the review include a summary of the strengths of the paper?
yes
This paper introduces a new method for MeSH indexing, combining multiple methods including, but not limited to, dilated CNNs, masked attention, and graph CNNs. Overall, the proposed approach makes substantial improvements over prior state-of-the-art methods. For example, Micro F1 improves over BERTMeSH from 0.685 to 0.745. Similar improvements are also found for the example-based measures (e.g., example-based F1). Furthermore, a comprehensive ablation study was performed, showing that the label feature model has the largest impact on model performance, yet, other parts of the method still impact performance substantially. Overall, there are three major weaknesses in this paper. First, the paper uses a custom training and validation dataset pulled from PubMed, making comparisons difficult. Using the data from the yearly BioASQ shared tasks would be better to use their data so new methods are more easily comparable. I understand this is common in similar studies (e.g., by BERTMeSH [3]), but a standardized dataset seems possible and useful. Second, while the hyperparameters are discussed, it is not clear whether hyperparameters were optimized for the baseline models. What were the chosen parameters? Was the validation dataset used to optimize them similarly to the proposed method? If so, why is the standard deviation not reported for the baseline models (e.g., in Table 1)? Given the substantial performance differences between the proposed model and prior work, this additional information must be reported to ensure fair comparisons. Third, while this may be the first paper to use GCNNs for MeSH indexing, it is widely used for similar biomedical text classification tasks (e.g., ICD Coding). For instance, [1] directly combines BiLSTMs with GCNNs and label features in a very similar manner to the method proposed in this paper, albeit with exceptions such as [1] does not use dilated CNNs. Furthermore, that work has been expanded on to better understand the impact of the GCNNs and whether they are needed [2]. Hence, the paper would substantially help if the related work sections were expanded to include citations with similar methodologies. In my opinion, the "Dynamic Knowledge-enhanced Mask Attention Module" is one of the most innovative parts of the paper and should be highlighted more in the introduction. References: [1] Chalkidis, Ilias, et al. "Large-Scale Multi-Label Text Classification on EU Legislation." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. [2] Chalkidis, Ilias, et al. "An Empirical Study on Large-Scale Multi-Label Text Classification Including Few and Zero-Shot Labels." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020. [3] You, Ronghui, et al. "BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. Page 3, Line 240-252: There are a few variations of LSTMs [1]. Is the one used in this paper the same as the 1997 paper? Page 2, Line 098-100: The phrase "latent semantics" is unclear. It may help the paper if that phrase is expanded, e.g., does this mean the contextual information from combining multiple layers of neural networks? 
Page 4, Line 287: I believe "edges are implement MeSH hierarchies" should be "edges represent relationships in the MeSH hierarchy" Page 6, Line 416-417: I believe the phrase ", and we converted all words are lowercased" Should be ", and we converted all words to lowercase" References: [1] Graves,A. et al. (2012) Supervised Sequence Labelling with Recurrent Neural Networks. Vol. 385. Springer, Berlin.
Does the review include a summary of the strengths of the paper?
no
This paper introduces a new method for MeSH indexing, combining multiple methods including, but not limited to, dilated CNNs, masked attention, and graph CNNs. Overall, the proposed approach makes substantial improvements over prior state-of-the-art methods. For example, Micro F1 improves over BERTMeSH from 0.685 to 0.745. Similar improvements are also found for the example-based measures (e.g., example-based F1). Furthermore, a comprehensive ablation study was performed, showing that the label feature model has the largest impact on model performance, yet, other parts of the method still impact performance substantially. Overall, the paper is well-written and easy to read. Furthermore, the improvement over prior work is substantial. It is neither easy nor trivial to make such considerable performance improvements for MeSH indexing, especially for Micro F1. For instance, BERTMeSH [1] only improves DeepMeSH [2] by only 2% in Micro F1 [1] after five years of work. Hence, seeing a Micro F1 near 0.75 is a huge breakthrough. References: [1] Peng, Shengwen, et al. "DeepMeSH: deep semantic representation for improving large-scale MeSH indexing." Bioinformatics 32.12 (2016): i70-i79. [2] You, Ronghui, et al. "BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. Overall, there are three major weaknesses in this paper. First, the paper uses a custom training and validation dataset pulled from PubMed, making comparisons difficult. Using the data from the yearly BioASQ shared tasks would be better to use their data so new methods are more easily comparable. I understand this is common in similar studies (e.g., by BERTMeSH [3]), but a standardized dataset seems possible and useful. Second, while the hyperparameters are discussed, it is not clear whether hyperparameters were optimized for the baseline models. What were the chosen parameters? Was the validation dataset used to optimize them similarly to the proposed method? If so, why is the standard deviation not reported for the baseline models (e.g., in Table 1)? Given the substantial performance differences between the proposed model and prior work, this additional information must be reported to ensure fair comparisons. Third, while this may be the first paper to use GCNNs for MeSH indexing, it is widely used for similar biomedical text classification tasks (e.g., ICD Coding). For instance, [1] directly combines BiLSTMs with GCNNs and label features in a very similar manner to the method proposed in this paper, albeit with exceptions such as [1] does not use dilated CNNs. Furthermore, that work has been expanded on to better understand the impact of the GCNNs and whether they are needed [2]. Hence, the paper would substantially help if the related work sections were expanded to include citations with similar methodologies. In my opinion, the "Dynamic Knowledge-enhanced Mask Attention Module" is one of the most innovative parts of the paper and should be highlighted more in the introduction. References: [1] Chalkidis, Ilias, et al. "Large-Scale Multi-Label Text Classification on EU Legislation." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. [2] Chalkidis, Ilias, et al. "An Empirical Study on Large-Scale Multi-Label Text Classification Including Few and Zero-Shot Labels." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020. [3] You, Ronghui, et al. 
"BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. Page 3, Line 240-252: There are a few variations of LSTMs [1]. Is the one used in this paper the same as the 1997 paper? Page 2, Line 098-100: The phrase "latent semantics" is unclear. It may help the paper if that phrase is expanded, e.g., does this mean the contextual information from combining multiple layers of neural networks? Page 4, Line 287: I believe "edges are implement MeSH hierarchies" should be "edges represent relationships in the MeSH hierarchy" Page 6, Line 416-417: I believe the phrase ", and we converted all words are lowercased" Should be ", and we converted all words to lowercase" References: [1] Graves,A. et al. (2012) Supervised Sequence Labelling with Recurrent Neural Networks. Vol. 385. Springer, Berlin.
Does the review include a summary of the weaknesses of the paper?
yes
This paper introduces a new method for MeSH indexing, combining multiple methods including, but not limited to, dilated CNNs, masked attention, and graph CNNs. Overall, the proposed approach makes substantial improvements over prior state-of-the-art methods. For example, Micro F1 improves over BERTMeSH from 0.685 to 0.745. Similar improvements are also found for the example-based measures (e.g., example-based F1). Furthermore, a comprehensive ablation study was performed, showing that the label feature model has the largest impact on model performance, yet, other parts of the method still impact performance substantially. Overall, the paper is well-written and easy to read. Furthermore, the improvement over prior work is substantial. It is neither easy nor trivial to make such considerable performance improvements for MeSH indexing, especially for Micro F1. For instance, BERTMeSH [1] only improves DeepMeSH [2] by only 2% in Micro F1 [1] after five years of work. Hence, seeing a Micro F1 near 0.75 is a huge breakthrough. References: [1] Peng, Shengwen, et al. "DeepMeSH: deep semantic representation for improving large-scale MeSH indexing." Bioinformatics 32.12 (2016): i70-i79. [2] You, Ronghui, et al. "BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. Page 3, Line 240-252: There are a few variations of LSTMs [1]. Is the one used in this paper the same as the 1997 paper? Page 2, Line 098-100: The phrase "latent semantics" is unclear. It may help the paper if that phrase is expanded, e.g., does this mean the contextual information from combining multiple layers of neural networks? Page 4, Line 287: I believe "edges are implement MeSH hierarchies" should be "edges represent relationships in the MeSH hierarchy" Page 6, Line 416-417: I believe the phrase ", and we converted all words are lowercased" Should be ", and we converted all words to lowercase" References: [1] Graves,A. et al. (2012) Supervised Sequence Labelling with Recurrent Neural Networks. Vol. 385. Springer, Berlin.
Does the review include a summary of the weaknesses of the paper?
no
This paper introduces a new method for MeSH indexing, combining multiple methods including, but not limited to, dilated CNNs, masked attention, and graph CNNs. Overall, the proposed approach makes substantial improvements over prior state-of-the-art methods. For example, Micro F1 improves over BERTMeSH from 0.685 to 0.745. Similar improvements are also found for the example-based measures (e.g., example-based F1). Furthermore, a comprehensive ablation study was performed, showing that the label feature model has the largest impact on model performance, yet other parts of the method still impact performance substantially. Overall, the paper is well-written and easy to read. Furthermore, the improvement over prior work is substantial. It is neither easy nor trivial to make such considerable performance improvements for MeSH indexing, especially for Micro F1. For instance, BERTMeSH [2] improves DeepMeSH [1] by only 2% in Micro F1 after five years of work. Hence, seeing a Micro F1 near 0.75 is a huge breakthrough. References: [1] Peng, Shengwen, et al. "DeepMeSH: deep semantic representation for improving large-scale MeSH indexing." Bioinformatics 32.12 (2016): i70-i79. [2] You, Ronghui, et al. "BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. Overall, there are three major weaknesses in this paper. First, the paper uses a custom training and validation dataset pulled from PubMed, making comparisons difficult. Using the data from the yearly BioASQ shared tasks would make new methods more easily comparable. I understand this is common in similar studies (e.g., by BERTMeSH [3]), but a standardized dataset seems possible and useful. Second, while the hyperparameters are discussed, it is not clear whether hyperparameters were optimized for the baseline models. What were the chosen parameters? Was the validation dataset used to optimize them similarly to the proposed method? If so, why is the standard deviation not reported for the baseline models (e.g., in Table 1)? Given the substantial performance differences between the proposed model and prior work, this additional information must be reported to ensure fair comparisons. Third, while this may be the first paper to use GCNNs for MeSH indexing, they are widely used for similar biomedical text classification tasks (e.g., ICD Coding). For instance, [1] directly combines BiLSTMs with GCNNs and label features in a very similar manner to the method proposed in this paper, albeit with some exceptions, e.g., [1] does not use dilated CNNs. Furthermore, that work has been expanded on to better understand the impact of the GCNNs and whether they are needed [2]. Hence, it would substantially help the paper if the related work section were expanded to include citations with similar methodologies. In my opinion, the "Dynamic Knowledge-enhanced Mask Attention Module" is one of the most innovative parts of the paper and should be highlighted more in the introduction. References: [1] Chalkidis, Ilias, et al. "Large-Scale Multi-Label Text Classification on EU Legislation." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. [2] Chalkidis, Ilias, et al. "An Empirical Study on Large-Scale Multi-Label Text Classification Including Few and Zero-Shot Labels." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020. [3] You, Ronghui, et al. 
"BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. Page 3, Line 240-252: There are a few variations of LSTMs [1]. Is the one used in this paper the same as the 1997 paper? Page 2, Line 098-100: The phrase "latent semantics" is unclear. It may help the paper if that phrase is expanded, e.g., does this mean the contextual information from combining multiple layers of neural networks? Page 4, Line 287: I believe "edges are implement MeSH hierarchies" should be "edges represent relationships in the MeSH hierarchy". Page 6, Line 416-417: I believe the phrase ", and we converted all words are lowercased" should be ", and we converted all words to lowercase". References: [1] Graves, A. et al. (2012) Supervised Sequence Labelling with Recurrent Neural Networks. Vol. 385. Springer, Berlin.
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper introduces a new method for MeSH indexing, combining multiple methods including, but not limited to, dilated CNNs, masked attention, and graph CNNs. Overall, the proposed approach makes substantial improvements over prior state-of-the-art methods. For example, Micro F1 improves over BERTMeSH from 0.685 to 0.745. Similar improvements are also found for the example-based measures (e.g., example-based F1). Furthermore, a comprehensive ablation study was performed, showing that the label feature model has the largest impact on model performance, yet, other parts of the method still impact performance substantially. Overall, the paper is well-written and easy to read. Furthermore, the improvement over prior work is substantial. It is neither easy nor trivial to make such considerable performance improvements for MeSH indexing, especially for Micro F1. For instance, BERTMeSH [1] only improves DeepMeSH [2] by only 2% in Micro F1 [1] after five years of work. Hence, seeing a Micro F1 near 0.75 is a huge breakthrough. References: [1] Peng, Shengwen, et al. "DeepMeSH: deep semantic representation for improving large-scale MeSH indexing." Bioinformatics 32.12 (2016): i70-i79. [2] You, Ronghui, et al. "BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692. Overall, there are three major weaknesses in this paper. First, the paper uses a custom training and validation dataset pulled from PubMed, making comparisons difficult. Using the data from the yearly BioASQ shared tasks would be better to use their data so new methods are more easily comparable. I understand this is common in similar studies (e.g., by BERTMeSH [3]), but a standardized dataset seems possible and useful. Second, while the hyperparameters are discussed, it is not clear whether hyperparameters were optimized for the baseline models. What were the chosen parameters? Was the validation dataset used to optimize them similarly to the proposed method? If so, why is the standard deviation not reported for the baseline models (e.g., in Table 1)? Given the substantial performance differences between the proposed model and prior work, this additional information must be reported to ensure fair comparisons. Third, while this may be the first paper to use GCNNs for MeSH indexing, it is widely used for similar biomedical text classification tasks (e.g., ICD Coding). For instance, [1] directly combines BiLSTMs with GCNNs and label features in a very similar manner to the method proposed in this paper, albeit with exceptions such as [1] does not use dilated CNNs. Furthermore, that work has been expanded on to better understand the impact of the GCNNs and whether they are needed [2]. Hence, the paper would substantially help if the related work sections were expanded to include citations with similar methodologies. In my opinion, the "Dynamic Knowledge-enhanced Mask Attention Module" is one of the most innovative parts of the paper and should be highlighted more in the introduction. References: [1] Chalkidis, Ilias, et al. "Large-Scale Multi-Label Text Classification on EU Legislation." Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. 2019. [2] Chalkidis, Ilias, et al. "An Empirical Study on Large-Scale Multi-Label Text Classification Including Few and Zero-Shot Labels." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020. [3] You, Ronghui, et al. 
"BERTMeSH: deep contextual representation learning for large-scale high-performance MeSH indexing with full text." Bioinformatics 37.5 (2021): 684-692.
Does the review mention any comments, suggestions or typos that the author should address?
no
This paper presents a contrastive learning framework for unsupervised sentence representation learning. The proposed framework can alleviate the sampling bias in the random negative example sampling in contrastive learning. It achieves the debiased sampling by 1) adding per-example noise-based negatives and iteratively improving them by maximizing the non-uniformity objective; 2) adding similarity-based filtering (based on a separate model) to remove false negatives. Empirical evaluations show that the proposed method outperforms competitive baselines on 7 semantic textual similarity tasks. - The proposed method is intuitive and achieves good results compared to existing methods. - The paper is well written and easy to follow. - The proposed method also achieves good performance in the few-shot setting, which makes it more useful in real practice. One key motivation in the paper is to improve the quality of the sentence representations via improving uniformity. However, it is still a bit unclear to me if there is a strong correlation between uniformity and representation quality. Maybe we can perform some additional analyses to reveal this by varying the negative proportion k? Note that Figure 6(b) seems to show that k does not have a large impact on quality for some datasets. 1. Figure 1: although it is very intuitive to use one random example to demonstrate the distribution of false negative examples, it would also be good to show the statistics over the whole dataset or a particular slice of the dataset. 2. Line 414: it would be good to show the ratio of false negative examples that are filtered out.
Does the review include a short summary of the paper?
yes
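The row above summarizes a debiased contrastive objective (noise-based negatives plus similarity-based filtering of likely false negatives) but shows no code. The following is a minimal, hypothetical sketch of those two ingredients, not the authors' implementation: the function name, the threshold `phi`, and the use of the trained model's own similarities (rather than a separate complementary model) are illustrative assumptions.

```python
# Hedged sketch only: weighted InfoNCE with in-batch negatives, extra noise-based
# negatives, and zero-weighting of suspected false negatives above a similarity
# threshold. All names and defaults are assumptions, not the paper's actual code.
import torch
import torch.nn.functional as F

def debiased_contrastive_loss(anchors, positives, noise_negs, temperature=0.05, phi=0.9):
    """anchors, positives: (B, d) embeddings of two views; noise_negs: (B, K, d)."""
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    noise_negs = F.normalize(noise_negs, dim=-1)

    sim_batch = anchors @ positives.t()                        # (B, B); diagonal = positives
    with torch.no_grad():
        # Suspected false negatives: off-diagonal pairs whose similarity exceeds phi.
        weights = (sim_batch < phi).float()
        weights.fill_diagonal_(1.0)                            # always keep the true positive
    sim_noise = torch.einsum('bd,bkd->bk', anchors, noise_negs)   # (B, K)

    logits = torch.cat([sim_batch, sim_noise], dim=1) / temperature
    # A weight of 0 turns into a large negative additive term, which effectively
    # removes that negative from the softmax denominator.
    log_w = torch.cat([weights, torch.ones_like(sim_noise)], dim=1).clamp_min(1e-12).log()
    labels = torch.arange(anchors.size(0), device=anchors.device)
    return F.cross_entropy(logits + log_w, labels)
```

In use, one would pass the encoder outputs of two augmented views of the same batch together with separately generated noise negatives; the sketch only illustrates how filtering and extra negatives can coexist in one loss.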
- The proposed method is intuitive and achieves good results comparing to existing methods. - The paper is well written and easy to follow. - The proposed method also achieves good performance in the few-shot setting, which makes it more useful in real practice. One key motivation in the paper is to improve the quality of the sentence representations via improving uniformity. However, it is still a bit unclear to me if there is a strong correlation between uniformity and representation quality. Maybe we can perform some additional analyses to reveal this by varying negative proportion k? Note that Figure 6(b) seems to show that k does not have a high quality impact on some datasets. 1. Figure 1: although it is very intuitive to use one random example to demonstrate the distribution of false negative examples, it would also be good to show the statistics over the whole dataset or a particular slice of the dataset. 2. Line 414: it would be good to show the ratio of false negative examples that are filtered out.
Does the review include a short summary of the paper?
no
This paper presents a contrastive learning framework for unsupervised sentence representation learning. The proposed framework can alleviate the sampling bias in the random negative example sampling in contrastive learning. It achieves the debiased sampling by 1) adding per-example noise-based negatives and iteratively improving them by maximizing the non-uniform objective; 2) adding a similarity based filtering (based on a separate model) to remove false negatives. Empirical evaluations show the the proposed method outperforms competitive baselines on 7 semantic textual similarity tasks. - The proposed method is intuitive and achieves good results comparing to existing methods. - The paper is well written and easy to follow. - The proposed method also achieves good performance in the few-shot setting, which makes it more useful in real practice. One key motivation in the paper is to improve the quality of the sentence representations via improving uniformity. However, it is still a bit unclear to me if there is a strong correlation between uniformity and representation quality. Maybe we can perform some additional analyses to reveal this by varying negative proportion k? Note that Figure 6(b) seems to show that k does not have a high quality impact on some datasets. 1. Figure 1: although it is very intuitive to use one random example to demonstrate the distribution of false negative examples, it would also be good to show the statistics over the whole dataset or a particular slice of the dataset. 2. Line 414: it would be good to show the ratio of false negative examples that are filtered out.
Does the review include a summary of the strengths of the paper?
yes
This paper presents a contrastive learning framework for unsupervised sentence representation learning. The proposed framework can alleviate the sampling bias in the random negative example sampling in contrastive learning. It achieves the debiased sampling by 1) adding per-example noise-based negatives and iteratively improving them by maximizing the non-uniform objective; 2) adding a similarity based filtering (based on a separate model) to remove false negatives. Empirical evaluations show the the proposed method outperforms competitive baselines on 7 semantic textual similarity tasks. One key motivation in the paper is to improve the quality of the sentence representations via improving uniformity. However, it is still a bit unclear to me if there is a strong correlation between uniformity and representation quality. Maybe we can perform some additional analyses to reveal this by varying negative proportion k? Note that Figure 6(b) seems to show that k does not have a high quality impact on some datasets. 1. Figure 1: although it is very intuitive to use one random example to demonstrate the distribution of false negative examples, it would also be good to show the statistics over the whole dataset or a particular slice of the dataset. 2. Line 414: it would be good to show the ratio of false negative examples that are filtered out.
Does the review include a summary of the strengths of the paper?
no
This paper presents a contrastive learning framework for unsupervised sentence representation learning. The proposed framework can alleviate the sampling bias in the random negative example sampling in contrastive learning. It achieves the debiased sampling by 1) adding per-example noise-based negatives and iteratively improving them by maximizing the non-uniform objective; 2) adding a similarity based filtering (based on a separate model) to remove false negatives. Empirical evaluations show the the proposed method outperforms competitive baselines on 7 semantic textual similarity tasks. - The proposed method is intuitive and achieves good results comparing to existing methods. - The paper is well written and easy to follow. - The proposed method also achieves good performance in the few-shot setting, which makes it more useful in real practice. One key motivation in the paper is to improve the quality of the sentence representations via improving uniformity. However, it is still a bit unclear to me if there is a strong correlation between uniformity and representation quality. Maybe we can perform some additional analyses to reveal this by varying negative proportion k? Note that Figure 6(b) seems to show that k does not have a high quality impact on some datasets. 1. Figure 1: although it is very intuitive to use one random example to demonstrate the distribution of false negative examples, it would also be good to show the statistics over the whole dataset or a particular slice of the dataset. 2. Line 414: it would be good to show the ratio of false negative examples that are filtered out.
Does the review include a summary of the weaknesses of the paper?
yes
This paper presents a contrastive learning framework for unsupervised sentence representation learning. The proposed framework can alleviate the sampling bias in the random negative example sampling in contrastive learning. It achieves the debiased sampling by 1) adding per-example noise-based negatives and iteratively improving them by maximizing the non-uniform objective; 2) adding a similarity based filtering (based on a separate model) to remove false negatives. Empirical evaluations show the the proposed method outperforms competitive baselines on 7 semantic textual similarity tasks. - The proposed method is intuitive and achieves good results comparing to existing methods. - The paper is well written and easy to follow. - The proposed method also achieves good performance in the few-shot setting, which makes it more useful in real practice. 1. Figure 1: although it is very intuitive to use one random example to demonstrate the distribution of false negative examples, it would also be good to show the statistics over the whole dataset or a particular slice of the dataset. 2. Line 414: it would be good to show the ratio of false negative examples that are filtered out.
Does the review include a summary of the weaknesses of the paper?
no
This paper presents a contrastive learning framework for unsupervised sentence representation learning. The proposed framework can alleviate the sampling bias in the random negative example sampling in contrastive learning. It achieves the debiased sampling by 1) adding per-example noise-based negatives and iteratively improving them by maximizing the non-uniform objective; 2) adding a similarity based filtering (based on a separate model) to remove false negatives. Empirical evaluations show the the proposed method outperforms competitive baselines on 7 semantic textual similarity tasks. - The proposed method is intuitive and achieves good results comparing to existing methods. - The paper is well written and easy to follow. - The proposed method also achieves good performance in the few-shot setting, which makes it more useful in real practice. One key motivation in the paper is to improve the quality of the sentence representations via improving uniformity. However, it is still a bit unclear to me if there is a strong correlation between uniformity and representation quality. Maybe we can perform some additional analyses to reveal this by varying negative proportion k? Note that Figure 6(b) seems to show that k does not have a high quality impact on some datasets. 1. Figure 1: although it is very intuitive to use one random example to demonstrate the distribution of false negative examples, it would also be good to show the statistics over the whole dataset or a particular slice of the dataset. 2. Line 414: it would be good to show the ratio of false negative examples that are filtered out.
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper presents a contrastive learning framework for unsupervised sentence representation learning. The proposed framework can alleviate the sampling bias in the random negative example sampling in contrastive learning. It achieves the debiased sampling by 1) adding per-example noise-based negatives and iteratively improving them by maximizing the non-uniform objective; 2) adding a similarity based filtering (based on a separate model) to remove false negatives. Empirical evaluations show the the proposed method outperforms competitive baselines on 7 semantic textual similarity tasks. - The proposed method is intuitive and achieves good results comparing to existing methods. - The paper is well written and easy to follow. - The proposed method also achieves good performance in the few-shot setting, which makes it more useful in real practice. One key motivation in the paper is to improve the quality of the sentence representations via improving uniformity. However, it is still a bit unclear to me if there is a strong correlation between uniformity and representation quality. Maybe we can perform some additional analyses to reveal this by varying negative proportion k? Note that Figure 6(b) seems to show that k does not have a high quality impact on some datasets.
Does the review mention any comments, suggestions or typos that the author should address?
no
**What is the task?** Debiased contrastive learning framework for unsupervised sentence representation learning **What has been done before?** Previous works (contrastive learning based baselines) mostly utilize in-batch negatives to learn the uniformity, but the random negative sampling strategy may lead to sampling bias, such as false negatives and anisotropic representations. Different from these methods, the proposed framework adopts an instance weighting method for punishing false negatives and a gradient-based algorithm for generating noise-based negatives towards the most nonuniform points. In this way, the influence of false negatives can be alleviated and the model can better learn the uniformity. It finally reduces the sampling bias and improves the model performance. **What are the main contributions of the paper? How many of them are novel?** - First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Presented a new framework DCLR to alleviate the influence of sampling bias. - Experiments on 7 semantic textual similarity (STS) tasks show that the proposed approach is more effective than competitive baselines using BERT and RoBERTa. **What are the key techniques used to tackle this task?** The core idea is to improve the random negative sampling strategy for alleviating the sampling bias problem. - A noise-based negatives generation strategy to reduce the bias caused by the anisotropic PLM-derived representations - initialize new negatives based on a Gaussian distribution and iteratively update these negatives by non-uniformity maximization. - An instance weighting method to reduce the bias caused by false negatives - similarity between the original sentence and each negative **What are the main results? Are they significant?** Experiments on 7 semantic textual similarity (STS) tasks show that the proposed approach is more effective than competitive baselines using BERT and RoBERTa. - First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Well-written and easy to read paper - Claims well-supported by ablation analysis and experimental results See comments below - It is unclear which evaluation set(s) were used for Figure 3 - In section 6.1, it is unclear why and how negative sampling (DCLR) has to change. It seems DCLR is independent of the positive data augmentation strategy used. - Why were STS-B and SICK-R chosen for Figures 4-6 and not STS-avg? - How does DCLR compare with SimCSE in Figure 5?
Does the review include a short summary of the paper?
yes
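The structured review above mentions a gradient-based algorithm that initializes negatives from a Gaussian distribution and updates them by non-uniformity maximization. Since the row gives no code, here is a short, hypothetical sketch under the assumption that "non-uniformity maximization" amounts to pushing the noise vectors toward the current batch representations; the function name, step count, and learning rate are made up for illustration and are not the paper's algorithm.

```python
# Hedged sketch only: generate K noise-based negatives per sentence by a few
# gradient steps that increase their similarity to the current batch embeddings.
import torch
import torch.nn.functional as F

def generate_noise_negatives(batch_reps, k=16, steps=3, lr=1e-3):
    """batch_reps: (B, d) sentence embeddings from the current encoder."""
    B, d = batch_reps.shape
    negs = torch.randn(B, k, d, requires_grad=True)            # Gaussian initialization
    reps = F.normalize(batch_reps.detach(), dim=-1)            # no gradient into the encoder
    opt = torch.optim.SGD([negs], lr=lr)
    for _ in range(steps):
        sim = torch.einsum('bd,bkd->bk', reps, F.normalize(negs, dim=-1))
        loss = -sim.mean()        # maximizing similarity stands in for non-uniformity here
        opt.zero_grad()
        loss.backward()
        opt.step()
    return negs.detach()
```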
- First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Well-written and easy to read paper - Claims well-supported by ablation analysis and experimental results See comments below - It is unclear which evaluation set(s) were used for Figure 3 - In section 6.1, it is unclear why and how negative sampling (DCLR) has to change. It seems DCLR is independent of the positive data augmentation strategy used. - Why was STS-B and SICK-R chosen for figure 4-6 and not STS-avg ? - How does DCLR compare with SimCSE on figure 5?
Does the review include a short summary of the paper?
no
**What is the task?** Debiased contrastive learning framework for unsupervised sentence representation learning **What has been done before?** Previous works (contrastive learning based baselines) mostly utilize in-batch negatives to learn the uniformity, but the randomly negative sampling strategy may lead to sampling bias, such as false negatives and anisotropy representations. Different from these methods, the proposed framework adopts an in-stance weighting method for punishing false negatives and a gradient-based algorithm for generating noise-based negatives towards the most nonuniform points. In this way, the influence of false negatives can be alleviated and the model can better learn the uniformity. It finally reduces the sampling bias and improves the model performance. **What are the main contributions of the paper? How many of them are novel?** - First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Presented a new framework DCLR to alleviate the influence of sampling bias. - Experiments on 7 semantic textual similarity tasks show that our approach is more effective than competitive baselines on semantic textual similarity (STS) tasks using BERT and RoBERTa **What are the key techniques used to tackle this task?** The core idea is to improve the random negative sampling strategy for alleviating the sampling bias problem. - A noise-based negatives generation strategy to reduce the bias caused by the anisotropy PLM-derived representations - initialize new negatives based on a Gaussian distribution and iteratively update these negatives by non-uniformity maximization. - An instance weighting method to reduce the bias caused by false negatives - similarity between original sentence and each negative **What are the main results? Are they significant?** Experiments on 7 semantic textual similarity tasks show that our approach is more effective than competitive baselines on semantic textual similarity (STS) tasks using BERT and RoBERTa - First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Well-written and easy to read paper - Claims well-supported by ablation analysis and experimental results See comments below - It is unclear which evaluation set(s) were used for Figure 3 - In section 6.1, it is unclear why and how negative sampling (DCLR) has to change. It seems DCLR is independent of the positive data augmentation strategy used. - Why was STS-B and SICK-R chosen for figure 4-6 and not STS-avg ? - How does DCLR compare with SimCSE on figure 5?
Does the review include a summary of the strengths of the paper?
yes
**What is the task?** Debiased contrastive learning framework for unsupervised sentence representation learning **What has been done before?** Previous works (contrastive learning based baselines) mostly utilize in-batch negatives to learn the uniformity, but the randomly negative sampling strategy may lead to sampling bias, such as false negatives and anisotropy representations. Different from these methods, the proposed framework adopts an in-stance weighting method for punishing false negatives and a gradient-based algorithm for generating noise-based negatives towards the most nonuniform points. In this way, the influence of false negatives can be alleviated and the model can better learn the uniformity. It finally reduces the sampling bias and improves the model performance. **What are the main contributions of the paper? How many of them are novel?** - First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Presented a new framework DCLR to alleviate the influence of sampling bias. - Experiments on 7 semantic textual similarity tasks show that our approach is more effective than competitive baselines on semantic textual similarity (STS) tasks using BERT and RoBERTa **What are the key techniques used to tackle this task?** The core idea is to improve the random negative sampling strategy for alleviating the sampling bias problem. - A noise-based negatives generation strategy to reduce the bias caused by the anisotropy PLM-derived representations - initialize new negatives based on a Gaussian distribution and iteratively update these negatives by non-uniformity maximization. - An instance weighting method to reduce the bias caused by false negatives - similarity between original sentence and each negative **What are the main results? Are they significant?** Experiments on 7 semantic textual similarity tasks show that our approach is more effective than competitive baselines on semantic textual similarity (STS) tasks using BERT and RoBERTa See comments below - It is unclear which evaluation set(s) were used for Figure 3 - In section 6.1, it is unclear why and how negative sampling (DCLR) has to change. It seems DCLR is independent of the positive data augmentation strategy used. - Why was STS-B and SICK-R chosen for figure 4-6 and not STS-avg ? - How does DCLR compare with SimCSE on figure 5?
Does the review include a summary of the strengths of the paper?
no
**What is the task?** Debiased contrastive learning framework for unsupervised sentence representation learning **What has been done before?** Previous works (contrastive learning based baselines) mostly utilize in-batch negatives to learn the uniformity, but the randomly negative sampling strategy may lead to sampling bias, such as false negatives and anisotropy representations. Different from these methods, the proposed framework adopts an in-stance weighting method for punishing false negatives and a gradient-based algorithm for generating noise-based negatives towards the most nonuniform points. In this way, the influence of false negatives can be alleviated and the model can better learn the uniformity. It finally reduces the sampling bias and improves the model performance. **What are the main contributions of the paper? How many of them are novel?** - First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Presented a new framework DCLR to alleviate the influence of sampling bias. - Experiments on 7 semantic textual similarity tasks show that our approach is more effective than competitive baselines on semantic textual similarity (STS) tasks using BERT and RoBERTa **What are the key techniques used to tackle this task?** The core idea is to improve the random negative sampling strategy for alleviating the sampling bias problem. - A noise-based negatives generation strategy to reduce the bias caused by the anisotropy PLM-derived representations - initialize new negatives based on a Gaussian distribution and iteratively update these negatives by non-uniformity maximization. - An instance weighting method to reduce the bias caused by false negatives - similarity between original sentence and each negative **What are the main results? Are they significant?** Experiments on 7 semantic textual similarity tasks show that our approach is more effective than competitive baselines on semantic textual similarity (STS) tasks using BERT and RoBERTa - First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Well-written and easy to read paper - Claims well-supported by ablation analysis and experimental results See comments below - It is unclear which evaluation set(s) were used for Figure 3 - In section 6.1, it is unclear why and how negative sampling (DCLR) has to change. It seems DCLR is independent of the positive data augmentation strategy used. - Why was STS-B and SICK-R chosen for figure 4-6 and not STS-avg ? - How does DCLR compare with SimCSE on figure 5?
Does the review include a summary of the weaknesses of the paper?
yes
**What is the task?** Debiased contrastive learning framework for unsupervised sentence representation learning **What has been done before?** Previous works (contrastive learning based baselines) mostly utilize in-batch negatives to learn the uniformity, but the randomly negative sampling strategy may lead to sampling bias, such as false negatives and anisotropy representations. Different from these methods, the proposed framework adopts an in-stance weighting method for punishing false negatives and a gradient-based algorithm for generating noise-based negatives towards the most nonuniform points. In this way, the influence of false negatives can be alleviated and the model can better learn the uniformity. It finally reduces the sampling bias and improves the model performance. **What are the main contributions of the paper? How many of them are novel?** - First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Presented a new framework DCLR to alleviate the influence of sampling bias. - Experiments on 7 semantic textual similarity tasks show that our approach is more effective than competitive baselines on semantic textual similarity (STS) tasks using BERT and RoBERTa **What are the key techniques used to tackle this task?** The core idea is to improve the random negative sampling strategy for alleviating the sampling bias problem. - A noise-based negatives generation strategy to reduce the bias caused by the anisotropy PLM-derived representations - initialize new negatives based on a Gaussian distribution and iteratively update these negatives by non-uniformity maximization. - An instance weighting method to reduce the bias caused by false negatives - similarity between original sentence and each negative **What are the main results? Are they significant?** Experiments on 7 semantic textual similarity tasks show that our approach is more effective than competitive baselines on semantic textual similarity (STS) tasks using BERT and RoBERTa - First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Well-written and easy to read paper - Claims well-supported by ablation analysis and experimental results - It is unclear which evaluation set(s) were used for Figure 3 - In section 6.1, it is unclear why and how negative sampling (DCLR) has to change. It seems DCLR is independent of the positive data augmentation strategy used. - Why was STS-B and SICK-R chosen for figure 4-6 and not STS-avg ? - How does DCLR compare with SimCSE on figure 5?
Does the review include a summary of the weaknesses of the paper?
no
**What is the task?** Debiased contrastive learning framework for unsupervised sentence representation learning **What has been done before?** Previous works (contrastive learning based baselines) mostly utilize in-batch negatives to learn the uniformity, but the randomly negative sampling strategy may lead to sampling bias, such as false negatives and anisotropy representations. Different from these methods, the proposed framework adopts an in-stance weighting method for punishing false negatives and a gradient-based algorithm for generating noise-based negatives towards the most nonuniform points. In this way, the influence of false negatives can be alleviated and the model can better learn the uniformity. It finally reduces the sampling bias and improves the model performance. **What are the main contributions of the paper? How many of them are novel?** - First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Presented a new framework DCLR to alleviate the influence of sampling bias. - Experiments on 7 semantic textual similarity tasks show that our approach is more effective than competitive baselines on semantic textual similarity (STS) tasks using BERT and RoBERTa **What are the key techniques used to tackle this task?** The core idea is to improve the random negative sampling strategy for alleviating the sampling bias problem. - A noise-based negatives generation strategy to reduce the bias caused by the anisotropy PLM-derived representations - initialize new negatives based on a Gaussian distribution and iteratively update these negatives by non-uniformity maximization. - An instance weighting method to reduce the bias caused by false negatives - similarity between original sentence and each negative **What are the main results? Are they significant?** Experiments on 7 semantic textual similarity tasks show that our approach is more effective than competitive baselines on semantic textual similarity (STS) tasks using BERT and RoBERTa - First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Well-written and easy to read paper - Claims well-supported by ablation analysis and experimental results See comments below - It is unclear which evaluation set(s) were used for Figure 3 - In section 6.1, it is unclear why and how negative sampling (DCLR) has to change. It seems DCLR is independent of the positive data augmentation strategy used. - Why was STS-B and SICK-R chosen for figure 4-6 and not STS-avg ? - How does DCLR compare with SimCSE on figure 5?
Does the review mention any comments, suggestions or typos that the author should address?
yes
**What is the task?** Debiased contrastive learning framework for unsupervised sentence representation learning **What has been done before?** Previous works (contrastive learning based baselines) mostly utilize in-batch negatives to learn the uniformity, but the randomly negative sampling strategy may lead to sampling bias, such as false negatives and anisotropy representations. Different from these methods, the proposed framework adopts an in-stance weighting method for punishing false negatives and a gradient-based algorithm for generating noise-based negatives towards the most nonuniform points. In this way, the influence of false negatives can be alleviated and the model can better learn the uniformity. It finally reduces the sampling bias and improves the model performance. **What are the main contributions of the paper? How many of them are novel?** - First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Presented a new framework DCLR to alleviate the influence of sampling bias. - Experiments on 7 semantic textual similarity tasks show that our approach is more effective than competitive baselines on semantic textual similarity (STS) tasks using BERT and RoBERTa **What are the key techniques used to tackle this task?** The core idea is to improve the random negative sampling strategy for alleviating the sampling bias problem. - A noise-based negatives generation strategy to reduce the bias caused by the anisotropy PLM-derived representations - initialize new negatives based on a Gaussian distribution and iteratively update these negatives by non-uniformity maximization. - An instance weighting method to reduce the bias caused by false negatives - similarity between original sentence and each negative **What are the main results? Are they significant?** Experiments on 7 semantic textual similarity tasks show that our approach is more effective than competitive baselines on semantic textual similarity (STS) tasks using BERT and RoBERTa - First attempt to reduce the sampling bias in contrastive learning of unsupervised sentence representations. - Well-written and easy to read paper - Claims well-supported by ablation analysis and experimental results See comments below
Does the review mention any comments, suggestions or typos that the author should address?
no
The paper tries to address a new and important problem: sampling bias in contrastive learning, where improper negatives (e.g., false negatives and anisotropic representations) hurt the uniformity of the representation space. To tackle this problem, they propose the DCLR framework to alleviate the influence of sampling bias. Specifically, they employ 1) a weighting method to punish false negatives and 2) noise-based negative generation to guarantee the uniformity of the representation space. They demonstrated the effectiveness of their method on 7 semantic textual similarity tasks. Overall, the paper is well-written and the results are encouraging. However, I am not fully convinced by the way the authors identify false negatives. The authors could also do a better job of explaining why optimizing equations (3) and (4) can lead the noise-based negatives into a more non-uniform semantic space. The paper is overall well-written. The problem that the paper tries to address seems new and important. The authors propose two methods to remediate the sampling bias issue, which lead to encouraging results (main results, Table 1) compared with previous approaches. I am not fully convinced about the way the false negative samples are identified. In the paper, the authors proposed to use SimCSE as a complementary model and identify false negatives as negatives whose semantic similarity, measured with the SimCSE representation, exceeds a threshold, and punish those “false negatives” with a weight of 0. I would argue that the similarity score alone cannot identify false negatives. It only identifies negatives that are very close to the original sentence semantically. In fact, those are “hard negatives”, which many contrastive learning works have shown to be essential for learning a high-quality representation. The authors could also do a better job of explaining why starting with Gaussian-initialized negatives and optimizing equations (3) and (4) can lead to negatives sampled from a non-uniform semantic space. Is there any previous work or evidence to support that? 1) In Figure 1, what is the corpus that the authors use to sample sentence pairs? 2) In addition to STS tasks, the authors could also explore a number of downstream tasks as implemented in SentEval to evaluate the quality of sentence representations. 3) It would be good to add statistical analysis to the results in Table 1 to show significance.
Does the review include a short summary of the paper?
yes
The paper is overall well-written. The problem that the paper tries to address seems new and important. The authors propose two methods to remediate the sampling bias issue which lead to encouraging results (main results Table 1) compared with previous approaches. I am not fully convinced about the way the false negative samples are identified. In the paper, the authors proposed to use SimCSE as a complementary model and identify false negatives as negatives that have higher semantic similarity over a threshold using SimCSE representation, and punish those “false negatives” with a 0 weighting. I would argue that using similarity score alone could not identify false negatives. It only identify negatives that are very close to the original sentence semantically. In fact, those are “hard negatives” which has been shown in many contrastive works as essential in learning a high-quality representation. The authors can also do a better job at explaining why starting with a Gaussian negatives and optimizing equation (3) and (4) can lead to negatives sampled from non-uniform semantic space. Any previous work or evidence to support that? 1) In Figure 1, what is the corpus that the authors use to sample sentence pairs? 2) In addition to STS tasks, the authors could also explore a number of downstream tasks as implemented in SentEval to evaluate the quality of sentence representations. 3) It would be good to add statical analysis to the results in Table 1 to show significance.
Does the review include a short summary of the paper?
no
The paper tries to address a new and important problem: sampling bias in contrastive learning that improper negatives (e.g. false negatives and anisotropy representation) will hurt the uniformity of the representation space. To tackle this problem, they propose DCLR framework to alleviate the influence sampling bias. Specifically, they employ 1) weighting method to punish false negatives 2) generate noise-based negatives to guarantee the uniformity of the representation space. They demonstrated the effectiveness of their method on 7 semantic textual similarity tasks. Overall, the paper is well-written and the results are encouraging. However, I am not fully convinced of the way that the authors use to identify false negatives. The authors can also do a better job at explaining why optimizing equation (3) and (4) can lead the noise-based negatives into more non-uniform semantic space. The paper is overall well-written. The problem that the paper tries to address seems new and important. The authors propose two methods to remediate the sampling bias issue which lead to encouraging results (main results Table 1) compared with previous approaches. I am not fully convinced about the way the false negative samples are identified. In the paper, the authors proposed to use SimCSE as a complementary model and identify false negatives as negatives that have higher semantic similarity over a threshold using SimCSE representation, and punish those “false negatives” with a 0 weighting. I would argue that using similarity score alone could not identify false negatives. It only identify negatives that are very close to the original sentence semantically. In fact, those are “hard negatives” which has been shown in many contrastive works as essential in learning a high-quality representation. The authors can also do a better job at explaining why starting with a Gaussian negatives and optimizing equation (3) and (4) can lead to negatives sampled from non-uniform semantic space. Any previous work or evidence to support that? 1) In Figure 1, what is the corpus that the authors use to sample sentence pairs? 2) In addition to STS tasks, the authors could also explore a number of downstream tasks as implemented in SentEval to evaluate the quality of sentence representations. 3) It would be good to add statical analysis to the results in Table 1 to show significance.
Does the review include a summary of the strengths of the paper?
yes
The paper tries to address a new and important problem: sampling bias in contrastive learning that improper negatives (e.g. false negatives and anisotropy representation) will hurt the uniformity of the representation space. To tackle this problem, they propose DCLR framework to alleviate the influence sampling bias. Specifically, they employ 1) weighting method to punish false negatives 2) generate noise-based negatives to guarantee the uniformity of the representation space. They demonstrated the effectiveness of their method on 7 semantic textual similarity tasks. Overall, the paper is well-written and the results are encouraging. However, I am not fully convinced of the way that the authors use to identify false negatives. The authors can also do a better job at explaining why optimizing equation (3) and (4) can lead the noise-based negatives into more non-uniform semantic space. I am not fully convinced about the way the false negative samples are identified. In the paper, the authors proposed to use SimCSE as a complementary model and identify false negatives as negatives that have higher semantic similarity over a threshold using SimCSE representation, and punish those “false negatives” with a 0 weighting. I would argue that using similarity score alone could not identify false negatives. It only identify negatives that are very close to the original sentence semantically. In fact, those are “hard negatives” which has been shown in many contrastive works as essential in learning a high-quality representation. The authors can also do a better job at explaining why starting with a Gaussian negatives and optimizing equation (3) and (4) can lead to negatives sampled from non-uniform semantic space. Any previous work or evidence to support that? 1) In Figure 1, what is the corpus that the authors use to sample sentence pairs? 2) In addition to STS tasks, the authors could also explore a number of downstream tasks as implemented in SentEval to evaluate the quality of sentence representations. 3) It would be good to add statical analysis to the results in Table 1 to show significance.
Does the review include a summary of the strengths of the paper?
no
The paper tries to address a new and important problem: sampling bias in contrastive learning that improper negatives (e.g. false negatives and anisotropy representation) will hurt the uniformity of the representation space. To tackle this problem, they propose DCLR framework to alleviate the influence sampling bias. Specifically, they employ 1) weighting method to punish false negatives 2) generate noise-based negatives to guarantee the uniformity of the representation space. They demonstrated the effectiveness of their method on 7 semantic textual similarity tasks. Overall, the paper is well-written and the results are encouraging. However, I am not fully convinced of the way that the authors use to identify false negatives. The authors can also do a better job at explaining why optimizing equation (3) and (4) can lead the noise-based negatives into more non-uniform semantic space. The paper is overall well-written. The problem that the paper tries to address seems new and important. The authors propose two methods to remediate the sampling bias issue which lead to encouraging results (main results Table 1) compared with previous approaches. I am not fully convinced about the way the false negative samples are identified. In the paper, the authors proposed to use SimCSE as a complementary model and identify false negatives as negatives that have higher semantic similarity over a threshold using SimCSE representation, and punish those “false negatives” with a 0 weighting. I would argue that using similarity score alone could not identify false negatives. It only identify negatives that are very close to the original sentence semantically. In fact, those are “hard negatives” which has been shown in many contrastive works as essential in learning a high-quality representation. The authors can also do a better job at explaining why starting with a Gaussian negatives and optimizing equation (3) and (4) can lead to negatives sampled from non-uniform semantic space. Any previous work or evidence to support that? 1) In Figure 1, what is the corpus that the authors use to sample sentence pairs? 2) In addition to STS tasks, the authors could also explore a number of downstream tasks as implemented in SentEval to evaluate the quality of sentence representations. 3) It would be good to add statical analysis to the results in Table 1 to show significance.
Does the review include a summary of the weaknesses of the paper?
yes
The paper tries to address a new and important problem: sampling bias in contrastive learning that improper negatives (e.g. false negatives and anisotropy representation) will hurt the uniformity of the representation space. To tackle this problem, they propose DCLR framework to alleviate the influence sampling bias. Specifically, they employ 1) weighting method to punish false negatives 2) generate noise-based negatives to guarantee the uniformity of the representation space. They demonstrated the effectiveness of their method on 7 semantic textual similarity tasks. Overall, the paper is well-written and the results are encouraging. However, I am not fully convinced of the way that the authors use to identify false negatives. The authors can also do a better job at explaining why optimizing equation (3) and (4) can lead the noise-based negatives into more non-uniform semantic space. The paper is overall well-written. The problem that the paper tries to address seems new and important. The authors propose two methods to remediate the sampling bias issue which lead to encouraging results (main results Table 1) compared with previous approaches. 1) In Figure 1, what is the corpus that the authors use to sample sentence pairs? 2) In addition to STS tasks, the authors could also explore a number of downstream tasks as implemented in SentEval to evaluate the quality of sentence representations. 3) It would be good to add statical analysis to the results in Table 1 to show significance.
Does the review include a summary of the weaknesses of the paper?
no
The paper tries to address a new and important problem: sampling bias in contrastive learning that improper negatives (e.g. false negatives and anisotropy representation) will hurt the uniformity of the representation space. To tackle this problem, they propose DCLR framework to alleviate the influence sampling bias. Specifically, they employ 1) weighting method to punish false negatives 2) generate noise-based negatives to guarantee the uniformity of the representation space. They demonstrated the effectiveness of their method on 7 semantic textual similarity tasks. Overall, the paper is well-written and the results are encouraging. However, I am not fully convinced of the way that the authors use to identify false negatives. The authors can also do a better job at explaining why optimizing equation (3) and (4) can lead the noise-based negatives into more non-uniform semantic space. The paper is overall well-written. The problem that the paper tries to address seems new and important. The authors propose two methods to remediate the sampling bias issue which lead to encouraging results (main results Table 1) compared with previous approaches. I am not fully convinced about the way the false negative samples are identified. In the paper, the authors proposed to use SimCSE as a complementary model and identify false negatives as negatives that have higher semantic similarity over a threshold using SimCSE representation, and punish those “false negatives” with a 0 weighting. I would argue that using similarity score alone could not identify false negatives. It only identify negatives that are very close to the original sentence semantically. In fact, those are “hard negatives” which has been shown in many contrastive works as essential in learning a high-quality representation. The authors can also do a better job at explaining why starting with a Gaussian negatives and optimizing equation (3) and (4) can lead to negatives sampled from non-uniform semantic space. Any previous work or evidence to support that? 1) In Figure 1, what is the corpus that the authors use to sample sentence pairs? 2) In addition to STS tasks, the authors could also explore a number of downstream tasks as implemented in SentEval to evaluate the quality of sentence representations. 3) It would be good to add statical analysis to the results in Table 1 to show significance.
Does the review mention any comments, suggestions or typos that the author should address?
yes
The paper tries to address a new and important problem: sampling bias in contrastive learning that improper negatives (e.g. false negatives and anisotropy representation) will hurt the uniformity of the representation space. To tackle this problem, they propose DCLR framework to alleviate the influence sampling bias. Specifically, they employ 1) weighting method to punish false negatives 2) generate noise-based negatives to guarantee the uniformity of the representation space. They demonstrated the effectiveness of their method on 7 semantic textual similarity tasks. Overall, the paper is well-written and the results are encouraging. However, I am not fully convinced of the way that the authors use to identify false negatives. The authors can also do a better job at explaining why optimizing equation (3) and (4) can lead the noise-based negatives into more non-uniform semantic space. The paper is overall well-written. The problem that the paper tries to address seems new and important. The authors propose two methods to remediate the sampling bias issue which lead to encouraging results (main results Table 1) compared with previous approaches. I am not fully convinced about the way the false negative samples are identified. In the paper, the authors proposed to use SimCSE as a complementary model and identify false negatives as negatives that have higher semantic similarity over a threshold using SimCSE representation, and punish those “false negatives” with a 0 weighting. I would argue that using similarity score alone could not identify false negatives. It only identify negatives that are very close to the original sentence semantically. In fact, those are “hard negatives” which has been shown in many contrastive works as essential in learning a high-quality representation. The authors can also do a better job at explaining why starting with a Gaussian negatives and optimizing equation (3) and (4) can lead to negatives sampled from non-uniform semantic space. Any previous work or evidence to support that?
Does the review mention any comments, suggestions or typos that the author should address?
no
This paper provides a large new dataset written in Hanja, an old Korean language not yet fully understood by modern Korean speakers. This dataset comprises three datasets: AJD, DRS, and DRRI (DRRI was proposed in this work). The authors investigate the performance of two pretrained language models (PLMs) based on BERT on this data by first continuing the pretraining of the PLMs and later finetuning them to perform a series of supervised tasks. The authors considered two such models: mBERT and AnchiBERT (pretrained on ancient Chinese data). As for the tasks, the authors propose extracting labels from the dataset structure and realizing supervised learning for king prediction (era prediction), topic classification, NER, and summary retrieval. The paper provides a thoughtful set of experiments evaluating the performance of these models with and without finetuning on Hanja data, including zero-shot experiments on DRRI. Later, the authors also investigate the impact of time and historical events using the best-performing model (finetuned AnchiBERT). - The paper provides a large new dataset for a low-resource language (Hanja) - The paper does an excellent job at using state-of-the-art approaches for investigating linguistic structures of the datasets, including the impact of named entities and document age on classification - The presentation of the results is simple and yet comprehensive - The paper would be easier to follow with English proofreading, even though the overall idea is still understandable. - The newly proposed dataset, DRRI, could have been explored more in the paper. - It is not clear how named entities were extracted from the datasets. English proofreading would significantly improve the readability of the paper.
Does the review include a short summary of the paper?
yes
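The Hanja review above refers to continuing the pretraining of mBERT/AnchiBERT on Hanja text before finetuning. As a rough illustration of what that step typically looks like, not the authors' pipeline, a masked-language-model run with the Hugging Face Trainer might be set up as below; the corpus path, checkpoint name, and hyperparameters are placeholders.

```python
# Hedged sketch only: continued MLM pretraining on a plain-text Hanja corpus.
# "hanja_corpus.txt" and the hyperparameters are placeholders, not the paper's setup.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "bert-base-multilingual-cased"           # or an AnchiBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

raw = load_dataset("text", data_files={"train": "hanja_corpus.txt"})
tokenized = raw.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="hanja-mlm", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```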
- The paper provides a large new dataset for a low-resource language (Hanja) - The paper does an excellent job at using state-of-the-art approaches for investigating linguistic structures of the datasets, including the impact of name entities and document ages on classification - The presentation of the results is simple and yet comprehensive - The paper would be easy to follow with an English-proofreading even though the overall idea is still understandable. - The new proposed dataset, DRRI, could have been explored more in the paper. - It is not clear how named entities were extracted from the datasets. An English-proofreading would significantly improve the readability of the paper.
Does the review include a short summary of the paper?
no
This paper provides a large new dataset written in Hanja, an old Korean language not fully understood yet by modern Korean speakers. This dataset comprises three datasets: AJD, DRS, DRRI (DRRI was proposed in this work). The authors investigate the performance of two pretrained language models (PLMs) based on BERT on this data by first continuing the pretraining of the PLMs and later finetuning them to perform a series of supervised tasks. The authors considered two such models: mBERT and AnchiBERT (pretrained on ancient Chinese data). As for the tasks, the authors propose extracting labels from the dataset structure and realizing supervised learning for king prediction (era prediction), topic classification, NER, and summary retrieval. The paper provides a thoughtful set of experiments evaluating the performance of these models with and without finetuning on Hanja data, including zero-shot experiments on DRRI. Later, the authors also investigate the impact of time and historical events using the best performant model (finetuned AnchiBERT). - The paper provides a large new dataset for a low-resource language (Hanja) - The paper does an excellent job at using state-of-the-art approaches for investigating linguistic structures of the datasets, including the impact of name entities and document ages on classification - The presentation of the results is simple and yet comprehensive - The paper would be easy to follow with an English-proofreading even though the overall idea is still understandable. - The new proposed dataset, DRRI, could have been explored more in the paper. - It is not clear how named entities were extracted from the datasets. An English-proofreading would significantly improve the readability of the paper.
Does the review include a summary of the strengths of the paper?
yes
This paper provides a large new dataset written in Hanja, an old Korean language not yet fully understood by modern Korean speakers. This dataset comprises three datasets: AJD, DRS, and DRRI (DRRI was proposed in this work). The authors investigate the performance of two pretrained language models (PLMs) based on BERT on this data by first continuing the pretraining of the PLMs and later finetuning them on a series of supervised tasks. The authors considered two such models: mBERT and AnchiBERT (pretrained on ancient Chinese data). As for the tasks, the authors propose extracting labels from the dataset structure and performing supervised learning for king prediction (era prediction), topic classification, NER, and summary retrieval. The paper provides a thoughtful set of experiments evaluating the performance of these models with and without finetuning on Hanja data, including zero-shot experiments on DRRI. Later, the authors also investigate the impact of time and historical events using the best-performing model (finetuned AnchiBERT). - The paper would be easier to follow with English proofreading, even though the overall idea is still understandable. - The newly proposed dataset, DRRI, could have been explored more in the paper. - It is not clear how named entities were extracted from the datasets. English proofreading would significantly improve the readability of the paper.
Does the review include a summary of the strengths of the paper?
no
This paper provides a large new dataset written in Hanja, an old Korean language not yet fully understood by modern Korean speakers. This dataset comprises three datasets: AJD, DRS, and DRRI (DRRI was proposed in this work). The authors investigate the performance of two pretrained language models (PLMs) based on BERT on this data by first continuing the pretraining of the PLMs and later finetuning them on a series of supervised tasks. The authors considered two such models: mBERT and AnchiBERT (pretrained on ancient Chinese data). As for the tasks, the authors propose extracting labels from the dataset structure and performing supervised learning for king prediction (era prediction), topic classification, NER, and summary retrieval. The paper provides a thoughtful set of experiments evaluating the performance of these models with and without finetuning on Hanja data, including zero-shot experiments on DRRI. Later, the authors also investigate the impact of time and historical events using the best-performing model (finetuned AnchiBERT). - The paper provides a large new dataset for a low-resource language (Hanja) - The paper does an excellent job of using state-of-the-art approaches for investigating linguistic structures of the datasets, including the impact of named entities and document ages on classification - The presentation of the results is simple and yet comprehensive - The paper would be easier to follow with English proofreading, even though the overall idea is still understandable. - The newly proposed dataset, DRRI, could have been explored more in the paper. - It is not clear how named entities were extracted from the datasets. English proofreading would significantly improve the readability of the paper.
Does the review include a summary of the weaknesses of the paper?
yes
This paper provides a large new dataset written in Hanja, an old Korean language not yet fully understood by modern Korean speakers. This dataset comprises three datasets: AJD, DRS, and DRRI (DRRI was proposed in this work). The authors investigate the performance of two pretrained language models (PLMs) based on BERT on this data by first continuing the pretraining of the PLMs and later finetuning them on a series of supervised tasks. The authors considered two such models: mBERT and AnchiBERT (pretrained on ancient Chinese data). As for the tasks, the authors propose extracting labels from the dataset structure and performing supervised learning for king prediction (era prediction), topic classification, NER, and summary retrieval. The paper provides a thoughtful set of experiments evaluating the performance of these models with and without finetuning on Hanja data, including zero-shot experiments on DRRI. Later, the authors also investigate the impact of time and historical events using the best-performing model (finetuned AnchiBERT). - The paper provides a large new dataset for a low-resource language (Hanja) - The paper does an excellent job of using state-of-the-art approaches for investigating linguistic structures of the datasets, including the impact of named entities and document ages on classification - The presentation of the results is simple and yet comprehensive English proofreading would significantly improve the readability of the paper.
Does the review include a summary of the weaknesses of the paper?
no
This paper provides a large new dataset written in Hanja, an old Korean language not yet fully understood by modern Korean speakers. This dataset comprises three datasets: AJD, DRS, and DRRI (DRRI was proposed in this work). The authors investigate the performance of two pretrained language models (PLMs) based on BERT on this data by first continuing the pretraining of the PLMs and later finetuning them on a series of supervised tasks. The authors considered two such models: mBERT and AnchiBERT (pretrained on ancient Chinese data). As for the tasks, the authors propose extracting labels from the dataset structure and performing supervised learning for king prediction (era prediction), topic classification, NER, and summary retrieval. The paper provides a thoughtful set of experiments evaluating the performance of these models with and without finetuning on Hanja data, including zero-shot experiments on DRRI. Later, the authors also investigate the impact of time and historical events using the best-performing model (finetuned AnchiBERT). - The paper provides a large new dataset for a low-resource language (Hanja) - The paper does an excellent job of using state-of-the-art approaches for investigating linguistic structures of the datasets, including the impact of named entities and document ages on classification - The presentation of the results is simple and yet comprehensive - The paper would be easier to follow with English proofreading, even though the overall idea is still understandable. - The newly proposed dataset, DRRI, could have been explored more in the paper. - It is not clear how named entities were extracted from the datasets. English proofreading would significantly improve the readability of the paper.
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper provides a large new dataset written in Hanja, an old Korean language not yet fully understood by modern Korean speakers. This dataset comprises three datasets: AJD, DRS, and DRRI (DRRI was proposed in this work). The authors investigate the performance of two pretrained language models (PLMs) based on BERT on this data by first continuing the pretraining of the PLMs and later finetuning them on a series of supervised tasks. The authors considered two such models: mBERT and AnchiBERT (pretrained on ancient Chinese data). As for the tasks, the authors propose extracting labels from the dataset structure and performing supervised learning for king prediction (era prediction), topic classification, NER, and summary retrieval. The paper provides a thoughtful set of experiments evaluating the performance of these models with and without finetuning on Hanja data, including zero-shot experiments on DRRI. Later, the authors also investigate the impact of time and historical events using the best-performing model (finetuned AnchiBERT). - The paper provides a large new dataset for a low-resource language (Hanja) - The paper does an excellent job of using state-of-the-art approaches for investigating linguistic structures of the datasets, including the impact of named entities and document ages on classification - The presentation of the results is simple and yet comprehensive - The paper would be easier to follow with English proofreading, even though the overall idea is still understandable. - The newly proposed dataset, DRRI, could have been explored more in the paper. - It is not clear how named entities were extracted from the datasets.
Does the review mention any comments, suggestions or typos that the author should address?
no
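The reviews above describe a common recipe: take an existing BERT-style checkpoint (mBERT or AnchiBERT), continue masked-LM pretraining on the Hanja corpus, then finetune on the labeled tasks. The sketch below illustrates that generic recipe with HuggingFace Transformers; it is not the reviewed paper's actual code, and the base checkpoint, corpus path, label count, and hyperparameters are placeholder assumptions.

```python
# Illustrative sketch (not the reviewed paper's code): continued masked-LM pretraining
# of a BERT-style checkpoint on a Hanja corpus, then finetuning for classification.
# "hanja_corpus.txt", the base checkpoint, num_labels, and hyperparameters are placeholders.
from transformers import (
    AutoTokenizer, AutoModelForMaskedLM, AutoModelForSequenceClassification,
    DataCollatorForLanguageModeling, Trainer, TrainingArguments,
)
from datasets import load_dataset

base = "bert-base-multilingual-cased"  # an AnchiBERT-style checkpoint could be used instead
tok = AutoTokenizer.from_pretrained(base)

# 1) Continue MLM pretraining on raw Hanja text.
raw = load_dataset("text", data_files={"train": "hanja_corpus.txt"})
tokenized = raw.map(lambda b: tok(b["text"], truncation=True, max_length=512), batched=True)
mlm_model = AutoModelForMaskedLM.from_pretrained(base)
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm=True, mlm_probability=0.15)
Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="hanja_mlm", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    data_collator=collator,
).train()
mlm_model.save_pretrained("hanja_mlm")
tok.save_pretrained("hanja_mlm")

# 2) Finetune the adapted encoder on a downstream task, e.g. era/king prediction
#    framed as sequence classification (labeled dataset construction omitted).
clf = AutoModelForSequenceClassification.from_pretrained("hanja_mlm", num_labels=27)
```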
This paper presents a 'HUE' dataset and pre-trained Hanja language models, which aim to help analyze Korean historical documents written in Hanja, an extinct language based on Chinese characters. A great amount of ancient Korean documents were written in Hanja, which is hard to understand now. The author(s) thus provide a dataset comprising three corpora of records written in Hanja during the Joseon dynasty; four models for four tasks (king prediction, topic classification, named entity recognition, summary retrieval); as well as a language model pretrained on the Hanja documents. They then conduct experiments on the HUE corpora with the LM, taking BERT as the baseline. Overall, models pretrained on time-specific corpus data achieve the best performance on these four tasks. By providing additional information as input, the extended experiments show further improvement on the KP, TC, and NER tasks. Finally, a zero-shot experiment is conducted to demonstrate the effectiveness of the models on information extraction from unseen data. The proposed HUE would be the first resource for the processing and understanding of ancient Korean texts in Hanja. Combined with advanced language models, this approach has a lot of potential for computational work on classical languages and scripts. I understand that Section 6 tries to get more factors involved in the interpretation of the model's capability. However, the target theses seem reasonable (to get a quick yes answer) yet would require many more pages. They are two of the big questions in digital humanities, and I don't see any convincing arguments/reviews on how the extended analysis contributes. In general, I think this is great pioneering work that could feed into much more interesting resources in the future. It would be better to introduce more background for readers who are not familiar with (ancient) Korean. For instance, why is Hanja an extinct 'language' instead of a 'script'? If ancient Koreans only wrote in Hanja, why do we call it a 'language'? In addition, Table 4 would need more explanation, which might really help classical humanities studies, e.g., why does naive AnchiBERT only outperform the others on NER? etc.
Does the review include a short summary of the paper?
yes
The proposed HUE would be the first resource for the processing and understanding of ancient Korean texts in Hanja. Combined with advanced language models, this approach has a lot of potential for computational work on classical languages and scripts. I understand that Section 6 tries to get more factors involved in the interpretation of the model's capability. However, the target theses seem reasonable (to get a quick yes answer) yet would require many more pages. They are two of the big questions in digital humanities, and I don't see any convincing arguments/reviews on how the extended analysis contributes. In general, I think this is great pioneering work that could feed into much more interesting resources in the future. It would be better to introduce more background for readers who are not familiar with (ancient) Korean. For instance, why is Hanja an extinct 'language' instead of a 'script'? If ancient Koreans only wrote in Hanja, why do we call it a 'language'? In addition, Table 4 would need more explanation, which might really help classical humanities studies, e.g., why does naive AnchiBERT only outperform the others on NER? etc.
Does the review include a short summary of the paper?
no
This paper presents a 'HUE' dataset and pre-trained Hanja language models, which aim to help analyze Korean historical documents written in Hanja, an extinct language based on Chinese characters. A great amount of ancient Korean documents were written in Hanja, which is hard to understand now. The author(s) thus provide a dataset comprising three corpora of records written in Hanja during the Joseon dynasty; four models for four tasks (king prediction, topic classification, named entity recognition, summary retrieval); as well as a language model pretrained on the Hanja documents. They then conduct experiments on the HUE corpora with the LM, taking BERT as the baseline. Overall, models pretrained on time-specific corpus data achieve the best performance on these four tasks. By providing additional information as input, the extended experiments show further improvement on the KP, TC, and NER tasks. Finally, a zero-shot experiment is conducted to demonstrate the effectiveness of the models on information extraction from unseen data. The proposed HUE would be the first resource for the processing and understanding of ancient Korean texts in Hanja. Combined with advanced language models, this approach has a lot of potential for computational work on classical languages and scripts. I understand that Section 6 tries to get more factors involved in the interpretation of the model's capability. However, the target theses seem reasonable (to get a quick yes answer) yet would require many more pages. They are two of the big questions in digital humanities, and I don't see any convincing arguments/reviews on how the extended analysis contributes. In general, I think this is great pioneering work that could feed into much more interesting resources in the future. It would be better to introduce more background for readers who are not familiar with (ancient) Korean. For instance, why is Hanja an extinct 'language' instead of a 'script'? If ancient Koreans only wrote in Hanja, why do we call it a 'language'? In addition, Table 4 would need more explanation, which might really help classical humanities studies, e.g., why does naive AnchiBERT only outperform the others on NER? etc.
Does the review include a summary of the strengths of the paper?
yes
This paper presents a 'HUE' dataset and pre-trained Hanja language models, which aim to help analyze Korean historical documents written in Hanja, an extinct language based on Chinese characters. A great amount of ancient Korean documents were written in Hanja, which is hard to understand now. The author(s) thus provide a dataset comprising three corpora of records written in Hanja during the Joseon dynasty; four models for four tasks (king prediction, topic classification, named entity recognition, summary retrieval); as well as a language model pretrained on the Hanja documents. They then conduct experiments on the HUE corpora with the LM, taking BERT as the baseline. Overall, models pretrained on time-specific corpus data achieve the best performance on these four tasks. By providing additional information as input, the extended experiments show further improvement on the KP, TC, and NER tasks. Finally, a zero-shot experiment is conducted to demonstrate the effectiveness of the models on information extraction from unseen data. I understand that Section 6 tries to get more factors involved in the interpretation of the model's capability. However, the target theses seem reasonable (to get a quick yes answer) yet would require many more pages. They are two of the big questions in digital humanities, and I don't see any convincing arguments/reviews on how the extended analysis contributes. In general, I think this is great pioneering work that could feed into much more interesting resources in the future. It would be better to introduce more background for readers who are not familiar with (ancient) Korean. For instance, why is Hanja an extinct 'language' instead of a 'script'? If ancient Koreans only wrote in Hanja, why do we call it a 'language'? In addition, Table 4 would need more explanation, which might really help classical humanities studies, e.g., why does naive AnchiBERT only outperform the others on NER? etc.
Does the review include a summary of the strengths of the paper?
no
This paper presents a 'HUE' dataset and pre-trained Hanja language models, which aim to help analyze Korean historical documents written in Hanja, an extinct language based on Chinese characters. A great amount of ancient Korean documents were written in Hanja, which is hard to understand now. The author(s) thus provide a dataset comprising three corpora of records written in Hanja during the Joseon dynasty; four models for four tasks (king prediction, topic classification, named entity recognition, summary retrieval); as well as a language model pretrained on the Hanja documents. They then conduct experiments on the HUE corpora with the LM, taking BERT as the baseline. Overall, models pretrained on time-specific corpus data achieve the best performance on these four tasks. By providing additional information as input, the extended experiments show further improvement on the KP, TC, and NER tasks. Finally, a zero-shot experiment is conducted to demonstrate the effectiveness of the models on information extraction from unseen data. The proposed HUE would be the first resource for the processing and understanding of ancient Korean texts in Hanja. Combined with advanced language models, this approach has a lot of potential for computational work on classical languages and scripts. I understand that Section 6 tries to get more factors involved in the interpretation of the model's capability. However, the target theses seem reasonable (to get a quick yes answer) yet would require many more pages. They are two of the big questions in digital humanities, and I don't see any convincing arguments/reviews on how the extended analysis contributes. In general, I think this is great pioneering work that could feed into much more interesting resources in the future. It would be better to introduce more background for readers who are not familiar with (ancient) Korean. For instance, why is Hanja an extinct 'language' instead of a 'script'? If ancient Koreans only wrote in Hanja, why do we call it a 'language'? In addition, Table 4 would need more explanation, which might really help classical humanities studies, e.g., why does naive AnchiBERT only outperform the others on NER? etc.
Does the review include a summary of the weaknesses of the paper?
yes
This paper presents a 'HUE' dataset and pre-trained Hanja language models, which aim to help analyze Korean historical documents written in Hanja, an extinct language based on Chinese characters. A great amount of ancient Korean documents were written in Hanja, which is hard to understand now. The author(s) thus provide a dataset comprising three corpora of records written in Hanja during the Joseon dynasty; four models for four tasks (king prediction, topic classification, named entity recognition, summary retrieval); as well as a language model pretrained on the Hanja documents. They then conduct experiments on the HUE corpora with the LM, taking BERT as the baseline. Overall, models pretrained on time-specific corpus data achieve the best performance on these four tasks. By providing additional information as input, the extended experiments show further improvement on the KP, TC, and NER tasks. Finally, a zero-shot experiment is conducted to demonstrate the effectiveness of the models on information extraction from unseen data. The proposed HUE would be the first resource for the processing and understanding of ancient Korean texts in Hanja. Combined with advanced language models, this approach has a lot of potential for computational work on classical languages and scripts. In general, I think this is great pioneering work that could feed into much more interesting resources in the future. It would be better to introduce more background for readers who are not familiar with (ancient) Korean. For instance, why is Hanja an extinct 'language' instead of a 'script'? If ancient Koreans only wrote in Hanja, why do we call it a 'language'? In addition, Table 4 would need more explanation, which might really help classical humanities studies, e.g., why does naive AnchiBERT only outperform the others on NER? etc.
Does the review include a summary of the weaknesses of the paper?
no
This paper presents a 'HUE' dataset and pre-trained Hanja language models, which aim to help analyze Korean historical documents written in Hanja, an extinct language based on Chinese characters. A great amount of ancient Korean documents were written in Hanja, which is hard to understand now. The author(s) thus provide a dataset comprising three corpora of records written in Hanja during the Joseon dynasty; four models for four tasks (king prediction, topic classification, named entity recognition, summary retrieval); as well as a language model pretrained on the Hanja documents. They then conduct experiments on the HUE corpora with the LM, taking BERT as the baseline. Overall, models pretrained on time-specific corpus data achieve the best performance on these four tasks. By providing additional information as input, the extended experiments show further improvement on the KP, TC, and NER tasks. Finally, a zero-shot experiment is conducted to demonstrate the effectiveness of the models on information extraction from unseen data. The proposed HUE would be the first resource for the processing and understanding of ancient Korean texts in Hanja. Combined with advanced language models, this approach has a lot of potential for computational work on classical languages and scripts. I understand that Section 6 tries to get more factors involved in the interpretation of the model's capability. However, the target theses seem reasonable (to get a quick yes answer) yet would require many more pages. They are two of the big questions in digital humanities, and I don't see any convincing arguments/reviews on how the extended analysis contributes. In general, I think this is great pioneering work that could feed into much more interesting resources in the future. It would be better to introduce more background for readers who are not familiar with (ancient) Korean. For instance, why is Hanja an extinct 'language' instead of a 'script'? If ancient Koreans only wrote in Hanja, why do we call it a 'language'? In addition, Table 4 would need more explanation, which might really help classical humanities studies, e.g., why does naive AnchiBERT only outperform the others on NER? etc.
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper presents a 'HUE' dataset and pre-trained Hanja language models, which aim to help analyze Korean historical documents written in Hanja, an extinct language based on Chinese characters. A great amount of ancient Korean documents were written in Hanja, which is hard to understand now. The author(s) thus provide a dataset comprising three corpora of records written in Hanja during the Joseon dynasty; four models for four tasks (king prediction, topic classification, named entity recognition, summary retrieval); as well as a language model pretrained on the Hanja documents. They then conduct experiments on the HUE corpora with the LM, taking BERT as the baseline. Overall, models pretrained on time-specific corpus data achieve the best performance on these four tasks. By providing additional information as input, the extended experiments show further improvement on the KP, TC, and NER tasks. Finally, a zero-shot experiment is conducted to demonstrate the effectiveness of the models on information extraction from unseen data. The proposed HUE would be the first resource for the processing and understanding of ancient Korean texts in Hanja. Combined with advanced language models, this approach has a lot of potential for computational work on classical languages and scripts. I understand that Section 6 tries to get more factors involved in the interpretation of the model's capability. However, the target theses seem reasonable (to get a quick yes answer) yet would require many more pages. They are two of the big questions in digital humanities, and I don't see any convincing arguments/reviews on how the extended analysis contributes.
Does the review mention any comments, suggestions or typos that the author should address?
no
This paper describes a self-generation framework for training ODD and TOD systems, as well as the prediction of the transition turns from ODD to TOD turns. The proposed framework makes use of self-chatting approaches using a SotA system, as well as fine-tuned versions of the chatbot to play the role of a salesman and a user. In addition, an interesting approach for predicting the best turn where to insert the transition and use of zero-shot approaches for intent detection complete the work done by the authors. Finally, human evaluations show the feasibility of the proposed approach. - The generated dataset is interesting for people working on dialogue evaluation - The proposed framework for self-generating the dataset is very interesting and the results are feasible - The proposed method for intent detection and transition turns are also correct and interesting - The paper is well written and clear to follow. Experimentation and human evaluation is correct - Although author state that components can be replaced by other models for flexibility, authors did not try any change or alternative in the paper to proof the robustness of the proposed framework. - Did authors tried using BlenderBot vs 2.0 with incorporated knowledge? it would be very interesting to see how the dialogs can be improved by using domain ontologies from the SGD dataset. - Although BlenderBot is finetuned on the SGD dataset, it is not clear how using more specific TOD chatbots can provide better results - Lines 159-162: Authors should provide more information about the type/number of personas created, and how the personas are used by the chatbot to generate the given responses. - It is not clear if authors also experimented with the usage of domain ontologies to avoid the generation of placeholders in the evaluated responses - Line 211: How many questions were created for this zero-shot intent classifier and what is the accuracy of this system? - Line 216: How many paraphrases were created for each question, and what was their quality rate? - Line 237: How critical was the finetuning process over the SQuad and CommonsenseQA models? - Line 254-257: How many templates were manually created? - Line 265: How the future utterances are used during evaluation? For the generation part, are the authors generating some sort of sentence embedding representation (similar to SkipThoughs) to learn the generation of the transition sentence? and is it the transition sentence one taken from the list of manual templates? ( In general, this section 2.2.2 is the one I have found less clear) - Merge SGD: Did authors select the TOD dialogue randomly from those containing the same intent/topic? did you tried some dialogue embedding from the ODD part and tried to select a TOD dialogue with a similar dialogue embedding? if not, this could be an idea to improve the quality of the dataset. this could also allow the usage of the lexicalized version of the SGD and avoids the generation of placeholders in the responses - Line 324: how the repeated dialogues are detected? - Line 356: how and how many sentences are finally selected from the 120 generated sentences? - Lines 402-404: How the additional transitions are generated? using the T5 model? how many times the manual sentences were selected vs the paraphrased ones? 
- The paper: Fusing task-oriented and open-domain dialogues in conversational agents is not included in the background section and it is important in the context of similar datasets - Probably the word salesman is misleading since by reading some of the generated dialogues in the appendixes, it is not clear that the salesman agent is in fact selling something. It seems sometimes that they are still doing chitchat but on a particular topic or asking for some action to be done (like one to be done by an intelligent speaker)
Does the review include a short summary of the paper?
yes
- The generated dataset is interesting for people working on dialogue evaluation - The proposed framework for self-generating the dataset is very interesting and the results are feasible - The proposed method for intent detection and transition turns are also correct and interesting - The paper is well written and clear to follow. Experimentation and human evaluation is correct - Although author state that components can be replaced by other models for flexibility, authors did not try any change or alternative in the paper to proof the robustness of the proposed framework. - Did authors tried using BlenderBot vs 2.0 with incorporated knowledge? it would be very interesting to see how the dialogs can be improved by using domain ontologies from the SGD dataset. - Although BlenderBot is finetuned on the SGD dataset, it is not clear how using more specific TOD chatbots can provide better results - Lines 159-162: Authors should provide more information about the type/number of personas created, and how the personas are used by the chatbot to generate the given responses. - It is not clear if authors also experimented with the usage of domain ontologies to avoid the generation of placeholders in the evaluated responses - Line 211: How many questions were created for this zero-shot intent classifier and what is the accuracy of this system? - Line 216: How many paraphrases were created for each question, and what was their quality rate? - Line 237: How critical was the finetuning process over the SQuad and CommonsenseQA models? - Line 254-257: How many templates were manually created? - Line 265: How the future utterances are used during evaluation? For the generation part, are the authors generating some sort of sentence embedding representation (similar to SkipThoughs) to learn the generation of the transition sentence? and is it the transition sentence one taken from the list of manual templates? ( In general, this section 2.2.2 is the one I have found less clear) - Merge SGD: Did authors select the TOD dialogue randomly from those containing the same intent/topic? did you tried some dialogue embedding from the ODD part and tried to select a TOD dialogue with a similar dialogue embedding? if not, this could be an idea to improve the quality of the dataset. this could also allow the usage of the lexicalized version of the SGD and avoids the generation of placeholders in the responses - Line 324: how the repeated dialogues are detected? - Line 356: how and how many sentences are finally selected from the 120 generated sentences? - Lines 402-404: How the additional transitions are generated? using the T5 model? how many times the manual sentences were selected vs the paraphrased ones? - The paper: Fusing task-oriented and open-domain dialogues in conversational agents is not included in the background section and it is important in the context of similar datasets - Probably the word salesman is misleading since by reading some of the generated dialogues in the appendixes, it is not clear that the salesman agent is in fact selling something. It seems sometimes that they are still doing chitchat but on a particular topic or asking for some action to be done (like one to be done by an intelligent speaker)
Does the review include a short summary of the paper?
no
This paper describes a self-generation framework for training ODD and TOD systems, as well as the prediction of the transition turns from ODD to TOD turns. The proposed framework makes use of self-chatting approaches using a SotA system, as well as fine-tuned versions of the chatbot to play the role of a salesman and a user. In addition, an interesting approach for predicting the best turn where to insert the transition and use of zero-shot approaches for intent detection complete the work done by the authors. Finally, human evaluations show the feasibility of the proposed approach. - The generated dataset is interesting for people working on dialogue evaluation - The proposed framework for self-generating the dataset is very interesting and the results are feasible - The proposed method for intent detection and transition turns are also correct and interesting - The paper is well written and clear to follow. Experimentation and human evaluation is correct - Although author state that components can be replaced by other models for flexibility, authors did not try any change or alternative in the paper to proof the robustness of the proposed framework. - Did authors tried using BlenderBot vs 2.0 with incorporated knowledge? it would be very interesting to see how the dialogs can be improved by using domain ontologies from the SGD dataset. - Although BlenderBot is finetuned on the SGD dataset, it is not clear how using more specific TOD chatbots can provide better results - Lines 159-162: Authors should provide more information about the type/number of personas created, and how the personas are used by the chatbot to generate the given responses. - It is not clear if authors also experimented with the usage of domain ontologies to avoid the generation of placeholders in the evaluated responses - Line 211: How many questions were created for this zero-shot intent classifier and what is the accuracy of this system? - Line 216: How many paraphrases were created for each question, and what was their quality rate? - Line 237: How critical was the finetuning process over the SQuad and CommonsenseQA models? - Line 254-257: How many templates were manually created? - Line 265: How the future utterances are used during evaluation? For the generation part, are the authors generating some sort of sentence embedding representation (similar to SkipThoughs) to learn the generation of the transition sentence? and is it the transition sentence one taken from the list of manual templates? ( In general, this section 2.2.2 is the one I have found less clear) - Merge SGD: Did authors select the TOD dialogue randomly from those containing the same intent/topic? did you tried some dialogue embedding from the ODD part and tried to select a TOD dialogue with a similar dialogue embedding? if not, this could be an idea to improve the quality of the dataset. this could also allow the usage of the lexicalized version of the SGD and avoids the generation of placeholders in the responses - Line 324: how the repeated dialogues are detected? - Line 356: how and how many sentences are finally selected from the 120 generated sentences? - Lines 402-404: How the additional transitions are generated? using the T5 model? how many times the manual sentences were selected vs the paraphrased ones? 
- The paper: Fusing task-oriented and open-domain dialogues in conversational agents is not included in the background section and it is important in the context of similar datasets - Probably the word salesman is misleading since by reading some of the generated dialogues in the appendixes, it is not clear that the salesman agent is in fact selling something. It seems sometimes that they are still doing chitchat but on a particular topic or asking for some action to be done (like one to be done by an intelligent speaker)
Does the review include a summary of the strengths of the paper?
yes
This paper describes a self-generation framework for training ODD and TOD systems, as well as the prediction of the transition turns from ODD to TOD turns. The proposed framework makes use of self-chatting approaches using a SotA system, as well as fine-tuned versions of the chatbot to play the role of a salesman and a user. In addition, an interesting approach for predicting the best turn where to insert the transition and use of zero-shot approaches for intent detection complete the work done by the authors. Finally, human evaluations show the feasibility of the proposed approach. - Although author state that components can be replaced by other models for flexibility, authors did not try any change or alternative in the paper to proof the robustness of the proposed framework. - Did authors tried using BlenderBot vs 2.0 with incorporated knowledge? it would be very interesting to see how the dialogs can be improved by using domain ontologies from the SGD dataset. - Although BlenderBot is finetuned on the SGD dataset, it is not clear how using more specific TOD chatbots can provide better results - Lines 159-162: Authors should provide more information about the type/number of personas created, and how the personas are used by the chatbot to generate the given responses. - It is not clear if authors also experimented with the usage of domain ontologies to avoid the generation of placeholders in the evaluated responses - Line 211: How many questions were created for this zero-shot intent classifier and what is the accuracy of this system? - Line 216: How many paraphrases were created for each question, and what was their quality rate? - Line 237: How critical was the finetuning process over the SQuad and CommonsenseQA models? - Line 254-257: How many templates were manually created? - Line 265: How the future utterances are used during evaluation? For the generation part, are the authors generating some sort of sentence embedding representation (similar to SkipThoughs) to learn the generation of the transition sentence? and is it the transition sentence one taken from the list of manual templates? ( In general, this section 2.2.2 is the one I have found less clear) - Merge SGD: Did authors select the TOD dialogue randomly from those containing the same intent/topic? did you tried some dialogue embedding from the ODD part and tried to select a TOD dialogue with a similar dialogue embedding? if not, this could be an idea to improve the quality of the dataset. this could also allow the usage of the lexicalized version of the SGD and avoids the generation of placeholders in the responses - Line 324: how the repeated dialogues are detected? - Line 356: how and how many sentences are finally selected from the 120 generated sentences? - Lines 402-404: How the additional transitions are generated? using the T5 model? how many times the manual sentences were selected vs the paraphrased ones? - The paper: Fusing task-oriented and open-domain dialogues in conversational agents is not included in the background section and it is important in the context of similar datasets - Probably the word salesman is misleading since by reading some of the generated dialogues in the appendixes, it is not clear that the salesman agent is in fact selling something. It seems sometimes that they are still doing chitchat but on a particular topic or asking for some action to be done (like one to be done by an intelligent speaker)
Does the review include a summary of the strengths of the paper?
no
This paper describes a self-generation framework for training ODD and TOD systems, as well as the prediction of the transition turns from ODD to TOD turns. The proposed framework makes use of self-chatting approaches using a SotA system, as well as fine-tuned versions of the chatbot to play the role of a salesman and a user. In addition, an interesting approach for predicting the best turn where to insert the transition and use of zero-shot approaches for intent detection complete the work done by the authors. Finally, human evaluations show the feasibility of the proposed approach. - The generated dataset is interesting for people working on dialogue evaluation - The proposed framework for self-generating the dataset is very interesting and the results are feasible - The proposed method for intent detection and transition turns are also correct and interesting - The paper is well written and clear to follow. Experimentation and human evaluation is correct - Although author state that components can be replaced by other models for flexibility, authors did not try any change or alternative in the paper to proof the robustness of the proposed framework. - Did authors tried using BlenderBot vs 2.0 with incorporated knowledge? it would be very interesting to see how the dialogs can be improved by using domain ontologies from the SGD dataset. - Although BlenderBot is finetuned on the SGD dataset, it is not clear how using more specific TOD chatbots can provide better results - Lines 159-162: Authors should provide more information about the type/number of personas created, and how the personas are used by the chatbot to generate the given responses. - It is not clear if authors also experimented with the usage of domain ontologies to avoid the generation of placeholders in the evaluated responses - Line 211: How many questions were created for this zero-shot intent classifier and what is the accuracy of this system? - Line 216: How many paraphrases were created for each question, and what was their quality rate? - Line 237: How critical was the finetuning process over the SQuad and CommonsenseQA models? - Line 254-257: How many templates were manually created? - Line 265: How the future utterances are used during evaluation? For the generation part, are the authors generating some sort of sentence embedding representation (similar to SkipThoughs) to learn the generation of the transition sentence? and is it the transition sentence one taken from the list of manual templates? ( In general, this section 2.2.2 is the one I have found less clear) - Merge SGD: Did authors select the TOD dialogue randomly from those containing the same intent/topic? did you tried some dialogue embedding from the ODD part and tried to select a TOD dialogue with a similar dialogue embedding? if not, this could be an idea to improve the quality of the dataset. this could also allow the usage of the lexicalized version of the SGD and avoids the generation of placeholders in the responses - Line 324: how the repeated dialogues are detected? - Line 356: how and how many sentences are finally selected from the 120 generated sentences? - Lines 402-404: How the additional transitions are generated? using the T5 model? how many times the manual sentences were selected vs the paraphrased ones? 
- The paper: Fusing task-oriented and open-domain dialogues in conversational agents is not included in the background section and it is important in the context of similar datasets - Probably the word salesman is misleading since by reading some of the generated dialogues in the appendixes, it is not clear that the salesman agent is in fact selling something. It seems sometimes that they are still doing chitchat but on a particular topic or asking for some action to be done (like one to be done by an intelligent speaker)
Does the review include a summary of the weaknesses of the paper?
yes
This paper describes a self-generation framework for training ODD and TOD systems, as well as the prediction of the transition turns from ODD to TOD turns. The proposed framework makes use of self-chatting approaches using a SotA system, as well as fine-tuned versions of the chatbot to play the role of a salesman and a user. In addition, an interesting approach for predicting the best turn where to insert the transition and use of zero-shot approaches for intent detection complete the work done by the authors. Finally, human evaluations show the feasibility of the proposed approach. - The generated dataset is interesting for people working on dialogue evaluation - The proposed framework for self-generating the dataset is very interesting and the results are feasible - The proposed method for intent detection and transition turns are also correct and interesting - The paper is well written and clear to follow. Experimentation and human evaluation is correct - The paper: Fusing task-oriented and open-domain dialogues in conversational agents is not included in the background section and it is important in the context of similar datasets - Probably the word salesman is misleading since by reading some of the generated dialogues in the appendixes, it is not clear that the salesman agent is in fact selling something. It seems sometimes that they are still doing chitchat but on a particular topic or asking for some action to be done (like one to be done by an intelligent speaker)
Does the review include a summary of the weaknesses of the paper?
no
This paper describes a self-generation framework for training ODD and TOD systems, as well as the prediction of the transition turns from ODD to TOD turns. The proposed framework makes use of self-chatting approaches using a SotA system, as well as fine-tuned versions of the chatbot to play the role of a salesman and a user. In addition, an interesting approach for predicting the best turn where to insert the transition and use of zero-shot approaches for intent detection complete the work done by the authors. Finally, human evaluations show the feasibility of the proposed approach. - The generated dataset is interesting for people working on dialogue evaluation - The proposed framework for self-generating the dataset is very interesting and the results are feasible - The proposed method for intent detection and transition turns are also correct and interesting - The paper is well written and clear to follow. Experimentation and human evaluation is correct - Although author state that components can be replaced by other models for flexibility, authors did not try any change or alternative in the paper to proof the robustness of the proposed framework. - Did authors tried using BlenderBot vs 2.0 with incorporated knowledge? it would be very interesting to see how the dialogs can be improved by using domain ontologies from the SGD dataset. - Although BlenderBot is finetuned on the SGD dataset, it is not clear how using more specific TOD chatbots can provide better results - Lines 159-162: Authors should provide more information about the type/number of personas created, and how the personas are used by the chatbot to generate the given responses. - It is not clear if authors also experimented with the usage of domain ontologies to avoid the generation of placeholders in the evaluated responses - Line 211: How many questions were created for this zero-shot intent classifier and what is the accuracy of this system? - Line 216: How many paraphrases were created for each question, and what was their quality rate? - Line 237: How critical was the finetuning process over the SQuad and CommonsenseQA models? - Line 254-257: How many templates were manually created? - Line 265: How the future utterances are used during evaluation? For the generation part, are the authors generating some sort of sentence embedding representation (similar to SkipThoughs) to learn the generation of the transition sentence? and is it the transition sentence one taken from the list of manual templates? ( In general, this section 2.2.2 is the one I have found less clear) - Merge SGD: Did authors select the TOD dialogue randomly from those containing the same intent/topic? did you tried some dialogue embedding from the ODD part and tried to select a TOD dialogue with a similar dialogue embedding? if not, this could be an idea to improve the quality of the dataset. this could also allow the usage of the lexicalized version of the SGD and avoids the generation of placeholders in the responses - Line 324: how the repeated dialogues are detected? - Line 356: how and how many sentences are finally selected from the 120 generated sentences? - Lines 402-404: How the additional transitions are generated? using the T5 model? how many times the manual sentences were selected vs the paraphrased ones? 
- The paper: Fusing task-oriented and open-domain dialogues in conversational agents is not included in the background section and it is important in the context of similar datasets - Probably the word salesman is misleading since by reading some of the generated dialogues in the appendixes, it is not clear that the salesman agent is in fact selling something. It seems sometimes that they are still doing chitchat but on a particular topic or asking for some action to be done (like one to be done by an intelligent speaker)
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper describes a self-generation framework for training ODD and TOD systems, as well as the prediction of the transition turns from ODD to TOD turns. The proposed framework makes use of self-chatting approaches using a SotA system, as well as fine-tuned versions of the chatbot to play the role of a salesman and a user. In addition, an interesting approach for predicting the best turn where to insert the transition and use of zero-shot approaches for intent detection complete the work done by the authors. Finally, human evaluations show the feasibility of the proposed approach. - The generated dataset is interesting for people working on dialogue evaluation - The proposed framework for self-generating the dataset is very interesting and the results are feasible - The proposed method for intent detection and transition turns are also correct and interesting - The paper is well written and clear to follow. Experimentation and human evaluation is correct - Although author state that components can be replaced by other models for flexibility, authors did not try any change or alternative in the paper to proof the robustness of the proposed framework. - Did authors tried using BlenderBot vs 2.0 with incorporated knowledge? it would be very interesting to see how the dialogs can be improved by using domain ontologies from the SGD dataset. - Although BlenderBot is finetuned on the SGD dataset, it is not clear how using more specific TOD chatbots can provide better results - Lines 159-162: Authors should provide more information about the type/number of personas created, and how the personas are used by the chatbot to generate the given responses. - It is not clear if authors also experimented with the usage of domain ontologies to avoid the generation of placeholders in the evaluated responses - Line 211: How many questions were created for this zero-shot intent classifier and what is the accuracy of this system? - Line 216: How many paraphrases were created for each question, and what was their quality rate? - Line 237: How critical was the finetuning process over the SQuad and CommonsenseQA models? - Line 254-257: How many templates were manually created? - Line 265: How the future utterances are used during evaluation? For the generation part, are the authors generating some sort of sentence embedding representation (similar to SkipThoughs) to learn the generation of the transition sentence? and is it the transition sentence one taken from the list of manual templates? ( In general, this section 2.2.2 is the one I have found less clear) - Merge SGD: Did authors select the TOD dialogue randomly from those containing the same intent/topic? did you tried some dialogue embedding from the ODD part and tried to select a TOD dialogue with a similar dialogue embedding? if not, this could be an idea to improve the quality of the dataset. this could also allow the usage of the lexicalized version of the SGD and avoids the generation of placeholders in the responses - Line 324: how the repeated dialogues are detected? - Line 356: how and how many sentences are finally selected from the 120 generated sentences? - Lines 402-404: How the additional transitions are generated? using the T5 model? how many times the manual sentences were selected vs the paraphrased ones?
Does the review mention any comments, suggestions or typos that the author should address?
no
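The review above highlights zero-shot intent detection as the trigger for switching from open-domain chit-chat to TOD mode, and the next reviews describe it as a QA-based detector. As a rough point of reference, the sketch below shows a generic zero-shot intent detector built on an off-the-shelf NLI model; it is not the reviewed paper's QA formulation, and the model name, candidate intents, and the 0.8 threshold are illustrative assumptions.

```python
# Generic sketch of zero-shot intent detection on a chit-chat turn, using an
# off-the-shelf NLI-based zero-shot classifier. This is NOT the reviewed paper's
# QA-based formulation; the model name, candidate intents, and the 0.8 threshold
# are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

intents = ["find a restaurant", "book a hotel", "play music", "check the weather"]
utterance = "I'm starving, I haven't eaten anything since this morning."

result = classifier(utterance, candidate_labels=intents, multi_label=True)
best_intent, best_score = result["labels"][0], result["scores"][0]

# Switch from open-domain chit-chat to task-oriented mode only when some intent
# is detected with sufficient confidence; otherwise keep chit-chatting.
if best_score > 0.8:
    print(f"Transition to TOD mode with intent: {best_intent}")
else:
    print("Stay in chit-chat mode.")
```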
The paper proposes a novel and automatic way of transitioning from chit-chat to task-oriented (TOD) mode. They leverage BlenderBot to automatically generate the dialogs with transitions and evaluate their approach using human evaluation. They automate the process using a series of ML models: 1) QA-based intent detection for automatically transitioning from chit-chat to task-oriented mode, 2) a T5 model to generate the transition turn, and 3) a BlenderBot model to simulate both chit-chat turns and TOD turns. - The proposed dataset could be useful for the broader dialog research community since there is a dearth of datasets blending chit-chat and task-oriented dialogs. - The paper is easy to follow and well-written. - The method seems general enough to be applicable to other task-oriented datasets with intent annotations. - More details on how exactly the topic-related chit-chat turns are generated would have strengthened the paper. What are the prompts provided to BlenderBot, and what is the impact of different prompts on the quality of the generated data? - Also, the BlenderBot details for TOD simulation could be expanded in Section 2.3. For instance, what is the impact of using mergeSGD vs. TOD simulation on the overall quality? - The paper seems to lack details on the performance of the intent detector and QA models and their impact on the quality of the generated dialogs. It would be nice to have an ablation study on the quality of dialogs using different intent detectors (including the data used to train them). During the transition turn, did the process also check whether the user is requesting more information or asking a question before switching to the TOD setting?
Does the review include a short summary of the paper?
yes
- The proposed dataset could be useful for the broader dialog research community since there is a dearth of datasets blending chit-chat and task-oriented dialogs. - The paper is easy to follow and well-written. - The method seems general enough to be applicable to other task-oriented datasets with intent annotations. - More details on how exactly the topic-related chit-chat turns are generated would have strengthened the paper. What are the prompts provided to BlenderBot, and what is the impact of different prompts on the quality of the generated data? - Also, the BlenderBot details for TOD simulation could be expanded in Section 2.3. For instance, what is the impact of using mergeSGD vs. TOD simulation on the overall quality? - The paper seems to lack details on the performance of the intent detector and QA models and their impact on the quality of the generated dialogs. It would be nice to have an ablation study on the quality of dialogs using different intent detectors (including the data used to train them). During the transition turn, did the process also check whether the user is requesting more information or asking a question before switching to the TOD setting?
Does the review include a short summary of the paper?
no
The paper proposes a novel and automatic way of transitioning from chit-chat to task-oriented (TOD) mode. They leverage BlenderBot to automatically generate the dialogs with transitions and evaluate their approach using human evaluation. They automate the process using a series of ML models: 1) QA-based intent detection for automatically transitioning from chit-chat to task-oriented mode, 2) a T5 model to generate the transition turn, and 3) a BlenderBot model to simulate both chit-chat turns and TOD turns. - The proposed dataset could be useful for the broader dialog research community since there is a dearth of datasets blending chit-chat and task-oriented dialogs. - The paper is easy to follow and well-written. - The method seems general enough to be applicable to other task-oriented datasets with intent annotations. - More details on how exactly the topic-related chit-chat turns are generated would have strengthened the paper. What are the prompts provided to BlenderBot, and what is the impact of different prompts on the quality of the generated data? - Also, the BlenderBot details for TOD simulation could be expanded in Section 2.3. For instance, what is the impact of using mergeSGD vs. TOD simulation on the overall quality? - The paper seems to lack details on the performance of the intent detector and QA models and their impact on the quality of the generated dialogs. It would be nice to have an ablation study on the quality of dialogs using different intent detectors (including the data used to train them). During the transition turn, did the process also check whether the user is requesting more information or asking a question before switching to the TOD setting?
Does the review include a summary of the strengths of the paper?
yes
The paper proposes a novel and automatic way of transitioning from chit-chat to task-oriented (TOD) mode. They leverage BlenderBot to automatically generate the dialogs with transitions and evaluate their approach using human evaluation. They automate the process using a series of ML models: 1) QA-based intent detection for automatically transitioning from chit-chat to task-oriented mode, 2) a T5 model to generate the transition turn, and 3) a BlenderBot model to simulate both chit-chat turns and TOD turns. - More details on how exactly the topic-related chit-chat turns are generated would have strengthened the paper. What are the prompts provided to BlenderBot, and what is the impact of different prompts on the quality of the generated data? - Also, the BlenderBot details for TOD simulation could be expanded in Section 2.3. For instance, what is the impact of using mergeSGD vs. TOD simulation on the overall quality? - The paper seems to lack details on the performance of the intent detector and QA models and their impact on the quality of the generated dialogs. It would be nice to have an ablation study on the quality of dialogs using different intent detectors (including the data used to train them). During the transition turn, did the process also check whether the user is requesting more information or asking a question before switching to the TOD setting?
Does the review include a summary of the strengths of the paper?
no
The paper proposes a novel and automatic way of transitioning from chit-chat to task-oriented (TOD) mode. They leverage BlenderBot to automatically generate the dialogs with transitions and evaluate their approach using human evaluation. They automate the process using a series of ML models: 1) QA-based intent detection for automatically transitioning from chit-chat to task-oriented mode, 2) a T5 model to generate the transition turn, and 3) a BlenderBot model to simulate both chit-chat turns and TOD turns. - The proposed dataset could be useful for the broader dialog research community since there is a dearth of datasets blending chit-chat and task-oriented dialogs. - The paper is easy to follow and well-written. - The method seems general enough to be applicable to other task-oriented datasets with intent annotations. - More details on how exactly the topic-related chit-chat turns are generated would have strengthened the paper. What are the prompts provided to BlenderBot, and what is the impact of different prompts on the quality of the generated data? - Also, the BlenderBot details for TOD simulation can be expanded in section 2.3. For instance, what is the impact of using mergeSGD vs. TOD simulation on the overall quality? - The paper seems to lack details on the performance of the intent detector and QA models and their impact on the quality of the generated dialogs. It would be nice to have an ablation study on the quality of dialogs using different intent detectors (including the data used to train them). During the transition turn, did the process also check whether the user is requesting more information or asking a question before switching to the TOD setting?
Does the review include a summary of the weaknesses of the paper?
yes
The paper proposes a novel and automatic way of transitioning from chit-chat to task-oriented (TOD) mode. They leverage BlenderBot to automatically generate the dialogs with transitions and evaluate their approach using human evaluation. They automate the process using a series of ML models: 1) QA-based intent detection for automatically transitioning from chit-chat to task-oriented mode, 2) a T5 model to generate the transition turn, and 3) a BlenderBot model to simulate both chit-chat turns and TOD turns. - The proposed dataset could be useful for the broader dialog research community since there is a dearth of datasets blending chit-chat and task-oriented dialogs. - The paper is easy to follow and well-written. - The method seems general enough to be applicable to other task-oriented datasets with intent annotations. During the transition turn, did the process also check whether the user is requesting more information or asking a question before switching to the TOD setting?
Does the review include a summary of the weaknesses of the paper?
no
The paper proposes a novel and automatic way of transitioning from chit-chat to task-oriented (TOD) mode. They leverage BlenderBot to automatically generate the dialogs with transitions and evaluate their approach using human evaluation. They automate the process using a series of ML models: 1) QA-based intent detection for automatically transitioning from chit-chat to task-oriented mode, 2) a T5 model to generate the transition turn, and 3) a BlenderBot model to simulate both chit-chat turns and TOD turns. - The proposed dataset could be useful for the broader dialog research community since there is a dearth of datasets blending chit-chat and task-oriented dialogs. - The paper is easy to follow and well-written. - The method seems general enough to be applicable to other task-oriented datasets with intent annotations. - More details on how exactly the topic-related chit-chat turns are generated would have strengthened the paper. What are the prompts provided to BlenderBot, and what is the impact of different prompts on the quality of the generated data? - Also, the BlenderBot details for TOD simulation can be expanded in section 2.3. For instance, what is the impact of using mergeSGD vs. TOD simulation on the overall quality? - The paper seems to lack details on the performance of the intent detector and QA models and their impact on the quality of the generated dialogs. It would be nice to have an ablation study on the quality of dialogs using different intent detectors (including the data used to train them). During the transition turn, did the process also check whether the user is requesting more information or asking a question before switching to the TOD setting?
Does the review mention any comments, suggestions or typos that the author should address?
yes
The paper proposes a novel and automatic way of transitioning from chit-chat to task-oriented (TOD) mode. They leverage BlenderBot to automatically generate the dialogs with transitions and evaluate their approach using human evaluation. They automate the process using a series of ML models: 1) QA-based intent detection for automatically transitioning from chit-chat to task-oriented mode, 2) a T5 model to generate the transition turn, and 3) a BlenderBot model to simulate both chit-chat turns and TOD turns. - The proposed dataset could be useful for the broader dialog research community since there is a dearth of datasets blending chit-chat and task-oriented dialogs. - The paper is easy to follow and well-written. - The method seems general enough to be applicable to other task-oriented datasets with intent annotations. - More details on how exactly the topic-related chit-chat turns are generated would have strengthened the paper. What are the prompts provided to BlenderBot, and what is the impact of different prompts on the quality of the generated data? - Also, the BlenderBot details for TOD simulation can be expanded in section 2.3. For instance, what is the impact of using mergeSGD vs. TOD simulation on the overall quality? - The paper seems to lack details on the performance of the intent detector and QA models and their impact on the quality of the generated dialogs. It would be nice to have an ablation study on the quality of dialogs using different intent detectors (including the data used to train them).
Does the review mention any comments, suggestions or typos that the author should address?
no
This paper focuses on transitioning from chit-chat to task-oriented dialogues. This is important for triggering business opportunities. Specifically, the authors propose a framework to automatically generate dialogues, which start from open-domain social chatting and then gradually transition to task-oriented dialogs. The human evaluation shows that the automatically generated dialogues have reasonable quality with natural conversation flows from a business point of view. 1. Combining Chit-Chat and Task-Oriented Dialogues is a less studied direction. 2. Generating dialogs automatically can be useful in industry. 1. From the perspective of research, since the released dataset is automatically generated without further manual revision or annotation, it is hard to say that this work proposes a new research task. Furthermore, the two difficult problems (implicit intent detection and transition utterance generation) deserve a closer study. 2. From the perspective of models, the proposed method consists of a list of sub-models. The whole process is too trivial. The authors target an important problem that tries to combine open-domain dialog and task-oriented dialog. Specifically, they provide a clear objective for this combination, i.e., triggering business opportunities. There are at least three basic sub-problems: (1) How to capture the interaction between the two types of dialogs? (2) How to determine the turning point? (3) How to transition properly? All of these sub-problems need to be treated more systematically and carefully. Other suggestions: (1) The introduction is a little wordy and sometimes confusing. For example, in line 78, "the conversation starts without any specific goal". This is confusing, since if a user has no specific goal, why does he or she chat with a salesperson? (2) This paper is highly related to dialog recommendation. In lines 505-512, the authors claim "such systems is to only make entity recommendations instead of tasks...". From my point of view, there is considerable overlap between "make entity recommendations" and "transferring from chit-chat to task-oriented dialogues and completing a task the user may want". Specifically, Liu 2020 also combined chit-chat and task-oriented dialog and achieved a chit-chat to task-oriented transition. Then, the only difference lies in completing the task, which is not the focus of this paper.
Does the review include a short summary of the paper?
yes
1. Combining Chit-Chat and Task-Oriented Dialogues is a less studied direction. 2. Generating dialogs automatically can be useful in industry. 1. From the perspective of research, since the released dataset is automatically generated without further manual revision or annotation, it is hard to say that this work proposes a new research task. Furthermore, the two difficult problems (implicit intent detection and transition utterance generation) deserve a closer study. 2. From the perspective of models, the proposed method consists of a list of sub-models. The whole process is too trivial. The authors target an important problem that tries to combine open-domain dialog and task-oriented dialog. Specifically, they provide a clear objective for this combination, i.e., triggering business opportunities. There are at least three basic sub-problems: (1) How to capture the interaction between the two types of dialogs? (2) How to determine the turning point? (3) How to transition properly? All of these sub-problems need to be treated more systematically and carefully. Other suggestions: (1) The introduction is a little wordy and sometimes confusing. For example, in line 78, "the conversation starts without any specific goal". This is confusing, since if a user has no specific goal, why does he or she chat with a salesperson? (2) This paper is highly related to dialog recommendation. In lines 505-512, the authors claim "such systems is to only make entity recommendations instead of tasks...". From my point of view, there is considerable overlap between "make entity recommendations" and "transferring from chit-chat to task-oriented dialogues and completing a task the user may want". Specifically, Liu 2020 also combined chit-chat and task-oriented dialog and achieved a chit-chat to task-oriented transition. Then, the only difference lies in completing the task, which is not the focus of this paper.
Does the review include a short summary of the paper?
no
This paper focuses on transitioning from chit-chat to task-oriented dialogues. This is important for triggering business opportunities. Specifically, the authors propose a framework to automatically generate dialogues, which start from open-domain social chatting and then gradually transition to task-oriented dialogs. The human evaluation shows that the automatically generated dialogues have reasonable quality with natural conversation flows from a business point of view. 1. Combining Chit-Chat and Task-Oriented Dialogues is a less studied direction. 2. Generating dialogs automatically can be useful in industry. 1. From the perspective of research, since the released dataset is automatically generated without further manual revision or annotation, it is hard to say that this work proposes a new research task. Furthermore, the two difficult problems (implicit intent detection and transition utterance generation) deserve a closer study. 2. From the perspective of models, the proposed method consists of a list of sub-models. The whole process is too trivial. The authors target an important problem that tries to combine open-domain dialog and task-oriented dialog. Specifically, they provide a clear objective for this combination, i.e., triggering business opportunities. There are at least three basic sub-problems: (1) How to capture the interaction between the two types of dialogs? (2) How to determine the turning point? (3) How to transition properly? All of these sub-problems need to be treated more systematically and carefully. Other suggestions: (1) The introduction is a little wordy and sometimes confusing. For example, in line 78, "the conversation starts without any specific goal". This is confusing, since if a user has no specific goal, why does he or she chat with a salesperson? (2) This paper is highly related to dialog recommendation. In lines 505-512, the authors claim "such systems is to only make entity recommendations instead of tasks...". From my point of view, there is considerable overlap between "make entity recommendations" and "transferring from chit-chat to task-oriented dialogues and completing a task the user may want". Specifically, Liu 2020 also combined chit-chat and task-oriented dialog and achieved a chit-chat to task-oriented transition. Then, the only difference lies in completing the task, which is not the focus of this paper.
Does the review include a summary of the strengths of the paper?
yes
This paper focuses on transitioning from chit-chat to task-oriented dialogues. This is important for triggering business opportunities. Specifically, the authors propose a framework to automatically generate dialogues, which start from open-domain social chatting and then gradually transition to task-oriented dialogs. The human evaluation shows that the automatically generated dialogues have reasonable quality with natural conversation flows from a business point of view. 1. From the perspective of research, since the released dataset is automatically generated without further manual revision or annotation, it is hard to say that this work proposes a new research task. Furthermore, the two difficult problems (implicit intent detection and transition utterance generation) deserve a closer study. 2. From the perspective of models, the proposed method consists of a list of sub-models. The whole process is too trivial. The authors target an important problem that tries to combine open-domain dialog and task-oriented dialog. Specifically, they provide a clear objective for this combination, i.e., triggering business opportunities. There are at least three basic sub-problems: (1) How to capture the interaction between the two types of dialogs? (2) How to determine the turning point? (3) How to transition properly? All of these sub-problems need to be treated more systematically and carefully. Other suggestions: (1) The introduction is a little wordy and sometimes confusing. For example, in line 78, "the conversation starts without any specific goal". This is confusing, since if a user has no specific goal, why does he or she chat with a salesperson? (2) This paper is highly related to dialog recommendation. In lines 505-512, the authors claim "such systems is to only make entity recommendations instead of tasks...". From my point of view, there is considerable overlap between "make entity recommendations" and "transferring from chit-chat to task-oriented dialogues and completing a task the user may want". Specifically, Liu 2020 also combined chit-chat and task-oriented dialog and achieved a chit-chat to task-oriented transition. Then, the only difference lies in completing the task, which is not the focus of this paper.
Does the review include a summary of the strengths of the paper?
no
This paper focuses on transitioning from chit-chat to task-oriented dialogues. This is important for triggering business opportunities. Specifically, the authors propose a framework to automatically generate dialogues, which start from open-domain social chatting and then gradually transition to task-oriented dialogs. The human evaluation shows that the automatically generated dialogues have reasonable quality with natural conversation flows from a business point of view. 1. Combining Chit-Chat and Task-Oriented Dialogues is a less studied direction. 2. Generating dialogs automatically can be useful in industry. 1. From the perspective of research, since the released dataset is automatically generated without further manual revision or annotation, it is hard to say that this work proposes a new research task. Furthermore, the two difficult problems (implicit intent detection and transition utterance generation) deserve a closer study. 2. From the perspective of models, the proposed method consists of a list of sub-models. The whole process is too trivial. The authors target an important problem that tries to combine open-domain dialog and task-oriented dialog. Specifically, they provide a clear objective for this combination, i.e., triggering business opportunities. There are at least three basic sub-problems: (1) How to capture the interaction between the two types of dialogs? (2) How to determine the turning point? (3) How to transition properly? All of these sub-problems need to be treated more systematically and carefully. Other suggestions: (1) The introduction is a little wordy and sometimes confusing. For example, in line 78, "the conversation starts without any specific goal". This is confusing, since if a user has no specific goal, why does he or she chat with a salesperson? (2) This paper is highly related to dialog recommendation. In lines 505-512, the authors claim "such systems is to only make entity recommendations instead of tasks...". From my point of view, there is considerable overlap between "make entity recommendations" and "transferring from chit-chat to task-oriented dialogues and completing a task the user may want". Specifically, Liu 2020 also combined chit-chat and task-oriented dialog and achieved a chit-chat to task-oriented transition. Then, the only difference lies in completing the task, which is not the focus of this paper.
Does the review include a summary of the weaknesses of the paper?
yes
This paper focuses on transitioning from chit-chat to task-oriented dialogues. This is important for triggering business opportunities. Specifically, the authors propose a framework to automatically generate dialogues, which start from open-domain social chatting and then gradually transition to task-oriented dialogs. The human evaluation shows that the automatically generated dialogues have reasonable quality with natural conversation flows from a business point of view. 1. Combining Chit-Chat and Task-Oriented Dialogues is a less studied direction. 2. Generating dialogs automatically can be useful in industry. The authors target an important problem that tries to combine open-domain dialog and task-oriented dialog. Specifically, they provide a clear objective for this combination, i.e., triggering business opportunities. There are at least three basic sub-problems: (1) How to capture the interaction between the two types of dialogs? (2) How to determine the turning point? (3) How to transition properly? All of these sub-problems need to be treated more systematically and carefully. Other suggestions: (1) The introduction is a little wordy and sometimes confusing. For example, in line 78, "the conversation starts without any specific goal". This is confusing, since if a user has no specific goal, why does he or she chat with a salesperson? (2) This paper is highly related to dialog recommendation. In lines 505-512, the authors claim "such systems is to only make entity recommendations instead of tasks...". From my point of view, there is considerable overlap between "make entity recommendations" and "transferring from chit-chat to task-oriented dialogues and completing a task the user may want". Specifically, Liu 2020 also combined chit-chat and task-oriented dialog and achieved a chit-chat to task-oriented transition. Then, the only difference lies in completing the task, which is not the focus of this paper.
Does the review include a summary of the weaknesses of the paper?
no
This paper focuses on transitioning from chit-chat to task-oriented dialogues. This is important for triggering business opportunities. Specifically, the authors propose a framework to automatically generate dialogues, which start from open-domain social chatting and then gradually transition to task-oriented dialogs. The human evaluation shows that the automatically generated dialogues have reasonable quality with natural conversation flows from a business point of view. 1. Combining Chit-Chat and Task-Oriented Dialogues is a less studied direction. 2. Generating dialogs automatically can be useful in industry. 1. From the perspective of research, since the released dataset is automatically generated without further manual revision or annotation, it is hard to say that this work proposes a new research task. Furthermore, the two difficult problems (implicit intent detection and transition utterance generation) deserve a closer study. 2. From the perspective of models, the proposed method consists of a list of sub-models. The whole process is too trivial. The authors target an important problem that tries to combine open-domain dialog and task-oriented dialog. Specifically, they provide a clear objective for this combination, i.e., triggering business opportunities. There are at least three basic sub-problems: (1) How to capture the interaction between the two types of dialogs? (2) How to determine the turning point? (3) How to transition properly? All of these sub-problems need to be treated more systematically and carefully. Other suggestions: (1) The introduction is a little wordy and sometimes confusing. For example, in line 78, "the conversation starts without any specific goal". This is confusing, since if a user has no specific goal, why does he or she chat with a salesperson? (2) This paper is highly related to dialog recommendation. In lines 505-512, the authors claim "such systems is to only make entity recommendations instead of tasks...". From my point of view, there is considerable overlap between "make entity recommendations" and "transferring from chit-chat to task-oriented dialogues and completing a task the user may want". Specifically, Liu 2020 also combined chit-chat and task-oriented dialog and achieved a chit-chat to task-oriented transition. Then, the only difference lies in completing the task, which is not the focus of this paper.
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper focuses on transitioning from chit-chat to task-oriented dialogues. This is important for triggering business opportunities. Specifically, the authors propose a framework to automatically generate dialogues, which start from open-domain social chatting and then gradually transition to task-oriented dialogs. The human evaluation shows that the automatically generated dialogues have reasonable quality with natural conversation flows from a business point of view. 1. Combining Chit-Chat and Task-Oriented Dialogues is a less studied direction. 2. Generating dialogs automatically can be useful in industry. 1. From the perspective of research, since the released dataset is automatically generated without further manual revision or annotation, it is hard to say that this work proposes a new research task. Furthermore, the two difficult problems (implicit intent detection and transition utterance generation) deserve a closer study. 2. From the perspective of models, the proposed method consists of a list of sub-models. The whole process is too trivial.
Does the review mention any comments, suggestions or typos that the author should address?
no
This work proposes to frame the task of controlled text generation as sampling from a combination of black-box models that are responsible for various components of interest such as fluency, the controlled attribute, and relevance to the prompt. Pre-trained black-box experts provide scores for each of these desired components that are linearly combined to form an "energy-based" model, from which samples can be drawn without needing any task-specific training. Experiments are conducted on several tasks such as controllable debiasing, sentiment and formality transfer and prompted generation, and results show that this method outperforms task-specific baselines. 1. Paper is well-written and easy to follow. 2. The idea of having a modular controlled text generation model that uses blackbox components is interesting and novel. The approach can be adapted to any desired components or metrics by using the appropriate pre-trained expert models, and does not need any additional task-specific training. 3. Several experiments are conducted to show that their model outperforms task-specific baselines on multiple tasks. Ablation results are also shown which help assess the contributions of each component. 4. Human evaluations are conducted to demonstrate the superior quality of the generated text. Some limitations of the model are also briefly discussed. 1. One of the main drawbacks of this approach is that presumably the different component black-box experts of the controlled text generation have to be manually selected and the weighted linear combination has to be fine-tuned for each task. It is also not discussed if the inference time is significantly affected by this approach. 2. For the sentiment transfer task, the model with the higher Hamming distance coefficient is considered to be the best model based on the BertScore with respect to the source, which essentially measures how much deviation has been introduced. It appears however that the model with the higher Discriminator coefficient is better, in terms of perplexity and the internal/external classifiers. Given that the Hamming distance in the reference is much higher, it may not be necessary to absolutely reduce the number of changes made, if it serves the overall purpose of the text generation to make more changes. This is somewhat true for the formality transfer task as well. 3. In Table 3, for the formality transfer task, the method sees a decline in performance for the ->Informal task. While the improvement in the ->Formal task is probably a decent tradeoff, this issue is not addressed at all. 4. Percentage preference through majority voting is reported for the human evaluation. More robust correlation/agreement metrics such as Cohen's Kappa should be reported for reliability. - BertScore and BLEURT are inconsistently typeset through the paper (alternatively as Bertscore or Bleurt). It would be better to maintain consistency. - Line 244 in Section 2.3 refers to $E_{gen}$ and $E_{rev}$ which have not been previously introduced. It is not easy to deduce what they mean since they are not explained until the next section. Some re-writing for clarity might help here. - Line 182: discirminate: discriminate - Line 203: This penalization token -> This penalizes token - Line 254: describe -> described - Line 376: Dathathri et al. 
(2020) -> (Dathathri et al, 2020) - Line 434: Ma et al citation missing year - Line 449: describedd -> described - Line 449: in the text -> in a text - Line 520: prodduct -> product - Table 3 BertScore(sc) -> BertScore (src) - Line 573: which use for -> which are used for - Line 631: similar, approaches -> similar approaches
Does the review include a short summary of the paper?
yes
1. Paper is well-written and easy to follow. 2. The idea of having a modular controlled text generation model that uses blackbox components is interesting and novel. The approach can be adapted to any desired components or metrics by using the appropriate pre-trained expert models, and does not need any additional task-specific training. 3. Several experiments are conducted to show that their model outperforms task-specific baselines on multiple tasks. Ablation results are also shown which help assess the contributions of each component. 4. Human evaluations are conducted to demonstrate the superior quality of the generated text. Some limitations of the model are also briefly discussed. 1. One of the main drawbacks of this approach is that presumably the different component black-box experts of the controlled text generation have to be manually selected and the weighted linear combination has to be fine-tuned for each task. It is also not discussed if the inference time is significantly affected by this approach. 2. For the sentiment transfer task, the model with the higher Hamming distance coefficient is considered to be the best model based on the BertScore with respect to the source, which essentially measures how much deviation has been introduced. It appears however that the model with the higher Discriminator coefficient is better, in terms of perplexity and the internal/external classifiers. Given that the Hamming distance in the reference is much higher, it may not be necessary to absolutely reduce the number of changes made, if it serves the overall purpose of the text generation to make more changes. This is somewhat true for the formality transfer task as well. 3. In Table 3, for the formality transfer task, the method sees a decline in performance for the ->Informal task. While the improvement in the ->Formal task is probably a decent tradeoff, this issue is not addressed at all. 4. Percentage preference through majority voting is reported for the human evaluation. More robust correlation/agreement metrics such as Cohen's Kappa should be reported for reliability. - BertScore and BLEURT are inconsistently typeset through the paper (alternatively as Bertscore or Bleurt). It would be better to maintain consistency. - Line 244 in Section 2.3 refers to $E_{gen}$ and $E_{rev}$ which have not been previously introduced. It is not easy to deduce what they mean since they are not explained until the next section. Some re-writing for clarity might help here. - Line 182: discirminate: discriminate - Line 203: This penalization token -> This penalizes token - Line 254: describe -> described - Line 376: Dathathri et al. (2020) -> (Dathathri et al, 2020) - Line 434: Ma et al citation missing year - Line 449: describedd -> described - Line 449: in the text -> in a text - Line 520: prodduct -> product - Table 3 BertScore(sc) -> BertScore (src) - Line 573: which use for -> which are used for - Line 631: similar, approaches -> similar approaches
Does the review include a short summary of the paper?
no
This work proposes to frame the task of controlled text generation as sampling from a combination of black-box models that are responsible for various components of interest such as fluency, the controlled attribute, and relevance to the prompt. Pre-trained black-box experts provide scores for each of these desired components that are linearly combined to form an "energy-based" model, from which samples can be drawn without needing any task-specific training. Experiments are conducted on several tasks such as controllable debiasing, sentiment and formality transfer and prompted generation, and results show that this method outperforms task-specific baselines. 1. Paper is well-written and easy to follow. 2. The idea of having a modular controlled text generation model that uses blackbox components is interesting and novel. The approach can be adapted to any desired components or metrics by using the appropriate pre-trained expert models, and does not need any additional task-specific training. 3. Several experiments are conducted to show that their model outperforms task-specific baselines on multiple tasks. Ablation results are also shown which help assess the contributions of each component. 4. Human evaluations are conducted to demonstrate the superior quality of the generated text. Some limitations of the model are also briefly discussed. 1. One of the main drawbacks of this approach is that presumably the different component black-box experts of the controlled text generation have to be manually selected and the weighted linear combination has to be fine-tuned for each task. It is also not discussed if the inference time is significantly affected by this approach. 2. For the sentiment transfer task, the model with the higher Hamming distance coefficient is considered to be the best model based on the BertScore with respect to the source, which essentially measures how much deviation has been introduced. It appears however that the model with the higher Discriminator coefficient is better, in terms of perplexity and the internal/external classifiers. Given that the Hamming distance in the reference is much higher, it may not be necessary to absolutely reduce the number of changes made, if it serves the overall purpose of the text generation to make more changes. This is somewhat true for the formality transfer task as well. 3. In Table 3, for the formality transfer task, the method sees a decline in performance for the ->Informal task. While the improvement in the ->Formal task is probably a decent tradeoff, this issue is not addressed at all. 4. Percentage preference through majority voting is reported for the human evaluation. More robust correlation/agreement metrics such as Cohen's Kappa should be reported for reliability. - BertScore and BLEURT are inconsistently typeset through the paper (alternatively as Bertscore or Bleurt). It would be better to maintain consistency. - Line 244 in Section 2.3 refers to $E_{gen}$ and $E_{rev}$ which have not been previously introduced. It is not easy to deduce what they mean since they are not explained until the next section. Some re-writing for clarity might help here. - Line 182: discirminate: discriminate - Line 203: This penalization token -> This penalizes token - Line 254: describe -> described - Line 376: Dathathri et al. 
(2020) -> (Dathathri et al, 2020) - Line 434: Ma et al citation missing year - Line 449: describedd -> described - Line 449: in the text -> in a text - Line 520: prodduct -> product - Table 3 BertScore(sc) -> BertScore (src) - Line 573: which use for -> which are used for - Line 631: similar, approaches -> similar approaches
Does the review include a summary of the strengths of the paper?
yes
This work proposes to frame the task of controlled text generation as sampling from a combination of black-box models that are responsible for various components of interest such as fluency, the controlled attribute, and relevance to the prompt. Pre-trained black-box experts provide scores for each of these desired components that are linearly combined to form an "energy-based" model, from which samples can be drawn without needing any task-specific training. Experiments are conducted on several tasks such as controllable debiasing, sentiment and formality transfer and prompted generation, and results show that this method outperforms task-specific baselines. 1. One of the main drawbacks of this approach is that presumably the different component black-box experts of the controlled text generation have to be manually selected and the weighted linear combination has to be fine-tuned for each task. It is also not discussed if the inference time is significantly affected by this approach. 2. For the sentiment transfer task, the model with the higher Hamming distance coefficient is considered to be the best model based on the BertScore with respect to the source, which essentially measures how much deviation has been introduced. It appears however that the model with the higher Discriminator coefficient is better, in terms of perplexity and the internal/external classifiers. Given that the Hamming distance in the reference is much higher, it may not be necessary to absolutely reduce the number of changes made, if it serves the overall purpose of the text generation to make more changes. This is somewhat true for the formality transfer task as well. 3. In Table 3, for the formality transfer task, the method sees a decline in performance for the ->Informal task. While the improvement in the ->Formal task is probably a decent tradeoff, this issue is not addressed at all. 4. Percentage preference through majority voting is reported for the human evaluation. More robust correlation/agreement metrics such as Cohen's Kappa should be reported for reliability. - BertScore and BLEURT are inconsistently typeset through the paper (alternatively as Bertscore or Bleurt). It would be better to maintain consistency. - Line 244 in Section 2.3 refers to $E_{gen}$ and $E_{rev}$ which have not been previously introduced. It is not easy to deduce what they mean since they are not explained until the next section. Some re-writing for clarity might help here. - Line 182: discirminate: discriminate - Line 203: This penalization token -> This penalizes token - Line 254: describe -> described - Line 376: Dathathri et al. (2020) -> (Dathathri et al, 2020) - Line 434: Ma et al citation missing year - Line 449: describedd -> described - Line 449: in the text -> in a text - Line 520: prodduct -> product - Table 3 BertScore(sc) -> BertScore (src) - Line 573: which use for -> which are used for - Line 631: similar, approaches -> similar approaches
Does the review include a summary of the strengths of the paper?
no
This work proposes to frame the task of controlled text generation as sampling from a combination of black-box models that are responsible for various components of interest such as fluency, the controlled attribute, and relevance to the prompt. Pre-trained black-box experts provide scores for each of these desired components that are linearly combined to form an "energy-based" model, from which samples can be drawn without needing any task-specific training. Experiments are conducted on several tasks such as controllable debiasing, sentiment and formality transfer and prompted generation, and results show that this method outperforms task-specific baselines. 1. Paper is well-written and easy to follow. 2. The idea of having a modular controlled text generation model that uses blackbox components is interesting and novel. The approach can be adapted to any desired components or metrics by using the appropriate pre-trained expert models, and does not need any additional task-specific training. 3. Several experiments are conducted to show that their model outperforms task-specific baselines on multiple tasks. Ablation results are also shown which help assess the contributions of each component. 4. Human evaluations are conducted to demonstrate the superior quality of the generated text. Some limitations of the model are also briefly discussed. 1. One of the main drawbacks of this approach is that presumably the different component black-box experts of the controlled text generation have to be manually selected and the weighted linear combination has to be fine-tuned for each task. It is also not discussed if the inference time is significantly affected by this approach. 2. For the sentiment transfer task, the model with the higher Hamming distance coefficient is considered to be the best model based on the BertScore with respect to the source, which essentially measures how much deviation has been introduced. It appears however that the model with the higher Discriminator coefficient is better, in terms of perplexity and the internal/external classifiers. Given that the Hamming distance in the reference is much higher, it may not be necessary to absolutely reduce the number of changes made, if it serves the overall purpose of the text generation to make more changes. This is somewhat true for the formality transfer task as well. 3. In Table 3, for the formality transfer task, the method sees a decline in performance for the ->Informal task. While the improvement in the ->Formal task is probably a decent tradeoff, this issue is not addressed at all. 4. Percentage preference through majority voting is reported for the human evaluation. More robust correlation/agreement metrics such as Cohen's Kappa should be reported for reliability. - BertScore and BLEURT are inconsistently typeset through the paper (alternatively as Bertscore or Bleurt). It would be better to maintain consistency. - Line 244 in Section 2.3 refers to $E_{gen}$ and $E_{rev}$ which have not been previously introduced. It is not easy to deduce what they mean since they are not explained until the next section. Some re-writing for clarity might help here. - Line 182: discirminate: discriminate - Line 203: This penalization token -> This penalizes token - Line 254: describe -> described - Line 376: Dathathri et al. 
(2020) -> (Dathathri et al, 2020) - Line 434: Ma et al citation missing year - Line 449: describedd -> described - Line 449: in the text -> in a text - Line 520: prodduct -> product - Table 3 BertScore(sc) -> BertScore (src) - Line 573: which use for -> which are used for - Line 631: similar, approaches -> similar approaches
Does the review include a summary of the weaknesses of the paper?
yes
This work proposes to frame the task of controlled text generation as sampling from a combination of black-box models that are responsible for various components of interest such as fluency, the controlled attribute, and relevance to the prompt. Pre-trained black-box experts provide scores for each of these desired components that are linearly combined to form an "energy-based" model, from which samples can be drawn without needing any task-specific training. Experiments are conducted on several tasks such as controllable debiasing, sentiment and formality transfer and prompted generation, and results show that this method outperforms task-specific baselines. 1. Paper is well-written and easy to follow. 2. The idea of having a modular controlled text generation model that uses blackbox components is interesting and novel. The approach can be adapted to any desired components or metrics by using the appropriate pre-trained expert models, and does not need any additional task-specific training. 3. Several experiments are conducted to show that their model outperforms task-specific baselines on multiple tasks. Ablation results are also shown which help assess the contributions of each component. 4. Human evaluations are conducted to demonstrate the superior quality of the generated text. Some limitations of the model are also briefly discussed. - BertScore and BLEURT are inconsistently typeset through the paper (alternatively as Bertscore or Bleurt). It would be better to maintain consistency. - Line 244 in Section 2.3 refers to $E_{gen}$ and $E_{rev}$ which have not been previously introduced. It is not easy to deduce what they mean since they are not explained until the next section. Some re-writing for clarity might help here. - Line 182: discirminate: discriminate - Line 203: This penalization token -> This penalizes token - Line 254: describe -> described - Line 376: Dathathri et al. (2020) -> (Dathathri et al, 2020) - Line 434: Ma et al citation missing year - Line 449: describedd -> described - Line 449: in the text -> in a text - Line 520: prodduct -> product - Table 3 BertScore(sc) -> BertScore (src) - Line 573: which use for -> which are used for - Line 631: similar, approaches -> similar approaches
Does the review include a summary of the weaknesses of the paper?
no
This work proposes to frame the task of controlled text generation as sampling from a combination of black-box models that are responsible for various components of interest such as fluency, the controlled attribute, and relevance to the prompt. Pre-trained black-box experts provide scores for each of these desired components that are linearly combined to form an "energy-based" model, from which samples can be drawn without needing any task-specific training. Experiments are conducted on several tasks such as controllable debiasing, sentiment and formality transfer and prompted generation, and results show that this method outperforms task-specific baselines. 1. Paper is well-written and easy to follow. 2. The idea of having a modular controlled text generation model that uses blackbox components is interesting and novel. The approach can be adapted to any desired components or metrics by using the appropriate pre-trained expert models, and does not need any additional task-specific training. 3. Several experiments are conducted to show that their model outperforms task-specific baselines on multiple tasks. Ablation results are also shown which help assess the contributions of each component. 4. Human evaluations are conducted to demonstrate the superior quality of the generated text. Some limitations of the model are also briefly discussed. 1. One of the main drawbacks of this approach is that presumably the different component black-box experts of the controlled text generation have to be manually selected and the weighted linear combination has to be fine-tuned for each task. It is also not discussed if the inference time is significantly affected by this approach. 2. For the sentiment transfer task, the model with the higher Hamming distance coefficient is considered to be the best model based on the BertScore with respect to the source, which essentially measures how much deviation has been introduced. It appears however that the model with the higher Discriminator coefficient is better, in terms of perplexity and the internal/external classifiers. Given that the Hamming distance in the reference is much higher, it may not be necessary to absolutely reduce the number of changes made, if it serves the overall purpose of the text generation to make more changes. This is somewhat true for the formality transfer task as well. 3. In Table 3, for the formality transfer task, the method sees a decline in performance for the ->Informal task. While the improvement in the ->Formal task is probably a decent tradeoff, this issue is not addressed at all. 4. Percentage preference through majority voting is reported for the human evaluation. More robust correlation/agreement metrics such as Cohen's Kappa should be reported for reliability. - BertScore and BLEURT are inconsistently typeset through the paper (alternatively as Bertscore or Bleurt). It would be better to maintain consistency. - Line 244 in Section 2.3 refers to $E_{gen}$ and $E_{rev}$ which have not been previously introduced. It is not easy to deduce what they mean since they are not explained until the next section. Some re-writing for clarity might help here. - Line 182: discirminate: discriminate - Line 203: This penalization token -> This penalizes token - Line 254: describe -> described - Line 376: Dathathri et al. 
(2020) -> (Dathathri et al, 2020) - Line 434: Ma et al citation missing year - Line 449: describedd -> described - Line 449: in the text -> in a text - Line 520: prodduct -> product - Table 3 BertScore(sc) -> BertScore (src) - Line 573: which use for -> which are used for - Line 631: similar, approaches -> similar approaches
Does the review mention any comments, suggestions or typos that the author should address?
yes
This work proposes to frame the task of controlled text generation as sampling from a combination of black-box models that are responsible for various components of interest such as fluency, the controlled attribute, and relevance to the prompt. Pre-trained black-box experts provide scores for each of these desired components that are linearly combined to form an "energy-based" model, from which samples can be drawn without needing any task-specific training. Experiments are conducted on several tasks such as controllable debiasing, sentiment and formality transfer and prompted generation, and results show that this method outperforms task-specific baselines. 1. Paper is well-written and easy to follow. 2. The idea of having a modular controlled text generation model that uses blackbox components is interesting and novel. The approach can be adapted to any desired components or metrics by using the appropriate pre-trained expert models, and does not need any additional task-specific training. 3. Several experiments are conducted to show that their model outperforms task-specific baselines on multiple tasks. Ablation results are also shown which help assess the contributions of each component. 4. Human evaluations are conducted to demonstrate the superior quality of the generated text. Some limitations of the model are also briefly discussed. 1. One of the main drawbacks of this approach is that presumably the different component black-box experts of the controlled text generation have to be manually selected and the weighted linear combination has to be fine-tuned for each task. It is also not discussed if the inference time is significantly affected by this approach. 2. For the sentiment transfer task, the model with the higher Hamming distance coefficient is considered to be the best model based on the BertScore with respect to the source, which essentially measures how much deviation has been introduced. It appears however that the model with the higher Discriminator coefficient is better, in terms of perplexity and the internal/external classifiers. Given that the Hamming distance in the reference is much higher, it may not be necessary to absolutely reduce the number of changes made, if it serves the overall purpose of the text generation to make more changes. This is somewhat true for the formality transfer task as well. 3. In Table 3, for the formality transfer task, the method sees a decline in performance for the ->Informal task. While the improvement in the ->Formal task is probably a decent tradeoff, this issue is not addressed at all. 4. Percentage preference through majority voting is reported for the human evaluation. More robust correlation/agreement metrics such as Cohen's Kappa should be reported for reliability.
Does the review mention any comments, suggestions or typos that the author should address?
no