Columns: text (string, 63 to 12.6k characters), question (string, 4 classes), label (string, 2 classes)
The authors of this work propose Mix and Match LM. Their method uses global scores for controllable text generation by drawing samples from an energy-based model. Here, the values of the energies come from black-box models that correspond to attributes of generation (fluency, faithfulness to conditioning, etc.). They show empirical gains relative to some other baseline methods. - The paper is written well and much of the method is quite accessible. I feel like readers would have a decent grasp of the method and approach after a single read. - The method is simple. - Demonstrates performance improvements against methods that are more computationally prohibitive or that necessitate fine-tuning. - Quite modular and hypothetically flexible to add in other aspects, though more experimentation would need to be done to see how flexible it truly is. For the same problem and tasks, there is a lot of other prior work that is not energy-based modeling (two are) that could be compared to. After reading, I'm unsure how well this method would compare to those and in which situations this approach would be better. Many of those works combine black-box methods in different ways. A discussion of these works and comparisons to them would really strengthen the paper. Some of the work I'm talking about: DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts (Liu et al. 2021); Plug and Play Autoencoders for Conditional Text Generation (Mai et al. 2020); Sentence Bottleneck Autoencoders from Transformer Language Models (Montero et al. 2021); A Distributional Approach to Controlled Text Generation (Khalifa et al. 2020); GeDi (Krause et al. 2020); DAPT (Gururangan et al. 2020); CTRL (Keskar et al. 2019); FUDGE (Yang and Klein 2021). You do cite a few of these, but comparisons to them would really strengthen the work. These papers just came out, around or after the ARR deadline, but could be interesting too: Controlling Conditional Language Models with Distributional Policy Gradients (Korbak et al. 2021) and Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation (Mai and Henderson 2021). Many of these methods compare favorably against PPLM, so they could be stronger baselines against which your work could be compared. Another aspect that could be improved is the experimentation around the trade-off between accuracy and BLEU score. Tables 6 and 7 start this. It'd be informative to see some of the baselines varied slightly on a plot similar to Mai et al. 2020 or Montero et al. 2021 for this task, where self-BLEU/ref-BLEU is on one axis and accuracy via the classifier is on the other (top right being better). A task such as toxicity via RealToxicityPrompts could be quite interesting and would offer a test bed for comparison against some of the other prior work. I'd be interested in seeing how this method extends to other models, e.g., an autoregressive model such as GPT-2. Is it simply extensible? How well would it do? This is a paper on controllable text generation and a method for this, yet a discussion of the potential broader impact is missing. The work has potentially wide-ranging downstream impact, so including such a discussion to highlight potential harms and effects is important. Misc comments: - line 35: cite GPT-3. - line 46: cite GPT-2. - Figure 1: the caption seems oddly placed on the right-hand side (presumably this was done to fit the figure easily for review, but it may not strictly be okay, so something to check; I'm not sure). - line 331: what discriminator did you use? Did you use the same one for the PPLM setup for evaluation? Another flavor of related work that could help motivate the approach (no fine-tuning, no optimization needed) is work on prompting and steering LMs without fine-tuning, e.g.: It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners (Schick and Schütze 2020); Few-Shot Text Generation with Pattern-Exploiting Training (Schick and Schütze 2020); Eliciting Knowledge from Language Models Using Automatically Generated Prompts (Shin et al. 2020); Can Unconditional Language Models Recover Arbitrary Sentences? (Subramani et al. 2019).
Does the review include a short summary of the paper?
yes
- The paper is written well and much of the method is quite accessible. I feel like readers would have a decent grasp of the method and approach after a single read. - The method is simple. - Demonstrates performance improvements against methods that are more computationally prohibitive or that necessitate fine-tuning. - Quite modular and hypothetically flexible to add in other aspects, though more experimentation would need to be done to see how flexible it truly is. For the same problem and tasks, there is a lot of other prior work that is not energy-based modeling (two are) that could be compared to. After reading, I'm unsure how well this method would compare to those and in which situations this approach would be better. Many of those works combine black-box methods in different ways. A discussion of these works and comparisons to them would really strengthen the paper. Some of the work I'm talking about: DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts (Liu et al. 2021); Plug and Play Autoencoders for Conditional Text Generation (Mai et al. 2020); Sentence Bottleneck Autoencoders from Transformer Language Models (Montero et al. 2021); A Distributional Approach to Controlled Text Generation (Khalifa et al. 2020); GeDi (Krause et al. 2020); DAPT (Gururangan et al. 2020); CTRL (Keskar et al. 2019); FUDGE (Yang and Klein 2021). You do cite a few of these, but comparisons to them would really strengthen the work. These papers just came out, around or after the ARR deadline, but could be interesting too: Controlling Conditional Language Models with Distributional Policy Gradients (Korbak et al. 2021) and Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation (Mai and Henderson 2021). Many of these methods compare favorably against PPLM, so they could be stronger baselines against which your work could be compared. Another aspect that could be improved is the experimentation around the trade-off between accuracy and BLEU score. Tables 6 and 7 start this. It'd be informative to see some of the baselines varied slightly on a plot similar to Mai et al. 2020 or Montero et al. 2021 for this task, where self-BLEU/ref-BLEU is on one axis and accuracy via the classifier is on the other (top right being better). A task such as toxicity via RealToxicityPrompts could be quite interesting and would offer a test bed for comparison against some of the other prior work. I'd be interested in seeing how this method extends to other models, e.g., an autoregressive model such as GPT-2. Is it simply extensible? How well would it do? This is a paper on controllable text generation and a method for this, yet a discussion of the potential broader impact is missing. The work has potentially wide-ranging downstream impact, so including such a discussion to highlight potential harms and effects is important. Misc comments: - line 35: cite GPT-3. - line 46: cite GPT-2. - Figure 1: the caption seems oddly placed on the right-hand side (presumably this was done to fit the figure easily for review, but it may not strictly be okay, so something to check; I'm not sure). - line 331: what discriminator did you use? Did you use the same one for the PPLM setup for evaluation? Another flavor of related work that could help motivate the approach (no fine-tuning, no optimization needed) is work on prompting and steering LMs without fine-tuning, e.g.: It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners (Schick and Schütze 2020); Few-Shot Text Generation with Pattern-Exploiting Training (Schick and Schütze 2020); Eliciting Knowledge from Language Models Using Automatically Generated Prompts (Shin et al. 2020); Can Unconditional Language Models Recover Arbitrary Sentences? (Subramani et al. 2019).
Does the review include a short summary of the paper?
no
The authors of this work propose Mix and Match LM. Their method uses global scores for controllable text generation by drawing samples from an energy-based model. Here, the values of the energies come from black-box models that correspond to attributes of generation (fluency, faithfulness to conditioning, etc.). They show empirical gains relative to some other baseline methods. - The paper is written well and much of the method is quite accessible. I feel like readers would have a decent grasp of the method and approach after a single read. - The method is simple. - Demonstrates performance improvements against methods that are more computationally prohibitive or that necessitate fine-tuning. - Quite modular and hypothetically flexible to add in other aspects, though more experimentation would need to be done to see how flexible it truly is. For the same problem and tasks, there is a lot of other prior work that is not energy-based modeling (two are) that could be compared to. After reading, I'm unsure how well this method would compare to those and in which situations this approach would be better. Many of those works combine black-box methods in different ways. A discussion of these works and comparisons to them would really strengthen the paper. Some of the work I'm talking about: DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts (Liu et al. 2021); Plug and Play Autoencoders for Conditional Text Generation (Mai et al. 2020); Sentence Bottleneck Autoencoders from Transformer Language Models (Montero et al. 2021); A Distributional Approach to Controlled Text Generation (Khalifa et al. 2020); GeDi (Krause et al. 2020); DAPT (Gururangan et al. 2020); CTRL (Keskar et al. 2019); FUDGE (Yang and Klein 2021). You do cite a few of these, but comparisons to them would really strengthen the work. These papers just came out, around or after the ARR deadline, but could be interesting too: Controlling Conditional Language Models with Distributional Policy Gradients (Korbak et al. 2021) and Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation (Mai and Henderson 2021). Many of these methods compare favorably against PPLM, so they could be stronger baselines against which your work could be compared. Another aspect that could be improved is the experimentation around the trade-off between accuracy and BLEU score. Tables 6 and 7 start this. It'd be informative to see some of the baselines varied slightly on a plot similar to Mai et al. 2020 or Montero et al. 2021 for this task, where self-BLEU/ref-BLEU is on one axis and accuracy via the classifier is on the other (top right being better). A task such as toxicity via RealToxicityPrompts could be quite interesting and would offer a test bed for comparison against some of the other prior work. I'd be interested in seeing how this method extends to other models, e.g., an autoregressive model such as GPT-2. Is it simply extensible? How well would it do? This is a paper on controllable text generation and a method for this, yet a discussion of the potential broader impact is missing. The work has potentially wide-ranging downstream impact, so including such a discussion to highlight potential harms and effects is important. Misc comments: - line 35: cite GPT-3. - line 46: cite GPT-2. - Figure 1: the caption seems oddly placed on the right-hand side (presumably this was done to fit the figure easily for review, but it may not strictly be okay, so something to check; I'm not sure). - line 331: what discriminator did you use? Did you use the same one for the PPLM setup for evaluation? Another flavor of related work that could help motivate the approach (no fine-tuning, no optimization needed) is work on prompting and steering LMs without fine-tuning, e.g.: It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners (Schick and Schütze 2020); Few-Shot Text Generation with Pattern-Exploiting Training (Schick and Schütze 2020); Eliciting Knowledge from Language Models Using Automatically Generated Prompts (Shin et al. 2020); Can Unconditional Language Models Recover Arbitrary Sentences? (Subramani et al. 2019).
Does the review include a summary of the strengths of the paper?
yes
The authors of this work propose Mix and Match LM. Their method uses global scores for controllable text generation by drawing samples from an energy-based model. Here, the values of the energies come from black-box models that correspond to attributes of generation (fluency, faithfulness to conditioning, etc.). They show empirical gains relative to some other baseline methods. For the same problem and tasks, there is a lot of other prior work that is not energy-based modeling (two are) that could be compared to. After reading, I'm unsure how well this method would compare to those and in which situations this approach would be better. Many of those works combine black-box methods in different ways. A discussion of these works and comparisons to them would really strengthen the paper. Some of the work I'm talking about: DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts (Liu et al. 2021); Plug and Play Autoencoders for Conditional Text Generation (Mai et al. 2020); Sentence Bottleneck Autoencoders from Transformer Language Models (Montero et al. 2021); A Distributional Approach to Controlled Text Generation (Khalifa et al. 2020); GeDi (Krause et al. 2020); DAPT (Gururangan et al. 2020); CTRL (Keskar et al. 2019); FUDGE (Yang and Klein 2021). You do cite a few of these, but comparisons to them would really strengthen the work. These papers just came out, around or after the ARR deadline, but could be interesting too: Controlling Conditional Language Models with Distributional Policy Gradients (Korbak et al. 2021) and Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation (Mai and Henderson 2021). Many of these methods compare favorably against PPLM, so they could be stronger baselines against which your work could be compared. Another aspect that could be improved is the experimentation around the trade-off between accuracy and BLEU score. Tables 6 and 7 start this. It'd be informative to see some of the baselines varied slightly on a plot similar to Mai et al. 2020 or Montero et al. 2021 for this task, where self-BLEU/ref-BLEU is on one axis and accuracy via the classifier is on the other (top right being better). A task such as toxicity via RealToxicityPrompts could be quite interesting and would offer a test bed for comparison against some of the other prior work. I'd be interested in seeing how this method extends to other models, e.g., an autoregressive model such as GPT-2. Is it simply extensible? How well would it do? This is a paper on controllable text generation and a method for this, yet a discussion of the potential broader impact is missing. The work has potentially wide-ranging downstream impact, so including such a discussion to highlight potential harms and effects is important. Misc comments: - line 35: cite GPT-3. - line 46: cite GPT-2. - Figure 1: the caption seems oddly placed on the right-hand side (presumably this was done to fit the figure easily for review, but it may not strictly be okay, so something to check; I'm not sure). - line 331: what discriminator did you use? Did you use the same one for the PPLM setup for evaluation? Another flavor of related work that could help motivate the approach (no fine-tuning, no optimization needed) is work on prompting and steering LMs without fine-tuning, e.g.: It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners (Schick and Schütze 2020); Few-Shot Text Generation with Pattern-Exploiting Training (Schick and Schütze 2020); Eliciting Knowledge from Language Models Using Automatically Generated Prompts (Shin et al. 2020); Can Unconditional Language Models Recover Arbitrary Sentences? (Subramani et al. 2019).
Does the review include a summary of the strengths of the paper?
no
The authors of this work propose Mix and Match LM. Their method uses global scores for controllable text generation by drawing samples from an energy-based model. Here, the values of the energies come from black-box models that correspond to attributes of generation (fluency, faithfulness to conditioning, etc.). They show empirical gains relative to some other baseline methods. - The paper is written well and much of the method is quite accessible. I feel like readers would have a decent grasp of the method and approach after a single read. - The method is simple. - Demonstrates performance improvements against methods that are more computationally prohibitive or that necessitate fine-tuning. - Quite modular and hypothetically flexible to add in other aspects, though more experimentation would need to be done to see how flexible it truly is. For the same problem and tasks, there is a lot of other prior work that is not energy-based modeling (two are) that could be compared to. After reading, I'm unsure how well this method would compare to those and in which situations this approach would be better. Many of those works combine black-box methods in different ways. A discussion of these works and comparisons to them would really strengthen the paper. Some of the work I'm talking about: DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts (Liu et al. 2021); Plug and Play Autoencoders for Conditional Text Generation (Mai et al. 2020); Sentence Bottleneck Autoencoders from Transformer Language Models (Montero et al. 2021); A Distributional Approach to Controlled Text Generation (Khalifa et al. 2020); GeDi (Krause et al. 2020); DAPT (Gururangan et al. 2020); CTRL (Keskar et al. 2019); FUDGE (Yang and Klein 2021). You do cite a few of these, but comparisons to them would really strengthen the work. These papers just came out, around or after the ARR deadline, but could be interesting too: Controlling Conditional Language Models with Distributional Policy Gradients (Korbak et al. 2021) and Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation (Mai and Henderson 2021). Many of these methods compare favorably against PPLM, so they could be stronger baselines against which your work could be compared. Another aspect that could be improved is the experimentation around the trade-off between accuracy and BLEU score. Tables 6 and 7 start this. It'd be informative to see some of the baselines varied slightly on a plot similar to Mai et al. 2020 or Montero et al. 2021 for this task, where self-BLEU/ref-BLEU is on one axis and accuracy via the classifier is on the other (top right being better). A task such as toxicity via RealToxicityPrompts could be quite interesting and would offer a test bed for comparison against some of the other prior work. I'd be interested in seeing how this method extends to other models, e.g., an autoregressive model such as GPT-2. Is it simply extensible? How well would it do? This is a paper on controllable text generation and a method for this, yet a discussion of the potential broader impact is missing. The work has potentially wide-ranging downstream impact, so including such a discussion to highlight potential harms and effects is important. Misc comments: - line 35: cite GPT-3. - line 46: cite GPT-2. - Figure 1: the caption seems oddly placed on the right-hand side (presumably this was done to fit the figure easily for review, but it may not strictly be okay, so something to check; I'm not sure). - line 331: what discriminator did you use? Did you use the same one for the PPLM setup for evaluation? Another flavor of related work that could help motivate the approach (no fine-tuning, no optimization needed) is work on prompting and steering LMs without fine-tuning, e.g.: It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners (Schick and Schütze 2020); Few-Shot Text Generation with Pattern-Exploiting Training (Schick and Schütze 2020); Eliciting Knowledge from Language Models Using Automatically Generated Prompts (Shin et al. 2020); Can Unconditional Language Models Recover Arbitrary Sentences? (Subramani et al. 2019).
Does the review include a summary of the weaknesses of the paper?
yes
The authors of this work propose Mix and Match LM. Their method uses global scores for controllable text generation by drawing samples from an energy-based model. Here, the values of the energies come from black-box models that correspond to attributes of generation (fluency, faithfulness to conditioning, etc.). They show empirical gains relative to some other baseline methods. - The paper is written well and much of the method is quite accessible. I feel like readers would have a decent grasp of the method and approach after a single read. - The method is simple. - Demonstrates performance improvements against methods that are more computationally prohibitive or that necessitate fine-tuning. - Quite modular and hypothetically flexible to add in other aspects, though more experimentation would need to be done to see how flexible it truly is. Misc comments: - line 35: cite GPT-3. - line 46: cite GPT-2. - Figure 1: the caption seems oddly placed on the right-hand side (presumably this was done to fit the figure easily for review, but it may not strictly be okay, so something to check; I'm not sure). - line 331: what discriminator did you use? Did you use the same one for the PPLM setup for evaluation? Another flavor of related work that could help motivate the approach (no fine-tuning, no optimization needed) is work on prompting and steering LMs without fine-tuning, e.g.: It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners (Schick and Schütze 2020); Few-Shot Text Generation with Pattern-Exploiting Training (Schick and Schütze 2020); Eliciting Knowledge from Language Models Using Automatically Generated Prompts (Shin et al. 2020); Can Unconditional Language Models Recover Arbitrary Sentences? (Subramani et al. 2019).
Does the review include a summary of the weaknesses of the paper?
no
The authors of this work propose Mix and Match LM. Their method uses global scores for controllable text generation by drawing samples from an energy-based model. Here, the values of the energies come from black-box models that correspond to attributes of generation (fluency, faithfulness to conditioning, etc.). They show empirical gains relative to some other baseline methods. - The paper is written well and much of the method is quite accessible. I feel like readers would have a decent grasp of the method and approach after a single read. - The method is simple. - Demonstrates performance improvements against methods that are more computationally prohibitive or that necessitate fine-tuning. - Quite modular and hypothetically flexible to add in other aspects, though more experimentation would need to be done to see how flexible it truly is. For the same problem and tasks, there is a lot of other prior work that is not energy-based modeling (two are) that could be compared to. After reading, I'm unsure how well this method would compare to those and in which situations this approach would be better. Many of those works combine black-box methods in different ways. A discussion of these works and comparisons to them would really strengthen the paper. Some of the work I'm talking about: DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts (Liu et al. 2021); Plug and Play Autoencoders for Conditional Text Generation (Mai et al. 2020); Sentence Bottleneck Autoencoders from Transformer Language Models (Montero et al. 2021); A Distributional Approach to Controlled Text Generation (Khalifa et al. 2020); GeDi (Krause et al. 2020); DAPT (Gururangan et al. 2020); CTRL (Keskar et al. 2019); FUDGE (Yang and Klein 2021). You do cite a few of these, but comparisons to them would really strengthen the work. These papers just came out, around or after the ARR deadline, but could be interesting too: Controlling Conditional Language Models with Distributional Policy Gradients (Korbak et al. 2021) and Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation (Mai and Henderson 2021). Many of these methods compare favorably against PPLM, so they could be stronger baselines against which your work could be compared. Another aspect that could be improved is the experimentation around the trade-off between accuracy and BLEU score. Tables 6 and 7 start this. It'd be informative to see some of the baselines varied slightly on a plot similar to Mai et al. 2020 or Montero et al. 2021 for this task, where self-BLEU/ref-BLEU is on one axis and accuracy via the classifier is on the other (top right being better). A task such as toxicity via RealToxicityPrompts could be quite interesting and would offer a test bed for comparison against some of the other prior work. I'd be interested in seeing how this method extends to other models, e.g., an autoregressive model such as GPT-2. Is it simply extensible? How well would it do? This is a paper on controllable text generation and a method for this, yet a discussion of the potential broader impact is missing. The work has potentially wide-ranging downstream impact, so including such a discussion to highlight potential harms and effects is important. Misc comments: - line 35: cite GPT-3. - line 46: cite GPT-2. - Figure 1: the caption seems oddly placed on the right-hand side (presumably this was done to fit the figure easily for review, but it may not strictly be okay, so something to check; I'm not sure). - line 331: what discriminator did you use? Did you use the same one for the PPLM setup for evaluation? Another flavor of related work that could help motivate the approach (no fine-tuning, no optimization needed) is work on prompting and steering LMs without fine-tuning, e.g.: It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners (Schick and Schütze 2020); Few-Shot Text Generation with Pattern-Exploiting Training (Schick and Schütze 2020); Eliciting Knowledge from Language Models Using Automatically Generated Prompts (Shin et al. 2020); Can Unconditional Language Models Recover Arbitrary Sentences? (Subramani et al. 2019).
Does the review mention any comments, suggestions or typos that the author should address?
yes
The authors of this work propose Mix and Match LM. Their method uses global scores for controllable text generation by drawing samples from an energy-based model. Here, the values of the energies come from black-box models that correspond to attributes of generation (fluency, faithfulness to conditioning, etc.). They show empirical gains relative to some other baseline methods. - The paper is written well and much of the method is quite accessible. I feel like readers would have a decent grasp of the method and approach after a single read. - The method is simple. - Demonstrates performance improvements against methods that are more computationally prohibitive or that necessitate fine-tuning. - Quite modular and hypothetically flexible to add in other aspects, though more experimentation would need to be done to see how flexible it truly is. For the same problem and tasks, there is a lot of other prior work that is not energy-based modeling (two are) that could be compared to. After reading, I'm unsure how well this method would compare to those and in which situations this approach would be better. Many of those works combine black-box methods in different ways. A discussion of these works and comparisons to them would really strengthen the paper. Some of the work I'm talking about: DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts (Liu et al. 2021); Plug and Play Autoencoders for Conditional Text Generation (Mai et al. 2020); Sentence Bottleneck Autoencoders from Transformer Language Models (Montero et al. 2021); A Distributional Approach to Controlled Text Generation (Khalifa et al. 2020); GeDi (Krause et al. 2020); DAPT (Gururangan et al. 2020); CTRL (Keskar et al. 2019); FUDGE (Yang and Klein 2021). You do cite a few of these, but comparisons to them would really strengthen the work. These papers just came out, around or after the ARR deadline, but could be interesting too: Controlling Conditional Language Models with Distributional Policy Gradients (Korbak et al. 2021) and Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation (Mai and Henderson 2021). Many of these methods compare favorably against PPLM, so they could be stronger baselines against which your work could be compared. Another aspect that could be improved is the experimentation around the trade-off between accuracy and BLEU score. Tables 6 and 7 start this. It'd be informative to see some of the baselines varied slightly on a plot similar to Mai et al. 2020 or Montero et al. 2021 for this task, where self-BLEU/ref-BLEU is on one axis and accuracy via the classifier is on the other (top right being better). A task such as toxicity via RealToxicityPrompts could be quite interesting and would offer a test bed for comparison against some of the other prior work. I'd be interested in seeing how this method extends to other models, e.g., an autoregressive model such as GPT-2. Is it simply extensible? How well would it do? This is a paper on controllable text generation and a method for this, yet a discussion of the potential broader impact is missing. The work has potentially wide-ranging downstream impact, so including such a discussion to highlight potential harms and effects is important.
Does the review mention any comments, suggestions or typos that the author should address?
no
The authors propose Context-Aware Language Models (CALM). As far as I understood, CALM changes (relabels) the DB in a failed dialogue to make the dialogue successful (dialog task relabeling). As dialog task relabeling alone is not enough to achieve good performance, three additional auxiliary tricks are suggested: a task-specific auxiliary loss, task pretraining, and model-based dialogue rollouts. The proposed method is evaluated on the AirDialogue dataset. The authors argue that, in the self-play scenario of the AirDialogue dataset, the proposed method achieves state-of-the-art and human-level performance. - The idea of dialog task relabeling is novel. - The paper is a little bit hard to understand. - I think the proposed idea can be explained easily without POMDPs or RL. It is also not very persuasive to me that the proposed method is described from the perspective of RL; thus, I feel the description of the POMDP could be removed. - Some relatively important descriptions are relegated to the Appendix, which reduces readability. - The motivation of the methods is not very persuasive. - It seems weird to me that a dialogue without success becomes a successful dialogue by changing the contents of the DB. - An example of a relabeled dialogue (in the Appendix) would help. - The dataset used has not been used frequently in recent work, and recent TOD methods are not evaluated either. - It seems necessary to explain why the method could not be compared on mainstream datasets (e.g., MultiWOZ 2.0) and against mainstream methodologies. - I am not very confident in the evaluation with self-play. - Though previous works evaluate their methods by self-play, I feel that human evaluation is required for a robust evaluation. - I think that it is relatively easy to fit to the self-play scenario. See the Summary of Weaknesses
Does the review include a short summary of the paper?
yes
- The idea of dialog task relabeling is novel. - The paper is a little bit hard to understand. - I think the proposed idea can be explained easily without POMDPs or RL. It is also not very persuasive to me that the proposed method is described from the perspective of RL; thus, I feel the description of the POMDP could be removed. - Some relatively important descriptions are relegated to the Appendix, which reduces readability. - The motivation of the methods is not very persuasive. - It seems weird to me that a dialogue without success becomes a successful dialogue by changing the contents of the DB. - An example of a relabeled dialogue (in the Appendix) would help. - The dataset used has not been used frequently in recent work, and recent TOD methods are not evaluated either. - It seems necessary to explain why the method could not be compared on mainstream datasets (e.g., MultiWOZ 2.0) and against mainstream methodologies. - I am not very confident in the evaluation with self-play. - Though previous works evaluate their methods by self-play, I feel that human evaluation is required for a robust evaluation. - I think that it is relatively easy to fit to the self-play scenario. See the Summary of Weaknesses
Does the review include a short summary of the paper?
no
The authors propose Context-Aware Language Models (CALM). As far as I understood, CALM changes (relabels) the DB in a failed dialogue to make the dialogue successful (dialog task relabeling). As dialog task relabeling alone is not enough to achieve good performance, three additional auxiliary tricks are suggested: a task-specific auxiliary loss, task pretraining, and model-based dialogue rollouts. The proposed method is evaluated on the AirDialogue dataset. The authors argue that, in the self-play scenario of the AirDialogue dataset, the proposed method achieves state-of-the-art and human-level performance. - The idea of dialog task relabeling is novel. - The paper is a little bit hard to understand. - I think the proposed idea can be explained easily without POMDPs or RL. It is also not very persuasive to me that the proposed method is described from the perspective of RL; thus, I feel the description of the POMDP could be removed. - Some relatively important descriptions are relegated to the Appendix, which reduces readability. - The motivation of the methods is not very persuasive. - It seems weird to me that a dialogue without success becomes a successful dialogue by changing the contents of the DB. - An example of a relabeled dialogue (in the Appendix) would help. - The dataset used has not been used frequently in recent work, and recent TOD methods are not evaluated either. - It seems necessary to explain why the method could not be compared on mainstream datasets (e.g., MultiWOZ 2.0) and against mainstream methodologies. - I am not very confident in the evaluation with self-play. - Though previous works evaluate their methods by self-play, I feel that human evaluation is required for a robust evaluation. - I think that it is relatively easy to fit to the self-play scenario. See the Summary of Weaknesses
Does the review include a summary of the strengths of the paper?
yes
The authors propose Context-Aware Language Models (CALM). As far as I understood, CALM changes (relabels) the DB in a failed dialogue to make the dialogue successful (dialog task relabeling). As dialog task relabeling alone is not enough to achieve good performance, three additional auxiliary tricks are suggested: a task-specific auxiliary loss, task pretraining, and model-based dialogue rollouts. The proposed method is evaluated on the AirDialogue dataset. The authors argue that, in the self-play scenario of the AirDialogue dataset, the proposed method achieves state-of-the-art and human-level performance. - The paper is a little bit hard to understand. - I think the proposed idea can be explained easily without POMDPs or RL. It is also not very persuasive to me that the proposed method is described from the perspective of RL; thus, I feel the description of the POMDP could be removed. - Some relatively important descriptions are relegated to the Appendix, which reduces readability. - The motivation of the methods is not very persuasive. - It seems weird to me that a dialogue without success becomes a successful dialogue by changing the contents of the DB. - An example of a relabeled dialogue (in the Appendix) would help. - The dataset used has not been used frequently in recent work, and recent TOD methods are not evaluated either. - It seems necessary to explain why the method could not be compared on mainstream datasets (e.g., MultiWOZ 2.0) and against mainstream methodologies. - I am not very confident in the evaluation with self-play. - Though previous works evaluate their methods by self-play, I feel that human evaluation is required for a robust evaluation. - I think that it is relatively easy to fit to the self-play scenario. See the Summary of Weaknesses
Does the review include a summary of the strengths of the paper?
no
The authors propose Context-Aware Language Models (CALM). As far as I understood, CALM changes (relabels) the DB in a failed dialogue to make the dialogue successful (dialog task relabeling). As dialog task relabeling alone is not enough to achieve good performance, three additional auxiliary tricks are suggested: a task-specific auxiliary loss, task pretraining, and model-based dialogue rollouts. The proposed method is evaluated on the AirDialogue dataset. The authors argue that, in the self-play scenario of the AirDialogue dataset, the proposed method achieves state-of-the-art and human-level performance. - The idea of dialog task relabeling is novel. - The paper is a little bit hard to understand. - I think the proposed idea can be explained easily without POMDPs or RL. It is also not very persuasive to me that the proposed method is described from the perspective of RL; thus, I feel the description of the POMDP could be removed. - Some relatively important descriptions are relegated to the Appendix, which reduces readability. - The motivation of the methods is not very persuasive. - It seems weird to me that a dialogue without success becomes a successful dialogue by changing the contents of the DB. - An example of a relabeled dialogue (in the Appendix) would help. - The dataset used has not been used frequently in recent work, and recent TOD methods are not evaluated either. - It seems necessary to explain why the method could not be compared on mainstream datasets (e.g., MultiWOZ 2.0) and against mainstream methodologies. - I am not very confident in the evaluation with self-play. - Though previous works evaluate their methods by self-play, I feel that human evaluation is required for a robust evaluation. - I think that it is relatively easy to fit to the self-play scenario. See the Summary of Weaknesses
Does the review include a summary of the weaknesses of the paper?
yes
The authors propose Context-Aware Language Models (CALM). As far as I understood, CALM changes (relabels) the DB in a failed dialogue to make the dialogue successful (dialog task relabeling). As dialog task relabeling alone is not enough to achieve good performance, three additional auxiliary tricks are suggested: a task-specific auxiliary loss, task pretraining, and model-based dialogue rollouts. The proposed method is evaluated on the AirDialogue dataset. The authors argue that, in the self-play scenario of the AirDialogue dataset, the proposed method achieves state-of-the-art and human-level performance. - The idea of dialog task relabeling is novel. See the Summary of Weaknesses
Does the review include a summary of the weaknesses of the paper?
no
The authors propose Context-Aware Language Models (CALM). As far as I understood, CALM changes (relabels) the DB in a failed dialogue to make the dialogue successful (dialog task relabeling). As dialog task relabeling alone is not enough to achieve good performance, three additional auxiliary tricks are suggested: a task-specific auxiliary loss, task pretraining, and model-based dialogue rollouts. The proposed method is evaluated on the AirDialogue dataset. The authors argue that, in the self-play scenario of the AirDialogue dataset, the proposed method achieves state-of-the-art and human-level performance. - The idea of dialog task relabeling is novel. - The paper is a little bit hard to understand. - I think the proposed idea can be explained easily without POMDPs or RL. It is also not very persuasive to me that the proposed method is described from the perspective of RL; thus, I feel the description of the POMDP could be removed. - Some relatively important descriptions are relegated to the Appendix, which reduces readability. - The motivation of the methods is not very persuasive. - It seems weird to me that a dialogue without success becomes a successful dialogue by changing the contents of the DB. - An example of a relabeled dialogue (in the Appendix) would help. - The dataset used has not been used frequently in recent work, and recent TOD methods are not evaluated either. - It seems necessary to explain why the method could not be compared on mainstream datasets (e.g., MultiWOZ 2.0) and against mainstream methodologies. - I am not very confident in the evaluation with self-play. - Though previous works evaluate their methods by self-play, I feel that human evaluation is required for a robust evaluation. - I think that it is relatively easy to fit to the self-play scenario. See the Summary of Weaknesses
Does the review mention any comments, suggestions or typos that the author should address?
yes
The authors propose Context-Aware Language Models (CALM). As far as I understood, CALM changes (relabels) the DB in a failed dialogue to make the dialogue successful (dialog task relabeling). As dialog task relabeling alone is not enough to achieve good performance, three additional auxiliary tricks are suggested: a task-specific auxiliary loss, task pretraining, and model-based dialogue rollouts. The proposed method is evaluated on the AirDialogue dataset. The authors argue that, in the self-play scenario of the AirDialogue dataset, the proposed method achieves state-of-the-art and human-level performance. - The idea of dialog task relabeling is novel. - The paper is a little bit hard to understand. - I think the proposed idea can be explained easily without POMDPs or RL. It is also not very persuasive to me that the proposed method is described from the perspective of RL; thus, I feel the description of the POMDP could be removed. - Some relatively important descriptions are relegated to the Appendix, which reduces readability. - The motivation of the methods is not very persuasive. - It seems weird to me that a dialogue without success becomes a successful dialogue by changing the contents of the DB. - An example of a relabeled dialogue (in the Appendix) would help. - The dataset used has not been used frequently in recent work, and recent TOD methods are not evaluated either. - It seems necessary to explain why the method could not be compared on mainstream datasets (e.g., MultiWOZ 2.0) and against mainstream methodologies. - I am not very confident in the evaluation with self-play. - Though previous works evaluate their methods by self-play, I feel that human evaluation is required for a robust evaluation. - I think that it is relatively easy to fit to the self-play scenario.
Does the review mention any comments, suggestions or typos that the author should address?
no
Modeling a dialogue as a POMDP, this paper reports an approach and a training recipe for an end-to-end solution for generating task-oriented dialogue. Unlike existing approaches, the proposed approach doesn't rely on manually designed dialogue states and realization schemas. The approach also resorts to task relabeling and an auxiliary loss, which prove to be instrumental in achieving good results according to the conducted ablation study. The reported results outperform SOTA on the AirDialogue dataset. - Novelty in approaching end-to-end dialogue generation through a combination of a change in the loss function and a relabeling technique. - Results are interesting for an approach where we'd expect the end-to-end approach to underperform (because structured context and structured output are crucial) or to perform on par with pipeline approaches. - Although I don't see anything standing against the publication of this paper, I think results reported on only one dataset are a weakness. Since the main angle of this research work is to build a generalized model for generating dialogue end to end, the generalizability of the model can only be shown by demonstrating good results on a bigger set of data. - The authors have not shared any qualitative assessment of the model or any error analysis. Especially when we're talking about an end-to-end solution, it is very interesting to see undesirable examples. The numbers show how often the model is correct, but we don't know how far the model deviates from the expected dialogue when it generates incorrect conversations. None
Does the review include a short summary of the paper?
yes
- Novelty in approaching end-to-end dialogue generation through a combination of a change in the loss function and a relabeling technique. - Results are interesting for an approach where we'd expect the end-to-end approach to underperform (because structured context and structured output are crucial) or to perform on par with pipeline approaches. - Although I don't see anything standing against the publication of this paper, I think results reported on only one dataset are a weakness. Since the main angle of this research work is to build a generalized model for generating dialogue end to end, the generalizability of the model can only be shown by demonstrating good results on a bigger set of data. - The authors have not shared any qualitative assessment of the model or any error analysis. Especially when we're talking about an end-to-end solution, it is very interesting to see undesirable examples. The numbers show how often the model is correct, but we don't know how far the model deviates from the expected dialogue when it generates incorrect conversations. None
Does the review include a short summary of the paper?
no
Modeling a dialogue as a POMDP, this paper reports an approach and a training recipe for an end-to-end solution for generating task-oriented dialogue. Unlike existing approaches, the proposed approach doesn't rely on manually designed dialogue states and realization schemas. The approach also resorts to task relabeling and an auxiliary loss, which prove to be instrumental in achieving good results according to the conducted ablation study. The reported results outperform SOTA on the AirDialogue dataset. - Novelty in approaching end-to-end dialogue generation through a combination of a change in the loss function and a relabeling technique. - Results are interesting for an approach where we'd expect the end-to-end approach to underperform (because structured context and structured output are crucial) or to perform on par with pipeline approaches. - Although I don't see anything standing against the publication of this paper, I think results reported on only one dataset are a weakness. Since the main angle of this research work is to build a generalized model for generating dialogue end to end, the generalizability of the model can only be shown by demonstrating good results on a bigger set of data. - The authors have not shared any qualitative assessment of the model or any error analysis. Especially when we're talking about an end-to-end solution, it is very interesting to see undesirable examples. The numbers show how often the model is correct, but we don't know how far the model deviates from the expected dialogue when it generates incorrect conversations. None
Does the review include a summary of the strengths of the paper?
yes
Modeling a dialogue as a POMDP, this paper reports an approach and a training recipe for an end-to-end solution for generating task-oriented dialogue. Unlike existing approaches, the proposed approach doesn't rely on manually designed dialogue states and realization schemas. The approach also resorts to task relabeling and an auxiliary loss, which prove to be instrumental in achieving good results according to the conducted ablation study. The reported results outperform SOTA on the AirDialogue dataset. - Although I don't see anything standing against the publication of this paper, I think results reported on only one dataset are a weakness. Since the main angle of this research work is to build a generalized model for generating dialogue end to end, the generalizability of the model can only be shown by demonstrating good results on a bigger set of data. - The authors have not shared any qualitative assessment of the model or any error analysis. Especially when we're talking about an end-to-end solution, it is very interesting to see undesirable examples. The numbers show how often the model is correct, but we don't know how far the model deviates from the expected dialogue when it generates incorrect conversations. None
Does the review include a summary of the strengths of the paper?
no
Modeling a dialogue as a POMDP, this paper reports an approach and a training recipe for an end-to-end solution for generating task-oriented dialogue. Unlike existing approaches, the proposed approach doesn't rely on manually designed dialogue states and realization schemas. The approach also resorts to task relabeling and an auxiliary loss, which prove to be instrumental in achieving good results according to the conducted ablation study. The reported results outperform SOTA on the AirDialogue dataset. - Novelty in approaching end-to-end dialogue generation through a combination of a change in the loss function and a relabeling technique. - Results are interesting for an approach where we'd expect the end-to-end approach to underperform (because structured context and structured output are crucial) or to perform on par with pipeline approaches. - Although I don't see anything standing against the publication of this paper, I think results reported on only one dataset are a weakness. Since the main angle of this research work is to build a generalized model for generating dialogue end to end, the generalizability of the model can only be shown by demonstrating good results on a bigger set of data. - The authors have not shared any qualitative assessment of the model or any error analysis. Especially when we're talking about an end-to-end solution, it is very interesting to see undesirable examples. The numbers show how often the model is correct, but we don't know how far the model deviates from the expected dialogue when it generates incorrect conversations. None
Does the review include a summary of the weaknesses of the paper?
yes
Modeling a dialogue as a POMDP, this paper reports an approach and a training recipe for an end-to-end solution for generating task-oriented dialogue. Unlike existing approaches, the proposed approach doesn't rely on manually designed dialogue states and realization schemas. The approach also resorts to task relabeling and an auxiliary loss, which prove to be instrumental in achieving good results according to the conducted ablation study. The reported results outperform SOTA on the AirDialogue dataset. - Novelty in approaching end-to-end dialogue generation through a combination of a change in the loss function and a relabeling technique. - Results are interesting for an approach where we'd expect the end-to-end approach to underperform (because structured context and structured output are crucial) or to perform on par with pipeline approaches. None
Does the review include a summary of the weaknesses of the paper?
no
Modeling a dialogue as a POMDP, this paper reports an approach and a training recipe for an end-to-end solution for generating task-oriented dialogue. Unlike existing approaches, the proposed approach doesn't rely on manually designed dialogue states and realization schemas. The approach also resorts to task relabeling and an auxiliary loss, which prove to be instrumental in achieving good results according to the conducted ablation study. The reported results outperform SOTA on the AirDialogue dataset. - Novelty in approaching end-to-end dialogue generation through a combination of a change in the loss function and a relabeling technique. - Results are interesting for an approach where we'd expect the end-to-end approach to underperform (because structured context and structured output are crucial) or to perform on par with pipeline approaches. - Although I don't see anything standing against the publication of this paper, I think results reported on only one dataset are a weakness. Since the main angle of this research work is to build a generalized model for generating dialogue end to end, the generalizability of the model can only be shown by demonstrating good results on a bigger set of data. - The authors have not shared any qualitative assessment of the model or any error analysis. Especially when we're talking about an end-to-end solution, it is very interesting to see undesirable examples. The numbers show how often the model is correct, but we don't know how far the model deviates from the expected dialogue when it generates incorrect conversations. None
Does the review mention any comments, suggestions or typos that the author should address?
yes
Modeling a dialogue as a POMDP, this paper reports an approach and a training recipe for an end-to-end solution for generating task-oriented dialogue. Unlike existing approaches, the proposed approach doesn't rely on manually designed dialogue states and realization schemas. The approach also resorts to task relabeling and an auxiliary loss, which prove to be instrumental in achieving good results according to the conducted ablation study. The reported results outperform SOTA on the AirDialogue dataset. - Novelty in approaching end-to-end dialogue generation through a combination of a change in the loss function and a relabeling technique. - Results are interesting for an approach where we'd expect the end-to-end approach to underperform (because structured context and structured output are crucial) or to perform on par with pipeline approaches. - Although I don't see anything standing against the publication of this paper, I think results reported on only one dataset are a weakness. Since the main angle of this research work is to build a generalized model for generating dialogue end to end, the generalizability of the model can only be shown by demonstrating good results on a bigger set of data. - The authors have not shared any qualitative assessment of the model or any error analysis. Especially when we're talking about an end-to-end solution, it is very interesting to see undesirable examples. The numbers show how often the model is correct, but we don't know how far the model deviates from the expected dialogue when it generates incorrect conversations.
Does the review mention any comments, suggestions or typos that the author should address?
no
The paper describes a method to trace historical sound change by measuring the distance between pairs of character distributions over time. Using spelling as a necessary proxy for phonetics, the authors model phonological change through the use of diachronic character embeddings. They show the viability of the proposed approach on synthetic datasets and on real sound changes in Danish geographical names (in particular, the authors focus on lenition). I really like how the work combines linguistic problems with data-driven solutions. Modeling sound change computationally is a rarely seen task at NLP venues, which is sad, since it is extremely important for general historical linguistics. The paper under review proposes a smart method to trace gradual phonological replacements like lenition (t -> d, k -> g, etc.). It will be of great use both to linguists and to NLP practitioners interested in change detection. In fact, the paper extends the computational change detection field from semantics alone to phonology. 1) Although the proposed method is indeed interesting, the authors do not make any attempt to compare it to any prior methods. They limit themselves to showing that the method produces statistically significant linear regressions, but this is, to my mind, insufficient as empirical evaluation. Why not implement a simple baseline (for example, character frequencies, etc.) and compare to it? This would make the paper more persuasive. 2) The strange result for the "p --> b" sound change deserves more explanation beyond just noting that this is a rarer phoneme than the others. If the method is not able to trace sound change in 1/3 of the cases, it must be shown that it is still better than anything else (baselines). 3) The method is entirely based on the written signal (spelling). It is understandable that we lack proper phonological datasets, but I would like to see more discussion of how this spelling proxy might influence the performance of the proposed method. l. 066: "phonology: Inspired by" --> "phonology. Inspired by". Maybe reposition Figures 1 and 2 to use the page space more efficiently? Did you try cosine distance instead of Euclidean? It might perform better in the end. The authors several times mention "neural methods that make use of dense character representations" (which they don't use). I believe the word "neural" is redundant here: dense representations can be produced without using any neural networks (for example, by applying SVD to PMI vectors). These papers, although not 100% on-topic, might still be mentioned in the related work: https://aclanthology.org/W19-4713/ https://aclanthology.org/W19-4732/
Does the review include a short summary of the paper?
yes
I really like how the work combines linguistic problems with data-driven solutions. Modeling sound change computationally is a rarely seen task in the NLP venues, which is sad, since it is extremely important for general historical linguistics. The paper under review proposes a smart method to trace gradual phonological replacements like lenition (t -> d, k -> g, etc). It will be of great use both to linguists and to NLP practitioners interested in change detection. In fact, the paper extends the computational change detection field from only semantics to phonology. 1) Although the proposed method is indeed interesting, the authors do not make any attempt to compare it to any other prior methods. They limit themselves to showing that the method produces statistically significant linear regressions. But this is, to my mind, insufficient as empirical evaluation. Why not implement a simple baseline (for example, character frequencies, etc) and compare to it? This would make the paper more persuasive. 2) The strange results for "p --> b" sound change deserves more explanations beyond just noting that this is a rarer phoneme than the others. If the method is not able to trace sound change in 1/3 cases, it must be shown that it is still better than anything else (baselines). 3) The method is entirely based on written signal (spelling). It is understandable that we lack proper phonological datasets, but I would like to see more discussion on how this spelling proxy might influence the performance of the proposed method. l. 066: "phonology: Inspired by" --> "phonology. Inspired by" May be, reposition Figures 1 and 2 to use the page space more efficiently? Did you try cosine distance instead of Euclidean? It might perform better in the end. The authors several times mention "neural methods that make use of dense character representations" (which they don't use). I believe that the word "neural" is redundant here. Dense representations can be produced without using any neural networks (for example, by applying SVD to PMI vectors). These papers, although not 100% on-topic, might still be mentioned in the related work: https://aclanthology.org/W19-4713/ https://aclanthology.org/W19-4732/
Does the review include a short summary of the paper?
no
The paper describes the method to trace historical sound change by measuring the distance between pairs of character distributions over time. Using spelling as a necessary proxy to phonetics, the authors model phonological change through the use of diachronic character embeddings. They show the viability of the proposed approach on synthetic datasets and on real sound changes in Danish geographical names (in particular, the authors focus on lenition). I really like how the work combines linguistic problems with data-driven solutions. Modeling sound change computationally is a rarely seen task in the NLP venues, which is sad, since it is extremely important for general historical linguistics. The paper under review proposes a smart method to trace gradual phonological replacements like lenition (t -> d, k -> g, etc). It will be of great use both to linguists and to NLP practitioners interested in change detection. In fact, the paper extends the computational change detection field from only semantics to phonology. 1) Although the proposed method is indeed interesting, the authors do not make any attempt to compare it to any other prior methods. They limit themselves to showing that the method produces statistically significant linear regressions. But this is, to my mind, insufficient as empirical evaluation. Why not implement a simple baseline (for example, character frequencies, etc) and compare to it? This would make the paper more persuasive. 2) The strange results for "p --> b" sound change deserves more explanations beyond just noting that this is a rarer phoneme than the others. If the method is not able to trace sound change in 1/3 cases, it must be shown that it is still better than anything else (baselines). 3) The method is entirely based on written signal (spelling). It is understandable that we lack proper phonological datasets, but I would like to see more discussion on how this spelling proxy might influence the performance of the proposed method. l. 066: "phonology: Inspired by" --> "phonology. Inspired by" May be, reposition Figures 1 and 2 to use the page space more efficiently? Did you try cosine distance instead of Euclidean? It might perform better in the end. The authors several times mention "neural methods that make use of dense character representations" (which they don't use). I believe that the word "neural" is redundant here. Dense representations can be produced without using any neural networks (for example, by applying SVD to PMI vectors). These papers, although not 100% on-topic, might still be mentioned in the related work: https://aclanthology.org/W19-4713/ https://aclanthology.org/W19-4732/
Does the review include a summary of the strengths of the paper?
yes
The paper describes the method to trace historical sound change by measuring the distance between pairs of character distributions over time. Using spelling as a necessary proxy to phonetics, the authors model phonological change through the use of diachronic character embeddings. They show the viability of the proposed approach on synthetic datasets and on real sound changes in Danish geographical names (in particular, the authors focus on lenition). 1) Although the proposed method is indeed interesting, the authors do not make any attempt to compare it to any other prior methods. They limit themselves to showing that the method produces statistically significant linear regressions. But this is, to my mind, insufficient as empirical evaluation. Why not implement a simple baseline (for example, character frequencies, etc) and compare to it? This would make the paper more persuasive. 2) The strange results for "p --> b" sound change deserves more explanations beyond just noting that this is a rarer phoneme than the others. If the method is not able to trace sound change in 1/3 cases, it must be shown that it is still better than anything else (baselines). 3) The method is entirely based on written signal (spelling). It is understandable that we lack proper phonological datasets, but I would like to see more discussion on how this spelling proxy might influence the performance of the proposed method. l. 066: "phonology: Inspired by" --> "phonology. Inspired by" May be, reposition Figures 1 and 2 to use the page space more efficiently? Did you try cosine distance instead of Euclidean? It might perform better in the end. The authors several times mention "neural methods that make use of dense character representations" (which they don't use). I believe that the word "neural" is redundant here. Dense representations can be produced without using any neural networks (for example, by applying SVD to PMI vectors). These papers, although not 100% on-topic, might still be mentioned in the related work: https://aclanthology.org/W19-4713/ https://aclanthology.org/W19-4732/
Does the review include a summary of the strengths of the paper?
no
The paper describes the method to trace historical sound change by measuring the distance between pairs of character distributions over time. Using spelling as a necessary proxy to phonetics, the authors model phonological change through the use of diachronic character embeddings. They show the viability of the proposed approach on synthetic datasets and on real sound changes in Danish geographical names (in particular, the authors focus on lenition). I really like how the work combines linguistic problems with data-driven solutions. Modeling sound change computationally is a rarely seen task in the NLP venues, which is sad, since it is extremely important for general historical linguistics. The paper under review proposes a smart method to trace gradual phonological replacements like lenition (t -> d, k -> g, etc). It will be of great use both to linguists and to NLP practitioners interested in change detection. In fact, the paper extends the computational change detection field from only semantics to phonology. 1) Although the proposed method is indeed interesting, the authors do not make any attempt to compare it to any other prior methods. They limit themselves to showing that the method produces statistically significant linear regressions. But this is, to my mind, insufficient as empirical evaluation. Why not implement a simple baseline (for example, character frequencies, etc) and compare to it? This would make the paper more persuasive. 2) The strange results for "p --> b" sound change deserves more explanations beyond just noting that this is a rarer phoneme than the others. If the method is not able to trace sound change in 1/3 cases, it must be shown that it is still better than anything else (baselines). 3) The method is entirely based on written signal (spelling). It is understandable that we lack proper phonological datasets, but I would like to see more discussion on how this spelling proxy might influence the performance of the proposed method. l. 066: "phonology: Inspired by" --> "phonology. Inspired by" May be, reposition Figures 1 and 2 to use the page space more efficiently? Did you try cosine distance instead of Euclidean? It might perform better in the end. The authors several times mention "neural methods that make use of dense character representations" (which they don't use). I believe that the word "neural" is redundant here. Dense representations can be produced without using any neural networks (for example, by applying SVD to PMI vectors). These papers, although not 100% on-topic, might still be mentioned in the related work: https://aclanthology.org/W19-4713/ https://aclanthology.org/W19-4732/
Does the review include a summary of the weaknesses of the paper?
yes
The paper describes the method to trace historical sound change by measuring the distance between pairs of character distributions over time. Using spelling as a necessary proxy to phonetics, the authors model phonological change through the use of diachronic character embeddings. They show the viability of the proposed approach on synthetic datasets and on real sound changes in Danish geographical names (in particular, the authors focus on lenition). I really like how the work combines linguistic problems with data-driven solutions. Modeling sound change computationally is a rarely seen task in the NLP venues, which is sad, since it is extremely important for general historical linguistics. The paper under review proposes a smart method to trace gradual phonological replacements like lenition (t -> d, k -> g, etc). It will be of great use both to linguists and to NLP practitioners interested in change detection. In fact, the paper extends the computational change detection field from only semantics to phonology. l. 066: "phonology: Inspired by" --> "phonology. Inspired by" May be, reposition Figures 1 and 2 to use the page space more efficiently? Did you try cosine distance instead of Euclidean? It might perform better in the end. The authors several times mention "neural methods that make use of dense character representations" (which they don't use). I believe that the word "neural" is redundant here. Dense representations can be produced without using any neural networks (for example, by applying SVD to PMI vectors). These papers, although not 100% on-topic, might still be mentioned in the related work: https://aclanthology.org/W19-4713/ https://aclanthology.org/W19-4732/
Does the review include a summary of the weaknesses of the paper?
no
The paper describes the method to trace historical sound change by measuring the distance between pairs of character distributions over time. Using spelling as a necessary proxy to phonetics, the authors model phonological change through the use of diachronic character embeddings. They show the viability of the proposed approach on synthetic datasets and on real sound changes in Danish geographical names (in particular, the authors focus on lenition). I really like how the work combines linguistic problems with data-driven solutions. Modeling sound change computationally is a rarely seen task in the NLP venues, which is sad, since it is extremely important for general historical linguistics. The paper under review proposes a smart method to trace gradual phonological replacements like lenition (t -> d, k -> g, etc). It will be of great use both to linguists and to NLP practitioners interested in change detection. In fact, the paper extends the computational change detection field from only semantics to phonology. 1) Although the proposed method is indeed interesting, the authors do not make any attempt to compare it to any other prior methods. They limit themselves to showing that the method produces statistically significant linear regressions. But this is, to my mind, insufficient as empirical evaluation. Why not implement a simple baseline (for example, character frequencies, etc) and compare to it? This would make the paper more persuasive. 2) The strange results for "p --> b" sound change deserves more explanations beyond just noting that this is a rarer phoneme than the others. If the method is not able to trace sound change in 1/3 cases, it must be shown that it is still better than anything else (baselines). 3) The method is entirely based on written signal (spelling). It is understandable that we lack proper phonological datasets, but I would like to see more discussion on how this spelling proxy might influence the performance of the proposed method. l. 066: "phonology: Inspired by" --> "phonology. Inspired by" May be, reposition Figures 1 and 2 to use the page space more efficiently? Did you try cosine distance instead of Euclidean? It might perform better in the end. The authors several times mention "neural methods that make use of dense character representations" (which they don't use). I believe that the word "neural" is redundant here. Dense representations can be produced without using any neural networks (for example, by applying SVD to PMI vectors). These papers, although not 100% on-topic, might still be mentioned in the related work: https://aclanthology.org/W19-4713/ https://aclanthology.org/W19-4732/
Does the review mention any comments, suggestions or typos that the author should address?
yes
The paper describes the method to trace historical sound change by measuring the distance between pairs of character distributions over time. Using spelling as a necessary proxy to phonetics, the authors model phonological change through the use of diachronic character embeddings. They show the viability of the proposed approach on synthetic datasets and on real sound changes in Danish geographical names (in particular, the authors focus on lenition). I really like how the work combines linguistic problems with data-driven solutions. Modeling sound change computationally is a rarely seen task in the NLP venues, which is sad, since it is extremely important for general historical linguistics. The paper under review proposes a smart method to trace gradual phonological replacements like lenition (t -> d, k -> g, etc). It will be of great use both to linguists and to NLP practitioners interested in change detection. In fact, the paper extends the computational change detection field from only semantics to phonology. 1) Although the proposed method is indeed interesting, the authors do not make any attempt to compare it to any other prior methods. They limit themselves to showing that the method produces statistically significant linear regressions. But this is, to my mind, insufficient as empirical evaluation. Why not implement a simple baseline (for example, character frequencies, etc) and compare to it? This would make the paper more persuasive. 2) The strange results for "p --> b" sound change deserves more explanations beyond just noting that this is a rarer phoneme than the others. If the method is not able to trace sound change in 1/3 cases, it must be shown that it is still better than anything else (baselines). 3) The method is entirely based on written signal (spelling). It is understandable that we lack proper phonological datasets, but I would like to see more discussion on how this spelling proxy might influence the performance of the proposed method.
Does the review mention any comments, suggestions or typos that the author should address?
no
This paper use methods relying on spelling, using character embeddings, to detect sound change across time. They rely on three datasets to test the effectiveness of their methods: a synthetic one in a language having a simplified phonotactic system, one in Danish with synthetically generated sound change, and a real one in Danish, with known sound change. They show that they can successfully detect the selected sound changes in the datasets, and identify precisely the context of the change. The paper is globally nice to read and introduces an interesting idea. The related works section is particularly well-written, with the right level of detail, even though some references are missing. The experimental setup is sound and clear; in particular, the use of a control dataset is appreciated. 1) The main issue in this paper is that you only show that you can successfully detect known sound changes, and that you don’t spuriously detect them when they’re absent, thanks to the control corpus. But you might detect a lot of changes in characters embeddings using your methods, that are not related to sound change. Using your system in an exploratory fashion, to detect all possible changes and categorize them, would greatly strengthen the paper. As it is, to my understanding, there is no way to be sure that the character embeddings do not lead to detecting a lot of changes independent from phonology, and this is a major issue. Moreover, the link between spelling and sound change is not straightforward to me and should be more clearly justified. 2) Semantic and phonological change across time should indeed take a logistic shape, as you describe in your description; your experiments also seem to show that the change in the Geo corpus is not linear. Why did you stick to a linear shape? Shoemark et al (2019) that you cite also tried a logarithmic shape, you could try it too. 3) There is a large amount of work on semantic change using contextualized embeddings and pre-trained language models, that you do not evoke in your related works (from Mario Giulianelli, Matej Martinc…). 4) The formalism you use, a --> b / c, should be introduced more clearly from the beginning. Especially since you use it as early as in the summary, where it can’t be comprehended without prior knowledge of this formalism. The explanation at line 190 might benefit from a scheme. Similarly, some words like “plosive” might benefit from a short definition (even in a footnote) for readers not familiar with the domain. Nor clear how the Bin:Control effect is computed. Globally, all metrics described in Section 5 would benefit from equations. Overall, the localization of Tables and Figures in the paper would be more optimal to ease the reading. Tables 2 to 4 look strange to me without lines. L.32: Choose only one between “since” and “however” L59-62: Add references here. L. 362 and 364: add “the” distance, “the” main effect. L. 390: in both corpora (no “the”) L. 480: repetition of “language”’.
Does the review include a short summary of the paper?
yes
The paper is globally nice to read and introduces an interesting idea. The related works section is particularly well-written, with the right level of detail, even though some references are missing. The experimental setup is sound and clear; in particular, the use of a control dataset is appreciated. 1) The main issue in this paper is that you only show that you can successfully detect known sound changes, and that you don’t spuriously detect them when they’re absent, thanks to the control corpus. But you might detect a lot of changes in characters embeddings using your methods, that are not related to sound change. Using your system in an exploratory fashion, to detect all possible changes and categorize them, would greatly strengthen the paper. As it is, to my understanding, there is no way to be sure that the character embeddings do not lead to detecting a lot of changes independent from phonology, and this is a major issue. Moreover, the link between spelling and sound change is not straightforward to me and should be more clearly justified. 2) Semantic and phonological change across time should indeed take a logistic shape, as you describe in your description; your experiments also seem to show that the change in the Geo corpus is not linear. Why did you stick to a linear shape? Shoemark et al (2019) that you cite also tried a logarithmic shape, you could try it too. 3) There is a large amount of work on semantic change using contextualized embeddings and pre-trained language models, that you do not evoke in your related works (from Mario Giulianelli, Matej Martinc…). 4) The formalism you use, a --> b / c, should be introduced more clearly from the beginning. Especially since you use it as early as in the summary, where it can’t be comprehended without prior knowledge of this formalism. The explanation at line 190 might benefit from a scheme. Similarly, some words like “plosive” might benefit from a short definition (even in a footnote) for readers not familiar with the domain. Nor clear how the Bin:Control effect is computed. Globally, all metrics described in Section 5 would benefit from equations. Overall, the localization of Tables and Figures in the paper would be more optimal to ease the reading. Tables 2 to 4 look strange to me without lines. L.32: Choose only one between “since” and “however” L59-62: Add references here. L. 362 and 364: add “the” distance, “the” main effect. L. 390: in both corpora (no “the”) L. 480: repetition of “language”’.
Does the review include a short summary of the paper?
no
This paper use methods relying on spelling, using character embeddings, to detect sound change across time. They rely on three datasets to test the effectiveness of their methods: a synthetic one in a language having a simplified phonotactic system, one in Danish with synthetically generated sound change, and a real one in Danish, with known sound change. They show that they can successfully detect the selected sound changes in the datasets, and identify precisely the context of the change. The paper is globally nice to read and introduces an interesting idea. The related works section is particularly well-written, with the right level of detail, even though some references are missing. The experimental setup is sound and clear; in particular, the use of a control dataset is appreciated. 1) The main issue in this paper is that you only show that you can successfully detect known sound changes, and that you don’t spuriously detect them when they’re absent, thanks to the control corpus. But you might detect a lot of changes in characters embeddings using your methods, that are not related to sound change. Using your system in an exploratory fashion, to detect all possible changes and categorize them, would greatly strengthen the paper. As it is, to my understanding, there is no way to be sure that the character embeddings do not lead to detecting a lot of changes independent from phonology, and this is a major issue. Moreover, the link between spelling and sound change is not straightforward to me and should be more clearly justified. 2) Semantic and phonological change across time should indeed take a logistic shape, as you describe in your description; your experiments also seem to show that the change in the Geo corpus is not linear. Why did you stick to a linear shape? Shoemark et al (2019) that you cite also tried a logarithmic shape, you could try it too. 3) There is a large amount of work on semantic change using contextualized embeddings and pre-trained language models, that you do not evoke in your related works (from Mario Giulianelli, Matej Martinc…). 4) The formalism you use, a --> b / c, should be introduced more clearly from the beginning. Especially since you use it as early as in the summary, where it can’t be comprehended without prior knowledge of this formalism. The explanation at line 190 might benefit from a scheme. Similarly, some words like “plosive” might benefit from a short definition (even in a footnote) for readers not familiar with the domain. Nor clear how the Bin:Control effect is computed. Globally, all metrics described in Section 5 would benefit from equations. Overall, the localization of Tables and Figures in the paper would be more optimal to ease the reading. Tables 2 to 4 look strange to me without lines. L.32: Choose only one between “since” and “however” L59-62: Add references here. L. 362 and 364: add “the” distance, “the” main effect. L. 390: in both corpora (no “the”) L. 480: repetition of “language”’.
Does the review include a summary of the strengths of the paper?
yes
This paper use methods relying on spelling, using character embeddings, to detect sound change across time. They rely on three datasets to test the effectiveness of their methods: a synthetic one in a language having a simplified phonotactic system, one in Danish with synthetically generated sound change, and a real one in Danish, with known sound change. They show that they can successfully detect the selected sound changes in the datasets, and identify precisely the context of the change. 1) The main issue in this paper is that you only show that you can successfully detect known sound changes, and that you don’t spuriously detect them when they’re absent, thanks to the control corpus. But you might detect a lot of changes in characters embeddings using your methods, that are not related to sound change. Using your system in an exploratory fashion, to detect all possible changes and categorize them, would greatly strengthen the paper. As it is, to my understanding, there is no way to be sure that the character embeddings do not lead to detecting a lot of changes independent from phonology, and this is a major issue. Moreover, the link between spelling and sound change is not straightforward to me and should be more clearly justified. 2) Semantic and phonological change across time should indeed take a logistic shape, as you describe in your description; your experiments also seem to show that the change in the Geo corpus is not linear. Why did you stick to a linear shape? Shoemark et al (2019) that you cite also tried a logarithmic shape, you could try it too. 3) There is a large amount of work on semantic change using contextualized embeddings and pre-trained language models, that you do not evoke in your related works (from Mario Giulianelli, Matej Martinc…). 4) The formalism you use, a --> b / c, should be introduced more clearly from the beginning. Especially since you use it as early as in the summary, where it can’t be comprehended without prior knowledge of this formalism. The explanation at line 190 might benefit from a scheme. Similarly, some words like “plosive” might benefit from a short definition (even in a footnote) for readers not familiar with the domain. Nor clear how the Bin:Control effect is computed. Globally, all metrics described in Section 5 would benefit from equations. Overall, the localization of Tables and Figures in the paper would be more optimal to ease the reading. Tables 2 to 4 look strange to me without lines. L.32: Choose only one between “since” and “however” L59-62: Add references here. L. 362 and 364: add “the” distance, “the” main effect. L. 390: in both corpora (no “the”) L. 480: repetition of “language”’.
Does the review include a summary of the strengths of the paper?
no
This paper use methods relying on spelling, using character embeddings, to detect sound change across time. They rely on three datasets to test the effectiveness of their methods: a synthetic one in a language having a simplified phonotactic system, one in Danish with synthetically generated sound change, and a real one in Danish, with known sound change. They show that they can successfully detect the selected sound changes in the datasets, and identify precisely the context of the change. The paper is globally nice to read and introduces an interesting idea. The related works section is particularly well-written, with the right level of detail, even though some references are missing. The experimental setup is sound and clear; in particular, the use of a control dataset is appreciated. 1) The main issue in this paper is that you only show that you can successfully detect known sound changes, and that you don’t spuriously detect them when they’re absent, thanks to the control corpus. But you might detect a lot of changes in characters embeddings using your methods, that are not related to sound change. Using your system in an exploratory fashion, to detect all possible changes and categorize them, would greatly strengthen the paper. As it is, to my understanding, there is no way to be sure that the character embeddings do not lead to detecting a lot of changes independent from phonology, and this is a major issue. Moreover, the link between spelling and sound change is not straightforward to me and should be more clearly justified. 2) Semantic and phonological change across time should indeed take a logistic shape, as you describe in your description; your experiments also seem to show that the change in the Geo corpus is not linear. Why did you stick to a linear shape? Shoemark et al (2019) that you cite also tried a logarithmic shape, you could try it too. 3) There is a large amount of work on semantic change using contextualized embeddings and pre-trained language models, that you do not evoke in your related works (from Mario Giulianelli, Matej Martinc…). 4) The formalism you use, a --> b / c, should be introduced more clearly from the beginning. Especially since you use it as early as in the summary, where it can’t be comprehended without prior knowledge of this formalism. The explanation at line 190 might benefit from a scheme. Similarly, some words like “plosive” might benefit from a short definition (even in a footnote) for readers not familiar with the domain. Nor clear how the Bin:Control effect is computed. Globally, all metrics described in Section 5 would benefit from equations. Overall, the localization of Tables and Figures in the paper would be more optimal to ease the reading. Tables 2 to 4 look strange to me without lines. L.32: Choose only one between “since” and “however” L59-62: Add references here. L. 362 and 364: add “the” distance, “the” main effect. L. 390: in both corpora (no “the”) L. 480: repetition of “language”’.
Does the review include a summary of the weaknesses of the paper?
yes
This paper use methods relying on spelling, using character embeddings, to detect sound change across time. They rely on three datasets to test the effectiveness of their methods: a synthetic one in a language having a simplified phonotactic system, one in Danish with synthetically generated sound change, and a real one in Danish, with known sound change. They show that they can successfully detect the selected sound changes in the datasets, and identify precisely the context of the change. The paper is globally nice to read and introduces an interesting idea. The related works section is particularly well-written, with the right level of detail, even though some references are missing. The experimental setup is sound and clear; in particular, the use of a control dataset is appreciated. Nor clear how the Bin:Control effect is computed. Globally, all metrics described in Section 5 would benefit from equations. Overall, the localization of Tables and Figures in the paper would be more optimal to ease the reading. Tables 2 to 4 look strange to me without lines. L.32: Choose only one between “since” and “however” L59-62: Add references here. L. 362 and 364: add “the” distance, “the” main effect. L. 390: in both corpora (no “the”) L. 480: repetition of “language”’.
Does the review include a summary of the weaknesses of the paper?
no
This paper use methods relying on spelling, using character embeddings, to detect sound change across time. They rely on three datasets to test the effectiveness of their methods: a synthetic one in a language having a simplified phonotactic system, one in Danish with synthetically generated sound change, and a real one in Danish, with known sound change. They show that they can successfully detect the selected sound changes in the datasets, and identify precisely the context of the change. The paper is globally nice to read and introduces an interesting idea. The related works section is particularly well-written, with the right level of detail, even though some references are missing. The experimental setup is sound and clear; in particular, the use of a control dataset is appreciated. 1) The main issue in this paper is that you only show that you can successfully detect known sound changes, and that you don’t spuriously detect them when they’re absent, thanks to the control corpus. But you might detect a lot of changes in characters embeddings using your methods, that are not related to sound change. Using your system in an exploratory fashion, to detect all possible changes and categorize them, would greatly strengthen the paper. As it is, to my understanding, there is no way to be sure that the character embeddings do not lead to detecting a lot of changes independent from phonology, and this is a major issue. Moreover, the link between spelling and sound change is not straightforward to me and should be more clearly justified. 2) Semantic and phonological change across time should indeed take a logistic shape, as you describe in your description; your experiments also seem to show that the change in the Geo corpus is not linear. Why did you stick to a linear shape? Shoemark et al (2019) that you cite also tried a logarithmic shape, you could try it too. 3) There is a large amount of work on semantic change using contextualized embeddings and pre-trained language models, that you do not evoke in your related works (from Mario Giulianelli, Matej Martinc…). 4) The formalism you use, a --> b / c, should be introduced more clearly from the beginning. Especially since you use it as early as in the summary, where it can’t be comprehended without prior knowledge of this formalism. The explanation at line 190 might benefit from a scheme. Similarly, some words like “plosive” might benefit from a short definition (even in a footnote) for readers not familiar with the domain. Nor clear how the Bin:Control effect is computed. Globally, all metrics described in Section 5 would benefit from equations. Overall, the localization of Tables and Figures in the paper would be more optimal to ease the reading. Tables 2 to 4 look strange to me without lines. L.32: Choose only one between “since” and “however” L59-62: Add references here. L. 362 and 364: add “the” distance, “the” main effect. L. 390: in both corpora (no “the”) L. 480: repetition of “language”’.
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper use methods relying on spelling, using character embeddings, to detect sound change across time. They rely on three datasets to test the effectiveness of their methods: a synthetic one in a language having a simplified phonotactic system, one in Danish with synthetically generated sound change, and a real one in Danish, with known sound change. They show that they can successfully detect the selected sound changes in the datasets, and identify precisely the context of the change. The paper is globally nice to read and introduces an interesting idea. The related works section is particularly well-written, with the right level of detail, even though some references are missing. The experimental setup is sound and clear; in particular, the use of a control dataset is appreciated. 1) The main issue in this paper is that you only show that you can successfully detect known sound changes, and that you don’t spuriously detect them when they’re absent, thanks to the control corpus. But you might detect a lot of changes in characters embeddings using your methods, that are not related to sound change. Using your system in an exploratory fashion, to detect all possible changes and categorize them, would greatly strengthen the paper. As it is, to my understanding, there is no way to be sure that the character embeddings do not lead to detecting a lot of changes independent from phonology, and this is a major issue. Moreover, the link between spelling and sound change is not straightforward to me and should be more clearly justified. 2) Semantic and phonological change across time should indeed take a logistic shape, as you describe in your description; your experiments also seem to show that the change in the Geo corpus is not linear. Why did you stick to a linear shape? Shoemark et al (2019) that you cite also tried a logarithmic shape, you could try it too. 3) There is a large amount of work on semantic change using contextualized embeddings and pre-trained language models, that you do not evoke in your related works (from Mario Giulianelli, Matej Martinc…). 4) The formalism you use, a --> b / c, should be introduced more clearly from the beginning. Especially since you use it as early as in the summary, where it can’t be comprehended without prior knowledge of this formalism. The explanation at line 190 might benefit from a scheme. Similarly, some words like “plosive” might benefit from a short definition (even in a footnote) for readers not familiar with the domain.
Does the review mention any comments, suggestions or typos that the author should address?
no
The paper presents a new corpus with anaphoric relations, annotating recipes for coreference and bridging. The utility of the corpus is demonstrated by training an ML classifier. The new corpus can benefit others studying anaphora beyond identity. The paper is well written. - Transfer learning does not seem to bring significant improvements. Looking at the variance, the results with and without transfer learning overlap. Can you please clarify/expand the evaluation methodology, e.g., given gold anaphors, do you only evaluate the antecedents? At least you should state why the standard evaluation metrics fail in this case, e.g., they cannot score plurals, i.e., split-antecedent references. Typo: line 513 'pretrained'
Does the review include a short summary of the paper?
yes
The new corpus can benefit others studying anaphora beyond identity. The paper is well written. - Transfer learning does not seem to bring significant improvements. Looking at the variance, the results with and without transfer learning overlap. Can you please clarify/expand the evaluation methodology, e.g., given gold anaphors, do you only evaluate the antecedents? At least you should state why the standard evaluation metrics fail in this case, e.g., they cannot score plurals, i.e., split-antecedent references. Typo: line 513 'pretrained'
Does the review include a short summary of the paper?
no
The paper presents a new corpus with anaphoric relations, annotating recipes for coreference and bridging. The utility of the corpus is demonstrated by training an ML classifier. The new corpus can benefit others studying anaphora beyond identity. The paper is well written. - Transfer learning does not seem to bring significant improvements. Looking at the variance, the results with and without transfer learning overlap. Can you please clarify/expand the evaluation methodology, e.g., given gold anaphors, do you only evaluate the antecedents? At least you should state why the standard evaluation metrics fail in this case, e.g., they cannot score plurals, i.e., split-antecedent references. Typo: line 513 'pretrained'
Does the review include a summary of the strengths of the paper?
yes
The paper presents a new corpus with anaphoric relations, annotating recipes for coreference and bridging. The utility of the corpus is demonstrated by training an ML classifier. - Transfer learning does not seem to bring significant improvements. Looking at the variance, the results with and without transfer learning overlap. Can you please clarify/expand the evaluation methodology, e.g., given gold anaphors, do you only evaluate the antecedents? At least you should state why the standard evaluation metrics fail in this case, e.g., they cannot score plurals, i.e., split-antecedent references. Typo: line 513 'pretrained'
Does the review include a summary of the strengths of the paper?
no
The paper presents a new corpus with anaphoric relations, annotating recipes for coreference and bridging. The utility of the corpus is demonstrated by training an ML classifier. The new corpus can benefit others studying anaphora beyond identity. The paper is well written. - Transfer learning does not seem to bring significant improvements. Looking at the variance, the results with and without transfer learning overlap. Can you please clarify/expand the evaluation methodology, e.g., given gold anaphors, do you only evaluate the antecedents? At least you should state why the standard evaluation metrics fail in this case, e.g., they cannot score plurals, i.e., split-antecedent references. Typo: line 513 'pretrained'
Does the review include a summary of the weaknesses of the paper?
yes
The paper presents a new corpus with anaphoric relations, annotating recipes for coreference and bridging. The utility of the corpus is demonstrated by training an ML classifier. The new corpus can benefit others studying anaphora beyond identity. The paper is well written. Can you please clarify/expand the evaluation methodology, e.g., given gold anaphors, do you only evaluate the antecedents? At least you should state why the standard evaluation metrics fail in this case, e.g., they cannot score plurals, i.e., split-antecedent references. Typo: line 513 'pretrained'
Does the review include a summary of the weaknesses of the paper?
no
The paper presents a new corpus with anaphoric relations, annotating recipes for coreference and bridging. The utility of the corpus is demonstrated by training an ML classifier. The new corpus can benefit others studying anaphora beyond identity. The paper is well written. - Transfer learning does not seem to bring significant improvements. Looking at the variance, the results with and without transfer learning overlap. Can you please clarify/expand the evaluation methodology, e.g., given gold anaphors, do you only evaluate the antecedents? At least you should state why the standard evaluation metrics fail in this case, e.g., they cannot score plurals, i.e., split-antecedent references. Typo: line 513 'pretrained'
Does the review mention any comments, suggestions or typos that the author should address?
yes
The paper presents a new corpus with anaphoric relations, annotating recipes for coreference and bridging. The utility of the corpus is demonstrated by training an ML classifier. The new corpus can benefit others studying anaphora beyond identity. The paper is well written. - Transfer learning does not seem to bring significant improvements. Looking at the variance, the results with and without transfer learning overlap.
Does the review mention any comments, suggestions or typos that the author should address?
no
The main point of the paper is a corpus of baking recipes annotated for coreference and bridging relations (both 1-to-1 and many-to-1 transformations of ingredients) and experiments to learn coreference resolution on this genre by transfer from either just unsupervised pretraining (GloVe, ELMo) or including supervised training on an existing larger chemical corpus and subsequent transfer using a state-of-the-art model for span linking coreference. The authors show that both joint training of coreference and bridging and transfer learning from the larger annotated corpus help the performance of the model. - the authors present a new corpus with coreference and bridging annotation which provides an interesting dataset for exploring this task - the authors provide a nice survey of the work especially concerning bridging corpora and resolution - the authors present some experiments showing that transfer learning can be beneficial for learning bridging resolution - while the focus of the paper is on bridging resolution and the authors claim that transfer learning allows the model to incorporate procedural knowledge, this claim is not backed up by any kind of error analysis or qualitative examples that would show what the model learns exactly - the part that is novel wrt Feng et al 2021 is pretty much just the extension towards ingredient transformation and the transfer learning study (which is however less elaborate as the one in Xia&vanDurme) - while the models are reasonably state of the art (based on ELMo and in line with Lee et al 2018 and Feng et al 2021), newer research such as the Xia&vanDurme paper use more recent language models such as XLM-R as the base model, which may lead to better performance overall Xia and van Durme has been published at EMNLP 2021 and can be cited as such. Section 4 413: "For evaluation, we use precision, recall and F1" - since MUC and B3 measures also define P/R/F1, it's good to mention here that it's P/R/F1 on individual (coreference or bridging) links. Section 6 555: "Table 4 shows ..." - the structure of Table 3 and Table 4 isn't very intuitive, since there is partial overlap in conditions and metrics but essentially it's one big collection of results (baseline, +joint, +transfer, +joint+transfer). It's not clear to me why the "overall" figure isn't computed based on the separate baseline classifiers
Does the review include a short summary of the paper?
yes
- the authors present a new corpus with coreference and bridging annotation which provides an interesting dataset for exploring this task - the authors provide a nice survey of the work especially concerning bridging corpora and resolution - the authors present some experiments showing that transfer learning can be beneficial for learning bridging resolution - while the focus of the paper is on bridging resolution and the authors claim that transfer learning allows the model to incorporate procedural knowledge, this claim is not backed up by any kind of error analysis or qualitative examples that would show what the model learns exactly - the part that is novel wrt Feng et al 2021 is pretty much just the extension towards ingredient transformation and the transfer learning study (which is however less elaborate as the one in Xia&vanDurme) - while the models are reasonably state of the art (based on ELMo and in line with Lee et al 2018 and Feng et al 2021), newer research such as the Xia&vanDurme paper use more recent language models such as XLM-R as the base model, which may lead to better performance overall Xia and van Durme has been published at EMNLP 2021 and can be cited as such. Section 4 413: "For evaluation, we use precision, recall and F1" - since MUC and B3 measures also define P/R/F1, it's good to mention here that it's P/R/F1 on individual (coreference or bridging) links. Section 6 555: "Table 4 shows ..." - the structure of Table 3 and Table 4 isn't very intuitive, since there is partial overlap in conditions and metrics but essentially it's one big collection of results (baseline, +joint, +transfer, +joint+transfer). It's not clear to me why the "overall" figure isn't computed based on the separate baseline classifiers
Does the review include a short summary of the paper?
no
The main point of the paper is a corpus of baking recipes annotated for coreference and bridging relations (both 1-to-1 and many-to-1 transformations of ingredients) and experiments to learn coreference resolution on this genre by transfer from either just unsupervised pretraining (GloVe, ELMo) or including supervised training on an existing larger chemical corpus and subsequent transfer using a state-of-the-art model for span linking coreference. The authors show that both joint training of coreference and bridging and transfer learning from the larger annotated corpus help the performance of the model. - the authors present a new corpus with coreference and bridging annotation which provides an interesting dataset for exploring this task - the authors provide a nice survey of the work especially concerning bridging corpora and resolution - the authors present some experiments showing that transfer learning can be beneficial for learning bridging resolution - while the focus of the paper is on bridging resolution and the authors claim that transfer learning allows the model to incorporate procedural knowledge, this claim is not backed up by any kind of error analysis or qualitative examples that would show what the model learns exactly - the part that is novel wrt Feng et al 2021 is pretty much just the extension towards ingredient transformation and the transfer learning study (which is however less elaborate as the one in Xia&vanDurme) - while the models are reasonably state of the art (based on ELMo and in line with Lee et al 2018 and Feng et al 2021), newer research such as the Xia&vanDurme paper use more recent language models such as XLM-R as the base model, which may lead to better performance overall Xia and van Durme has been published at EMNLP 2021 and can be cited as such. Section 4 413: "For evaluation, we use precision, recall and F1" - since MUC and B3 measures also define P/R/F1, it's good to mention here that it's P/R/F1 on individual (coreference or bridging) links. Section 6 555: "Table 4 shows ..." - the structure of Table 3 and Table 4 isn't very intuitive, since there is partial overlap in conditions and metrics but essentially it's one big collection of results (baseline, +joint, +transfer, +joint+transfer). It's not clear to me why the "overall" figure isn't computed based on the separate baseline classifiers
Does the review include a summary of the strengths of the paper?
yes
The main point of the paper is a corpus of baking recipes annotated for coreference and bridging relations (both 1-to-1 and many-to-1 transformations of ingredients) and experiments to learn coreference resolution on this genre by transfer from either just unsupervised pretraining (GloVe, ELMo) or including supervised training on an existing larger chemical corpus and subsequent transfer using a state-of-the-art model for span linking coreference. The authors show that both joint training of coreference and bridging and transfer learning from the larger annotated corpus help the performance of the model. - while the focus of the paper is on bridging resolution and the authors claim that transfer learning allows the model to incorporate procedural knowledge, this claim is not backed up by any kind of error analysis or qualitative examples that would show what the model learns exactly - the part that is novel wrt Feng et al 2021 is pretty much just the extension towards ingredient transformation and the transfer learning study (which is however less elaborate as the one in Xia&vanDurme) - while the models are reasonably state of the art (based on ELMo and in line with Lee et al 2018 and Feng et al 2021), newer research such as the Xia&vanDurme paper use more recent language models such as XLM-R as the base model, which may lead to better performance overall Xia and van Durme has been published at EMNLP 2021 and can be cited as such. Section 4 413: "For evaluation, we use precision, recall and F1" - since MUC and B3 measures also define P/R/F1, it's good to mention here that it's P/R/F1 on individual (coreference or bridging) links. Section 6 555: "Table 4 shows ..." - the structure of Table 3 and Table 4 isn't very intuitive, since there is partial overlap in conditions and metrics but essentially it's one big collection of results (baseline, +joint, +transfer, +joint+transfer). It's not clear to me why the "overall" figure isn't computed based on the separate baseline classifiers
Does the review include a summary of the strengths of the paper?
no
The main point of the paper is a corpus of baking recipes annotated for coreference and bridging relations (both 1-to-1 and many-to-1 transformations of ingredients) and experiments to learn coreference resolution on this genre by transfer from either just unsupervised pretraining (GloVe, ELMo) or including supervised training on an existing larger chemical corpus and subsequent transfer using a state-of-the-art model for span linking coreference. The authors show that both joint training of coreference and bridging and transfer learning from the larger annotated corpus help the performance of the model. - the authors present a new corpus with coreference and bridging annotation which provides an interesting dataset for exploring this task - the authors provide a nice survey of the work especially concerning bridging corpora and resolution - the authors present some experiments showing that transfer learning can be beneficial for learning bridging resolution - while the focus of the paper is on bridging resolution and the authors claim that transfer learning allows the model to incorporate procedural knowledge, this claim is not backed up by any kind of error analysis or qualitative examples that would show what the model learns exactly - the part that is novel wrt Feng et al 2021 is pretty much just the extension towards ingredient transformation and the transfer learning study (which is however less elaborate as the one in Xia&vanDurme) - while the models are reasonably state of the art (based on ELMo and in line with Lee et al 2018 and Feng et al 2021), newer research such as the Xia&vanDurme paper use more recent language models such as XLM-R as the base model, which may lead to better performance overall Xia and van Durme has been published at EMNLP 2021 and can be cited as such. Section 4 413: "For evaluation, we use precision, recall and F1" - since MUC and B3 measures also define P/R/F1, it's good to mention here that it's P/R/F1 on individual (coreference or bridging) links. Section 6 555: "Table 4 shows ..." - the structure of Table 3 and Table 4 isn't very intuitive, since there is partial overlap in conditions and metrics but essentially it's one big collection of results (baseline, +joint, +transfer, +joint+transfer). It's not clear to me why the "overall" figure isn't computed based on the separate baseline classifiers
Does the review include a summary of the weaknesses of the paper?
yes
The main point of the paper is a corpus of baking recipes annotated for coreference and bridging relations (both 1-to-1 and many-to-1 transformations of ingredients) and experiments to learn coreference resolution on this genre by transfer from either just unsupervised pretraining (GloVe, ELMo) or including supervised training on an existing larger chemical corpus and subsequent transfer using a state-of-the-art model for span linking coreference. The authors show that both joint training of coreference and bridging and transfer learning from the larger annotated corpus help the performance of the model. - the authors present a new corpus with coreference and bridging annotation which provides an interesting dataset for exploring this task - the authors provide a nice survey of the work especially concerning bridging corpora and resolution - the authors present some experiments showing that transfer learning can be beneficial for learning bridging resolution Xia and van Durme has been published at EMNLP 2021 and can be cited as such. Section 4 413: "For evaluation, we use precision, recall and F1" - since MUC and B3 measures also define P/R/F1, it's good to mention here that it's P/R/F1 on individual (coreference or bridging) links. Section 6 555: "Table 4 shows ..." - the structure of Table 3 and Table 4 isn't very intuitive, since there is partial overlap in conditions and metrics but essentially it's one big collection of results (baseline, +joint, +transfer, +joint+transfer). It's not clear to me why the "overall" figure isn't computed based on the separate baseline classifiers
Does the review include a summary of the weaknesses of the paper?
no
The main point of the paper is a corpus of baking recipes annotated for coreference and bridging relations (both 1-to-1 and many-to-1 transformations of ingredients) and experiments to learn coreference resolution on this genre by transfer from either just unsupervised pretraining (GloVe, ELMo) or including supervised training on an existing larger chemical corpus and subsequent transfer using a state-of-the-art model for span linking coreference. The authors show that both joint training of coreference and bridging and transfer learning from the larger annotated corpus help the performance of the model. - the authors present a new corpus with coreference and bridging annotation which provides an interesting dataset for exploring this task - the authors provide a nice survey of the work especially concerning bridging corpora and resolution - the authors present some experiments showing that transfer learning can be beneficial for learning bridging resolution - while the focus of the paper is on bridging resolution and the authors claim that transfer learning allows the model to incorporate procedural knowledge, this claim is not backed up by any kind of error analysis or qualitative examples that would show what the model learns exactly - the part that is novel wrt Feng et al 2021 is pretty much just the extension towards ingredient transformation and the transfer learning study (which is however less elaborate as the one in Xia&vanDurme) - while the models are reasonably state of the art (based on ELMo and in line with Lee et al 2018 and Feng et al 2021), newer research such as the Xia&vanDurme paper use more recent language models such as XLM-R as the base model, which may lead to better performance overall Xia and van Durme has been published at EMNLP 2021 and can be cited as such. Section 4 413: "For evaluation, we use precision, recall and F1" - since MUC and B3 measures also define P/R/F1, it's good to mention here that it's P/R/F1 on individual (coreference or bridging) links. Section 6 555: "Table 4 shows ..." - the structure of Table 3 and Table 4 isn't very intuitive, since there is partial overlap in conditions and metrics but essentially it's one big collection of results (baseline, +joint, +transfer, +joint+transfer). It's not clear to me why the "overall" figure isn't computed based on the separate baseline classifiers
Does the review mention any comments, suggestions or typos that the author should address?
yes
The main point of the paper is a corpus of baking recipes annotated for coreference and bridging relations (both 1-to-1 and many-to-1 transformations of ingredients) and experiments to learn coreference resolution on this genre by transfer from either just unsupervised pretraining (GloVe, ELMo) or including supervised training on an existing larger chemical corpus and subsequent transfer using a state-of-the-art model for span linking coreference. The authors show that both joint training of coreference and bridging and transfer learning from the larger annotated corpus help the performance of the model. - the authors present a new corpus with coreference and bridging annotation which provides an interesting dataset for exploring this task - the authors provide a nice survey of the work especially concerning bridging corpora and resolution - the authors present some experiments showing that transfer learning can be beneficial for learning bridging resolution - while the focus of the paper is on bridging resolution and the authors claim that transfer learning allows the model to incorporate procedural knowledge, this claim is not backed up by any kind of error analysis or qualitative examples that would show what the model learns exactly - the part that is novel wrt Feng et al 2021 is pretty much just the extension towards ingredient transformation and the transfer learning study (which is however less elaborate as the one in Xia&vanDurme) - while the models are reasonably state of the art (based on ELMo and in line with Lee et al 2018 and Feng et al 2021), newer research such as the Xia&vanDurme paper use more recent language models such as XLM-R as the base model, which may lead to better performance overall
Does the review mention any comments, suggestions or typos that the author should address?
no
The paper looks at distilling models for abstractive summarization. The paper makes the claim that the most obvious way to do knowledge distillation with seq2seq models (just using the generated output from the teacher model as a target output for the student; also called pseudo-labeling) is problematic. The problems with pseudo-labeling relate to known but important-to-restate problems with current abstractive summarization models in general: they generally copy from the target document, and they generally only copy the leading line of the target document. The paper attributes both of these problems, at least in part, to the attention distribution for the teacher models being too sharp, often focusing most of its weight on the next available word. In order to counteract these effects, the paper proposes to raise the temperature for the attention softmax in order to smooth the attention distribution of the teacher model. The temperature scaling modifications are evaluated on three standard datasets (CNN/DM, XSum, and NYT), with slight improvements in distillation performance found across all three datasets. - The paper is easy to read and follow - A lot of detail has been provided in the paper, making it quite easy for someone to be able to reproduce. - Evaluations are done on reasonable datasets, against reasonable baselines, and show promising results. Human evaluations are also provided - Ablations and additional comparisons with obvious baselines (like just using sampling in the teacher model for generating pseudo-labels) are done. - There is unfortunately not a whole lot of new content in this paper. The proposed method really just boils down to playing with the temperature settings for the attention of the teacher model, something that is usually just considered a hyperparameter. While I have no problem with what is currently in the paper, I am just not sure that this is enough to form a long paper proposing a new method. I think if this paper couched itself as more of an 'analysis/experimental' type of paper, expanding its analysis even further (and maybe expanding its scope to other tasks besides just abstractive summarization), it could be a solid contribution. Unfortunately, the paper is presented more as a 'methods' paper, with the proposed new method simply being a change in hyperparameters. - What hypothesis test are you using for computing the significance in Tables 2 and 3? - Are there other tasks where temperature smoothing could be beneficial?
Does the review include a short summary of the paper?
yes
- The paper is easy to read and follow - A lot of detail has been provided in the paper, making it quite easy for someone to be able to reproduce. - Evaluations are done on reasonable datasets, against reasonable baselines, and show promising results. Human evaluations are also provided - Ablations and additional comparisons with obvious baselines (like just using sampling in the teacher model for generating pseudo-labels) are done. - There is unfortunately not a whole lot of new content in this paper. The proposed method really just boils down to playing with the temperature settings for the attention of the teacher model, something that is usually just considered a hyperparameter. While I have no problem with what is currently in the paper, I am just not sure that this is enough to form a long paper proposing a new method. I think if this paper couched itself as more of an 'analysis/experimental' type of paper, expanding its analysis even further (and maybe expanding its scope to other tasks besides just abstractive summarization), it could be a solid contribution. Unfortunately, the paper is presented more as a 'methods' paper, with the proposed new method simply being a change in hyperparameters. - What hypothesis test are you using for computing the significance in Tables 2 and 3? - Are there other tasks where temperature smoothing could be beneficial?
Does the review include a short summary of the paper?
no
The paper looks at distilling models for abstractive summarization. The paper makes the claim that the most obvious way to do knowledge distillation with seq2seq models (just using the generated output from the teacher model as a target output for the student; also called pseudo-labeling) is problematic. The problems with pseudo-labeling relate to known but important-to-restate problems with current abstractive summarization models in general: they generally copy from the target document, and they generally only copy the leading line of the target document. The paper attributes both of these problems, at least in part, to the attention distribution for the teacher models being too sharp, often focusing most of its weight on the next available word. In order to counteract these effects, the paper proposes to raise the temperature for the attention softmax in order to smooth the attention distribution of the teacher model. The temperature scaling modifications are evaluated on three standard datasets (CNN/DM, XSum, and NYT), with slight improvements in distillation performance found across all three datasets. - The paper is easy to read and follow - A lot of detail has been provided in the paper, making it quite easy for someone to be able to reproduce. - Evaluations are done on reasonable datasets, against reasonable baselines, and show promising results. Human evaluations are also provided - Ablations and additional comparisons with obvious baselines (like just using sampling in the teacher model for generating pseudo-labels) are done. - There is unfortunately not a whole lot of new content in this paper. The proposed method really just boils down to playing with the temperature settings for the attention of the teacher model, something that is usually just considered a hyperparameter. While I have no problem with what is currently in the paper, I am just not sure that this is enough to form a long paper proposing a new method. I think if this paper couched itself as more of an 'analysis/experimental' type of paper, expanding its analysis even further (and maybe expanding its scope to other tasks besides just abstractive summarization), it could be a solid contribution. Unfortunately, the paper is presented more as a 'methods' paper, with the proposed new method simply being a change in hyperparameters. - What hypothesis test are you using for computing the significance in Tables 2 and 3? - Are there other tasks where temperature smoothing could be beneficial?
Does the review include a summary of the strengths of the paper?
yes
The paper looks at distilling models for abstractive summarization. The paper makes the claim that the most obvious way to do knowledge distillation with seq2seq models (just using the generated output from the teacher model as a target output for the student; also called pseudo-labeling) is problematic. The problems with pseudo-labeling relate to known but important-to-restate problems with current abstractive summarization models in general: they generally copy from the target document, and they generally only copy the leading line of the target document. The paper attributes both of these problems, at least in part, to the attention distribution for the teacher models being too sharp, often focusing most of its weight on the next available word. In order to counteract these effects, the paper proposes to raise the temperature for the attention softmax in order to smooth the attention distribution of the teacher model. The temperature scaling modifications are evaluated on three standard datasets (CNN/DM, XSum, and NYT), with slight improvements in distillation performance found across all three datasets. - There is unfortunately not a whole lot of new content in this paper. The proposed method really just boils down to playing with the temperature settings for the attention of the teacher model, something that is usually just considered a hyperparameter. While I have no problem with what is currently in the paper, I am just not sure that this is enough to form a long paper proposing a new method. I think if this paper couched itself as more of an 'analysis/experimental' type of paper, expanding its analysis even further (and maybe expanding its scope to other tasks besides just abstractive summarization), it could be a solid contribution. Unfortunately, the paper is presented more as a 'methods' paper, with the proposed new method simply being a change in hyperparameters. - What hypothesis test are you using for computing the significance in Tables 2 and 3? - Are there other tasks where temperature smoothing could be beneficial?
Does the review include a summary of the strengths of the paper?
no
The paper looks at distilling models for abstractive summarization. The paper makes the claim that the most obvious way to do knowledge distillation with seq2seq models (just using the generated output from the teacher model as a target output for the student; also called pseudo-labeling) is problematic. The problems with pseudo-labeling relate to known but important-to-restate problems with current abstractive summarization models in general: they generally copy from the target document, and they generally only copy the leading line of the target document. The paper attributes both of these problems, at least in part, to the attention distribution for the teacher models being too sharp, often focusing most of its weight on the next available word. In order to counteract these effects, the paper proposes to raise the temperature for the attention softmax in order to smooth the attention distribution of the teacher model. The temperature scaling modifications are evaluated on three standard datasets (CNN/DM, XSum, and NYT), with slight improvements in distillation performance found across all three datasets. - The paper is easy to read and follow - A lot of detail has been provided in the paper, making it quite easy for someone to be able to reproduce. - Evaluations are done on reasonable datasets, against reasonable baselines, and show promising results. Human evaluations are also provided - Ablations and additional comparisons with obvious baselines (like just using sampling in the teacher model for generating pseudo-labels) are done. - There is unfortunately not a whole lot of new content in this paper. The proposed method really just boils down to playing with the temperature settings for the attention of the teacher model, something that is usually just considered a hyperparameter. While I have no problem with what is currently in the paper, I am just not sure that this is enough to form a long paper proposing a new method. I think if this paper couched itself as more of an 'analysis/experimental' type of paper, expanding its analysis even further (and maybe expanding its scope to other tasks besides just abstractive summarization), it could be a solid contribution. Unfortunately, the paper is presented more as a 'methods' paper, with the proposed new method simply being a change in hyperparameters. - What hypothesis test are you using for computing the significance in Tables 2 and 3? - Are there other tasks where temperature smoothing could be beneficial?
Does the review include a summary of the weaknesses of the paper?
yes
The paper looks at distilling models for abstractive summarization. The paper makes the claim that the most obvious way to do knowledge distillation with seq2seq models (just using the generated output from the teacher model as a target output for the student; also called pseudo-labeling) is problematic. The problems with pseudo-labeling relate to known but important-to-restate problems with current abstractive summarization models in general: they generally copy from the target document, and they generally only copy the leading line of the target document. The paper attributes both of these problems, at least in part, to the attention distribution for the teacher models being too sharp, often focusing most of its weight on the next available word. In order to counteract these effects, the paper proposes to raise the temperature for the attention softmax in order to smooth the attention distribution of the teacher model. The temperature scaling modifications are evaluated on three standard datasets (CNN/DM, XSum, and NYT), with slight improvements in distillation performance found across all three datasets. - The paper is easy to read and follow - A lot of detail has been provided in the paper, making it quite easy for someone to be able to reproduce. - Evaluations are done on reasonable datasets, against reasonable baselines, and show promising results. Human evaluations are also provided - Ablations and additional comparisons with obvious baselines (like just using sampling in the teacher model for generating pseudo-labels) are done. - What hypothesis test are you using for computing the significance in Tables 2 and 3? - Are there other tasks where temperature smoothing could be beneficial?
Does the review include a summary of the weaknesses of the paper?
no
The paper looks at distilling models for abstractive summarization. The paper makes the claim that the most obvious way to do knowledge distillation with seq2seq models (just using the generated output from the teacher model as a target output for the student; also called pseudo-labeling) is problematic. The problems with pseudo-labeling relate to known but important-to-restate problems with current abstractive summarization models in general: they generally copy from the target document, and they generally only copy the leading line of the target document. The paper attributes both of these problems, at least in part, to the attention distribution for the teacher models being too sharp, often focusing most of its weight on the next available word. In order to counteract these effects, the paper proposes to raise the temperature for the attention softmax in order to smooth the attention distribution of the teacher model. The temperature scaling modifications are evaluated on three standard datasets (CNN/DM, XSum, and NYT), with slight improvements in distillation performance found across all three datasets. - The paper is easy to read and follow - A lot of detail has been provided in the paper, making it quite easy for someone to be able to reproduce. - Evaluations are done on reasonable datasets, against reasonable baselines, and show promising results. Human evaluations are also provided - Ablations and additional comparisons with obvious baselines (like just using sampling in the teacher model for generating pseudo-labels) are done. - There is unfortunately not a whole lot of new content in this paper. The proposed method really just boils down to playing with the temperature settings for the attention of the teacher model, something that is usually just considered a hyperparameter. While I have no problem with what is currently in the paper, I am just not sure that this is enough to form a long paper proposing a new method. I think if this paper couched itself as more of an 'analysis/experimental' type of paper, expanding its analysis even further (and maybe expanding its scope to other tasks besides just abstractive summarization), it could be a solid contribution. Unfortunately, the paper is presented more as a 'methods' paper, with the proposed new method simply being a change in hyperparameters. - What hypothesis test are you using for computing the significance in Tables 2 and 3? - Are there other tasks where temperature smoothing could be beneficial?
Does the review mention any comments, suggestions or typos that the author should address?
yes
The paper looks at distilling models for abstractive summarization. The paper makes the claim that the most obvious way to do knowledge distillation with seq2seq models (just using the generated output from the teacher model as a target output for the student; also called pseudo-labeling) is problematic. The problems with pseudo-labeling relate to known but important-to-restate problems with current abstractive summarization models in general: they generally copy from the target document, and they generally only copy the leading line of the target document. The paper attributes both of these problems, at least in part, to the attention distribution for the teacher models being too sharp, often focusing most of its weight on the next available word. In order to counteract these effects, the paper proposes to raise the temperature for the attention softmax in order to smooth the attention distribution of the teacher model. The temperature scaling modifications are evaluated on three standard datasets (CNN/DM, XSum, and NYT), with slight improvements in distillation performance found across all three datasets. - The paper is easy to read and follow - A lot of detail has been provided in the paper, making it quite easy for someone to be able to reproduce. - Evaluations are done on reasonable datasets, against reasonable baselines, and show promising results. Human evaluations are also provided - Ablations and additional comparisons with obvious baselines (like just using sampling in the teacher model for generating pseudo-labels) are done. - There is unfortunately not a whole lot of new content in this paper. The proposed method really just boils down to playing with the temperature settings for the attention of the teacher model, something that is usually just considered a hyperparameter. While I have no problem with what is currently in the paper, I am just not sure that this is enough to form a long paper proposing a new method. I think if this paper couched itself as more of an 'analysis/experimental' type of paper, expanding its analysis even further (and maybe expanding its scope to other tasks besides just abstractive summarization), it could be a solid contribution. Unfortunately, the paper is presented more as a 'methods' paper, with the proposed new method simply being a change in hyperparameters.
Does the review mention any comments, suggestions or typos that the author should address?
no
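Aside for readers: the PLATE review above turns on a single mechanism, dividing the teacher's attention logits by a temperature greater than one before the softmax so that the attention distribution used for generating pseudo-labels is smoother. The snippet below is only an illustrative sketch of temperature-scaled softmax in general, not the paper's code; the function name, array shapes, and example scores are all invented for the example.

```python
import numpy as np

def attention_weights(scores: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Softmax over attention scores with a temperature knob.

    temperature > 1 flattens (smooths) the distribution, which is the
    effect the review describes; temperature < 1 sharpens it.
    """
    scaled = scores / temperature
    scaled = scaled - scaled.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum(axis=-1, keepdims=True)

# A sharp score vector becomes noticeably flatter at a higher temperature.
scores = np.array([4.0, 1.0, 0.5, 0.2])
print(attention_weights(scores, temperature=1.0))  # most mass on the first token
print(attention_weights(scores, temperature=2.0))  # smoother distribution
```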
This paper proposes an extension of the pseudo-labeling method named PLATE for summarization distillation, extensive experiments on three datasets suggest the effectiveness of the proposed method versus vanilla sequence distillation. Ablation studies and empirical analysis reveal the performance gain may come from the diverse cross-attention distribution of the teacher model and concise and abstractive summaries generated from the teacher model. The paper is well-motivated and clearly presented; The authors extensively study the effect of Pseudo-labeling with Larger Attention TEmperature on various summarization tasks. Substantial ablation studies and empirical analyses are also provided for revealing the performance gain over baseline methods, which may help future studies understand how to conduct effective summarization distillation. The findings of the diverse cross attention pattern, the conciseness, and abstractiveness of produced summarization pseudo-labels as well as the attention focus of the teacher model help future studies develop better summarization models. From Table 6, the difference of attention pattern results in a great difference in novel-n-grams ratios and length in the generated teacher summarization, but this does not necessarily translate to a better student summarization system in Table 2, further explanation or better ablation studies might be needed to understand how the length, novel-n-grams ratio, the different dataset will affect the distillation effects. Given that this paper focuses on sequence distillation, wondering how the beam size, length penalty of the teacher model will affect the sequence distillation results? It may be good to know if a better teacher model (BART-large, PEGASUS-large) produces better pseudo-labels for summarization distillation.
Does the review include a short summary of the paper?
yes
The paper is well-motivated and clearly presented; The authors extensively study the effect of Pseudo-labeling with Larger Attention TEmperature on various summarization tasks. Substantial ablation studies and empirical analyses are also provided for revealing the performance gain over baseline methods, which may help future studies understand how to conduct effective summarization distillation. The findings of the diverse cross attention pattern, the conciseness, and abstractiveness of produced summarization pseudo-labels as well as the attention focus of the teacher model help future studies develop better summarization models. From Table 6, the difference of attention pattern results in a great difference in novel-n-grams ratios and length in the generated teacher summarization, but this does not necessarily translate to a better student summarization system in Table 2, further explanation or better ablation studies might be needed to understand how the length, novel-n-grams ratio, the different dataset will affect the distillation effects. Given that this paper focuses on sequence distillation, wondering how the beam size, length penalty of the teacher model will affect the sequence distillation results? It may be good to know if a better teacher model (BART-large, PEGASUS-large) produces better pseudo-labels for summarization distillation.
Does the review include a short summary of the paper?
no
This paper proposes an extension of the pseudo-labeling method named PLATE for summarization distillation, extensive experiments on three datasets suggest the effectiveness of the proposed method versus vanilla sequence distillation. Ablation studies and empirical analysis reveal the performance gain may come from the diverse cross-attention distribution of the teacher model and concise and abstractive summaries generated from the teacher model. The paper is well-motivated and clearly presented; The authors extensively study the effect of Pseudo-labeling with Larger Attention TEmperature on various summarization tasks. Substantial ablation studies and empirical analyses are also provided for revealing the performance gain over baseline methods, which may help future studies understand how to conduct effective summarization distillation. The findings of the diverse cross attention pattern, the conciseness, and abstractiveness of produced summarization pseudo-labels as well as the attention focus of the teacher model help future studies develop better summarization models. From Table 6, the difference of attention pattern results in a great difference in novel-n-grams ratios and length in the generated teacher summarization, but this does not necessarily translate to a better student summarization system in Table 2, further explanation or better ablation studies might be needed to understand how the length, novel-n-grams ratio, the different dataset will affect the distillation effects. Given that this paper focuses on sequence distillation, wondering how the beam size, length penalty of the teacher model will affect the sequence distillation results? It may be good to know if a better teacher model (BART-large, PEGASUS-large) produces better pseudo-labels for summarization distillation.
Does the review include a summary of the strengths of the paper?
yes
This paper proposes an extension of the pseudo-labeling method named PLATE for summarization distillation, extensive experiments on three datasets suggest the effectiveness of the proposed method versus vanilla sequence distillation. Ablation studies and empirical analysis reveal the performance gain may come from the diverse cross-attention distribution of the teacher model and concise and abstractive summaries generated from the teacher model. From Table 6, the difference of attention pattern results in a great difference in novel-n-grams ratios and length in the generated teacher summarization, but this does not necessarily translate to a better student summarization system in Table 2, further explanation or better ablation studies might be needed to understand how the length, novel-n-grams ratio, the different dataset will affect the distillation effects. Given that this paper focuses on sequence distillation, wondering how the beam size, length penalty of the teacher model will affect the sequence distillation results? It may be good to know if a better teacher model (BART-large, PEGASUS-large) produces better pseudo-labels for summarization distillation.
Does the review include a summary of the strengths of the paper?
no
This paper proposes an extension of the pseudo-labeling method named PLATE for summarization distillation, extensive experiments on three datasets suggest the effectiveness of the proposed method versus vanilla sequence distillation. Ablation studies and empirical analysis reveal the performance gain may come from the diverse cross-attention distribution of the teacher model and concise and abstractive summaries generated from the teacher model. The paper is well-motivated and clearly presented; The authors extensively study the effect of Pseudo-labeling with Larger Attention TEmperature on various summarization tasks. Substantial ablation studies and empirical analyses are also provided for revealing the performance gain over baseline methods, which may help future studies understand how to conduct effective summarization distillation. The findings of the diverse cross attention pattern, the conciseness, and abstractiveness of produced summarization pseudo-labels as well as the attention focus of the teacher model help future studies develop better summarization models. From Table 6, the difference of attention pattern results in a great difference in novel-n-grams ratios and length in the generated teacher summarization, but this does not necessarily translate to a better student summarization system in Table 2, further explanation or better ablation studies might be needed to understand how the length, novel-n-grams ratio, the different dataset will affect the distillation effects. Given that this paper focuses on sequence distillation, wondering how the beam size, length penalty of the teacher model will affect the sequence distillation results? It may be good to know if a better teacher model (BART-large, PEGASUS-large) produces better pseudo-labels for summarization distillation.
Does the review include a summary of the weaknesses of the paper?
yes
This paper proposes an extension of the pseudo-labeling method named PLATE for summarization distillation, extensive experiments on three datasets suggest the effectiveness of the proposed method versus vanilla sequence distillation. Ablation studies and empirical analysis reveal the performance gain may come from the diverse cross-attention distribution of the teacher model and concise and abstractive summaries generated from the teacher model. The paper is well-motivated and clearly presented; The authors extensively study the effect of Pseudo-labeling with Larger Attention TEmperature on various summarization tasks. Substantial ablation studies and empirical analyses are also provided for revealing the performance gain over baseline methods, which may help future studies understand how to conduct effective summarization distillation. The findings of the diverse cross attention pattern, the conciseness, and abstractiveness of produced summarization pseudo-labels as well as the attention focus of the teacher model help future studies develop better summarization models. It may be good to know if a better teacher model (BART-large, PEGASUS-large) produces better pseudo-labels for summarization distillation.
Does the review include a summary of the weaknesses of the paper?
no
This paper proposes an extension of the pseudo-labeling method named PLATE for summarization distillation, extensive experiments on three datasets suggest the effectiveness of the proposed method versus vanilla sequence distillation. Ablation studies and empirical analysis reveal the performance gain may come from the diverse cross-attention distribution of the teacher model and concise and abstractive summaries generated from the teacher model. The paper is well-motivated and clearly presented; The authors extensively study the effect of Pseudo-labeling with Larger Attention TEmperature on various summarization tasks. Substantial ablation studies and empirical analyses are also provided for revealing the performance gain over baseline methods, which may help future studies understand how to conduct effective summarization distillation. The findings of the diverse cross attention pattern, the conciseness, and abstractiveness of produced summarization pseudo-labels as well as the attention focus of the teacher model help future studies develop better summarization models. From Table 6, the difference of attention pattern results in a great difference in novel-n-grams ratios and length in the generated teacher summarization, but this does not necessarily translate to a better student summarization system in Table 2, further explanation or better ablation studies might be needed to understand how the length, novel-n-grams ratio, the different dataset will affect the distillation effects. Given that this paper focuses on sequence distillation, wondering how the beam size, length penalty of the teacher model will affect the sequence distillation results? It may be good to know if a better teacher model (BART-large, PEGASUS-large) produces better pseudo-labels for summarization distillation.
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper proposes an extension of the pseudo-labeling method named PLATE for summarization distillation, extensive experiments on three datasets suggest the effectiveness of the proposed method versus vanilla sequence distillation. Ablation studies and empirical analysis reveal the performance gain may come from the diverse cross-attention distribution of the teacher model and concise and abstractive summaries generated from the teacher model. The paper is well-motivated and clearly presented; The authors extensively study the effect of Pseudo-labeling with Larger Attention TEmperature on various summarization tasks. Substantial ablation studies and empirical analyses are also provided for revealing the performance gain over baseline methods, which may help future studies understand how to conduct effective summarization distillation. The findings of the diverse cross attention pattern, the conciseness, and abstractiveness of produced summarization pseudo-labels as well as the attention focus of the teacher model help future studies develop better summarization models. From Table 6, the difference of attention pattern results in a great difference in novel-n-grams ratios and length in the generated teacher summarization, but this does not necessarily translate to a better student summarization system in Table 2, further explanation or better ablation studies might be needed to understand how the length, novel-n-grams ratio, the different dataset will affect the distillation effects. Given that this paper focuses on sequence distillation, wondering how the beam size, length penalty of the teacher model will affect the sequence distillation results?
Does the review mention any comments, suggestions or typos that the author should address?
no
This paper augments the standard cross-modal retrieval framework with an additional codebook. The motivation is that with the codebook, the model's behavior can be interpretable. The proposed approach improves the performance over the baseline approach by a margin. 1. I like the idea of using a codebook for augmenting cross-modal representation learning. Interpretability is one of the major issues in cross-modal learning. I am glad that the author tackles this problem. 2. The codebook update rules/policies are straightforward. It is interesting to observe that the codebook is aligned with actions in Fig. 3. 1. L003 mentions that the proposed framework is a self-supervised learning framework. IIUC, the model still needs cross-modal (x-modal) alignment to train. Why is the proposed framework a self-supervised learning framework? 2. It is unclear to me how the gradient back-propagates through the equation at L165. 3. Cross-modal code matching: The key of the proposed approach is the x-modal code matching. It seems the codebook should be large enough to cover all semantic information in the dataset. I wonder how to determine the codebook space? Would the initialization of the codebook affect the performance? What is the performance under different codebook sizes? 4. The proposed approach achieves higher performance than the baseline. I wonder whether it is due to the additional codebook or the increased model capacity. 5. The interpretation part is a little bit confusing. Fig. 3 clearly shows the codebook is aligned with the action. However, even with Fig. 3, it is still hard to interpret the output embedding of f^A_{code}. Given only an embedding and a codebook, I wonder how one would interpret the embedding. I think this paper is well-written and easy to follow.
Does the review include a short summary of the paper?
yes
1. I like the idea of using a codebook for augmenting cross-modal representation learning. Interpretability is one of the major issues in cross-modal learning. I am glad that the author tackles this problem. 2. The codebook update rules/policies are straightforward. It is interesting to observe that the codebook is aligned with actions in Fig. 3. 1. L003 mentions that the proposed framework is a self-supervised learning framework. IIUC, the model still needs cross-modal (x-modal) alignment to train. Why is the proposed framework a self-supervised learning framework? 2. It is unclear to me how the gradient back-propagates through the equation at L165. 3. Cross-modal code matching: The key of the proposed approach is the x-modal code matching. It seems the codebook should be large enough to cover all semantic information in the dataset. I wonder how to determine the codebook space? Would the initialization of the codebook affect the performance? What is the performance under different codebook sizes? 4. The proposed approach achieves higher performance than the baseline. I wonder whether it is due to the additional codebook or the increased model capacity. 5. The interpretation part is a little bit confusing. Fig. 3 clearly shows the codebook is aligned with the action. However, even with Fig. 3, it is still hard to interpret the output embedding of f^A_{code}. Given only an embedding and a codebook, I wonder how one would interpret the embedding. I think this paper is well-written and easy to follow.
Does the review include a short summary of the paper?
no
This paper augments the standard cross-modal retrieval framework with an additional codebook. The motivation is that with the codebook, the model's behavior can be interpretable. The proposed approach improves the performance over the baseline approach by a margin. 1. I like the idea of using a codebook for augmenting cross-modal representation learning. Interpretability is one of the major issues in cross-modal learning. I am glad that the author tackles this problem. 2. The codebook update rules/policies are straightforward. It is interesting to observe that the codebook is aligned with actions in Fig. 3. 1. L003 mentions that the proposed framework is a self-supervised learning framework. IIUC, the model still needs cross-modal (x-modal) alignment to train. Why is the proposed framework a self-supervised learning framework? 2. It is unclear to me how the gradient back-propagates through the equation at L165. 3. Cross-modal code matching: The key of the proposed approach is the x-modal code matching. It seems the codebook should be large enough to cover all semantic information in the dataset. I wonder how to determine the codebook space? Would the initialization of the codebook affect the performance? What is the performance under different codebook sizes? 4. The proposed approach achieves higher performance than the baseline. I wonder whether it is due to the additional codebook or the increased model capacity. 5. The interpretation part is a little bit confusing. Fig. 3 clearly shows the codebook is aligned with the action. However, even with Fig. 3, it is still hard to interpret the output embedding of f^A_{code}. Given only an embedding and a codebook, I wonder how one would interpret the embedding. I think this paper is well-written and easy to follow.
Does the review include a summary of the strengths of the paper?
yes
This paper augments the standard cross-modal retrieval framework with an additional codebook. The motivation is that with the codebook, the model's behavior can be interpretable. The proposed approach improves the performance over the baseline approach by a margin. 1. L003 mentions that the proposed framework is a self-supervised learning framework. IIUC, the model still needs cross-modal (x-modal) alignment to train. Why is the proposed framework a self-supervised learning framework? 2. It is unclear to me how the gradient back-propagates through the equation at L165. 3. Cross-modal code matching: The key of the proposed approach is the x-modal code matching. It seems the codebook should be large enough to cover all semantic information in the dataset. I wonder how to determine the codebook space? Would the initialization of the codebook affect the performance? What is the performance under different codebook sizes? 4. The proposed approach achieves higher performance than the baseline. I wonder whether it is due to the additional codebook or the increased model capacity. 5. The interpretation part is a little bit confusing. Fig. 3 clearly shows the codebook is aligned with the action. However, even with Fig. 3, it is still hard to interpret the output embedding of f^A_{code}. Given only an embedding and a codebook, I wonder how one would interpret the embedding. I think this paper is well-written and easy to follow.
Does the review include a summary of the strengths of the paper?
no
This paper augments the standard cross-modal retrieval framework with an additional codebook. The motivation is that with the codebook, the model's behavior can be interpretable. The proposed approach improves the performance over the baseline approach by a margin. 1. I like the idea of using a codebook for augmenting cross-modal representation learning. Interpretability is one of the major issues in cross-modal learning. I am glad that the author tackles this problem. 2. The codebook update rules/policies are straightforward. It is interesting to observe that the codebook is aligned with actions in Fig. 3. 1. L003 mentions that the proposed framework is a self-supervised learning framework. IIUC, the model still needs cross-modal (x-modal) alignment to train. Why is the proposed framework a self-supervised learning framework? 2. It is unclear to me how the gradient back-propagates through the equation at L165. 3. Cross-modal code matching: The key of the proposed approach is the x-modal code matching. It seems the codebook should be large enough to cover all semantic information in the dataset. I wonder how to determine the codebook space? Would the initialization of the codebook affect the performance? What is the performance under different codebook sizes? 4. The proposed approach achieves higher performance than the baseline. I wonder whether it is due to the additional codebook or the increased model capacity. 5. The interpretation part is a little bit confusing. Fig. 3 clearly shows the codebook is aligned with the action. However, even with Fig. 3, it is still hard to interpret the output embedding of f^A_{code}. Given only an embedding and a codebook, I wonder how one would interpret the embedding. I think this paper is well-written and easy to follow.
Does the review include a summary of the weaknesses of the paper?
yes
This paper augments the standard cross-modal retrieval framework with an additional codebook. The motivation is that with the codebook, the model's behavior can be interpretable. The proposed approach improves the performance over the baseline approach by a margin. 1. I like the idea of using a codebook for augmenting cross-modal representation learning. Interpretability is one of the major issues in cross-modal learning. I am glad that the author tackles this problem. 2. The codebook update rules/policies are straightforward. It is interesting to observe that the codebook is aligned with actions in Fig. 3. I think this paper is well-written and easy to follow.
Does the review include a summary of the weaknesses of the paper?
no
This paper augments the standard cross-modal retrieval framework with an additional codebook. The motivation is that with the codebook, the model's behavior can be interpretable. The proposed approach improves the performance over the baseline approach by a margin. 1. I like the idea of using a codebook for augmenting cross-modal representation learning. Interpretability is one of the major issues in cross-modal learning. I am glad that the author tackles this problem. 2. The codebook update rules/policies are straightforward. It is interesting to observe that the codebook is aligned with actions in Fig. 3. 1. L003 mentions that the proposed framework is a self-supervised learning framework. IIUC, the model still needs cross-modal (x-modal) alignment to train. Why is the proposed framework a self-supervised learning framework? 2. It is unclear to me how the gradient back-propagates through the equation at L165. 3. Cross-modal code matching: The key of the proposed approach is the x-modal code matching. It seems the codebook should be large enough to cover all semantic information in the dataset. I wonder how to determine the codebook space? Would the initialization of the codebook affect the performance? What is the performance under different codebook sizes? 4. The proposed approach achieves higher performance than the baseline. I wonder whether it is due to the additional codebook or the increased model capacity. 5. The interpretation part is a little bit confusing. Fig. 3 clearly shows the codebook is aligned with the action. However, even with Fig. 3, it is still hard to interpret the output embedding of f^A_{code}. Given only an embedding and a codebook, I wonder how one would interpret the embedding. I think this paper is well-written and easy to follow.
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper augments the standard cross-modal retrieval framework with an additional codebook. The motivation is that with the codebook, the model's behavior can be interpretable. The proposed approach improves the performance over the baseline approach by a margin. 1. I like the idea of using a codebook for augmenting cross-modal representation learning. Interpretability is one of the major issues in cross-modal learning. I am glad that the author tackles this problem. 2. The codebook update rules/policies are straightforward. It is interesting to observe that the codebook is aligned with actions in Fig. 3. 1. L003 mentions that the proposed framework is a self-supervised learning framework. IIUC, the model still needs cross-modal (x-modal) alignment to train. Why is the proposed framework a self-supervised learning framework? 2. It is unclear to me how the gradient back-propagates through the equation at L165. 3. Cross-modal code matching: The key of the proposed approach is the x-modal code matching. It seems the codebook should be large enough to cover all semantic information in the dataset. I wonder how to determine the codebook space? Would the initialization of the codebook affect the performance? What is the performance under different codebook sizes? 4. The proposed approach achieves higher performance than the baseline. I wonder whether it is due to the additional codebook or the increased model capacity. 5. The interpretation part is a little bit confusing. Fig. 3 clearly shows the codebook is aligned with the action. However, even with Fig. 3, it is still hard to interpret the output embedding of f^A_{code}. Given only an embedding and a codebook, I wonder how one would interpret the embedding.
Does the review mention any comments, suggestions or typos that the author should address?
no
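Aside for readers: the reviewer's question above about how gradients back-propagate through the codebook lookup is a common one for any discrete code assignment. The sketch below shows one standard way such lookups are made differentiable, a VQ-style straight-through estimator; this is a generic illustration under that assumption, not necessarily what the reviewed paper does, and every name, shape, and value is invented for the example.

```python
import torch

def codebook_lookup(z: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Assign each input vector to its nearest codebook entry.

    z has shape (batch, dim) and codebook has shape (num_codes, dim).
    The straight-through trick below lets gradients flow through the
    discrete argmax: the forward pass uses the quantized vectors, while
    the backward pass copies gradients onto z unchanged.
    """
    dists = torch.cdist(z, codebook, p=2)   # (batch, num_codes) pairwise distances
    codes = dists.argmin(dim=1)             # discrete code assignment
    quantized = codebook[codes]             # (batch, dim) selected code vectors
    return z + (quantized - z).detach()

# Tiny usage example with random tensors.
z = torch.randn(4, 8, requires_grad=True)
codebook = torch.randn(16, 8)
out = codebook_lookup(z, codebook)
out.sum().backward()
print(z.grad.shape)  # gradients reach the encoder despite the argmax
```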
This paper describes an advancement in counterfactual probes that test whether neural nets make use of syntactic information. The goal of counterfactual probes is to show that models actually use syntactic info rather than just encoding it. The paper identifies a potential problem in previous work (summarized neatly in Fig. 1), namely that information may be encoded redundantly, and the probe may attend to the wrong redundant representation relative to the model itself, thus underestimating the presence/utility of the information. They solve this by introducing a dropout probe. The paper is overall clear in identifying the problem, though I wish they had described what a counterfactual probe was more clearly and earlier than they did (2.2), and I believe from the paper that they have solved the particular problem that they set out to: the issue of probes missing redundantly encoded information. My main concern is that they haven't quite demonstrated enough to validate the claim that these are demonstrating a causal role for syntactic knowledge. Two criticisms in particular: 1) The dropout probe improves sensitivity. It finds a causal role for syntactic representations where previous approaches would have missed it. Good. But all other things being equal, one should worry that this also increases the risk of false positives. I would think this should be a substantial part of the discussion. 2) Relating to the possibility of false positives, what about the probe itself? The counterfactual paradigm assumes that the probe itself is capturing syntactic structure, but I worry that training on templates could allow both the probe and the model to "cheat" and look for side information. Templates exacerbate this problem by presenting a relatively invariant sentence string structure. Finally, there's the possibility that syntactic structure (or not quite, see point 2)) is encoded and plays some causal role in the model's predictions, but it's secondary to some other information. Really, the criticism comes down to a lack of baselines. How do we know that this approach isn't now overestimating the causal role of syntax in these models? Testing with a clearly non-syntactic probe, destroying syntax while keeping lexical effects by scrambling word order, or probing a model that certainly cannot encode this information would all help. This is most of the way towards being an excellent paper, but it drops the ball in this respect. I got garden-pathed reading line 331 "One could define other Z1 and Z2..." The author seems to really like the bigram "prior art."
Does the review include a short summary of the paper?
yes
The paper is overall clear in identifying the problem, though I wish they had described what a counterfactual probe was more clearly and earlier than they did (2.2), and I believe from the paper that they have solved the particular problem that they set out to: the issue of probes missing redundantly encoded information. My main concern is that they haven't quite demonstrated enough to validate the claim that these are demonstrating a causal role for syntactic knowledge. Two criticisms in particular: 1) The dropout probe improves sensitivity. It finds a causal role for syntactic representations where previous approaches would have missed it. Good. But all other things being equal, one should worry that this also increases the risk of false positives. I would think this should be a substantial part of the discussion. 2) Relating to the possibility of false positives, what about the probe itself? The counterfactual paradigm assumes that the probe itself is capturing syntactic structure, but I worry that training on templates could allow both the probe and the model to "cheat" and look for side information. Templates exacerbate this problem by presenting a relatively invariant sentence string structure. Finally, there's the possibility that syntactic structure (or not quite, see point 2)) is encoded and plays some causal role in the model's predictions, but it's secondary to some other information. Really, the criticism comes down to a lack of baselines. How do we know that this approach isn't now overestimating the causal role of syntax in these models? Testing with a clearly non-syntactic probe, destroying syntax while keeping lexical effects by scrambling word order, or probing a model that certainly cannot encode this information would all help. This is most of the way towards being an excellent paper, but it drops the ball in this respect. I got garden-pathed reading line 331 "One could define other Z1 and Z2..." The author seems to really like the bigram "prior art."
Does the review include a short summary of the paper?
no
This paper describes an advancement in counterfactual probes that test whether neural nets make use of syntactic information. The goal of counterfactual probes is to show that models actually use syntactic info rather than just encoding it. The paper identifies a potential problem in previous work (summarized neatly in Fig. 1), namely that information may be encoded redundantly, and the probe may attend to the wrong redundant representation relative to the model itself, thus underestimating the presence/utility of the information. They solve this by introducing a dropout probe. The paper is overall clear in identifying the problem, though I wish they had described what a counterfactual probe was more clearly and earlier than they did (2.2), and I believe from the paper that they have solved the particular problem that they set out to: the issue of probes missing redundantly encoded information. My main concern is that they haven't quite demonstrated enough to validate the claim that these are demonstrating a causal role for syntactic knowledge. Two criticisms in particular: 1) The dropout probe improves sensitivity. It finds a causal role for syntactic representations where previous approaches would have missed it. Good. But all other things being equal, one should worry that this also increases the risk of false positives. I would think this should be a substantial part of the discussion. 2) Relating to the possibility of false positives, what about the probe itself? The counterfactual paradigm assumes that the probe itself is capturing syntactic structure, but I worry that training on templates could allow both the probe and the model to "cheat" and look for side information. Templates exacerbate this problem by presenting a relatively invariant sentence string structure. Finally, there's the possibility that syntactic structure (or not quite, see point 2)) is encoded and plays some causal role in the model's predictions, but it's secondary to some other information. Really, the criticism comes down to a lack of baselines. How do we know that this approach isn't now overestimating the causal role of syntax in these models? Testing with a clearly non-syntactic probe, destroying syntax while keeping lexical effects by scrambling word order, or probing a model that certainly cannot encode this information would all help. This is most of the way towards being an excellent paper, but it drops the ball in this respect. I got garden-pathed reading line 331 "One could define other Z1 and Z2..." The author seems to really like the bigram "prior art."
Does the review include a summary of the strengths of the paper?
yes
This paper describes an advancement in counterfactual probes that test whether neural nets make use of syntactic information. The goal of counterfactual probes is to show that models actually use syntactic info rather than just encoding it. The paper identifies a potential problem in previous work (summarized neatly in Fig. 1), namely that information may be encoded redundantly, and the probe may attend to the wrong redundant representation relative to the model itself, thus underestimating the presence/utility of the information. They solve this by introducing a dropout probe. My main concern is that they haven't quite demonstrated enough to validate the claim that these are demonstrating a causal role for syntactic knowledge. Two criticisms in particular: 1) The dropout probe improves sensitivity. It finds a causal role for syntactic representations where previous approaches would have missed it. Good. But all other things being equal, one should worry that this also increases the risk of false positives. I would think this should be a substantial part of the discussion. 2) Relating to the possibility of false positives, what about the probe itself? The counterfactual paradigm assumes that the probe itself is capturing syntactic structure, but I worry that training on templates could allow both the probe and the model to "cheat" and look for side information. Templates exacerbate this problem by presenting a relatively invariant sentence string structure. Finally, there's the possibility that syntactic structure (or not quite, see point 2)) is encoded and plays some causal role in the model's predictions, but it's secondary to some other information. Really, the criticism comes down to a lack of baselines. How do we know that this approach isn't now overestimating the causal role of syntax in these models? Testing with a clearly non-syntactic probe, destroying syntax while keeping lexical effects by scrambling word order, or probing a model that certainly cannot encode this information would all help. This is most of the way towards being an excellent paper, but it drops the ball in this respect. I got garden-pathed reading line 331 "One could define other Z1 and Z2..." The author seems to really like the bigram "prior art."
Does the review include a summary of the strengths of the paper?
no
This paper describes an advancement in counterfactual probes that test whether neural nets make use of syntactic information. The goal of counterfactual probes is to show that models actually use syntactic info rather than just encoding it. The paper identifies a potential problem in previous work (summarized neatly in Fig. 1), namely that information may be encoded redundantly, and the probe may attend to the wrong redundant representation relative to the model itself, thus underestimating the presence/utility of the information. They solve this by introducing a dropout probe. The paper is overall clear in identifying the problem, though I wish they had described what a counterfactual probe was more clearly and earlier than they did (2.2), and I believe from the paper that they have solved the particular problem that they set out to: the issue of probes missing redundantly encoded information. My main concern is that they haven't quite demonstrated enough to validate the claim that these are demonstrating a causal role for syntactic knowledge. Two criticisms in particular: 1) The dropout probe improves sensitivity. It finds a causal role for syntactic representations where previous approaches would have missed it. Good. But all other things being equal, one should worry that this also increases the risk of false positives. I would think this should be a substantial part of the discussion. 2) Relating to the possibility of false positives, what about the probe itself? The counterfactual paradigm assumes that the probe itself is capturing syntactic structure, but I worry that training on templates could allow both the probe and the model to "cheat" and look for side information. Templates exacerbate this problem by presenting a relatively invariant sentence string structure. Finally, there's the possibility that syntactic structure (or not quite, see point 2)) is encoded and plays some causal role in the model's predictions, but it's secondary to some other information. Really, the criticism comes down to a lack of baselines. How do we know that this approach isn't now overestimating the causal role of syntax in these models? Testing with a clearly non-syntactic probe, destroying syntax while keeping lexical effects by scrambling word order, or probing a model that certainly cannot encode this information would all help. This is most of the way towards being an excellent paper, but it drops the ball in this respect. I got garden-pathed reading line 331 "One could define other Z1 and Z2..." The author seems to really like the bigram "prior art."
Does the review include a summary of the weaknesses of the paper?
yes
This paper describes an advancement in counterfactual probes that test whether neural nets make use of syntactic information. The goal of counterfactual probes is to show that models actually use syntactic info rather than just encoding it. The paper identifies a potential problem in previous work (summarized neatly in Fig. 1), namely that information may be encoded redundantly, and the probe may attend to the wrong redundant representation relative to the model itself, thus underestimating the presence/utility of the information. They solve this by introducing a dropout probe. The paper is overall clear in identifying the problem, though I wish they had described what a counterfactual probe was more clearly and earlier than they did (2.2), and I believe from the paper that they have solved the particular problem that they set out to: the issue of probes missing redundantly encoded information. I got garden-pathed reading line 331 "One could define other Z1 and Z2..." The author seems to really like the bigram "prior art."
Does the review include a summary of the weaknesses of the paper?
no
This paper describes an advancement in counterfactual probes that test whether neural nets make use of syntactic information. The goal of counterfactual probes is to show that models actually use syntactic info rather than just encoding it. The paper identifies a potential problem in previous work (summarized neatly in Fig. 1), namely that information may be encoded redundantly, and the probe may attend to the wrong redundant representation relative to the model itself, thus underestimating the presence/utility of the information. They solve this by introducing a dropout probe. The paper is overall clear in identifying the problem, though I wish they had described what a counterfactual probe was more clearly and earlier than they did (2.2), and I believe from the paper that they have solved the particular problem that they set out to: the issue of probes missing redundantly encoded information. My main concern is that they haven't quite demonstrated enough to validate the claim that these experiments demonstrate a causal role for syntactic knowledge. Two criticisms in particular: 1) The dropout probe improves sensitivity. It finds a causal role for syntactic representations where previous approaches would have missed it. Good. But all other things being equal, one should worry that this also increases the risk of false positives. I would think this should be a substantial part of the discussion. 2) Relating to the possibility of false positives, what about the probe itself? The counterfactual paradigm assumes that the probe itself is capturing syntactic structure, but I worry that training on templates could allow both the probe and the model to "cheat" and look for side information. Templates exacerbate this problem by presenting a relatively invariant sentence string structure. Finally, there's the possibility that syntactic structure (or not quite, see point 2)) is encoded and plays some causal role in the model's predictions, but it's secondary to some other information. Really, the criticism comes down to a lack of baselines. How do we know that this approach isn't now overestimating the causal role of syntax in these models? Possible baselines include testing with a clearly non-syntactic probe, destroying syntax but keeping lexical effects by scrambling word order, or probing a model that certainly cannot encode this information. This is most of the way towards being an excellent paper, but it drops the ball in this respect. I got garden-pathed reading line 331 "One could define other Z1 and Z2..." The author seems to really like the bigram "prior art."
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper describes an advancement in counterfactual probes that test whether neural nets make use of syntactic information. The goal of counterfactual probes is to show that models actually use syntactic info rather than just encoding it. The paper identifies a potential problem in previous work (summarized neatly in Fig. 1), namely that information may be encoded redundantly, and the probe may attend to the wrong redundant representation relative to the model itself, thus underestimating the presence/utility of the information. They solve this by introducing a dropout probe. The paper is overall clear in identifying the problem, though I wish they had described what a counterfactual probe was more clearly and earlier than they did (2.2), and I believe from the paper that they have solved the particular problem that they set out to: the issue of probes missing redundantly encoded information. My main concern is that they haven't quite demonstrated enough to validate the claim that these experiments demonstrate a causal role for syntactic knowledge. Two criticisms in particular: 1) The dropout probe improves sensitivity. It finds a causal role for syntactic representations where previous approaches would have missed it. Good. But all other things being equal, one should worry that this also increases the risk of false positives. I would think this should be a substantial part of the discussion. 2) Relating to the possibility of false positives, what about the probe itself? The counterfactual paradigm assumes that the probe itself is capturing syntactic structure, but I worry that training on templates could allow both the probe and the model to "cheat" and look for side information. Templates exacerbate this problem by presenting a relatively invariant sentence string structure. Finally, there's the possibility that syntactic structure (or not quite, see point 2)) is encoded and plays some causal role in the model's predictions, but it's secondary to some other information. Really, the criticism comes down to a lack of baselines. How do we know that this approach isn't now overestimating the causal role of syntax in these models? Possible baselines include testing with a clearly non-syntactic probe, destroying syntax but keeping lexical effects by scrambling word order, or probing a model that certainly cannot encode this information. This is most of the way towards being an excellent paper, but it drops the ball in this respect.
Does the review mention any comments, suggestions or typos that the author should address?
no
The authors entertain the hypothesis that syntactic information is redundantly encoded in neural language models, which has two important implications for probing and causal analysis: 1. Probes may arbitrarily choose one place a syntactic property is encoded, and the model may choose another. 2. Changing a representation based on a probe may therefore only partially change whether the property is encoded in the model, and might not influence the model decision, even if the model is relying on syntax. For contributions, the authors show first that syntactic information is redundantly encoded in language models, based on estimates of mutual information between different parts of the embeddings. They then propose a simple method to make probing more robustly rely on all the information in the representation, by adding dropout to the probe. They show that this leads to altered representations that are more influential on changing the model's behavior. 1. Well-written and clear hypothesis: redundant encoding of syntactic information poses a problem for drawing conclusions about whether models rely on the syntactic information encoded by probes. 2. The authors show that different parts of the hidden representations in different trained networks redundantly encode information correlated with syntax. 3. The dropout method for addressing this issue is both simple and motivated, which makes it appealing. 4. The experimental results, in the case of the QA models, tell a different story than past work that suffered from the redundancy vulnerability. Specifically, this work finds evidence that QA models do rely on syntactic information, while past work suggested they didn't. In the case of NLI, this work agrees with past works in finding a negative result. 1. There is some imprecision in the discussion of the results in 4.2.2: “In contrast to the standard probes, the dropout probes, plotted in the right column, revealed much larger effects of syntactic interventions.” Is this conclusion really justified? Visually, it is not clear that the red line is “above” the green line any more with dropout than without. Rather, it may just have more variance (so more causal influence, but maybe not in the right direction). In any case, the clean summary you give here seems at odds with the trend in the figure, and should be made more precise. 2. There may be some issue with the counterfactual intervention experiment with NLI-HANS, if I understand that part correctly. In 4.3, if the NLI-HANS model is already at 99% performance, why would you expect its accuracy to change significantly after counterfactual modification? You are already very close to the ceiling, and it should be hard to see any benefit of the intervention. 3. “ First, we found that language models redundantly encoded syntactic information in their embeddings” — Section 4.1 seems like solid evidence that networks redundantly encode information that is informative about syntax. But, to be a bit pedantic, it doesn’t have to be syntactic information; if $D$ is highly correlated with something non-syntactic, then that property could be expressed redundantly, and the MI would still be high. If we believe that there are dataset artifacts in syntactic parsing, then this is a concern. It seems to me that redundancy may only be a problem for the specific gradient-based representation alteration method you consider, rather than other ways of producing counterfactual interventions. 
For example, would [AlterRep](https://arxiv.org/abs/2105.06965) automatically handle the issue of redundancy? 417: why is “as” included along with “were” and “are”? 295: What do you mean by “conservative but tight estimate of mutual information”? This is pretty unclear to me 188: typo at “represented by within”
Does the review include a short summary of the paper?
yes
1. Well-written and clear hypothesis: redundant encoding of syntactic information poses a problem for drawing conclusions about whether models rely on the syntactic information encoded by probes. 2. The authors show that different parts of the hidden representations in different trained networks redundantly encode information correlated with syntax. 3. The dropout method for addressing this issue is both simple and motivated, which makes it appealing. 4. The experimental results, in the case of the QA models, tell a different story than past work that suffered from the redundancy vulnerability. Specifically, this work finds evidence that QA models do rely on syntactic information, while past work suggested they didn't. In the case of NLI, this work agrees with past works in finding a negative result. 1. There is some imprecision in the discussion of the results in 4.2.2: “In contrast to the standard probes, the dropout probes, plotted in the right column, revealed much larger effects of syntactic interventions.” Is this conclusion really justified? Visually, it is not clear that the red line is “above” the green line any more with dropout than without. Rather, it may just have more variance (so more causal influence, but maybe not in the right direction). In any case, the clean summary you give here seems at odds with the trend in the figure, and should be made more precise. 2. There may be some issue with the counterfactual intervention experiment with NLI-HANS, if I understand that part correctly. In 4.3, if the NLI-HANS model is already at 99% performance, why would you expect its accuracy to change significantly after counterfactual modification? You are already very close to the ceiling, and it should be hard to see any benefit of the intervention. 3. “ First, we found that language models redundantly encoded syntactic information in their embeddings” — Section 4.1 seems like solid evidence that networks redundantly encode information that is informative about syntax. But, to be a bit pedantic, it doesn’t have to be syntactic information; if $D$ is highly correlated with something non-syntactic, then that property could be expressed redundantly, and the MI would still be high. If we believe that there are dataset artifacts in syntactic parsing, then this is a concern. It seems to me that redundancy may only be a problem for the specific gradient-based representation alteration method you consider, rather than other ways of producing counterfactual interventions. For example, would [AlterRep](https://arxiv.org/abs/2105.06965) automatically handle the issue of redundancy? 417: why is “as” included along with “were” and “are”? 295: What do you mean by “conservative but tight estimate of mutual information”? This is pretty unclear to me 188: typo at “represented by within”
Does the review include a short summary of the paper?
no
The authors entertain the hypothesis that syntactic information is redundantly encoded in neural language models, which has two important implications for probing and causal analysis: 1. Probes may arbitrarily choose one place a syntactic property is encoded, and the model may choose another. 2. Changing a representation based on a probe may therefore only partially change whether the property is encoded in the model, and might not influence the model decision, even if the model is relying on syntax. For contributions, the authors show first that syntactic information is redundantly encoded in language models, based on estimates of mutual information between different parts of the embeddings. They then propose a simple method to make probing more robustly rely on all the information in the representation, by adding dropout to the probe. They show that this leads to altered representations that are more influential on changing the model's behavior. 1. Well-written and clear hypothesis: redundant encoding of syntactic information poses a problem for drawing conclusions about whether models rely on the syntactic information encoded by probes. 2. The authors show that different parts of the hidden representations in different trained networks redundantly encode information correlated with syntax. 3. The dropout method for addressing this issue is both simple and motivated, which makes it appealing. 4. The experimental results, in the case of the QA models, tell a different story than past work that suffered from the redundancy vulnerability. Specifically, this work finds evidence that QA models do rely on syntactic information, while past work suggested they didn't. In the case of NLI, this work agrees with past works in finding a negative result. 1. There is some imprecision in the discussion of the results in 4.2.2: “In contrast to the standard probes, the dropout probes, plotted in the right column, revealed much larger effects of syntactic interventions.” Is this conclusion really justified? Visually, it is not clear that the red line is “above” the green line any more with dropout than without. Rather, it may just have more variance (so more causal influence, but maybe not in the right direction). In any case, the clean summary you give here seems at odds with the trend in the figure, and should be made more precise. 2. There may be some issue with the counterfactual intervention experiment with NLI-HANS, if I understand that part correctly. In 4.3, if the NLI-HANS model is already at 99% performance, why would you expect its accuracy to change significantly after counterfactual modification? You are already very close to the ceiling, and it should be hard to see any benefit of the intervention. 3. “ First, we found that language models redundantly encoded syntactic information in their embeddings” — Section 4.1 seems like solid evidence that networks redundantly encode information that is informative about syntax. But, to be a bit pedantic, it doesn’t have to be syntactic information; if $D$ is highly correlated with something non-syntactic, then that property could be expressed redundantly, and the MI would still be high. If we believe that there are dataset artifacts in syntactic parsing, then this is a concern. It seems to me that redundancy may only be a problem for the specific gradient-based representation alteration method you consider, rather than other ways of producing counterfactual interventions. 
For example, would [AlterRep](https://arxiv.org/abs/2105.06965) automatically handle the issue of redundancy? 417: why is “as” included along with “were” and “are”? 295: What do you mean by “conservative but tight estimate of mutual information”? This is pretty unclear to me 188: typo at “represented by within”
Does the review include a summary of the strengths of the paper?
yes
The authors entertain the hypothesis that syntactic information is redundantly encoded in neural language models, which has two important implications for probing and causal analysis: 1. Probes may arbitrarily choose one place a syntactic property is encoded, and the model may choose another. 2. Changing a representation based on a probe may therefore only partially change whether the property is encoded in the model, and might not influence the model decision, even if the model is relying on syntax. For contributions, the authors show first that syntactic information is redundantly encoded in language models, based on estimates of mutual information between different parts of the embeddings. They then propose a simple method to make probing more robustly rely on all the information in the representation, by adding dropout to the probe. They show that this leads to altered representations that are more influential on changing the model's behavior. 1. There is some imprecision in the discussion of the results in 4.2.2: “In contrast to the standard probes, the dropout probes, plotted in the right column, revealed much larger effects of syntactic interventions.” Is this conclusion really justified? Visually, it is not clear that the red line is “above” the green line any more with dropout than without. Rather, it may just have more variance (so more causal influence, but maybe not in the right direction). In any case, the clean summary you give here seems at odds with the trend in the figure, and should be made more precise. 2. There may be some issue with the counterfactual intervention experiment with NLI-HANS, if I understand that part correctly. In 4.3, if the NLI-HANS model is already at 99% performance, why would you expect its accuracy to change significantly after counterfactual modification? You are already very close to the ceiling, and it should be hard to see any benefit of the intervention. 3. “ First, we found that language models redundantly encoded syntactic information in their embeddings” — Section 4.1 seems like solid evidence that networks redundantly encode information that is informative about syntax. But, to be a bit pedantic, it doesn’t have to be syntactic information; if $D$ is highly correlated with something non-syntactic, then that property could be expressed redundantly, and the MI would still be high. If we believe that there are dataset artifacts in syntactic parsing, then this is a concern. It seems to me that redundancy may only be a problem for the specific gradient-based representation alteration method you consider, rather than other ways of producing counterfactual interventions. For example, would [AlterRep](https://arxiv.org/abs/2105.06965) automatically handle the issue of redundancy? 417: why is “as” included along with “were” and “are”? 295: What do you mean by “conservative but tight estimate of mutual information”? This is pretty unclear to me 188: typo at “represented by within”
Does the review include a summary of the strengths of the paper?
no
The authors entertain the hypothesis that syntactic information is redundantly encoded in neural language models, which has two important implications for probing and causal analysis: 1. Probes may arbitrarily choose one place a syntactic property is encoded, and the model may choose another. 2. Changing a representation based on a probe may therefore only partially change whether the property is encoded in the model, and might not influence the model decision, even if the model is relying on syntax. For contributions, the authors show first that syntactic information is redundantly encoded in language models, based on estimates of mutual information between different parts of the embeddings. They then propose a simple method to make probing more robustly rely on all the information in the representation, by adding dropout to the probe. They show that this leads to altered representations that are more influential on changing the model's behavior. 1. Well-written and clear hypothesis: redundant encoding of syntactic information poses a problem for drawing conclusions about whether models rely on the syntactic information encoded by probes. 2. The authors show that different parts of the hidden representations in different trained networks redundantly encode information correlated with syntax. 3. The dropout method for addressing this issue is both simple and motivated, which makes it appealing. 4. The experimental results, in the case of the QA models, tell a different story than past work that suffered from the redundancy vulnerability. Specifically, this work finds evidence that QA models do rely on syntactic information, while past work suggested they didn't. In the case of NLI, this work agrees with past works in finding a negative result. 1. There is some imprecision in the discussion of the results in 4.2.2: “In contrast to the standard probes, the dropout probes, plotted in the right column, revealed much larger effects of syntactic interventions.” Is this conclusion really justified? Visually, it is not clear that the red line is “above” the green line any more with dropout than without. Rather, it may just have more variance (so more causal influence, but maybe not in the right direction). In any case, the clean summary you give here seems at odds with the trend in the figure, and should be made more precise. 2. There may be some issue with the counterfactual intervention experiment with NLI-HANS, if I understand that part correctly. In 4.3, if the NLI-HANS model is already at 99% performance, why would you expect its accuracy to change significantly after counterfactual modification? You are already very close to the ceiling, and it should be hard to see any benefit of the intervention. 3. “ First, we found that language models redundantly encoded syntactic information in their embeddings” — Section 4.1 seems like solid evidence that networks redundantly encode information that is informative about syntax. But, to be a bit pedantic, it doesn’t have to be syntactic information; if $D$ is highly correlated with something non-syntactic, then that property could be expressed redundantly, and the MI would still be high. If we believe that there are dataset artifacts in syntactic parsing, then this is a concern. It seems to me that redundancy may only be a problem for the specific gradient-based representation alteration method you consider, rather than other ways of producing counterfactual interventions. 
For example, would [AlterRep](https://arxiv.org/abs/2105.06965) automatically handle the issue of redundancy? 417: why is “as” included along with “were” and “are”? 295: What do you mean by “conservative but tight estimate of mutual information”? This is pretty unclear to me 188: typo at “represented by within”
Does the review include a summary of the weaknesses of the paper?
yes
The authors entertain the hypothesis that syntactic information is redundantly encoded in neural language models, which has two important implications for probing and causal analysis: 1. Probes may arbitrarily choose one place a syntactic property is encoded, and the model may choose another. 2. Changing a representation based on a probe may therefore only partially change whether the property is encoded in the model, and might not influence the model decision, even if the model is relying on syntax. For contributions, the authors show first that syntactic information is redundantly encoded in language models, based on estimates of mutual information between different parts of the embeddings. They then propose a simple method to make probing more robustly rely on all the information in the representation, by adding dropout to the probe. They show that this leads to altered representations that are more influential on changing the model's behavior. 1. Well-written and clear hypothesis: redundant encoding of syntactic information poses a problem for drawing conclusions about whether models rely on the syntactic information encoded by probes. 2. The authors show that different parts of the hidden representations in different trained networks redundantly encode information correlated with syntax. 3. The dropout method for addressing this issue is both simple and motivated, which makes it appealing. 4. The experimental results, in the case of the QA models, tell a different story than past work that suffered from the redundancy vulnerability. Specifically, this work finds evidence that QA models do rely on syntactic information, while past work suggested they didn't. In the case of NLI, this work agrees with past works in finding a negative result. It seems to me that redundancy may only be a problem for the specific gradient-based representation alteration method you consider, rather than other ways of producing counterfactual interventions. For example, would [AlterRep](https://arxiv.org/abs/2105.06965) automatically handle the issue of redundancy? 417: why is “as” included along with “were” and “are”? 295: What do you mean by “conservative but tight estimate of mutual information”? This is pretty unclear to me 188: typo at “represented by within”
Does the review include a summary of the weaknesses of the paper?
no
The authors entertain the hypothesis that syntactic information is redundantly encoded in neural language models, which has two important implications for probing and causal analysis: 1. Probes may arbitrarily choose one place a syntactic property is encoded, and the model may choose another. 2. Changing a representation based on a probe may therefore only partially change whether the property is encoded in the model, and might not influence the model decision, even if the model is relying on syntax. For contributions, the authors show first that syntactic information is redundantly encoded in language models, based on estimates of mutual information between different parts of the embeddings. They then propose a simple method to make probing more robustly rely on all the information in the representation, by adding dropout to the probe. They show that this leads to altered representations that are more influential on changing the model's behavior. 1. Well-written and clear hypothesis: redundant encoding of syntactic information poses a problem for drawing conclusions about whether models rely on the syntactic information encoded by probes. 2. The authors show that different parts of the hidden representations in different trained networks redundantly encode information correlated with syntax. 3. The dropout method for addressing this issue is both simple and motivated, which makes it appealing. 4. The experimental results, in the case of the QA models, tell a different story than past work that suffered from the redundancy vulnerability. Specifically, this work finds evidence that QA models do rely on syntactic information, while past work suggested they didn't. In the case of NLI, this work agrees with past works in finding a negative result. 1. There is some imprecision in the discussion of the results in 4.2.2: “In contrast to the standard probes, the dropout probes, plotted in the right column, revealed much larger effects of syntactic interventions.” Is this conclusion really justified? Visually, it is not clear that the red line is “above” the green line any more with dropout than without. Rather, it may just have more variance (so more causal influence, but maybe not in the right direction). In any case, the clean summary you give here seems at odds with the trend in the figure, and should be made more precise. 2. There may be some issue with the counterfactual intervention experiment with NLI-HANS, if I understand that part correctly. In 4.3, if the NLI-HANS model is already at 99% performance, why would you expect its accuracy to change significantly after counterfactual modification? You are already very close to the ceiling, and it should be hard to see any benefit of the intervention. 3. “ First, we found that language models redundantly encoded syntactic information in their embeddings” — Section 4.1 seems like solid evidence that networks redundantly encode information that is informative about syntax. But, to be a bit pedantic, it doesn’t have to be syntactic information; if $D$ is highly correlated with something non-syntactic, then that property could be expressed redundantly, and the MI would still be high. If we believe that there are dataset artifacts in syntactic parsing, then this is a concern. It seems to me that redundancy may only be a problem for the specific gradient-based representation alteration method you consider, rather than other ways of producing counterfactual interventions. 
For example, would [AlterRep](https://arxiv.org/abs/2105.06965) automatically handle the issue of redundancy? 417: why is “as” included along with “were” and “are”? 295: What do you mean by “conservative but tight estimate of mutual information”? This is pretty unclear to me 188: typo at “represented by within”
Does the review mention any comments, suggestions or typos that the author should address?
yes
The authors entertain the hypothesis that syntactic information is redundantly encoded in neural language models, which has two important implications for probing and causal analysis: 1. Probes may arbitrarily choose one place a syntactic property is encoded, and the model may choose another. 2. Changing a representation based on a probe may therefore only partially change whether the property is encoded in the model, and might not influence the model decision, even if the model is relying on syntax. For contributions, the authors show first that syntactic information is redundantly encoded in language models, based on estimates of mutual information between different parts of the embeddings. They then propose a simple method to make probing more robustly rely on all the information in the representation, by adding dropout to the probe. They show that this leads to altered representations that are more influential on changing the model's behavior. 1. Well-written and clear hypothesis: redundant encoding of syntactic information poses a problem for drawing conclusions about whether models rely on the syntactic information encoded by probes. 2. The authors show that different parts of the hidden representations in different trained networks redundantly encode information correlated with syntax. 3. The dropout method for addressing this issue is both simple and motivated, which makes it appealing. 4. The experimental results, in the case of the QA models, tell a different story than past work that suffered from the redundancy vulnerability. Specifically, this work finds evidence that QA models do rely on syntactic information, while past work suggested they didn't. In the case of NLI, this work agrees with past works in finding a negative result. 1. There is some imprecision in the discussion of the results in 4.2.2: “In contrast to the standard probes, the dropout probes, plotted in the right column, revealed much larger effects of syntactic interventions.” Is this conclusion really justified? Visually, it is not clear that the red line is “above” the green line any more with dropout than without. Rather, it may just have more variance (so more causal influence, but maybe not in the right direction). In any case, the clean summary you give here seems at odds with the trend in the figure, and should be made more precise. 2. There may be some issue with the counterfactual intervention experiment with NLI-HANS, if I understand that part correctly. In 4.3, if the NLI-HANS model is already at 99% performance, why would you expect its accuracy to change significantly after counterfactual modification? You are already very close to the ceiling, and it should be hard to see any benefit of the intervention. 3. “ First, we found that language models redundantly encoded syntactic information in their embeddings” — Section 4.1 seems like solid evidence that networks redundantly encode information that is informative about syntax. But, to be a bit pedantic, it doesn’t have to be syntactic information; if $D$ is highly correlated with something non-syntactic, then that property could be expressed redundantly, and the MI would still be high. If we believe that there are dataset artifacts in syntactic parsing, then this is a concern.
Does the review mention any comments, suggestions or typos that the author should address?
no
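The dropout probe that the preceding reviews discuss is described only at a high level in the reviews themselves. As a minimal, hypothetical sketch of the general idea, assuming a linear probe trained on frozen language-model embeddings with dropout applied to its input (the class name, dimensions, and data below are illustrative placeholders, not the reviewed paper's implementation):

```python
# Hypothetical sketch: a linear probe whose input embeddings are randomly
# masked by dropout during training, so the probe cannot rely on any single
# (possibly redundant) subspace of the representation.
import torch
import torch.nn as nn

class DropoutProbe(nn.Module):
    def __init__(self, hidden_dim: int, num_labels: int, p: float = 0.5):
        super().__init__()
        self.dropout = nn.Dropout(p)            # zeroes random embedding dimensions
        self.classifier = nn.Linear(hidden_dim, num_labels)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, hidden_dim), taken from a frozen language model
        return self.classifier(self.dropout(embeddings))

# One training step on placeholder embeddings and syntactic labels.
probe = DropoutProbe(hidden_dim=768, num_labels=2)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

embeddings = torch.randn(32, 768)               # stand-in for frozen embeddings
labels = torch.randint(0, 2, (32,))             # stand-in for syntactic labels
optimizer.zero_grad()
loss = loss_fn(probe(embeddings), labels)
loss.backward()
optimizer.step()
```

Because different dimensions are dropped at every step, the probe is pushed to pick up whatever redundant copies of the syntactic signal exist, which is the intuition behind the increased sensitivity, and, as the first review notes, the potential for more false positives, discussed above.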
This paper proposes a new pre-trained variational encoder-decoder dialog model which has continuous latent variables to deal with the one-to-many mapping problem in dialogue response generation. This paper conducts empirical experiments on 3 datasets to show that their proposed model performs better on both relevance and diversity than previous state-of-the-art dialog systems. They also conduct additional analysis to show the impact of latent variable sizes, different decoding strategies and position embeddings for their proposed model. However, in the model evaluation part, the paper does not show how their model can generate diverse responses given the same context. 1. This paper proposes a new variational encoder-decoder dialog model for open-domain dialogue generation tasks, which shows better performance in terms of relevance and diversity than previous state-of-the-art models. The new model leverages several techniques to better alleviate the one-to-many mapping problem in dialogue response generation, including conditional variational autoencoder, n-stream self-attention, memory scheme, masked language modeling, free bits and bag-of-words loss for reducing KL-vanishing, and position embeddings for Transformers. 2. This paper conducts empirical experiments on three benchmark datasets to validate the effectiveness of the proposed model. The proposed model achieves better performance than previous baselines (e.g. PLATO) in both automatic metrics (e.g. BLEU-1/2, Distinct-1/2) and human evaluation (fluency, coherence, informativeness, overall). 3. This paper analyzes the effect of latent variable sizes, sizes of K in top-k sampling and different combinations of position embeddings, which are helpful empirical observations for training Transformer-based variational encoder-decoders. 1. This paper lacks a discussion on its novelty, e.g., why continuous latent variables are better than discrete latent variables for solving the one-to-many mapping problem. Though empirically the proposed DialogVED performs better than PLATO, DialogVED incorporates more techniques (e.g. n-stream self-attention, memory scheme, masked language modeling, free bits) than PLATO. It is not very clear to me why we should choose those techniques to train a continuous variational encoder-decoder. Maybe some ablation studies on the training objective and model architecture can help explain it. 2. The model evaluation does not show the model's superiority in solving the one-to-many mapping problem in open-domain dialogue response generation. Distinct-1/2 are corpus-level metrics and do not show the diversity of generated responses given the same context. To verify their claim, the authors may consider using self-BLEU and the ratio of unique generated sentences to better evaluate the diversity. 1. It is unclear to me what the input to the prior network is. Fig 1 suggests the [CLS] token is at the end of the context, but line 234 says the [CLS] token is at the beginning of the context.
Does the review include a short summary of the paper?
yes
1. This paper proposes a new variational encoder-decoder dialog model for open-domain dialogue generation tasks, which shows better performance in terms of relevance and diversity than previous state-of-the-art models. The new model leverages several techniques to better alleviate the one-to-many mapping problem in dialogue response generation, including conditional variational autoencoder, n-stream self-attention, memory scheme, masked language modeling, free bits and bag-of-words loss for reducing KL-vanishing, and position embeddings for Transformers. 2. This paper conducts empirical experiments on three benchmark datasets to validate the effectiveness of the proposed model. The proposed model achieves better performance than previous baselines (e.g. PLATO) in both automatic metrics (e.g. BLEU-1/2, Distinct-1/2) and human evaluation (fluency, coherence, informativeness, overall). 3. This paper analyzes the effect of latent variable sizes, sizes of K in top-k sampling and different combinations of position embeddings, which are helpful empirical observations for training Transformer-based variational encoder-decoders. 1. This paper lacks a discussion on its novelty, e.g., why continuous latent variables are better than discrete latent variables for solving the one-to-many mapping problem. Though empirically the proposed DialogVED performs better than PLATO, DialogVED incorporates more techniques (e.g. n-stream self-attention, memory scheme, masked language modeling, free bits) than PLATO. It is not very clear to me why we should choose those techniques to train a continuous variational encoder-decoder. Maybe some ablation studies on the training objective and model architecture can help explain it. 2. The model evaluation does not show the model's superiority in solving the one-to-many mapping problem in open-domain dialogue response generation. Distinct-1/2 are corpus-level metrics and do not show the diversity of generated responses given the same context. To verify their claim, the authors may consider using self-BLEU and the ratio of unique generated sentences to better evaluate the diversity. 1. It is unclear to me what the input to the prior network is. Fig 1 suggests the [CLS] token is at the end of the context, but line 234 says the [CLS] token is at the beginning of the context.
Does the review include a short summary of the paper?
no
This paper proposes a new pre-trained variational encoder-decoder dialog model which has continuous latent variables to deal with the one-to-many mapping problem in dialogue response generation. This paper conducts empirical experiments on 3 datasets to show that their proposed model performs better on both relevance and diversity than previous state-of-the-art dialog systems. They also conduct additional analysis to show the impact of latent variable sizes, different decoding strategies and position embeddings for their proposed model. However, in the model evaluation part, the paper does not show how their model can generate diverse responses given the same context. 1. This paper proposes a new variational encoder-decoder dialog model for open-domain dialogue generation tasks, which shows better performance in terms of relevance and diversity than previous state-of-the-art models. The new model leverages several techniques to better alleviate the one-to-many mapping problem in dialogue response generation, including conditional variational autoencoder, n-stream self-attention, memory scheme, masked language modeling, free bits and bag-of-words loss for reducing KL-vanishing, and position embeddings for Transformers. 2. This paper conducts empirical experiments on three benchmark datasets to validate the effectiveness of the proposed model. The proposed model achieves better performance than previous baselines (e.g. PLATO) in both automatic metrics (e.g. BLEU-1/2, Distinct-1/2) and human evaluation (fluency, coherence, informativeness, overall). 3. This paper analyzes the effect of latent variable sizes, sizes of K in top-k sampling and different combinations of position embeddings, which are helpful empirical observations for training Transformer-based variational encoder-decoders. 1. This paper lacks a discussion on its novelty, e.g., why continuous latent variables are better than discrete latent variables for solving the one-to-many mapping problem. Though empirically the proposed DialogVED performs better than PLATO, DialogVED incorporates more techniques (e.g. n-stream self-attention, memory scheme, masked language modeling, free bits) than PLATO. It is not very clear to me why we should choose those techniques to train a continuous variational encoder-decoder. Maybe some ablation studies on the training objective and model architecture can help explain it. 2. The model evaluation does not show the model's superiority in solving the one-to-many mapping problem in open-domain dialogue response generation. Distinct-1/2 are corpus-level metrics and do not show the diversity of generated responses given the same context. To verify their claim, the authors may consider using self-BLEU and the ratio of unique generated sentences to better evaluate the diversity. 1. It is unclear to me what the input to the prior network is. Fig 1 suggests the [CLS] token is at the end of the context, but line 234 says the [CLS] token is at the beginning of the context.
Does the review include a summary of the strengths of the paper?
yes
This paper proposes a new pre-trained variational encoder-decoder dialog model which has continuous latent variables to deal with the one-to-many mapping problem in dialogue response generation. This paper conducts empirical experiments on 3 datasets to show that their proposed model performs better on both relevance and diversity than previous state-of-the-art dialog systems. They also conduct additional analysis to show the impact of latent variable sizes, different decoding strategies and position embeddings for their proposed model. However, in the model evaluation part, the paper does not show how their model can generate diverse responses given the same context. 1. This paper lacks a discussion on its novelty, e.g., why continuous latent variables are better than discrete latent variables for solving the one-to-many mapping problem. Though empirically the proposed DialogVED performs better than PLATO, DialogVED incorporates more techniques (e.g. n-stream self-attention, memory scheme, masked language modeling, free bits) than PLATO. It is not very clear to me why we should choose those techniques to train a continuous variational encoder-decoder. Maybe some ablation studies on the training objective and model architecture can help explain it. 2. The model evaluation does not show the model's superiority in solving the one-to-many mapping problem in open-domain dialogue response generation. Distinct-1/2 are corpus-level metrics and do not show the diversity of generated responses given the same context. To verify their claim, the authors may consider using self-BLEU and the ratio of unique generated sentences to better evaluate the diversity. 1. It is unclear to me what the input to the prior network is. Fig 1 suggests the [CLS] token is at the end of the context, but line 234 says the [CLS] token is at the beginning of the context.
Does the review include a summary of the strengths of the paper?
no
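The diversity evaluation suggested in the last review, self-BLEU together with the ratio of unique generated responses, could be computed roughly as in the sketch below. This is a hypothetical illustration using NLTK; the helper names and sample responses are placeholders, not code or results from the reviewed paper.

```python
# Hypothetical sketch of per-context diversity metrics: self-BLEU (lower means
# more diverse) and the fraction of distinct responses among samples generated
# for the same dialogue context.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def self_bleu(generations, weights=(0.25, 0.25, 0.25, 0.25)):
    """Average BLEU of each generation scored against all other generations."""
    smooth = SmoothingFunction().method1
    tokenized = [g.split() for g in generations]
    scores = []
    for i, hyp in enumerate(tokenized):
        refs = tokenized[:i] + tokenized[i + 1:]
        scores.append(sentence_bleu(refs, hyp, weights=weights,
                                    smoothing_function=smooth))
    return sum(scores) / len(scores)

def unique_ratio(generations):
    """Fraction of generated responses that are distinct strings."""
    return len(set(generations)) / len(generations)

# Example: several responses sampled for the same context.
samples = ["i like coffee", "i like coffee", "tea sounds great today"]
print(self_bleu(samples), unique_ratio(samples))
```

Unlike corpus-level Distinct-1/2, both quantities are computed over the set of responses sampled for a single context, which is what the review argues is needed to show that the latent variable actually yields one-to-many generation.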