Columns: text (string, lengths 63 to 12.6k), question (string, 4 classes), label (string, 2 classes)
This paper addresses the problem of how to weight task-specific losses in multi-task learning and presents an algorithm that adapts the weights of the loss functions, each of which corresponds to a task, during training. The key idea is to update the loss weights so that the weighted sum of task-specific gradients moves the model parameters in a direction that reduces all task-specific losses computed on held-out data. The proposed algorithm is expected to (a) avoid undesirable task performance trade-offs (i.e., sacrificing performance on one task to improve on another) and (b) facilitate better generalization. The experiments showed that the proposed method achieved meaningful improvements on most of the tested individual tasks and also achieved significantly higher performance on average. 1. The proposed method directly and naturally addresses the trade-off problem of multi-task learning. The design motivation is easy to understand and can thus easily facilitate further, deeper research. 2. The presented experiments clearly demonstrated its performance superiority on two different types of text classification tasks. Thus it can be argued that the proposed method will have broader applications. 1. The experimental settings may favor the proposed method, and the actual performance gap in the averaged loss may not be as significant as presented in the paper. The loss function of the “Uniform” baseline uses a constant weight across tasks, which matches the weights used in the evaluation of average performance. Thus I would expect the baseline’s performance not to be far behind the other methods, including the proposed method, *for the averaged score*. However, Figures 2 and 3 show that it performed poorly. This may suggest that this baseline, and the other baselines, overfit to the training data. There is no mention of the criterion used to stop training, so I cannot tell whether the authors did early stopping using held-out data. Since the proposed method uses a held-out dataset (called “query” in the paper) to control training, I think it is fair to use such a dataset to control the training of the baseline methods. 2. The paper doesn’t provide insights about the adapted weights, except that they appeared to keep being adapted during training. I think it is of general interest whether the proposed method actually controlled the task weights well, and/or whether the other baseline methods assigned task weights poorly and performed worse because of that. The scaling of Figures 8 and 9 makes it hard to compare weights across the compared methods, especially for MGDA and GradNorm, because their absolute values are much smaller than the others’, which obscures the *relative magnitudes* between tasks. (For task weighting, absolute magnitudes don’t matter.) I think the paper doesn’t provide strong evidence that the observed good performance was actually due to better task weighting. 3. Related to #2, no reason is provided for why these baselines were chosen for comparison. Readers who follow this technical area closely might understand how informative the contrast between the proposed method and these baselines is, but to others like me they look like arbitrary choices. An explanation of why they were chosen is required. 1. The theoretical analysis doesn’t provide much insight beyond the fact that a loss computed on a held-out dataset (= the query set) is a better estimator of the expected loss than one computed on the training data, which is well known. It could be removed entirely to make space for other discussions. 2. The scatter plots in Figures 2 and 3 are not very suitable because it is hard to see how the runs are distributed when they overlap. Box plots like those in Figures 4 and 5 would be better.
Does the review include a short summary of the paper?
yes
1. The proposed method directly and naturally addresses the trade-off problem of multi-task learning. The design motivation is easy to understand and can thus easily facilitate further, deeper research. 2. The presented experiments clearly demonstrated its performance superiority on two different types of text classification tasks. Thus it can be argued that the proposed method will have broader applications. 1. The experimental settings may favor the proposed method, and the actual performance gap in the averaged loss may not be as significant as presented in the paper. The loss function of the “Uniform” baseline uses a constant weight across tasks, which matches the weights used in the evaluation of average performance. Thus I would expect the baseline’s performance not to be far behind the other methods, including the proposed method, *for the averaged score*. However, Figures 2 and 3 show that it performed poorly. This may suggest that this baseline, and the other baselines, overfit to the training data. There is no mention of the criterion used to stop training, so I cannot tell whether the authors did early stopping using held-out data. Since the proposed method uses a held-out dataset (called “query” in the paper) to control training, I think it is fair to use such a dataset to control the training of the baseline methods. 2. The paper doesn’t provide insights about the adapted weights, except that they appeared to keep being adapted during training. I think it is of general interest whether the proposed method actually controlled the task weights well, and/or whether the other baseline methods assigned task weights poorly and performed worse because of that. The scaling of Figures 8 and 9 makes it hard to compare weights across the compared methods, especially for MGDA and GradNorm, because their absolute values are much smaller than the others’, which obscures the *relative magnitudes* between tasks. (For task weighting, absolute magnitudes don’t matter.) I think the paper doesn’t provide strong evidence that the observed good performance was actually due to better task weighting. 3. Related to #2, no reason is provided for why these baselines were chosen for comparison. Readers who follow this technical area closely might understand how informative the contrast between the proposed method and these baselines is, but to others like me they look like arbitrary choices. An explanation of why they were chosen is required. 1. The theoretical analysis doesn’t provide much insight beyond the fact that a loss computed on a held-out dataset (= the query set) is a better estimator of the expected loss than one computed on the training data, which is well known. It could be removed entirely to make space for other discussions. 2. The scatter plots in Figures 2 and 3 are not very suitable because it is hard to see how the runs are distributed when they overlap. Box plots like those in Figures 4 and 5 would be better.
Does the review include a short summary of the paper?
no
This paper addresses the problem of how to weight task-specific losses in multi-task learning and presents an algorithm that adapts the weights of the loss functions, each of which corresponds to a task, during training. The key idea is to update the loss weights so that the weighted sum of task-specific gradients moves the model parameters in a direction that reduces all task-specific losses computed on held-out data. The proposed algorithm is expected to (a) avoid undesirable task performance trade-offs (i.e., sacrificing performance on one task to improve on another) and (b) facilitate better generalization. The experiments showed that the proposed method achieved meaningful improvements on most of the tested individual tasks and also achieved significantly higher performance on average. 1. The proposed method directly and naturally addresses the trade-off problem of multi-task learning. The design motivation is easy to understand and can thus easily facilitate further, deeper research. 2. The presented experiments clearly demonstrated its performance superiority on two different types of text classification tasks. Thus it can be argued that the proposed method will have broader applications. 1. The experimental settings may favor the proposed method, and the actual performance gap in the averaged loss may not be as significant as presented in the paper. The loss function of the “Uniform” baseline uses a constant weight across tasks, which matches the weights used in the evaluation of average performance. Thus I would expect the baseline’s performance not to be far behind the other methods, including the proposed method, *for the averaged score*. However, Figures 2 and 3 show that it performed poorly. This may suggest that this baseline, and the other baselines, overfit to the training data. There is no mention of the criterion used to stop training, so I cannot tell whether the authors did early stopping using held-out data. Since the proposed method uses a held-out dataset (called “query” in the paper) to control training, I think it is fair to use such a dataset to control the training of the baseline methods. 2. The paper doesn’t provide insights about the adapted weights, except that they appeared to keep being adapted during training. I think it is of general interest whether the proposed method actually controlled the task weights well, and/or whether the other baseline methods assigned task weights poorly and performed worse because of that. The scaling of Figures 8 and 9 makes it hard to compare weights across the compared methods, especially for MGDA and GradNorm, because their absolute values are much smaller than the others’, which obscures the *relative magnitudes* between tasks. (For task weighting, absolute magnitudes don’t matter.) I think the paper doesn’t provide strong evidence that the observed good performance was actually due to better task weighting. 3. Related to #2, no reason is provided for why these baselines were chosen for comparison. Readers who follow this technical area closely might understand how informative the contrast between the proposed method and these baselines is, but to others like me they look like arbitrary choices. An explanation of why they were chosen is required. 1. The theoretical analysis doesn’t provide much insight beyond the fact that a loss computed on a held-out dataset (= the query set) is a better estimator of the expected loss than one computed on the training data, which is well known. It could be removed entirely to make space for other discussions. 2. The scatter plots in Figures 2 and 3 are not very suitable because it is hard to see how the runs are distributed when they overlap. Box plots like those in Figures 4 and 5 would be better.
Does the review include a summary of the strengths of the paper?
yes
This paper addresses the problem of how to weight task-specific losses in multi-task learning and presents an algorithm that adapts the weights of the loss functions, each of which corresponds to a task, during training. The key idea is to update the loss weights so that the weighted sum of task-specific gradients moves the model parameters in a direction that reduces all task-specific losses computed on held-out data. The proposed algorithm is expected to (a) avoid undesirable task performance trade-offs (i.e., sacrificing performance on one task to improve on another) and (b) facilitate better generalization. The experiments showed that the proposed method achieved meaningful improvements on most of the tested individual tasks and also achieved significantly higher performance on average. 1. The experimental settings may favor the proposed method, and the actual performance gap in the averaged loss may not be as significant as presented in the paper. The loss function of the “Uniform” baseline uses a constant weight across tasks, which matches the weights used in the evaluation of average performance. Thus I would expect the baseline’s performance not to be far behind the other methods, including the proposed method, *for the averaged score*. However, Figures 2 and 3 show that it performed poorly. This may suggest that this baseline, and the other baselines, overfit to the training data. There is no mention of the criterion used to stop training, so I cannot tell whether the authors did early stopping using held-out data. Since the proposed method uses a held-out dataset (called “query” in the paper) to control training, I think it is fair to use such a dataset to control the training of the baseline methods. 2. The paper doesn’t provide insights about the adapted weights, except that they appeared to keep being adapted during training. I think it is of general interest whether the proposed method actually controlled the task weights well, and/or whether the other baseline methods assigned task weights poorly and performed worse because of that. The scaling of Figures 8 and 9 makes it hard to compare weights across the compared methods, especially for MGDA and GradNorm, because their absolute values are much smaller than the others’, which obscures the *relative magnitudes* between tasks. (For task weighting, absolute magnitudes don’t matter.) I think the paper doesn’t provide strong evidence that the observed good performance was actually due to better task weighting. 3. Related to #2, no reason is provided for why these baselines were chosen for comparison. Readers who follow this technical area closely might understand how informative the contrast between the proposed method and these baselines is, but to others like me they look like arbitrary choices. An explanation of why they were chosen is required. 1. The theoretical analysis doesn’t provide much insight beyond the fact that a loss computed on a held-out dataset (= the query set) is a better estimator of the expected loss than one computed on the training data, which is well known. It could be removed entirely to make space for other discussions. 2. The scatter plots in Figures 2 and 3 are not very suitable because it is hard to see how the runs are distributed when they overlap. Box plots like those in Figures 4 and 5 would be better.
Does the review include a summary of the strengths of the paper?
no
This paper addresses the problem of how to weight task-specific losses in multi-task learning and presents an algorithm that adapts the weights of the loss functions, each of which corresponds to a task, during training. The key idea is to update the loss weights so that the weighted sum of task-specific gradients moves the model parameters in a direction that reduces all task-specific losses computed on held-out data. The proposed algorithm is expected to (a) avoid undesirable task performance trade-offs (i.e., sacrificing performance on one task to improve on another) and (b) facilitate better generalization. The experiments showed that the proposed method achieved meaningful improvements on most of the tested individual tasks and also achieved significantly higher performance on average. 1. The proposed method directly and naturally addresses the trade-off problem of multi-task learning. The design motivation is easy to understand and can thus easily facilitate further, deeper research. 2. The presented experiments clearly demonstrated its performance superiority on two different types of text classification tasks. Thus it can be argued that the proposed method will have broader applications. 1. The experimental settings may favor the proposed method, and the actual performance gap in the averaged loss may not be as significant as presented in the paper. The loss function of the “Uniform” baseline uses a constant weight across tasks, which matches the weights used in the evaluation of average performance. Thus I would expect the baseline’s performance not to be far behind the other methods, including the proposed method, *for the averaged score*. However, Figures 2 and 3 show that it performed poorly. This may suggest that this baseline, and the other baselines, overfit to the training data. There is no mention of the criterion used to stop training, so I cannot tell whether the authors did early stopping using held-out data. Since the proposed method uses a held-out dataset (called “query” in the paper) to control training, I think it is fair to use such a dataset to control the training of the baseline methods. 2. The paper doesn’t provide insights about the adapted weights, except that they appeared to keep being adapted during training. I think it is of general interest whether the proposed method actually controlled the task weights well, and/or whether the other baseline methods assigned task weights poorly and performed worse because of that. The scaling of Figures 8 and 9 makes it hard to compare weights across the compared methods, especially for MGDA and GradNorm, because their absolute values are much smaller than the others’, which obscures the *relative magnitudes* between tasks. (For task weighting, absolute magnitudes don’t matter.) I think the paper doesn’t provide strong evidence that the observed good performance was actually due to better task weighting. 3. Related to #2, no reason is provided for why these baselines were chosen for comparison. Readers who follow this technical area closely might understand how informative the contrast between the proposed method and these baselines is, but to others like me they look like arbitrary choices. An explanation of why they were chosen is required. 1. The theoretical analysis doesn’t provide much insight beyond the fact that a loss computed on a held-out dataset (= the query set) is a better estimator of the expected loss than one computed on the training data, which is well known. It could be removed entirely to make space for other discussions. 2. The scatter plots in Figures 2 and 3 are not very suitable because it is hard to see how the runs are distributed when they overlap. Box plots like those in Figures 4 and 5 would be better.
Does the review include a summary of the weaknesses of the paper?
yes
This paper addresses the problem of how to weight task-specific losses in multi-task learning and presents an algorithm that adapts the weights of the loss functions, each of which corresponds to a task, during training. The key idea is to update the loss weights so that the weighted sum of task-specific gradients moves the model parameters in a direction that reduces all task-specific losses computed on held-out data. The proposed algorithm is expected to (a) avoid undesirable task performance trade-offs (i.e., sacrificing performance on one task to improve on another) and (b) facilitate better generalization. The experiments showed that the proposed method achieved meaningful improvements on most of the tested individual tasks and also achieved significantly higher performance on average. 1. The proposed method directly and naturally addresses the trade-off problem of multi-task learning. The design motivation is easy to understand and can thus easily facilitate further, deeper research. 2. The presented experiments clearly demonstrated its performance superiority on two different types of text classification tasks. Thus it can be argued that the proposed method will have broader applications. 1. The theoretical analysis doesn’t provide much insight beyond the fact that a loss computed on a held-out dataset (= the query set) is a better estimator of the expected loss than one computed on the training data, which is well known. It could be removed entirely to make space for other discussions. 2. The scatter plots in Figures 2 and 3 are not very suitable because it is hard to see how the runs are distributed when they overlap. Box plots like those in Figures 4 and 5 would be better.
Does the review include a summary of the weaknesses of the paper?
no
This paper addresses the problem of how to weight task-specific losses in multi-task learning and presents an algorithm that adapts the weights of the loss functions, each of which corresponds to a task, during training. The key idea is to update the loss weights so that the weighted sum of task-specific gradients moves the model parameters in a direction that reduces all task-specific losses computed on held-out data. The proposed algorithm is expected to (a) avoid undesirable task performance trade-offs (i.e., sacrificing performance on one task to improve on another) and (b) facilitate better generalization. The experiments showed that the proposed method achieved meaningful improvements on most of the tested individual tasks and also achieved significantly higher performance on average. 1. The proposed method directly and naturally addresses the trade-off problem of multi-task learning. The design motivation is easy to understand and can thus easily facilitate further, deeper research. 2. The presented experiments clearly demonstrated its performance superiority on two different types of text classification tasks. Thus it can be argued that the proposed method will have broader applications. 1. The experimental settings may favor the proposed method, and the actual performance gap in the averaged loss may not be as significant as presented in the paper. The loss function of the “Uniform” baseline uses a constant weight across tasks, which matches the weights used in the evaluation of average performance. Thus I would expect the baseline’s performance not to be far behind the other methods, including the proposed method, *for the averaged score*. However, Figures 2 and 3 show that it performed poorly. This may suggest that this baseline, and the other baselines, overfit to the training data. There is no mention of the criterion used to stop training, so I cannot tell whether the authors did early stopping using held-out data. Since the proposed method uses a held-out dataset (called “query” in the paper) to control training, I think it is fair to use such a dataset to control the training of the baseline methods. 2. The paper doesn’t provide insights about the adapted weights, except that they appeared to keep being adapted during training. I think it is of general interest whether the proposed method actually controlled the task weights well, and/or whether the other baseline methods assigned task weights poorly and performed worse because of that. The scaling of Figures 8 and 9 makes it hard to compare weights across the compared methods, especially for MGDA and GradNorm, because their absolute values are much smaller than the others’, which obscures the *relative magnitudes* between tasks. (For task weighting, absolute magnitudes don’t matter.) I think the paper doesn’t provide strong evidence that the observed good performance was actually due to better task weighting. 3. Related to #2, no reason is provided for why these baselines were chosen for comparison. Readers who follow this technical area closely might understand how informative the contrast between the proposed method and these baselines is, but to others like me they look like arbitrary choices. An explanation of why they were chosen is required. 1. The theoretical analysis doesn’t provide much insight beyond the fact that a loss computed on a held-out dataset (= the query set) is a better estimator of the expected loss than one computed on the training data, which is well known. It could be removed entirely to make space for other discussions. 2. The scatter plots in Figures 2 and 3 are not very suitable because it is hard to see how the runs are distributed when they overlap. Box plots like those in Figures 4 and 5 would be better.
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper addresses the problem of how to weight task-specific losses in multi-task learning and presents an algorithm that adapts the weights of the loss functions, each of which corresponds to a task, during training. The key idea is to update the loss weights so that the weighted sum of task-specific gradients moves the model parameters in a direction that reduces all task-specific losses computed on held-out data. The proposed algorithm is expected to (a) avoid undesirable task performance trade-offs (i.e., sacrificing performance on one task to improve on another) and (b) facilitate better generalization. The experiments showed that the proposed method achieved meaningful improvements on most of the tested individual tasks and also achieved significantly higher performance on average. 1. The proposed method directly and naturally addresses the trade-off problem of multi-task learning. The design motivation is easy to understand and can thus easily facilitate further, deeper research. 2. The presented experiments clearly demonstrated its performance superiority on two different types of text classification tasks. Thus it can be argued that the proposed method will have broader applications. 1. The experimental settings may favor the proposed method, and the actual performance gap in the averaged loss may not be as significant as presented in the paper. The loss function of the “Uniform” baseline uses a constant weight across tasks, which matches the weights used in the evaluation of average performance. Thus I would expect the baseline’s performance not to be far behind the other methods, including the proposed method, *for the averaged score*. However, Figures 2 and 3 show that it performed poorly. This may suggest that this baseline, and the other baselines, overfit to the training data. There is no mention of the criterion used to stop training, so I cannot tell whether the authors did early stopping using held-out data. Since the proposed method uses a held-out dataset (called “query” in the paper) to control training, I think it is fair to use such a dataset to control the training of the baseline methods. 2. The paper doesn’t provide insights about the adapted weights, except that they appeared to keep being adapted during training. I think it is of general interest whether the proposed method actually controlled the task weights well, and/or whether the other baseline methods assigned task weights poorly and performed worse because of that. The scaling of Figures 8 and 9 makes it hard to compare weights across the compared methods, especially for MGDA and GradNorm, because their absolute values are much smaller than the others’, which obscures the *relative magnitudes* between tasks. (For task weighting, absolute magnitudes don’t matter.) I think the paper doesn’t provide strong evidence that the observed good performance was actually due to better task weighting. 3. Related to #2, no reason is provided for why these baselines were chosen for comparison. Readers who follow this technical area closely might understand how informative the contrast between the proposed method and these baselines is, but to others like me they look like arbitrary choices. An explanation of why they were chosen is required.
Does the review mention any comments, suggestions or typos that the author should address?
no
The authors propose to use static semi-factual generation + dynamic human-intervened correction to get better OOD and few-shot performance. The method includes 2 major steps: 1) use a rationale extraction model trained on a small amount of annotated rationales to highlight rationales and replace non-rationales with synonyms (semi-factual generation), and 2) ask human annotators to identify false rationales (use synonym replacement for the false span) and missing rationales (extract a subsequence) to generate the new examples. The authors compare with CAD and show better in-domain and OOD performance. - The idea of using semi-factual generation is novel and interesting. - Using a rationale model to expose model reasoning and asking human annotators for minimal effort (they only identify errors and leave generating new examples to models) can largely reduce the annotation effort. - The result showing better OOD performance against CAD with a smaller number of examples is interesting, as the effort to annotate the data is much less. - Some experiments don’t add much to the analysis: Duplication (line 493) doesn’t seem like a reasonable baseline to compare to. It merely multiplies the 50 examples without adding any new information. - The semi-factual generation on model-generated rationales in the first step: the replacement could be done on missing rationales, which will remove the correct rationales from the example. It would be better if the paper addressed this (even by simply pointing out that it’s not an issue). - Some annotation protocols and their necessities are not explained, e.g., lines 223-224 and lines 230-231. - Typo in line 1: rational-centric → rationale-centric. - Was the notation in line 284 intentional? It would look nicer to use $x'$ instead. - Not sure if *inductive bias* is the right term to characterize what the annotation captures. Personally, I think of it more as *invariance*. (This point does not factor into the final score since it's more of a personal take.)
Does the review include a short summary of the paper?
yes
- The idea of using semi-factual generation is novel and interesting. - Using a rationale model to expose model reasoning and asking human annotators for minimal effort (they only identify errors and leave generating new examples to models) can largely reduce the annotation effort. - The result showing better OOD performance against CAD with a smaller number of examples is interesting, as the effort to annotate the data is much less. - Some experiments don’t add much to the analysis: Duplication (line 493) doesn’t seem like a reasonable baseline to compare to. It merely multiplies the 50 examples without adding any new information. - The semi-factual generation on model-generated rationales in the first step: the replacement could be done on missing rationales, which will remove the correct rationales from the example. It would be better if the paper addressed this (even by simply pointing out that it’s not an issue). - Some annotation protocols and their necessities are not explained, e.g., lines 223-224 and lines 230-231. - Typo in line 1: rational-centric → rationale-centric. - Was the notation in line 284 intentional? It would look nicer to use $x'$ instead. - Not sure if *inductive bias* is the right term to characterize what the annotation captures. Personally, I think of it more as *invariance*. (This point does not factor into the final score since it's more of a personal take.)
Does the review include a short summary of the paper?
no
The authors propose to use static semi-factual generation + dynamic human-intervened correction to get better OOD and few-shot performance. The method includes 2 major steps: 1) use a rationale extraction model trained on a small amount of annotated rationales to highlight rationales and replace non-rationales with synonyms (semi-factual generation), and 2) ask human annotators to identify false rationales (use synonym replacement for the false span) and missing rationales (extract a subsequence) to generate the new examples. The authors compare with CAD and show better in-domain and OOD performance. - The idea of using semi-factual generation is novel and interesting. - Using a rationale model to expose model reasoning and asking human annotators for minimal effort (they only identify errors and leave generating new examples to models) can largely reduce the annotation effort. - The result showing better OOD performance against CAD with a smaller number of examples is interesting, as the effort to annotate the data is much less. - Some experiments don’t add much to the analysis: Duplication (line 493) doesn’t seem like a reasonable baseline to compare to. It merely multiplies the 50 examples without adding any new information. - The semi-factual generation on model-generated rationales in the first step: the replacement could be done on missing rationales, which will remove the correct rationales from the example. It would be better if the paper addressed this (even by simply pointing out that it’s not an issue). - Some annotation protocols and their necessities are not explained, e.g., lines 223-224 and lines 230-231. - Typo in line 1: rational-centric → rationale-centric. - Was the notation in line 284 intentional? It would look nicer to use $x'$ instead. - Not sure if *inductive bias* is the right term to characterize what the annotation captures. Personally, I think of it more as *invariance*. (This point does not factor into the final score since it's more of a personal take.)
Does the review include a summary of the strengths of the paper?
yes
The authors propose to use static semi-factual generation + dynamic human-intervened correction to get better OOD and few-shot performance. The method includes 2 major steps: 1) use a rationale extraction model trained on a small amount of annotated rationales to highlight rationales and replace non-rationales with synonyms (semi-factual generation), and 2) ask human annotators to identify false rationales (use synonym replacement for the false span) and missing rationales (extract a subsequence) to generate the new examples. The authors compare with CAD and show better in-domain and OOD performance. - Some experiments don’t add much to the analysis: Duplication (line 493) doesn’t seem like a reasonable baseline to compare to. It merely multiplies the 50 examples without adding any new information. - The semi-factual generation on model-generated rationales in the first step: the replacement could be done on missing rationales, which will remove the correct rationales from the example. It would be better if the paper addressed this (even by simply pointing out that it’s not an issue). - Some annotation protocols and their necessities are not explained, e.g., lines 223-224 and lines 230-231. - Typo in line 1: rational-centric → rationale-centric. - Was the notation in line 284 intentional? It would look nicer to use $x'$ instead. - Not sure if *inductive bias* is the right term to characterize what the annotation captures. Personally, I think of it more as *invariance*. (This point does not factor into the final score since it's more of a personal take.)
Does the review include a summary of the strengths of the paper?
no
The authors propose to use static semi-factual generation + dynamic human-intervened correction to get better OOD and few-shot performance. The method includes 2 major steps: 1) use a rationale extraction model trained on a small amount of annotated rationales to highlight rationales and replace non-rationales with synonyms (semi-factual generation), and 2) ask human annotators to identify false rationales (use synonym replacement for the false span) and missing rationales (extract a subsequence) to generate the new examples. The authors compare with CAD and show better in-domain and OOD performance. - The idea of using semi-factual generation is novel and interesting. - Using a rationale model to expose model reasoning and asking human annotators for minimal effort (they only identify errors and leave generating new examples to models) can largely reduce the annotation effort. - The result showing better OOD performance against CAD with a smaller number of examples is interesting, as the effort to annotate the data is much less. - Some experiments don’t add much to the analysis: Duplication (line 493) doesn’t seem like a reasonable baseline to compare to. It merely multiplies the 50 examples without adding any new information. - The semi-factual generation on model-generated rationales in the first step: the replacement could be done on missing rationales, which will remove the correct rationales from the example. It would be better if the paper addressed this (even by simply pointing out that it’s not an issue). - Some annotation protocols and their necessities are not explained, e.g., lines 223-224 and lines 230-231. - Typo in line 1: rational-centric → rationale-centric. - Was the notation in line 284 intentional? It would look nicer to use $x'$ instead. - Not sure if *inductive bias* is the right term to characterize what the annotation captures. Personally, I think of it more as *invariance*. (This point does not factor into the final score since it's more of a personal take.)
Does the review include a summary of the weaknesses of the paper?
yes
The authors propose to use static semi-factual generation + dynamic human-intervened correction to get better OOD and few-shot performance. The method includes 2 major steps: 1) use a rationale extraction model trained on a small amount of annotated rationales to highlight rationales and replace non-rationales with synonyms (semi-factual generation), and 2) ask human annotators to identify false rationales (use synonym replacement for the false span) and missing rationales (extract a subsequence) to generate the new examples. The authors compare with CAD and show better in-domain and OOD performance. - The idea of using semi-factual generation is novel and interesting. - Using a rationale model to expose model reasoning and asking human annotators for minimal effort (they only identify errors and leave generating new examples to models) can largely reduce the annotation effort. - The result showing better OOD performance against CAD with a smaller number of examples is interesting, as the effort to annotate the data is much less. - Typo in line 1: rational-centric → rationale-centric. - Was the notation in line 284 intentional? It would look nicer to use $x'$ instead. - Not sure if *inductive bias* is the right term to characterize what the annotation captures. Personally, I think of it more as *invariance*. (This point does not factor into the final score since it's more of a personal take.)
Does the review include a summary of the weaknesses of the paper?
no
The authors propose to use static semi-factual generation + dynamic human-intervened correction to get better OOD and few-shot performance. The method includes 2 major steps: 1) use a rationale extraction model trained on a small amount of annotated rationales to highlight rationales and replace non-rationales with synonyms (semi-factual generation), and 2) ask human annotators to identify false rationales (use synonym replacement for the false span) and missing rationales (extract a subsequence) to generate the new examples. The authors compare with CAD and show better in-domain and OOD performance. - The idea of using semi-factual generation is novel and interesting. - Using a rationale model to expose model reasoning and asking human annotators for minimal effort (they only identify errors and leave generating new examples to models) can largely reduce the annotation effort. - The result showing better OOD performance against CAD with a smaller number of examples is interesting, as the effort to annotate the data is much less. - Some experiments don’t add much to the analysis: Duplication (line 493) doesn’t seem like a reasonable baseline to compare to. It merely multiplies the 50 examples without adding any new information. - The semi-factual generation on model-generated rationales in the first step: the replacement could be done on missing rationales, which will remove the correct rationales from the example. It would be better if the paper addressed this (even by simply pointing out that it’s not an issue). - Some annotation protocols and their necessities are not explained, e.g., lines 223-224 and lines 230-231. - Typo in line 1: rational-centric → rationale-centric. - Was the notation in line 284 intentional? It would look nicer to use $x'$ instead. - Not sure if *inductive bias* is the right term to characterize what the annotation captures. Personally, I think of it more as *invariance*. (This point does not factor into the final score since it's more of a personal take.)
Does the review mention any comments, suggestions or typos that the author should address?
yes
The authors propose to use static semi-factual generation + dynamic human-intervened correction to get better OOD and few-shot performance. The method includes 2 major steps: 1) use a rationale extraction model trained on a small amount of annotated rationales to highlight rationales and replace non-rationales with synonyms (semi-factual generation), and 2) ask human annotators to identify false rationales (use synonym replacement for the false span) and missing rationales (extract a subsequence) to generate the new examples. The authors compare with CAD and show better in-domain and OOD performance. - The idea of using semi-factual generation is novel and interesting. - Using a rationale model to expose model reasoning and asking human annotators for minimal effort (they only identify errors and leave generating new examples to models) can largely reduce the annotation effort. - The result showing better OOD performance against CAD with a smaller number of examples is interesting, as the effort to annotate the data is much less. - Some experiments don’t add much to the analysis: Duplication (line 493) doesn’t seem like a reasonable baseline to compare to. It merely multiplies the 50 examples without adding any new information. - The semi-factual generation on model-generated rationales in the first step: the replacement could be done on missing rationales, which will remove the correct rationales from the example. It would be better if the paper addressed this (even by simply pointing out that it’s not an issue). - Some annotation protocols and their necessities are not explained, e.g., lines 223-224 and lines 230-231.
Does the review mention any comments, suggestions or typos that the author should address?
no
The paper presents a rational-centric framework with human-in-the-loop to boost model out-of-distribution performance in few-shot learning scenarios. The proposed approach uses static semi-factual generation and human corrections to decouple spurious associations and bias models towards generally applicable underlying distributions, which enables fast and accurate generalization. Experimental results show the superiority of the proposed approach in in-distribution and out-of-distribution settings, especially for few-shot learning scenarios. - Improving out-of-distribution model performance, especially in few-shot learning settings, is an important problem of real-life relevance in the NLP community. - The proposed approach is simple and can be applied to NLP tasks where rationales can be easily identified and annotated, for instance sentiment analysis and text classification tasks. - The proposed approach provides cost savings in comparison to alternative approaches for data augmentation, and provides strong out-of-distribution as well as in-distribution performance in few-shot learning settings. - The proposed approach is not fully automatic, and still requires human annotations for identifying rationales and correcting errors from the static semi-factual generation phase. While this annotation effort could be less significant than for other data augmentation methods, it still presents a significant cost overhead. - It is not clear how to generalize this approach to other NLP tasks aside from sentiment analysis and text classification. For instance, it is not clear how to generalize it to sequence-to-sequence tasks like machine translation. - Identifying rationales is not a simple problem, especially for more complicated NLP tasks like machine translation. The paper is well organized and easy to follow. Figure 2 is a bit cluttered and the "bold" text is hard to see; perhaps another color or a bigger font could help highlight the human-identified rationales better.
Does the review include a short summary of the paper?
yes
- Improving out-of-distribution model performance, especially in few-shot learning settings, is an important problem of real-life relevance in the NLP community. - The proposed approach is simple and can be applied to NLP tasks where rationales can be easily identified and annotated, for instance sentiment analysis and text classification tasks. - The proposed approach provides cost savings in comparison to alternative approaches for data augmentation, and provides strong out-of-distribution as well as in-distribution performance in few-shot learning settings. - The proposed approach is not fully automatic, and still requires human annotations for identifying rationales and correcting errors from the static semi-factual generation phase. While this annotation effort could be less significant than for other data augmentation methods, it still presents a significant cost overhead. - It is not clear how to generalize this approach to other NLP tasks aside from sentiment analysis and text classification. For instance, it is not clear how to generalize it to sequence-to-sequence tasks like machine translation. - Identifying rationales is not a simple problem, especially for more complicated NLP tasks like machine translation. The paper is well organized and easy to follow. Figure 2 is a bit cluttered and the "bold" text is hard to see; perhaps another color or a bigger font could help highlight the human-identified rationales better.
Does the review include a short summary of the paper?
no
The paper presents a rational-centric framework with human-in-the-loop to boost model out-of-distribution performance in few-shot learning scenarios. The proposed approach uses static semi-factual generation and human corrections to decouple spurious associations and bias models towards generally applicable underlying distributions, which enables fast and accurate generalization. Experimental results show the superiority of the proposed approach in in-distribution and out-of-distribution settings, especially for few-shot learning scenarios. - Improving out-of-distribution model performance, especially in few-shot learning settings, is an important problem of real-life relevance in the NLP community. - The proposed approach is simple and can be applied to NLP tasks where rationales can be easily identified and annotated, for instance sentiment analysis and text classification tasks. - The proposed approach provides cost savings in comparison to alternative approaches for data augmentation, and provides strong out-of-distribution as well as in-distribution performance in few-shot learning settings. - The proposed approach is not fully automatic, and still requires human annotations for identifying rationales and correcting errors from the static semi-factual generation phase. While this annotation effort could be less significant than for other data augmentation methods, it still presents a significant cost overhead. - It is not clear how to generalize this approach to other NLP tasks aside from sentiment analysis and text classification. For instance, it is not clear how to generalize it to sequence-to-sequence tasks like machine translation. - Identifying rationales is not a simple problem, especially for more complicated NLP tasks like machine translation. The paper is well organized and easy to follow. Figure 2 is a bit cluttered and the "bold" text is hard to see; perhaps another color or a bigger font could help highlight the human-identified rationales better.
Does the review include a summary of the strengths of the paper?
yes
The paper presents a rational-centric framework with human-in-the-loop to boost model out-of-distribution performance in few-shot learning scenarios. The proposed approach uses static semi-factual generation and human corrections to decouple spurious associations and bias models towards generally applicable underlying distributions, which enables fast and accurate generalization. Experimental results show the superiority of the proposed approach in in-distribution and out-of-distribution settings, especially for few-shot learning scenarios. - The proposed approach is not fully automatic, and still requires human annotations for identifying rationales and correcting errors from the static semi-factual generation phase. While this annotation effort could be less significant than for other data augmentation methods, it still presents a significant cost overhead. - It is not clear how to generalize this approach to other NLP tasks aside from sentiment analysis and text classification. For instance, it is not clear how to generalize it to sequence-to-sequence tasks like machine translation. - Identifying rationales is not a simple problem, especially for more complicated NLP tasks like machine translation. The paper is well organized and easy to follow. Figure 2 is a bit cluttered and the "bold" text is hard to see; perhaps another color or a bigger font could help highlight the human-identified rationales better.
Does the review include a summary of the strengths of the paper?
no
The paper presents a rational-centric framework with human-in-the-loop to boost model out-of-distribution performance in few-shot learning scenarios. The proposed approach uses static semi-factual generation and human corrections to decouple spurious associations and bias models towards generally applicable underlying distributions, which enables fast and accurate generalization. Experimental results show the superiority of the proposed approach in in-distribution and out-of-distribution settings, especially for few-shot learning scenarios. - Improving out-of-distribution model performance, especially in few-shot learning settings, is an important problem of real-life relevance in the NLP community. - The proposed approach is simple and can be applied to NLP tasks where rationales can be easily identified and annotated, for instance sentiment analysis and text classification tasks. - The proposed approach provides cost savings in comparison to alternative approaches for data augmentation, and provides strong out-of-distribution as well as in-distribution performance in few-shot learning settings. - The proposed approach is not fully automatic, and still requires human annotations for identifying rationales and correcting errors from the static semi-factual generation phase. While this annotation effort could be less significant than for other data augmentation methods, it still presents a significant cost overhead. - It is not clear how to generalize this approach to other NLP tasks aside from sentiment analysis and text classification. For instance, it is not clear how to generalize it to sequence-to-sequence tasks like machine translation. - Identifying rationales is not a simple problem, especially for more complicated NLP tasks like machine translation. The paper is well organized and easy to follow. Figure 2 is a bit cluttered and the "bold" text is hard to see; perhaps another color or a bigger font could help highlight the human-identified rationales better.
Does the review include a summary of the weaknesses of the paper?
yes
The paper presents a rational-centric framework with human-in-the-loop to boost model out-of-distribution performance in few-shot learning scenarios. The proposed approach uses static semi-factual generation and human corrections to decouple spurious associations and bias models towards generally applicable underlying distributions, which enables fast and accurate generalization. Experimental results show the superiority of the proposed approach in in-distribution and out-of-distribution settings, especially for few-shot learning scenarios. - Improving out-of-distribution model performance, especially in few-shot learning settings, is an important problem of real-life relevance in the NLP community. - The proposed approach is simple and can be applied to NLP tasks where rationales can be easily identified and annotated, for instance sentiment analysis and text classification tasks. - The proposed approach provides cost savings in comparison to alternative approaches for data augmentation, and provides strong out-of-distribution as well as in-distribution performance in few-shot learning settings. The paper is well organized and easy to follow. Figure 2 is a bit cluttered and the "bold" text is hard to see; perhaps another color or a bigger font could help highlight the human-identified rationales better.
Does the review include a summary of the weaknesses of the paper?
no
The paper presents a rational-centric framework with human-in-the-loop to boost model out-of-distribution performance in few-shot learning scenarios. The proposed approach uses static semi-factual generation and human corrections to decouple spurious associations and bias models towards generally applicable underlying distributions, which enables fast and accurate generalization. Experimental results show the superiority of the proposed approach in in-distribution and out-of-distribution settings, especially for few-shot learning scenarios. - Improving out-of-distribution model performance, especially in few-shot learning settings, is an important problem of real-life relevance in the NLP community. - The proposed approach is simple and can be applied to NLP tasks where rationales can be easily identified and annotated, for instance sentiment analysis and text classification tasks. - The proposed approach provides cost savings in comparison to alternative approaches for data augmentation, and provides strong out-of-distribution as well as in-distribution performance in few-shot learning settings. - The proposed approach is not fully automatic, and still requires human annotations for identifying rationales and correcting errors from the static semi-factual generation phase. While this annotation effort could be less significant than for other data augmentation methods, it still presents a significant cost overhead. - It is not clear how to generalize this approach to other NLP tasks aside from sentiment analysis and text classification. For instance, it is not clear how to generalize it to sequence-to-sequence tasks like machine translation. - Identifying rationales is not a simple problem, especially for more complicated NLP tasks like machine translation. The paper is well organized and easy to follow. Figure 2 is a bit cluttered and the "bold" text is hard to see; perhaps another color or a bigger font could help highlight the human-identified rationales better.
Does the review mention any comments, suggestions or typos that the author should address?
yes
The paper presents a rational-centric framework with human-in-the-loop to boost model out-of-distribution performance in few-shot learning scenarios. The proposed approach uses static semi-factual generation and human corrections to decouple spurious associations and bias models towards generally applicable underlying distributions, which enables fast and accurate generalization. Experimental results show the superiority of the proposed approach in in-distribution and out-of-distribution settings, especially for few-shot learning scenarios. - Improving out-of-distribution model performance, especially in few-shot learning settings, is an important problem of real-life relevance in the NLP community. - The proposed approach is simple and can be applied to NLP tasks where rationales can be easily identified and annotated, for instance sentiment analysis and text classification tasks. - The proposed approach provides cost savings in comparison to alternative approaches for data augmentation, and provides strong out-of-distribution as well as in-distribution performance in few-shot learning settings. - The proposed approach is not fully automatic, and still requires human annotations for identifying rationales and correcting errors from the static semi-factual generation phase. While this annotation effort could be less significant than for other data augmentation methods, it still presents a significant cost overhead. - It is not clear how to generalize this approach to other NLP tasks aside from sentiment analysis and text classification. For instance, it is not clear how to generalize it to sequence-to-sequence tasks like machine translation. - Identifying rationales is not a simple problem, especially for more complicated NLP tasks like machine translation.
Does the review mention any comments, suggestions or typos that the author should address?
no
This paper proposes, as a form of data augmentation, to replace the one-hot sequences that are typically consumed by text classification models with an interpolation between this one-hot distribution and a distribution over word-types obtained by running the sequence through BERT. The authors show that this form of augmentation helps on its own as well as when combined with other standard data augmentation techniques. - The paper obtains good results with a straightforward approach. - The paper is written fairly clearly. - The paper needs some light editing (especially Section 4.1), but I don't see any significant weaknesses. - One thing I wasn't sure I understood is whether the "smoothed" inputs are used on their own to train the model, or if they're used in addition to the standard inputs. Lines 236-239 make me think they're used in addition. This is fine, but it should be emphasized more explicitly. - Since, as the authors note, they are merely approximating the token distribution given by BERT (by not using any MASK tokens), it might be interesting to see whether this approximation is in fact hurting performance or not. That is, if we obtain token-level distributions by masking each token in the input in turn, and then use the resulting smoothed representations, is this better or worse for augmentation than the approximation the authors propose?
Does the review include a short summary of the paper?
yes
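The review above describes the augmentation as an interpolation between the one-hot token distribution and a BERT-derived distribution over word types. Below is a minimal sketch of that interpolation, assuming a HuggingFace masked LM and a hypothetical mixing weight `lam`; neither the model choice nor the weight value is taken from the paper.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def interpolated_distribution(sentence: str, lam: float = 0.5) -> torch.Tensor:
    """Return a (seq_len, vocab_size) mixture of one-hot and MLM distributions."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**enc).logits[0]              # (seq_len, vocab_size)
    mlm_dist = F.softmax(logits, dim=-1)           # distribution over word types
    one_hot = F.one_hot(enc["input_ids"][0],
                        num_classes=mlm_dist.size(-1)).float()
    # No MASK tokens are used: the unmasked sentence is fed through BERT,
    # matching the approximation the review discusses.
    return lam * one_hot + (1.0 - lam) * mlm_dist
```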
- The paper obtains good results with a straightforward approach. - The paper is written fairly clearly. - The paper needs some light editing (especially Section 4.1), but I don't see any significant weaknesses. - One thing I wasn't sure I understood is whether the "smoothed" inputs are used on their own to train the model, or if they're used in addition to the standard inputs. Lines 236-239 make me think they're used in addition. This is fine, but should be emphasized more explicitly. - Since, as the authors note, they are merely approximating the token distribution given by BERT (by not using any MASK tokens), it might be interesting to see whether this approximation is in fact hurting the performance or not. That is, if we obtain token-level distributions by masking each token in the input in turn, and then use the resulting smoothed representations, is this better or worse for augmentation than the approximation the authors propose?
Does the review include a short summary of the paper?
no
This paper proposes, as a form of data augmentation, to replace the one-hot sequences that are typically consumed by text classification models with an interpolation between this one-hot distribution and a distribution over word types obtained by running the sequence through BERT. The authors show that this form of augmentation helps on its own as well as when combined with other standard data augmentation techniques. - The paper obtains good results with a straightforward approach. - The paper is written fairly clearly. - The paper needs some light editing (especially Section 4.1), but I don't see any significant weaknesses. - One thing I wasn't sure I understood is whether the "smoothed" inputs are used on their own to train the model, or if they're used in addition to the standard inputs. Lines 236-239 make me think they're used in addition. This is fine, but should be emphasized more explicitly. - Since, as the authors note, they are merely approximating the token distribution given by BERT (by not using any MASK tokens), it might be interesting to see whether this approximation is in fact hurting the performance or not. That is, if we obtain token-level distributions by masking each token in the input in turn, and then use the resulting smoothed representations, is this better or worse for augmentation than the approximation the authors propose?
Does the review include a summary of the strengths of the paper?
yes
This paper proposes, as a form of data augmentation, to replace the one-hot sequences that are typically consumed by text classification models with an interpolation between this one-hot distribution and a distribution over word types obtained by running the sequence through BERT. The authors show that this form of augmentation helps on its own as well as when combined with other standard data augmentation techniques. - The paper needs some light editing (especially Section 4.1), but I don't see any significant weaknesses. - One thing I wasn't sure I understood is whether the "smoothed" inputs are used on their own to train the model, or if they're used in addition to the standard inputs. Lines 236-239 make me think they're used in addition. This is fine, but should be emphasized more explicitly. - Since, as the authors note, they are merely approximating the token distribution given by BERT (by not using any MASK tokens), it might be interesting to see whether this approximation is in fact hurting the performance or not. That is, if we obtain token-level distributions by masking each token in the input in turn, and then use the resulting smoothed representations, is this better or worse for augmentation than the approximation the authors propose?
Does the review include a summary of the strengths of the paper?
no
This paper proposes, as a form of data augmentation, to replace the one-hot sequences that are typically consumed by text classification models with an interpolation between this one-hot distribution and a distribution over word types obtained by running the sequence through BERT. The authors show that this form of augmentation helps on its own as well as when combined with other standard data augmentation techniques. - The paper obtains good results with a straightforward approach. - The paper is written fairly clearly. - The paper needs some light editing (especially Section 4.1), but I don't see any significant weaknesses. - One thing I wasn't sure I understood is whether the "smoothed" inputs are used on their own to train the model, or if they're used in addition to the standard inputs. Lines 236-239 make me think they're used in addition. This is fine, but should be emphasized more explicitly. - Since, as the authors note, they are merely approximating the token distribution given by BERT (by not using any MASK tokens), it might be interesting to see whether this approximation is in fact hurting the performance or not. That is, if we obtain token-level distributions by masking each token in the input in turn, and then use the resulting smoothed representations, is this better or worse for augmentation than the approximation the authors propose?
Does the review include a summary of the weaknesses of the paper?
yes
This paper proposes, as a form of data augmentation, to replace the one-hot sequences that are typically consumed by text classification models with an interpolation between this one-hot distribution and a distribution over word types obtained by running the sequence through BERT. The authors show that this form of augmentation helps on its own as well as when combined with other standard data augmentation techniques. - The paper obtains good results with a straightforward approach. - The paper is written fairly clearly. - One thing I wasn't sure I understood is whether the "smoothed" inputs are used on their own to train the model, or if they're used in addition to the standard inputs. Lines 236-239 make me think they're used in addition. This is fine, but should be emphasized more explicitly. - Since, as the authors note, they are merely approximating the token distribution given by BERT (by not using any MASK tokens), it might be interesting to see whether this approximation is in fact hurting the performance or not. That is, if we obtain token-level distributions by masking each token in the input in turn, and then use the resulting smoothed representations, is this better or worse for augmentation than the approximation the authors propose?
Does the review include a summary of the weaknesses of the paper?
no
This paper proposes, as a form of data augmentation, to replace the one-hot sequences that are typically consumed by text classification models with an interpolation between this one-hot distribution and a distribution over word types obtained by running the sequence through BERT. The authors show that this form of augmentation helps on its own as well as when combined with other standard data augmentation techniques. - The paper obtains good results with a straightforward approach. - The paper is written fairly clearly. - The paper needs some light editing (especially Section 4.1), but I don't see any significant weaknesses. - One thing I wasn't sure I understood is whether the "smoothed" inputs are used on their own to train the model, or if they're used in addition to the standard inputs. Lines 236-239 make me think they're used in addition. This is fine, but should be emphasized more explicitly. - Since, as the authors note, they are merely approximating the token distribution given by BERT (by not using any MASK tokens), it might be interesting to see whether this approximation is in fact hurting the performance or not. That is, if we obtain token-level distributions by masking each token in the input in turn, and then use the resulting smoothed representations, is this better or worse for augmentation than the approximation the authors propose?
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper proposes, as a form of data augmentation, to replace the one-hot sequences that are typically consumed by text classification models with an interpolation between this one-hot distribution and a distribution over word types obtained by running the sequence through BERT. The authors show that this form of augmentation helps on its own as well as when combined with other standard data augmentation techniques. - The paper obtains good results with a straightforward approach. - The paper is written fairly clearly. - The paper needs some light editing (especially Section 4.1), but I don't see any significant weaknesses.
Does the review mention any comments, suggestions or typos that the author should address?
no
The paper proposes a data augmentation method using a controllable smoothed representation, which is obtained by combining the one-hot representation and the smooth representation through masked language modeling. The authors showed the effectiveness of the proposed method on low-resource sentence classification tasks. They also found that the smooth representation can be used with other data augmentation methods to achieve better results. - The paper is well-structured. The authors explained the motivation and the methodology clearly. Figures 1 and 2 are informative and help the readers better understand the method. - It is nice that text smoothing can be combined with other data augmentation approaches to achieve better performance. - The main weakness of the paper is the experiments, which is understandable for a short paper, but I'd still expect them to be stronger. First, the setting only covers the extremely low-resource regime, which is not the only case where we want to use data augmentation in real-world applications. Also, sentence classification is an easier task. I feel that the proposed augmentation method has the potential to be used on more NLP tasks, which was unfortunately not shown. - The proposed mixup strategy is very simple (Equation 5); I wonder if the authors have tried other ways to interpolate the one-hot vector with the MLM-smoothed vector. - How does \lambda influence the performance? - How does the augmentation method compare to other baselines with more training data?
Does the review include a short summary of the paper?
yes
- The paper is well-structured. The authors explained the motivation and the methodology clearly. Figures 1 and 2 are informative and help the readers better understand the method. - It is nice that text smoothing can be combined with other data augmentation approaches to achieve better performance. - The main weakness of the paper is the experiments, which is understandable for a short paper, but I'd still expect them to be stronger. First, the setting only covers the extremely low-resource regime, which is not the only case where we want to use data augmentation in real-world applications. Also, sentence classification is an easier task. I feel that the proposed augmentation method has the potential to be used on more NLP tasks, which was unfortunately not shown. - The proposed mixup strategy is very simple (Equation 5); I wonder if the authors have tried other ways to interpolate the one-hot vector with the MLM-smoothed vector. - How does \lambda influence the performance? - How does the augmentation method compare to other baselines with more training data?
Does the review include a short summary of the paper?
no
The paper proposes a data augmentation method using a controllable smoothed representation, which is obtained by combining the one-hot representation and the smooth representation through masked language modeling. The authors showed the effectiveness of the proposed method on low-resource sentence classification tasks. They also found that the smooth representation can be used with other data augmentation methods to achieve better results. - The paper is well-structured. The authors explained the motivation and the methodology clearly. Figures 1 and 2 are informative and help the readers better understand the method. - It is nice that text smoothing can be combined with other data augmentation approaches to achieve better performance. - The main weakness of the paper is the experiments, which is understandable for a short paper, but I'd still expect them to be stronger. First, the setting only covers the extremely low-resource regime, which is not the only case where we want to use data augmentation in real-world applications. Also, sentence classification is an easier task. I feel that the proposed augmentation method has the potential to be used on more NLP tasks, which was unfortunately not shown. - The proposed mixup strategy is very simple (Equation 5); I wonder if the authors have tried other ways to interpolate the one-hot vector with the MLM-smoothed vector. - How does \lambda influence the performance? - How does the augmentation method compare to other baselines with more training data?
Does the review include a summary of the strengths of the paper?
yes
The paper proposes a data augmentation method using a controllable smoothed representation, which is obtained by combining the one-hot representation and the smooth representation through masked language modeling. The authors showed the effectiveness of the proposed method on low-resource sentence classification tasks. They also found that the smooth representation can be used with other data augmentation methods to achieve better results. - The main weakness of the paper is the experiments, which is understandable for a short paper, but I'd still expect them to be stronger. First, the setting only covers the extremely low-resource regime, which is not the only case where we want to use data augmentation in real-world applications. Also, sentence classification is an easier task. I feel that the proposed augmentation method has the potential to be used on more NLP tasks, which was unfortunately not shown. - The proposed mixup strategy is very simple (Equation 5); I wonder if the authors have tried other ways to interpolate the one-hot vector with the MLM-smoothed vector. - How does \lambda influence the performance? - How does the augmentation method compare to other baselines with more training data?
Does the review include a summary of the strengths of the paper?
no
The paper proposes a data augmentation method using a controllable smoothed representation, which is obtained by combining the one-hot representation and the smooth representation through masked language modeling. The authors showed the effectiveness of the proposed method on low-resource sentence classification tasks. They also found that the smooth representation can be used with other data augmentation methods to achieve better results. - The paper is well-structured. The authors explained the motivation and the methodology clearly. Figures 1 and 2 are informative and help the readers better understand the method. - It is nice that text smoothing can be combined with other data augmentation approaches to achieve better performance. - The main weakness of the paper is the experiments, which is understandable for a short paper, but I'd still expect them to be stronger. First, the setting only covers the extremely low-resource regime, which is not the only case where we want to use data augmentation in real-world applications. Also, sentence classification is an easier task. I feel that the proposed augmentation method has the potential to be used on more NLP tasks, which was unfortunately not shown. - The proposed mixup strategy is very simple (Equation 5); I wonder if the authors have tried other ways to interpolate the one-hot vector with the MLM-smoothed vector. - How does \lambda influence the performance? - How does the augmentation method compare to other baselines with more training data?
Does the review include a summary of the weaknesses of the paper?
yes
The paper proposes a data augmentation method using a controllable smoothed representation, which is obtained by combining the one-hot representation and the smooth representation through masked language modeling. The authors showed the effectiveness of the proposed method on low-resource sentence classification tasks. They also found that the smooth representation can be used with other data augmentation methods to achieve better results. - The paper is well-structured. The authors explained the motivation and the methodology clearly. Figures 1 and 2 are informative and help the readers better understand the method. - It is nice that text smoothing can be combined with other data augmentation approaches to achieve better performance. - How does \lambda influence the performance? - How does the augmentation method compare to other baselines with more training data?
Does the review include a summary of the weaknesses of the paper?
no
The paper proposes a data augmentation method using a controllable smoothed representation, which is obtained by combining the one-hot representation and the smooth representation through masked language modeling. The authors showed the effectiveness of the proposed method on low-resource sentence classification tasks. They also found that the smooth representation can be used with other data augmentation methods to achieve better results. - The paper is well-structured. The authors explained the motivation and the methodology clearly. Figures 1 and 2 are informative and help the readers better understand the method. - It is nice that text smoothing can be combined with other data augmentation approaches to achieve better performance. - The main weakness of the paper is the experiments, which is understandable for a short paper, but I'd still expect them to be stronger. First, the setting only covers the extremely low-resource regime, which is not the only case where we want to use data augmentation in real-world applications. Also, sentence classification is an easier task. I feel that the proposed augmentation method has the potential to be used on more NLP tasks, which was unfortunately not shown. - The proposed mixup strategy is very simple (Equation 5); I wonder if the authors have tried other ways to interpolate the one-hot vector with the MLM-smoothed vector. - How does \lambda influence the performance? - How does the augmentation method compare to other baselines with more training data?
Does the review mention any comments, suggestions or typos that the author should address?
yes
The paper proposes a data augmentation method using a controllable smoothed representation, which is obtained by combining the one-hot representation and the smooth representation through masked language modeling. The authors showed the effectiveness of the proposed method on low-resource sentence classification tasks. They also found that the smooth representation can be used with other data augmentation methods to achieve better results. - The paper is well-structured. The authors explained the motivation and the methodology clearly. Figures 1 and 2 are informative and help the readers better understand the method. - It is nice that text smoothing can be combined with other data augmentation approaches to achieve better performance. - The main weakness of the paper is the experiments, which is understandable for a short paper, but I'd still expect them to be stronger. First, the setting only covers the extremely low-resource regime, which is not the only case where we want to use data augmentation in real-world applications. Also, sentence classification is an easier task. I feel that the proposed augmentation method has the potential to be used on more NLP tasks, which was unfortunately not shown. - The proposed mixup strategy is very simple (Equation 5); I wonder if the authors have tried other ways to interpolate the one-hot vector with the MLM-smoothed vector.
Does the review mention any comments, suggestions or typos that the author should address?
no
The paper introduces a text smoothing approach and uses it in different downstream tasks. Different types of sentence classification tasks are improved by using such a text smoothing method. The paper demonstrates improvement from using the proposed text smoothing method. This paper does not make a significant contribution. Smoothed Representation and Mixup Strategy are not proposed by the authors. Moreover, the strategy of combining the one-hot representation and the smoothed representation with a weight parameter lambda is too simple. Three low-resource sentence classification tasks are used in the experiments. Even lower-resource classification tasks, such as MRPC in GLUE, are missing. The paper lacks the configuration of lambda; I believe this can greatly affect performance. I am still unsure about the motivation: the high probability of "average" is only due to the MLM. Nevertheless, the semantic meaning of "average" is learned by the MLM, and a downstream task like sentiment analysis would only care about its semantic meaning. One of the biggest problems is that the motivation is not convincing. I think it would be better to have examples of different predictions using your text smoothing approach. In Figure 2, how is "average" likely to affect the sentiment analysis label? Though some of the tasks aren't very useful, I believe more classification tasks should be conducted. In my experience with GLUE, SST and MNLI are two of the most resource-rich sentence classification tasks compared to other tasks in GLUE.
Does the review include a short summary of the paper?
yes
The paper demonstrates improvement from using the proposed text smoothing method. This paper does not make a significant contribution. Smoothed Representation and Mixup Strategy are not proposed by the authors. Moreover, the strategy of combining the one-hot representation and the smoothed representation with a weight parameter lambda is too simple. Three low-resource sentence classification tasks are used in the experiments. Even lower-resource classification tasks, such as MRPC in GLUE, are missing. The paper lacks the configuration of lambda; I believe this can greatly affect performance. I am still unsure about the motivation: the high probability of "average" is only due to the MLM. Nevertheless, the semantic meaning of "average" is learned by the MLM, and a downstream task like sentiment analysis would only care about its semantic meaning. One of the biggest problems is that the motivation is not convincing. I think it would be better to have examples of different predictions using your text smoothing approach. In Figure 2, how is "average" likely to affect the sentiment analysis label? Though some of the tasks aren't very useful, I believe more classification tasks should be conducted. In my experience with GLUE, SST and MNLI are two of the most resource-rich sentence classification tasks compared to other tasks in GLUE.
Does the review include a short summary of the paper?
no
The paper introduces a text smoothing approach and uses it in different downstream tasks. Different types of sentence classification tasks are improved by using such a text smoothing method. The paper demonstrates improvement from using the proposed text smoothing method. This paper does not make a significant contribution. Smoothed Representation and Mixup Strategy are not proposed by the authors. Moreover, the strategy of combining the one-hot representation and the smoothed representation with a weight parameter lambda is too simple. Three low-resource sentence classification tasks are used in the experiments. Even lower-resource classification tasks, such as MRPC in GLUE, are missing. The paper lacks the configuration of lambda; I believe this can greatly affect performance. I am still unsure about the motivation: the high probability of "average" is only due to the MLM. Nevertheless, the semantic meaning of "average" is learned by the MLM, and a downstream task like sentiment analysis would only care about its semantic meaning. One of the biggest problems is that the motivation is not convincing. I think it would be better to have examples of different predictions using your text smoothing approach. In Figure 2, how is "average" likely to affect the sentiment analysis label? Though some of the tasks aren't very useful, I believe more classification tasks should be conducted. In my experience with GLUE, SST and MNLI are two of the most resource-rich sentence classification tasks compared to other tasks in GLUE.
Does the review include a summary of the strengths of the paper?
yes
The paper introduces a text smoothing approach and uses it in different downstream tasks. Different types of sentence classification tasks are improved by using such a text smoothing method. This paper does not make a significant contribution. Smoothed Representation and Mixup Strategy are not proposed by the authors. Moreover, the strategy of combining the one-hot representation and the smoothed representation with a weight parameter lambda is too simple. Three low-resource sentence classification tasks are used in the experiments. Even lower-resource classification tasks, such as MRPC in GLUE, are missing. The paper lacks the configuration of lambda; I believe this can greatly affect performance. I am still unsure about the motivation: the high probability of "average" is only due to the MLM. Nevertheless, the semantic meaning of "average" is learned by the MLM, and a downstream task like sentiment analysis would only care about its semantic meaning. One of the biggest problems is that the motivation is not convincing. I think it would be better to have examples of different predictions using your text smoothing approach. In Figure 2, how is "average" likely to affect the sentiment analysis label? Though some of the tasks aren't very useful, I believe more classification tasks should be conducted. In my experience with GLUE, SST and MNLI are two of the most resource-rich sentence classification tasks compared to other tasks in GLUE.
Does the review include a summary of the strengths of the paper?
no
The paper introduces a text smoothing approach and uses it in different downstream tasks. Different types of sentence classification tasks are improved by using such a text smoothing method. The paper demonstrates improvement from using the proposed text smoothing method. This paper does not make a significant contribution. Smoothed Representation and Mixup Strategy are not proposed by the authors. Moreover, the strategy of combining the one-hot representation and the smoothed representation with a weight parameter lambda is too simple. Three low-resource sentence classification tasks are used in the experiments. Even lower-resource classification tasks, such as MRPC in GLUE, are missing. The paper lacks the configuration of lambda; I believe this can greatly affect performance. I am still unsure about the motivation: the high probability of "average" is only due to the MLM. Nevertheless, the semantic meaning of "average" is learned by the MLM, and a downstream task like sentiment analysis would only care about its semantic meaning. One of the biggest problems is that the motivation is not convincing. I think it would be better to have examples of different predictions using your text smoothing approach. In Figure 2, how is "average" likely to affect the sentiment analysis label? Though some of the tasks aren't very useful, I believe more classification tasks should be conducted. In my experience with GLUE, SST and MNLI are two of the most resource-rich sentence classification tasks compared to other tasks in GLUE.
Does the review include a summary of the weaknesses of the paper?
yes
The paper introduces a text smoothing approach and uses it in different downstream tasks. Different types of sentence classification tasks are improved by using such a text smoothing method. The paper demonstrates improvement from using the proposed text smoothing method. One of the biggest problems is that the motivation is not convincing. I think it would be better to have examples of different predictions using your text smoothing approach. In Figure 2, how is "average" likely to affect the sentiment analysis label? Though some of the tasks aren't very useful, I believe more classification tasks should be conducted. In my experience with GLUE, SST and MNLI are two of the most resource-rich sentence classification tasks compared to other tasks in GLUE.
Does the review include a summary of the weaknesses of the paper?
no
The paper introduces a text smoothing approach and uses it in different downstream tasks. Different types of sentence classification tasks are improved by using such a text smoothing method. The paper demonstrates improvement from using the proposed text smoothing method. This paper does not make a significant contribution. Smoothed Representation and Mixup Strategy are not proposed by the authors. Moreover, the strategy of combining the one-hot representation and the smoothed representation with a weight parameter lambda is too simple. Three low-resource sentence classification tasks are used in the experiments. Even lower-resource classification tasks, such as MRPC in GLUE, are missing. The paper lacks the configuration of lambda; I believe this can greatly affect performance. I am still unsure about the motivation: the high probability of "average" is only due to the MLM. Nevertheless, the semantic meaning of "average" is learned by the MLM, and a downstream task like sentiment analysis would only care about its semantic meaning. One of the biggest problems is that the motivation is not convincing. I think it would be better to have examples of different predictions using your text smoothing approach. In Figure 2, how is "average" likely to affect the sentiment analysis label? Though some of the tasks aren't very useful, I believe more classification tasks should be conducted. In my experience with GLUE, SST and MNLI are two of the most resource-rich sentence classification tasks compared to other tasks in GLUE.
Does the review mention any comments, suggestions or typos that the author should address?
yes
The paper introduces a text smoothing approach and uses it in different downstream tasks. Different types of sentence classification tasks are improved by using such a text smoothing method. The paper demonstrates improvement from using the proposed text smoothing method. This paper does not make a significant contribution. Smoothed Representation and Mixup Strategy are not proposed by the authors. Moreover, the strategy of combining the one-hot representation and the smoothed representation with a weight parameter lambda is too simple. Three low-resource sentence classification tasks are used in the experiments. Even lower-resource classification tasks, such as MRPC in GLUE, are missing. The paper lacks the configuration of lambda; I believe this can greatly affect performance. I am still unsure about the motivation: the high probability of "average" is only due to the MLM. Nevertheless, the semantic meaning of "average" is learned by the MLM, and a downstream task like sentiment analysis would only care about its semantic meaning.
Does the review mention any comments, suggestions or typos that the author should address?
no
This paper proposes a data augmentation technique named Text Smoothing, which converts sentences from their one-hot representations to controllable smoothed representations. Specifically, the authors multiply the output of a pre-trained BERT with the word embedding matrix to get the smoothed representation of an input token. Then the smoothed representation is combined with the one-hot representation by mixup to get the mixed representation. Using such a mixed representation instead of the one-hot representation as the token input can be regarded as a kind of data augmentation and can significantly improve the model performance in the low-resource regime. __1. The paper is well organized and easy to follow.__ __2. The proposed method is novel and interesting:__ The idea of mixing the one-hot representation and the LM (smoothed) representation for an input token is very different from other works on data augmentation. It uses the knowledge of pre-trained BERT to integrate and compress information from multiple related words, providing richer semantics that a single token in the one-hot form cannot provide. __3. The experimental results are impressive:__ The improvement brought by text smoothing is very significant and surpasses many strong baselines. Besides, text smoothing can also be well combined with various data augmentation methods, showing great practicability and universality. __1. Lack of significance test:__ I'm glad to see the paper reports the standard deviation of accuracy over 15 runs. However, the standard deviation of the proposed method overlaps significantly with that of the best baseline, which raises my concern about whether the improvement is statistically significant. It would be better to conduct a significance test on the experimental results. __2. Anomalous result:__ According to Table 3, the performance of BARTword and BARTspan on SST-2 degrades a lot after incorporating text smoothing; why? __3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. Experiments on the full datasets, instead of only the low-resource regime, are also encouraged. __4. Lack of some technical details:__ __4.1__. Is the smoothed representation always calculated based on pre-trained BERT, even when the text smoothing method is adapted to GPT2 and BART models (e.g., GPT2context, BARTword, etc.)? __4.2__. What is the value of the hyperparameter lambda of the mixup in the experiments? Will the setting of this hyperparameter have a great impact on the result? __4.3__. Generally, traditional data augmentation methods have the setting of __augmentation magnification__, i.e., the number of augmented samples generated for each original sample. Is there such a setting in the proposed method? If so, how many augmented samples are synthesized for each original sample? 1. Some items in Table 2 and Table 3 have spaces between the accuracy and the standard deviation, and some items don't, which affects the visual consistency. 2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they cause a degradation of performance. 3. I suggest updating Listing 1 to reflect the process of sending interpolated_repr into the task model to get the final representation.
Does the review include a short summary of the paper?
yes
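The summary in the review above spells out two steps: multiply the masked LM's output distribution by the word embedding matrix to get a smoothed representation, then mix it with the ordinary one-hot (standard embedding) representation. A hedged sketch under those assumptions is given below; `lam` is a placeholder for the paper's lambda and the model choice is illustrative only.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
emb = mlm.get_input_embeddings().weight                # (vocab_size, hidden_size)

def mixed_token_embeddings(sentence: str, lam: float = 0.5) -> torch.Tensor:
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        probs = mlm(**enc).logits[0].softmax(dim=-1)   # (seq_len, vocab_size)
        smoothed = probs @ emb                         # smoothed representation
        one_hot_emb = emb[enc["input_ids"][0]]         # ordinary token embeddings
    return lam * one_hot_emb + (1.0 - lam) * smoothed  # mixup of the two
```

The mixed embeddings could then be passed to a downstream classifier through an embeddings input rather than token ids, which is presumably how the interpolated representation reaches the task model.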
__1. The paper is well organized and easy to follow.__ __2. The proposed method is novel and interesting:__ The idea of mixing the one-hot representation and the LM (smoothed) representation for an input token is very different from other works on data augmentation. It uses the knowledge of pre-trained BERT to integrate and compress information from multiple related words, providing richer semantics that a single token in the one-hot form cannot provide. __3. The experimental results are impressive:__ The improvement brought by text smoothing is very significant and surpasses many strong baselines. Besides, text smoothing can also be well combined with various data augmentation methods, showing great practicability and universality. __1. Lack of significance test:__ I'm glad to see the paper reports the standard deviation of accuracy over 15 runs. However, the standard deviation of the proposed method overlaps significantly with that of the best baseline, which raises my concern about whether the improvement is statistically significant. It would be better to conduct a significance test on the experimental results. __2. Anomalous result:__ According to Table 3, the performance of BARTword and BARTspan on SST-2 degrades a lot after incorporating text smoothing; why? __3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. Experiments on the full datasets, instead of only the low-resource regime, are also encouraged. __4. Lack of some technical details:__ __4.1__. Is the smoothed representation always calculated based on pre-trained BERT, even when the text smoothing method is adapted to GPT2 and BART models (e.g., GPT2context, BARTword, etc.)? __4.2__. What is the value of the hyperparameter lambda of the mixup in the experiments? Will the setting of this hyperparameter have a great impact on the result? __4.3__. Generally, traditional data augmentation methods have the setting of __augmentation magnification__, i.e., the number of augmented samples generated for each original sample. Is there such a setting in the proposed method? If so, how many augmented samples are synthesized for each original sample? 1. Some items in Table 2 and Table 3 have spaces between the accuracy and the standard deviation, and some items don't, which affects the visual consistency. 2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they cause a degradation of performance. 3. I suggest updating Listing 1 to reflect the process of sending interpolated_repr into the task model to get the final representation.
Does the review include a short summary of the paper?
no
This paper proposes a data augmentation technique named Text Smoothing, which converts sentences from their one-hot representations to controllable smoothed representations. Specifically, the authors multiply the output of a pre-trained BERT with the word embedding matrix to get the smoothed representation of an input token. Then the smoothed representation is combined with the one-hot representation by mixup to get the mixed representation. Using such a mixed representation instead of the one-hot representation as the token input can be regarded as a kind of data augmentation and can significantly improve the model performance in the low-resource regime. __1. The paper is well organized and easy to follow.__ __2. The proposed method is novel and interesting:__ The idea of mixing the one-hot representation and the LM (smoothed) representation for an input token is very different from other works on data augmentation. It uses the knowledge of pre-trained BERT to integrate and compress information from multiple related words, providing richer semantics that a single token in the one-hot form cannot provide. __3. The experimental results are impressive:__ The improvement brought by text smoothing is very significant and surpasses many strong baselines. Besides, text smoothing can also be well combined with various data augmentation methods, showing great practicability and universality. __1. Lack of significance test:__ I'm glad to see the paper reports the standard deviation of accuracy over 15 runs. However, the standard deviation of the proposed method overlaps significantly with that of the best baseline, which raises my concern about whether the improvement is statistically significant. It would be better to conduct a significance test on the experimental results. __2. Anomalous result:__ According to Table 3, the performance of BARTword and BARTspan on SST-2 degrades a lot after incorporating text smoothing; why? __3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. Experiments on the full datasets, instead of only the low-resource regime, are also encouraged. __4. Lack of some technical details:__ __4.1__. Is the smoothed representation always calculated based on pre-trained BERT, even when the text smoothing method is adapted to GPT2 and BART models (e.g., GPT2context, BARTword, etc.)? __4.2__. What is the value of the hyperparameter lambda of the mixup in the experiments? Will the setting of this hyperparameter have a great impact on the result? __4.3__. Generally, traditional data augmentation methods have the setting of __augmentation magnification__, i.e., the number of augmented samples generated for each original sample. Is there such a setting in the proposed method? If so, how many augmented samples are synthesized for each original sample? 1. Some items in Table 2 and Table 3 have spaces between the accuracy and the standard deviation, and some items don't, which affects the visual consistency. 2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they cause a degradation of performance. 3. I suggest updating Listing 1 to reflect the process of sending interpolated_repr into the task model to get the final representation.
Does the review include a summary of the strengths of the paper?
yes
This paper proposes a data augmentation technique named Text Smoothing, which converts sentences from their one-hot representations to controllable smoothed representations. Specifically, the authors multiply the output of a pre-trained BERT with the word embedding matrix to get the smoothed representation of an input token. Then the smoothed representation is combined with the one-hot representation by mixup to get the mixed representation. Using such a mixed representation instead of the one-hot representation as the token input can be regarded as a kind of data augmentation and can significantly improve the model performance in the low-resource regime. __1. Lack of significance test:__ I'm glad to see the paper reports the standard deviation of accuracy over 15 runs. However, the standard deviation of the proposed method overlaps significantly with that of the best baseline, which raises my concern about whether the improvement is statistically significant. It would be better to conduct a significance test on the experimental results. __2. Anomalous result:__ According to Table 3, the performance of BARTword and BARTspan on SST-2 degrades a lot after incorporating text smoothing; why? __3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. Experiments on the full datasets, instead of only the low-resource regime, are also encouraged. __4. Lack of some technical details:__ __4.1__. Is the smoothed representation always calculated based on pre-trained BERT, even when the text smoothing method is adapted to GPT2 and BART models (e.g., GPT2context, BARTword, etc.)? __4.2__. What is the value of the hyperparameter lambda of the mixup in the experiments? Will the setting of this hyperparameter have a great impact on the result? __4.3__. Generally, traditional data augmentation methods have the setting of __augmentation magnification__, i.e., the number of augmented samples generated for each original sample. Is there such a setting in the proposed method? If so, how many augmented samples are synthesized for each original sample? 1. Some items in Table 2 and Table 3 have spaces between the accuracy and the standard deviation, and some items don't, which affects the visual consistency. 2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they cause a degradation of performance. 3. I suggest updating Listing 1 to reflect the process of sending interpolated_repr into the task model to get the final representation.
Does the review include a summary of the strengths of the paper?
no
This paper proposes a data augmentation technique named Text Smoothing, which converts sentences from their one-hot representations to controllable smoothed representations. Specifically, the authors multiply the output of a pre-trained BERT with the word embedding matrix to get the smoothed representation of an input token. Then the smoothed representation is combined with the one-hot representation by mixup to get the mixed representation. Using such a mixed representation instead of the one-hot representation as the token input can be regarded as a kind of data augmentation and can significantly improve the model performance in the low-resource regime. __1. The paper is well organized and easy to follow.__ __2. The proposed method is novel and interesting:__ The idea of mixing the one-hot representation and the LM (smoothed) representation for an input token is very different from other works on data augmentation. It uses the knowledge of pre-trained BERT to integrate and compress information from multiple related words, providing richer semantics that a single token in the one-hot form cannot provide. __3. The experimental results are impressive:__ The improvement brought by text smoothing is very significant and surpasses many strong baselines. Besides, text smoothing can also be well combined with various data augmentation methods, showing great practicability and universality. __1. Lack of significance test:__ I'm glad to see the paper reports the standard deviation of accuracy over 15 runs. However, the standard deviation of the proposed method overlaps significantly with that of the best baseline, which raises my concern about whether the improvement is statistically significant. It would be better to conduct a significance test on the experimental results. __2. Anomalous result:__ According to Table 3, the performance of BARTword and BARTspan on SST-2 degrades a lot after incorporating text smoothing; why? __3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. Experiments on the full datasets, instead of only the low-resource regime, are also encouraged. __4. Lack of some technical details:__ __4.1__. Is the smoothed representation always calculated based on pre-trained BERT, even when the text smoothing method is adapted to GPT2 and BART models (e.g., GPT2context, BARTword, etc.)? __4.2__. What is the value of the hyperparameter lambda of the mixup in the experiments? Will the setting of this hyperparameter have a great impact on the result? __4.3__. Generally, traditional data augmentation methods have the setting of __augmentation magnification__, i.e., the number of augmented samples generated for each original sample. Is there such a setting in the proposed method? If so, how many augmented samples are synthesized for each original sample? 1. Some items in Table 2 and Table 3 have spaces between the accuracy and the standard deviation, and some items don't, which affects the visual consistency. 2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they cause a degradation of performance. 3. I suggest updating Listing 1 to reflect the process of sending interpolated_repr into the task model to get the final representation.
Does the review include a summary of the weaknesses of the paper?
yes
This paper proposes a data augmentation technique named Text Smoothing, which converts sentences from their one-hot representations to controllable smoothed representations. Specifically, the authors multiply the output of a pre-trained BERT with the word embedding matrix to get the smoothed representation of an input token. Then the smoothed representation is combined with the one-hot representation by mixup to get the mixed representation. Using such a mixed representation instead of the one-hot representation as the token input can be regarded as a kind of data augmentation and can significantly improve the model performance in the low-resource regime. __1. The paper is well organized and easy to follow.__ __2. The proposed method is novel and interesting:__ The idea of mixing the one-hot representation and the LM (smoothed) representation for an input token is very different from other works on data augmentation. It uses the knowledge of pre-trained BERT to integrate and compress information from multiple related words, providing richer semantics that a single token in the one-hot form cannot provide. __3. The experimental results are impressive:__ The improvement brought by text smoothing is very significant and surpasses many strong baselines. Besides, text smoothing can also be well combined with various data augmentation methods, showing great practicability and universality. 1. Some items in Table 2 and Table 3 have spaces between the accuracy and the standard deviation, and some items don't, which affects the visual consistency. 2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they cause a degradation of performance. 3. I suggest updating Listing 1 to reflect the process of sending interpolated_repr into the task model to get the final representation.
Does the review include a summary of the weaknesses of the paper?
no
This paper proposes a data augmentation technique named Text Smoothing, which converts sentences from their one-hot representations to controllable smoothed representations. Specifically, the authors multiply the output of a pre-trained BERT with the word embedding matrix to get the smoothed representation of an input token. Then the smoothed representation is combined with the one-hot representation by mixup to get the mixed representation. Using such a mixed representation instead of the one-hot representation as the token input can be regarded as a kind of data augmentation and can significantly improve the model performance in the low-resource regime. __1. The paper is well organized and easy to follow.__ __2. The proposed method is novel and interesting:__ The idea of mixing the one-hot representation and the LM (smoothed) representation for an input token is very different from other works on data augmentation. It uses the knowledge of pre-trained BERT to integrate and compress information from multiple related words, providing richer semantics that a single token in the one-hot form cannot provide. __3. The experimental results are impressive:__ The improvement brought by text smoothing is very significant and surpasses many strong baselines. Besides, text smoothing can also be well combined with various data augmentation methods, showing great practicability and universality. __1. Lack of significance test:__ I'm glad to see the paper reports the standard deviation of accuracy over 15 runs. However, the standard deviation of the proposed method overlaps significantly with that of the best baseline, which raises my concern about whether the improvement is statistically significant. It would be better to conduct a significance test on the experimental results. __2. Anomalous result:__ According to Table 3, the performance of BARTword and BARTspan on SST-2 degrades a lot after incorporating text smoothing; why? __3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. Experiments on the full datasets, instead of only the low-resource regime, are also encouraged. __4. Lack of some technical details:__ __4.1__. Is the smoothed representation always calculated based on pre-trained BERT, even when the text smoothing method is adapted to GPT2 and BART models (e.g., GPT2context, BARTword, etc.)? __4.2__. What is the value of the hyperparameter lambda of the mixup in the experiments? Will the setting of this hyperparameter have a great impact on the result? __4.3__. Generally, traditional data augmentation methods have the setting of __augmentation magnification__, i.e., the number of augmented samples generated for each original sample. Is there such a setting in the proposed method? If so, how many augmented samples are synthesized for each original sample? 1. Some items in Table 2 and Table 3 have spaces between the accuracy and the standard deviation, and some items don't, which affects the visual consistency. 2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they cause a degradation of performance. 3. I suggest updating Listing 1 to reflect the process of sending interpolated_repr into the task model to get the final representation.
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper proposes a data augmentation technique named Text Smoothing, which converts sentences from their one-hot representations to controllable smoothed representations. Specifically, the authors multiply the output of a pre-trained BERT with the word embedding matrix to get the smoothed representation of an input token. Then the smoothed representation is combined with the one-hot representation by mixup to get the mixed representation. Using such a mixed representation instead of the one-hot representation as the token input can be regarded as a kind of data augmentation and can significantly improve the model performance in the low-resource regime. __1. The paper is well organized and easy to follow.__ __2. The proposed method is novel and interesting:__ The idea of mixing the one-hot representation and the LM (smoothed) representation for an input token is very different from other works on data augmentation. It uses the knowledge of pre-trained BERT to integrate and compress information from multiple related words, providing richer semantics that a single token in the one-hot form cannot provide. __3. The experimental results are impressive:__ The improvement brought by text smoothing is very significant and surpasses many strong baselines. Besides, text smoothing can also be well combined with various data augmentation methods, showing great practicability and universality. __1. Lack of significance test:__ I'm glad to see the paper reports the standard deviation of accuracy over 15 runs. However, the standard deviation of the proposed method overlaps significantly with that of the best baseline, which raises my concern about whether the improvement is statistically significant. It would be better to conduct a significance test on the experimental results. __2. Anomalous result:__ According to Table 3, the performance of BARTword and BARTspan on SST-2 degrades a lot after incorporating text smoothing; why? __3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. Experiments on the full datasets, instead of only the low-resource regime, are also encouraged. __4. Lack of some technical details:__ __4.1__. Is the smoothed representation always calculated based on pre-trained BERT, even when the text smoothing method is adapted to GPT2 and BART models (e.g., GPT2context, BARTword, etc.)? __4.2__. What is the value of the hyperparameter lambda of the mixup in the experiments? Will the setting of this hyperparameter have a great impact on the result? __4.3__. Generally, traditional data augmentation methods have the setting of __augmentation magnification__, i.e., the number of augmented samples generated for each original sample. Is there such a setting in the proposed method? If so, how many augmented samples are synthesized for each original sample?
Does the review mention any comments, suggestions or typos that the author should address?
no
This paper proposes a framework for training an extract-then-generate model for the long-document summarization task. The premise is that the extractor should pass on important information from the source to the generator for producing abstractive summaries. To train the entire network, three losses are defined (generator, oracle, and consistency): the generator loss is the decoder loss, the oracle loss is used to optimize the extractor with oracle labels, and the consistency loss is used to marginalize the dynamic attention with the extractor distribution. Results are promising on two long summarization datasets (GovReport and QMSum) and competitive on arXiv. - The paper is easy to follow and understandable in most parts. - The problem is well approached, although it has not been motivated much in the paper. - Results outperform the prior SOTA by a large margin on two long datasets (GovReport and QMSum). - The idea makes sense for long-document summarization, but I’m wondering what others have done in this area with a similar methodology. What does the system offer over previous extract-then-generate methodologies? This is troublesome considering that the paper does not have any Related Work section, nor experiments comparing other extract-then-generate approaches with the proposed model. - The extract-then-generate approach can be re-phrased as a two-phase summarization system that can be either trained independently or within an end-to-end model. The choice of baselines is a bit picky here considering the methodology. The authors should report the performance of other similar architectures (i.e., extract-then-generate or two-phase systems) here. - While results are competitive on arXiv, some of the baselines have fewer parameters and obtain better performance. - The paper lacks a human analysis, which is an important part of evaluating current summarization systems for revealing the limitations and qualities of the system that cannot be captured by automatic metrics. - The paper misses some important experimental details, such as the lambda parameter values, how the oracle snippets/sentences are picked, etc. This could be improved. In the introduction, the authors make this claim: “We believe that the extract-then-generate approach mimics how a person would handle long-input summarization: first identify important pieces of information in the text and then summarize them.” It would be good to provide a reference for this claim.
Does the review include a short summary of the paper?
yes
- The paper is easy to follow and understandable in most parts. - The problem is well approached, although it has not been motivated much in the paper. - Results outperform the prior SOTA by a large margin on two long datasets (GovReport and QMSum). - The idea makes sense for long-document summarization, but I’m wondering what others have done in this area with a similar methodology. What does the system offer over previous extract-then-generate methodologies? This is troublesome considering that the paper does not have any Related Work section, nor experiments comparing other extract-then-generate approaches with the proposed model. - The extract-then-generate approach can be rephrased as a two-phase summarization system that can be trained either independently or within an end-to-end model. The choice of baselines is a bit picky here considering the methodology. The authors should report the performance of other similar architectures (i.e., extract-then-generate or two-phase systems) here. - While results are competitive on arXiv, some of the baselines have fewer parameters and obtain better performance. - The paper lacks a human analysis, which is an important part of current summarization work for revealing the limitations and qualities of the system that cannot be captured by automatic metrics. - The paper misses some important experimental details, such as the lambda parameter values, how the oracle snippets/sentences are picked, etc. It could be improved. In the introduction, the authors make this claim: “We believe that the extract-then-generate approach mimics how a person would handle long-input summarization: first identify important pieces of information in the text and then summarize them.” It would be good to provide a reference for this claim.
Does the review include a short summary of the paper?
no
This paper proposes a framework for training an extract-then-generate model for the long-document summarization task. The premise is that the extractor should pass on important information from the source to the generator for producing abstractive summaries. To train the entire network, three losses are defined (generator, oracle, and consistency): the generator loss is the decoder loss, the oracle loss is used to optimize the extractor with oracle labels, and the consistency loss is used to marginalize the dynamic attention with the extractor distribution. Results are promising on two long summarization datasets (GovReport and QMSum) and competitive on arXiv. - The paper is easy to follow and understandable in most parts. - The problem is well approached, although it has not been motivated much in the paper. - Results outperform the prior SOTA by a large margin on two long datasets (GovReport and QMSum). - The idea makes sense for long-document summarization, but I’m wondering what others have done in this area with a similar methodology. What does the system offer over previous extract-then-generate methodologies? This is troublesome considering that the paper does not have any Related Work section, nor experiments comparing other extract-then-generate approaches with the proposed model. - The extract-then-generate approach can be rephrased as a two-phase summarization system that can be trained either independently or within an end-to-end model. The choice of baselines is a bit picky here considering the methodology. The authors should report the performance of other similar architectures (i.e., extract-then-generate or two-phase systems) here. - While results are competitive on arXiv, some of the baselines have fewer parameters and obtain better performance. - The paper lacks a human analysis, which is an important part of current summarization work for revealing the limitations and qualities of the system that cannot be captured by automatic metrics. - The paper misses some important experimental details, such as the lambda parameter values, how the oracle snippets/sentences are picked, etc. It could be improved. In the introduction, the authors make this claim: “We believe that the extract-then-generate approach mimics how a person would handle long-input summarization: first identify important pieces of information in the text and then summarize them.” It would be good to provide a reference for this claim.
Does the review include a summary of the strengths of the paper?
yes
This paper proposes a framework for training an extract-then-generate model for the long-document summarization task. The premise is that the extractor should pass on important information from the source to the generator for producing abstractive summaries. To train the entire network, three losses are defined (generator, oracle, and consistency): the generator loss is the decoder loss, the oracle loss is used to optimize the extractor with oracle labels, and the consistency loss is used to marginalize the dynamic attention with the extractor distribution. Results are promising on two long summarization datasets (GovReport and QMSum) and competitive on arXiv. - The idea makes sense for long-document summarization, but I’m wondering what others have done in this area with a similar methodology. What does the system offer over previous extract-then-generate methodologies? This is troublesome considering that the paper does not have any Related Work section, nor experiments comparing other extract-then-generate approaches with the proposed model. - The extract-then-generate approach can be rephrased as a two-phase summarization system that can be trained either independently or within an end-to-end model. The choice of baselines is a bit picky here considering the methodology. The authors should report the performance of other similar architectures (i.e., extract-then-generate or two-phase systems) here. - While results are competitive on arXiv, some of the baselines have fewer parameters and obtain better performance. - The paper lacks a human analysis, which is an important part of current summarization work for revealing the limitations and qualities of the system that cannot be captured by automatic metrics. - The paper misses some important experimental details, such as the lambda parameter values, how the oracle snippets/sentences are picked, etc. It could be improved. In the introduction, the authors make this claim: “We believe that the extract-then-generate approach mimics how a person would handle long-input summarization: first identify important pieces of information in the text and then summarize them.” It would be good to provide a reference for this claim.
Does the review include a summary of the strengths of the paper?
no
This paper proposes a framework for training an extract-then-generate model for the long-document summarization task. The premise is that the extractor should pass on important information from the source to the generator for producing abstractive summaries. To train the entire network, three losses are defined (generator, oracle, and consistency): the generator loss is the decoder loss, the oracle loss is used to optimize the extractor with oracle labels, and the consistency loss is used to marginalize the dynamic attention with the extractor distribution. Results are promising on two long summarization datasets (GovReport and QMSum) and competitive on arXiv. - The paper is easy to follow and understandable in most parts. - The problem is well approached, although it has not been motivated much in the paper. - Results outperform the prior SOTA by a large margin on two long datasets (GovReport and QMSum). - The idea makes sense for long-document summarization, but I’m wondering what others have done in this area with a similar methodology. What does the system offer over previous extract-then-generate methodologies? This is troublesome considering that the paper does not have any Related Work section, nor experiments comparing other extract-then-generate approaches with the proposed model. - The extract-then-generate approach can be rephrased as a two-phase summarization system that can be trained either independently or within an end-to-end model. The choice of baselines is a bit picky here considering the methodology. The authors should report the performance of other similar architectures (i.e., extract-then-generate or two-phase systems) here. - While results are competitive on arXiv, some of the baselines have fewer parameters and obtain better performance. - The paper lacks a human analysis, which is an important part of current summarization work for revealing the limitations and qualities of the system that cannot be captured by automatic metrics. - The paper misses some important experimental details, such as the lambda parameter values, how the oracle snippets/sentences are picked, etc. It could be improved. In the introduction, the authors make this claim: “We believe that the extract-then-generate approach mimics how a person would handle long-input summarization: first identify important pieces of information in the text and then summarize them.” It would be good to provide a reference for this claim.
Does the review include a summary of the weaknesses of the paper?
yes
This paper proposes a framework for training an extract-then-generate model for the long-document summarization task. The premise is that the extractor should pass on important information from the source to the generator for producing abstractive summaries. To train the entire network, three losses are defined (generator, oracle, and consistency): the generator loss is the decoder loss, the oracle loss is used to optimize the extractor with oracle labels, and the consistency loss is used to marginalize the dynamic attention with the extractor distribution. Results are promising on two long summarization datasets (GovReport and QMSum) and competitive on arXiv. - The paper is easy to follow and understandable in most parts. - The problem is well approached, although it has not been motivated much in the paper. - Results outperform the prior SOTA by a large margin on two long datasets (GovReport and QMSum). In the introduction, the authors make this claim: “We believe that the extract-then-generate approach mimics how a person would handle long-input summarization: first identify important pieces of information in the text and then summarize them.” It would be good to provide a reference for this claim.
Does the review include a summary of the weaknesses of the paper?
no
This paper proposes a framework for training an extract-then-generate model for the long-document summarization task. The premise is that the extractor should pass on important information from the source to the generator for producing abstractive summaries. To train the entire network, three losses are defined (generator, oracle, and consistency): the generator loss is the decoder loss, the oracle loss is used to optimize the extractor with oracle labels, and the consistency loss is used to marginalize the dynamic attention with the extractor distribution. Results are promising on two long summarization datasets (GovReport and QMSum) and competitive on arXiv. - The paper is easy to follow and understandable in most parts. - The problem is well approached, although it has not been motivated much in the paper. - Results outperform the prior SOTA by a large margin on two long datasets (GovReport and QMSum). - The idea makes sense for long-document summarization, but I’m wondering what others have done in this area with a similar methodology. What does the system offer over previous extract-then-generate methodologies? This is troublesome considering that the paper does not have any Related Work section, nor experiments comparing other extract-then-generate approaches with the proposed model. - The extract-then-generate approach can be rephrased as a two-phase summarization system that can be trained either independently or within an end-to-end model. The choice of baselines is a bit picky here considering the methodology. The authors should report the performance of other similar architectures (i.e., extract-then-generate or two-phase systems) here. - While results are competitive on arXiv, some of the baselines have fewer parameters and obtain better performance. - The paper lacks a human analysis, which is an important part of current summarization work for revealing the limitations and qualities of the system that cannot be captured by automatic metrics. - The paper misses some important experimental details, such as the lambda parameter values, how the oracle snippets/sentences are picked, etc. It could be improved. In the introduction, the authors make this claim: “We believe that the extract-then-generate approach mimics how a person would handle long-input summarization: first identify important pieces of information in the text and then summarize them.” It would be good to provide a reference for this claim.
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper proposes a framework for training an extract-then-generate model for the long-document summarization task. The premise is that the extractor should pass on important information from the source to the generator for producing abstractive summaries. To train the entire network, three losses are defined (generator, oracle, and consistency): the generator loss is the decoder loss, the oracle loss is used to optimize the extractor with oracle labels, and the consistency loss is used to marginalize the dynamic attention with the extractor distribution. Results are promising on two long summarization datasets (GovReport and QMSum) and competitive on arXiv. - The paper is easy to follow and understandable in most parts. - The problem is well approached, although it has not been motivated much in the paper. - Results outperform the prior SOTA by a large margin on two long datasets (GovReport and QMSum). - The idea makes sense for long-document summarization, but I’m wondering what others have done in this area with a similar methodology. What does the system offer over previous extract-then-generate methodologies? This is troublesome considering that the paper does not have any Related Work section, nor experiments comparing other extract-then-generate approaches with the proposed model. - The extract-then-generate approach can be rephrased as a two-phase summarization system that can be trained either independently or within an end-to-end model. The choice of baselines is a bit picky here considering the methodology. The authors should report the performance of other similar architectures (i.e., extract-then-generate or two-phase systems) here. - While results are competitive on arXiv, some of the baselines have fewer parameters and obtain better performance. - The paper lacks a human analysis, which is an important part of current summarization work for revealing the limitations and qualities of the system that cannot be captured by automatic metrics. - The paper misses some important experimental details, such as the lambda parameter values, how the oracle snippets/sentences are picked, etc. It could be improved.
Does the review mention any comments, suggestions or typos that the author should address?
no
- This paper works on zero-shot relation extraction. Note that the authors assume that ground-truth entities in each sentence are given as input. In other words, this task does not require identifying entities. - The problematic issue this paper focuses on is the difficulty of distinguishing similar but different-class relations, called “similar relations” and “similar entities” (see the details in lines 71-84; Table 1 shows some examples). - To mitigate this issue, the authors propose a new relation contrastive learning framework (RCL). The reason why they applied contrastive learning (particularly instance-wise contrastive learning) to zero-shot relation extraction is that some existing studies reported its remarkable effectiveness for representation learning (see the details in lines 84-91). - Their proposed method achieved better performance (Table 2). Also, using dropout for augmentation is more effective than other augmentation techniques (Table 3), which is consistent with the result of Guo et al. (2021). - Their method consistently achieves performance improvements (Table 2). - Their proposed method is so simple that readers can reimplement it. - Their claim is not properly supported. In Section 1 (lines 71-84 and 121-124), the authors introduce the problem they want to solve, and they state as follows: “It effectively mitigates two types of similar problems: similar relations and similar entities by learning representations jointly optimized with contrastive loss and classification loss.” It is true that they show some actual examples their method solved in the Appendix (Figure 6). However, there is no quantitative evidence supporting that their proposed method solves or mitigates the problem. Thus, readers cannot judge whether their method can mitigate the “similar relations and similar entities” problem or not. Readers would appreciate it if the authors could provide quantitative results supporting the claim. - The authors adopt evaluation metrics (B^3 F1, NMI, ARI) that differ from the standard metric (F1 score) used in many existing papers on zero-shot RE, such as Chen and Li (2021) and Levy et al. (2017). In terms of the three metrics, the state-of-the-art method, ZS-BERT, is inferior to other methods, such as Att-BiLSTM, which will surprise many readers. Although it is okay to adopt different metrics, many readers would like to see the performance comparisons in terms of the standard F1 score as well as the three metrics. - The proposed method is not so new. It is true that the proposed method includes a few simple extensions for relation extraction (e.g., the Softmax Layer and Concat Layer), but the key idea and most parts of their method are based on existing contrastive learning methods, such as SimCSE proposed by Gao et al. (2021). So, readers would be happy if the authors specified more clearly which parts are their original extensions. - Minor point: Some notations are confusing and undefined. Please see “Comments, Suggestions And Typos” for details. - How about comparing your proposed method with existing ones by using the standard F1 score? - Missing references: (1) For Open Relation Extraction (in line 156), it might be better to cite the pioneering work, Banko et al., “Open Information Extraction from the Web,” in Proc. of IJCAI 2007. For contrastive learning in NLP, there are some existing studies. (2) For sequence labeling, Wiseman and Stratos, “Label-Agnostic Sequence Labeling by Copying Nearest Neighbors,” in Proc. of ACL 2019. (3) For span classification such as NER, Ouchi et al., “Instance-Based Learning of Span Representations,” in Proc. of ACL 2020. They do not call their methods "contrastive learning," but their methods are a type of instance-wise contrastive learning. - Notations: (1) In line 247, the subscript $i$ (in $X_i$) is not defined. (2) In Equation 1, the symbol $i$ is used for both the subscript and the superscript, such as $X^i_i$, which is a little bit confusing. It might be better to use different symbols. (3) Also, in Equation 1, the authors assume that the first entity span is $(i, j-1)$ and the second one $(k, l-1)$. It might be better to specify the entity spans and the notations.
Does the review include a short summary of the paper?
yes
- Their method consistently achieves performance improvements (Table 2). - Their proposed method is so simple that readers can reimplement it. - Their claim is not properly supported. In Section 1 (lines 71-84 and 121-124), the authors introduce the problem they want to solve, and they state as follows: “It effectively mitigates two types of similar problems: similar relations and similar entities by learning representations jointly optimized with contrastive loss and classification loss.” It is true that they show some actual examples their method solved in the Appendix (Figure 6). However, there is no quantitative evidence supporting that their proposed method solves or mitigates the problem. Thus, readers cannot judge whether their method can mitigate the “similar relations and similar entities” problem or not. Readers would appreciate it if the authors could provide quantitative results supporting the claim. - The authors adopt evaluation metrics (B^3 F1, NMI, ARI) that differ from the standard metric (F1 score) used in many existing papers on zero-shot RE, such as Chen and Li (2021) and Levy et al. (2017). In terms of the three metrics, the state-of-the-art method, ZS-BERT, is inferior to other methods, such as Att-BiLSTM, which will surprise many readers. Although it is okay to adopt different metrics, many readers would like to see the performance comparisons in terms of the standard F1 score as well as the three metrics. - The proposed method is not so new. It is true that the proposed method includes a few simple extensions for relation extraction (e.g., the Softmax Layer and Concat Layer), but the key idea and most parts of their method are based on existing contrastive learning methods, such as SimCSE proposed by Gao et al. (2021). So, readers would be happy if the authors specified more clearly which parts are their original extensions. - Minor point: Some notations are confusing and undefined. Please see “Comments, Suggestions And Typos” for details. - How about comparing your proposed method with existing ones by using the standard F1 score? - Missing references: (1) For Open Relation Extraction (in line 156), it might be better to cite the pioneering work, Banko et al., “Open Information Extraction from the Web,” in Proc. of IJCAI 2007. For contrastive learning in NLP, there are some existing studies. (2) For sequence labeling, Wiseman and Stratos, “Label-Agnostic Sequence Labeling by Copying Nearest Neighbors,” in Proc. of ACL 2019. (3) For span classification such as NER, Ouchi et al., “Instance-Based Learning of Span Representations,” in Proc. of ACL 2020. They do not call their methods "contrastive learning," but their methods are a type of instance-wise contrastive learning. - Notations: (1) In line 247, the subscript $i$ (in $X_i$) is not defined. (2) In Equation 1, the symbol $i$ is used for both the subscript and the superscript, such as $X^i_i$, which is a little bit confusing. It might be better to use different symbols. (3) Also, in Equation 1, the authors assume that the first entity span is $(i, j-1)$ and the second one $(k, l-1)$. It might be better to specify the entity spans and the notations.
Does the review include a short summary of the paper?
no
- This paper works on zero-shot relation extraction. Note that the authors assume that ground-truth entities in each sentence are given as input. In other words, this task does not require identifying entities. - The problematic issue this paper focuses on is the difficulty of distinguishing similar but different-class relations, called “similar relations” and “similar entities” (see the details in lines 71-84; Table 1 shows some examples). - To mitigate this issue, the authors propose a new relation contrastive learning framework (RCL). The reason why they applied contrastive learning (particularly instance-wise contrastive learning) to zero-shot relation extraction is that some existing studies reported its remarkable effectiveness for representation learning (see the details in lines 84-91). - Their proposed method achieved better performance (Table 2). Also, using dropout for augmentation is more effective than other augmentation techniques (Table 3), which is consistent with the result of Guo et al. (2021). - Their method consistently achieves performance improvements (Table 2). - Their proposed method is so simple that readers can reimplement it. - Their claim is not properly supported. In Section 1 (lines 71-84 and 121-124), the authors introduce the problem they want to solve, and they state as follows: “It effectively mitigates two types of similar problems: similar relations and similar entities by learning representations jointly optimized with contrastive loss and classification loss.” It is true that they show some actual examples their method solved in the Appendix (Figure 6). However, there is no quantitative evidence supporting that their proposed method solves or mitigates the problem. Thus, readers cannot judge whether their method can mitigate the “similar relations and similar entities” problem or not. Readers would appreciate it if the authors could provide quantitative results supporting the claim. - The authors adopt evaluation metrics (B^3 F1, NMI, ARI) that differ from the standard metric (F1 score) used in many existing papers on zero-shot RE, such as Chen and Li (2021) and Levy et al. (2017). In terms of the three metrics, the state-of-the-art method, ZS-BERT, is inferior to other methods, such as Att-BiLSTM, which will surprise many readers. Although it is okay to adopt different metrics, many readers would like to see the performance comparisons in terms of the standard F1 score as well as the three metrics. - The proposed method is not so new. It is true that the proposed method includes a few simple extensions for relation extraction (e.g., the Softmax Layer and Concat Layer), but the key idea and most parts of their method are based on existing contrastive learning methods, such as SimCSE proposed by Gao et al. (2021). So, readers would be happy if the authors specified more clearly which parts are their original extensions. - Minor point: Some notations are confusing and undefined. Please see “Comments, Suggestions And Typos” for details. - How about comparing your proposed method with existing ones by using the standard F1 score? - Missing references: (1) For Open Relation Extraction (in line 156), it might be better to cite the pioneering work, Banko et al., “Open Information Extraction from the Web,” in Proc. of IJCAI 2007. For contrastive learning in NLP, there are some existing studies. (2) For sequence labeling, Wiseman and Stratos, “Label-Agnostic Sequence Labeling by Copying Nearest Neighbors,” in Proc. of ACL 2019. (3) For span classification such as NER, Ouchi et al., “Instance-Based Learning of Span Representations,” in Proc. of ACL 2020. They do not call their methods "contrastive learning," but their methods are a type of instance-wise contrastive learning. - Notations: (1) In line 247, the subscript $i$ (in $X_i$) is not defined. (2) In Equation 1, the symbol $i$ is used for both the subscript and the superscript, such as $X^i_i$, which is a little bit confusing. It might be better to use different symbols. (3) Also, in Equation 1, the authors assume that the first entity span is $(i, j-1)$ and the second one $(k, l-1)$. It might be better to specify the entity spans and the notations.
Does the review include a summary of the strengths of the paper?
yes
- This paper works on zero-shot relation extraction. Note that the authors assume that ground-truth entities in each sentence are given as input. In other words, this task does not require identifying entities. - The problematic issue this paper focuses on is the difficulty of distinguishing similar but different-class relations, called “similar relations” and “similar entities” (see the details in lines 71-84; Table 1 shows some examples). - To mitigate this issue, the authors propose a new relation contrastive learning framework (RCL). The reason why they applied contrastive learning (particularly instance-wise contrastive learning) to zero-shot relation extraction is that some existing studies reported its remarkable effectiveness for representation learning (see the details in lines 84-91). - Their proposed method achieved better performance (Table 2). Also, using dropout for augmentation is more effective than other augmentation techniques (Table 3), which is consistent with the result of Guo et al. (2021). - Their claim is not properly supported. In Section 1 (lines 71-84 and 121-124), the authors introduce the problem they want to solve, and they state as follows: “It effectively mitigates two types of similar problems: similar relations and similar entities by learning representations jointly optimized with contrastive loss and classification loss.” It is true that they show some actual examples their method solved in the Appendix (Figure 6). However, there is no quantitative evidence supporting that their proposed method solves or mitigates the problem. Thus, readers cannot judge whether their method can mitigate the “similar relations and similar entities” problem or not. Readers would appreciate it if the authors could provide quantitative results supporting the claim. - The authors adopt evaluation metrics (B^3 F1, NMI, ARI) that differ from the standard metric (F1 score) used in many existing papers on zero-shot RE, such as Chen and Li (2021) and Levy et al. (2017). In terms of the three metrics, the state-of-the-art method, ZS-BERT, is inferior to other methods, such as Att-BiLSTM, which will surprise many readers. Although it is okay to adopt different metrics, many readers would like to see the performance comparisons in terms of the standard F1 score as well as the three metrics. - The proposed method is not so new. It is true that the proposed method includes a few simple extensions for relation extraction (e.g., the Softmax Layer and Concat Layer), but the key idea and most parts of their method are based on existing contrastive learning methods, such as SimCSE proposed by Gao et al. (2021). So, readers would be happy if the authors specified more clearly which parts are their original extensions. - Minor point: Some notations are confusing and undefined. Please see “Comments, Suggestions And Typos” for details. - How about comparing your proposed method with existing ones by using the standard F1 score? - Missing references: (1) For Open Relation Extraction (in line 156), it might be better to cite the pioneering work, Banko et al., “Open Information Extraction from the Web,” in Proc. of IJCAI 2007. For contrastive learning in NLP, there are some existing studies. (2) For sequence labeling, Wiseman and Stratos, “Label-Agnostic Sequence Labeling by Copying Nearest Neighbors,” in Proc. of ACL 2019. (3) For span classification such as NER, Ouchi et al., “Instance-Based Learning of Span Representations,” in Proc. of ACL 2020. They do not call their methods "contrastive learning," but their methods are a type of instance-wise contrastive learning. - Notations: (1) In line 247, the subscript $i$ (in $X_i$) is not defined. (2) In Equation 1, the symbol $i$ is used for both the subscript and the superscript, such as $X^i_i$, which is a little bit confusing. It might be better to use different symbols. (3) Also, in Equation 1, the authors assume that the first entity span is $(i, j-1)$ and the second one $(k, l-1)$. It might be better to specify the entity spans and the notations.
Does the review include a summary of the strengths of the paper?
no
- This paper works on zero-shot relation extraction. Note that the authors assume that ground-truth entities in each sentence are given as input. In other words, this task does not require identifying entities. - The problematic issue this paper focuses on is the difficulty of distinguishing similar but different-class relations, called “similar relations” and “similar entities” (see the details in lines 71-84; Table 1 shows some examples). - To mitigate this issue, the authors propose a new relation contrastive learning framework (RCL). The reason why they applied contrastive learning (particularly instance-wise contrastive learning) to zero-shot relation extraction is that some existing studies reported its remarkable effectiveness for representation learning (see the details in lines 84-91). - Their proposed method achieved better performance (Table 2). Also, using dropout for augmentation is more effective than other augmentation techniques (Table 3), which is consistent with the result of Guo et al. (2021). - Their method consistently achieves performance improvements (Table 2). - Their proposed method is so simple that readers can reimplement it. - Their claim is not properly supported. In Section 1 (lines 71-84 and 121-124), the authors introduce the problem they want to solve, and they state as follows: “It effectively mitigates two types of similar problems: similar relations and similar entities by learning representations jointly optimized with contrastive loss and classification loss.” It is true that they show some actual examples their method solved in the Appendix (Figure 6). However, there is no quantitative evidence supporting that their proposed method solves or mitigates the problem. Thus, readers cannot judge whether their method can mitigate the “similar relations and similar entities” problem or not. Readers would appreciate it if the authors could provide quantitative results supporting the claim. - The authors adopt evaluation metrics (B^3 F1, NMI, ARI) that differ from the standard metric (F1 score) used in many existing papers on zero-shot RE, such as Chen and Li (2021) and Levy et al. (2017). In terms of the three metrics, the state-of-the-art method, ZS-BERT, is inferior to other methods, such as Att-BiLSTM, which will surprise many readers. Although it is okay to adopt different metrics, many readers would like to see the performance comparisons in terms of the standard F1 score as well as the three metrics. - The proposed method is not so new. It is true that the proposed method includes a few simple extensions for relation extraction (e.g., the Softmax Layer and Concat Layer), but the key idea and most parts of their method are based on existing contrastive learning methods, such as SimCSE proposed by Gao et al. (2021). So, readers would be happy if the authors specified more clearly which parts are their original extensions. - Minor point: Some notations are confusing and undefined. Please see “Comments, Suggestions And Typos” for details. - How about comparing your proposed method with existing ones by using the standard F1 score? - Missing references: (1) For Open Relation Extraction (in line 156), it might be better to cite the pioneering work, Banko et al., “Open Information Extraction from the Web,” in Proc. of IJCAI 2007. For contrastive learning in NLP, there are some existing studies. (2) For sequence labeling, Wiseman and Stratos, “Label-Agnostic Sequence Labeling by Copying Nearest Neighbors,” in Proc. of ACL 2019. (3) For span classification such as NER, Ouchi et al., “Instance-Based Learning of Span Representations,” in Proc. of ACL 2020. They do not call their methods "contrastive learning," but their methods are a type of instance-wise contrastive learning. - Notations: (1) In line 247, the subscript $i$ (in $X_i$) is not defined. (2) In Equation 1, the symbol $i$ is used for both the subscript and the superscript, such as $X^i_i$, which is a little bit confusing. It might be better to use different symbols. (3) Also, in Equation 1, the authors assume that the first entity span is $(i, j-1)$ and the second one $(k, l-1)$. It might be better to specify the entity spans and the notations.
Does the review include a summary of the weaknesses of the paper?
yes
- This paper works on zero-shot relation extraction. Note that the authors assume that ground-truth entities in each sentence are given as input. In other words, this task does not require identifying entities. - The problematic issue this paper focuses on is the difficulty of distinguishing similar but different-class relations, called “similar relations” and “similar entities” (see the details in lines 71-84; Table 1 shows some examples). - To mitigate this issue, the authors propose a new relation contrastive learning framework (RCL). The reason why they applied contrastive learning (particularly instance-wise contrastive learning) to zero-shot relation extraction is that some existing studies reported its remarkable effectiveness for representation learning (see the details in lines 84-91). - Their proposed method achieved better performance (Table 2). Also, using dropout for augmentation is more effective than other augmentation techniques (Table 3), which is consistent with the result of Guo et al. (2021). - Their method consistently achieves performance improvements (Table 2). - Their proposed method is so simple that readers can reimplement it. - How about comparing your proposed method with existing ones by using the standard F1 score? - Missing references: (1) For Open Relation Extraction (in line 156), it might be better to cite the pioneering work, Banko et al., “Open Information Extraction from the Web,” in Proc. of IJCAI 2007. For contrastive learning in NLP, there are some existing studies. (2) For sequence labeling, Wiseman and Stratos, “Label-Agnostic Sequence Labeling by Copying Nearest Neighbors,” in Proc. of ACL 2019. (3) For span classification such as NER, Ouchi et al., “Instance-Based Learning of Span Representations,” in Proc. of ACL 2020. They do not call their methods "contrastive learning," but their methods are a type of instance-wise contrastive learning. - Notations: (1) In line 247, the subscript $i$ (in $X_i$) is not defined. (2) In Equation 1, the symbol $i$ is used for both the subscript and the superscript, such as $X^i_i$, which is a little bit confusing. It might be better to use different symbols. (3) Also, in Equation 1, the authors assume that the first entity span is $(i, j-1)$ and the second one $(k, l-1)$. It might be better to specify the entity spans and the notations.
Does the review include a summary of the weaknesses of the paper?
no
- This paper works on zero-shot relation extraction. Note that the authors assume that ground-truth entities in each sentence are given as input. In other words, this task does not require identifying entities. - The problematic issue this paper focuses on is the difficulty of distinguishing similar but different-class relations, called “similar relations” and “similar entities” (see the details in lines 71-84; Table 1 shows some examples). - To mitigate this issue, the authors propose a new relation contrastive learning framework (RCL). The reason why they applied contrastive learning (particularly instance-wise contrastive learning) to zero-shot relation extraction is that some existing studies reported its remarkable effectiveness for representation learning (see the details in lines 84-91). - Their proposed method achieved better performance (Table 2). Also, using dropout for augmentation is more effective than other augmentation techniques (Table 3), which is consistent with the result of Guo et al. (2021). - Their method consistently achieves performance improvements (Table 2). - Their proposed method is so simple that readers can reimplement it. - Their claim is not properly supported. In Section 1 (lines 71-84 and 121-124), the authors introduce the problem they want to solve, and they state as follows: “It effectively mitigates two types of similar problems: similar relations and similar entities by learning representations jointly optimized with contrastive loss and classification loss.” It is true that they show some actual examples their method solved in the Appendix (Figure 6). However, there is no quantitative evidence supporting that their proposed method solves or mitigates the problem. Thus, readers cannot judge whether their method can mitigate the “similar relations and similar entities” problem or not. Readers would appreciate it if the authors could provide quantitative results supporting the claim. - The authors adopt evaluation metrics (B^3 F1, NMI, ARI) that differ from the standard metric (F1 score) used in many existing papers on zero-shot RE, such as Chen and Li (2021) and Levy et al. (2017). In terms of the three metrics, the state-of-the-art method, ZS-BERT, is inferior to other methods, such as Att-BiLSTM, which will surprise many readers. Although it is okay to adopt different metrics, many readers would like to see the performance comparisons in terms of the standard F1 score as well as the three metrics. - The proposed method is not so new. It is true that the proposed method includes a few simple extensions for relation extraction (e.g., the Softmax Layer and Concat Layer), but the key idea and most parts of their method are based on existing contrastive learning methods, such as SimCSE proposed by Gao et al. (2021). So, readers would be happy if the authors specified more clearly which parts are their original extensions. - Minor point: Some notations are confusing and undefined. Please see “Comments, Suggestions And Typos” for details. - How about comparing your proposed method with existing ones by using the standard F1 score? - Missing references: (1) For Open Relation Extraction (in line 156), it might be better to cite the pioneering work, Banko et al., “Open Information Extraction from the Web,” in Proc. of IJCAI 2007. For contrastive learning in NLP, there are some existing studies. (2) For sequence labeling, Wiseman and Stratos, “Label-Agnostic Sequence Labeling by Copying Nearest Neighbors,” in Proc. of ACL 2019. (3) For span classification such as NER, Ouchi et al., “Instance-Based Learning of Span Representations,” in Proc. of ACL 2020. They do not call their methods "contrastive learning," but their methods are a type of instance-wise contrastive learning. - Notations: (1) In line 247, the subscript $i$ (in $X_i$) is not defined. (2) In Equation 1, the symbol $i$ is used for both the subscript and the superscript, such as $X^i_i$, which is a little bit confusing. It might be better to use different symbols. (3) Also, in Equation 1, the authors assume that the first entity span is $(i, j-1)$ and the second one $(k, l-1)$. It might be better to specify the entity spans and the notations.
Does the review mention any comments, suggestions or typos that the author should address?
yes
- This paper works on zero-shot relation extraction. Note that the authors assume that ground-truth entities in each sentence are given as input. In other words, this task does not require identifying entities. - The problematic issue this paper focuses on is the difficulty of distinguishing similar but different-class relations, called “similar relations” and “similar entities” (see the details in lines 71-84; Table 1 shows some examples). - To mitigate this issue, the authors propose a new relation contrastive learning framework (RCL). The reason why they applied contrastive learning (particularly instance-wise contrastive learning) to zero-shot relation extraction is that some existing studies reported its remarkable effectiveness for representation learning (see the details in lines 84-91). - Their proposed method achieved better performance (Table 2). Also, using dropout for augmentation is more effective than other augmentation techniques (Table 3), which is consistent with the result of Guo et al. (2021). - Their method consistently achieves performance improvements (Table 2). - Their proposed method is so simple that readers can reimplement it. - Their claim is not properly supported. In Section 1 (lines 71-84 and 121-124), the authors introduce the problem they want to solve, and they state as follows: “It effectively mitigates two types of similar problems: similar relations and similar entities by learning representations jointly optimized with contrastive loss and classification loss.” It is true that they show some actual examples their method solved in the Appendix (Figure 6). However, there is no quantitative evidence supporting that their proposed method solves or mitigates the problem. Thus, readers cannot judge whether their method can mitigate the “similar relations and similar entities” problem or not. Readers would appreciate it if the authors could provide quantitative results supporting the claim. - The authors adopt evaluation metrics (B^3 F1, NMI, ARI) that differ from the standard metric (F1 score) used in many existing papers on zero-shot RE, such as Chen and Li (2021) and Levy et al. (2017). In terms of the three metrics, the state-of-the-art method, ZS-BERT, is inferior to other methods, such as Att-BiLSTM, which will surprise many readers. Although it is okay to adopt different metrics, many readers would like to see the performance comparisons in terms of the standard F1 score as well as the three metrics. - The proposed method is not so new. It is true that the proposed method includes a few simple extensions for relation extraction (e.g., the Softmax Layer and Concat Layer), but the key idea and most parts of their method are based on existing contrastive learning methods, such as SimCSE proposed by Gao et al. (2021). So, readers would be happy if the authors specified more clearly which parts are their original extensions. - Minor point: Some notations are confusing and undefined. Please see “Comments, Suggestions And Typos” for details.
Does the review mention any comments, suggestions or typos that the author should address?
no
This paper presents several logic failures in the way that attribution methods are evaluated. For each failure they identify, the authors discuss why it will prevent proper identification of attribution quality, and design experiments that crystallize this potential failure. The authors survey the attribution evaluation literature, and argue that such logic failures hinder the advancements of better, more reliable attribution methods. This is a well written paper that discusses an important and very timely subject. It highlights important failures that are often overlooked in the model interpretability literature. Each failure the authors identify is clearly explained and motivated, and the experiments that demonstrate it are neat and clear. I believe that this paper can improve the way we evaluate attribution methods, and can be beneficial for many in the community. While I have a generally favorable opinion of this paper, I have two issues with it that if addressed would improve the paper in my opinion. First, I agree with the reasoning and premise, but I think the paper is missing a quite obvious connection to causal inference. Another way of explaining the current limitation of attribution evaluation is that these methods are almost never compared against counterfactuals (the only counterexamples I can think of are the causal model explanation papers). That is, we need benchmarks that include counterfactual examples where we know what is the type of manipulation that was done, and compare model predictions between the original examples and the counterfactual examples. More broadly, we can think of model explanation as a causal estimation problem, and evaluate methods as we do in causal inference (sensitivity analysis and the likes). While this paper is not the place to invent new evaluation methods, I think that it’s worthwhile to highlight the potential connections between all these failures and the lack of causal thinking in the model interpretability literature. Second, I think that the discussion points are a bit too vague and are not as thought out as the described logic failures. Especially in 3.3, I agree that we should want models that are robust to perturbations, but it is not a problem of evaluation methods, it’s a problem of model robustness. We do want our attribution methods to capture such inconsistency if it exists, this is exactly why we use it. The paper is generally very well-written, but may benefit from an additional quick round of editing. Some typos for example: Line 616: “make it more robustness”
Does the review include a short summary of the paper?
yes
This is a well written paper that discusses an important and very timely subject. It highlights important failures that are often overlooked in the model interpretability literature. Each failure the authors identify is clearly explained and motivated, and the experiments that demonstrate it are neat and clear. I believe that this paper can improve the way we evaluate attribution methods, and can be beneficial for many in the community. While I have a generally favorable opinion of this paper, I have two issues with it that if addressed would improve the paper in my opinion. First, I agree with the reasoning and premise, but I think the paper is missing a quite obvious connection to causal inference. Another way of explaining the current limitation of attribution evaluation is that these methods are almost never compared against counterfactuals (the only counterexamples I can think of are the causal model explanation papers). That is, we need benchmarks that include counterfactual examples where we know what is the type of manipulation that was done, and compare model predictions between the original examples and the counterfactual examples. More broadly, we can think of model explanation as a causal estimation problem, and evaluate methods as we do in causal inference (sensitivity analysis and the likes). While this paper is not the place to invent new evaluation methods, I think that it’s worthwhile to highlight the potential connections between all these failures and the lack of causal thinking in the model interpretability literature. Second, I think that the discussion points are a bit too vague and are not as thought out as the described logic failures. Especially in 3.3, I agree that we should want models that are robust to perturbations, but it is not a problem of evaluation methods, it’s a problem of model robustness. We do want our attribution methods to capture such inconsistency if it exists, this is exactly why we use it. The paper is generally very well-written, but may benefit from an additional quick round of editing. Some typos for example: Line 616: “make it more robustness”
Does the review include a short summary of the paper?
no
This paper presents several logic failures in the way that attribution methods are evaluated. For each failure they identify, the authors discuss why it will prevent proper identification of attribution quality, and design experiments that crystallize this potential failure. The authors survey the attribution evaluation literature, and argue that such logic failures hinder the advancements of better, more reliable attribution methods. This is a well written paper that discusses an important and very timely subject. It highlights important failures that are often overlooked in the model interpretability literature. Each failure the authors identify is clearly explained and motivated, and the experiments that demonstrate it are neat and clear. I believe that this paper can improve the way we evaluate attribution methods, and can be beneficial for many in the community. While I have a generally favorable opinion of this paper, I have two issues with it that if addressed would improve the paper in my opinion. First, I agree with the reasoning and premise, but I think the paper is missing a quite obvious connection to causal inference. Another way of explaining the current limitation of attribution evaluation is that these methods are almost never compared against counterfactuals (the only counterexamples I can think of are the causal model explanation papers). That is, we need benchmarks that include counterfactual examples where we know what is the type of manipulation that was done, and compare model predictions between the original examples and the counterfactual examples. More broadly, we can think of model explanation as a causal estimation problem, and evaluate methods as we do in causal inference (sensitivity analysis and the likes). While this paper is not the place to invent new evaluation methods, I think that it’s worthwhile to highlight the potential connections between all these failures and the lack of causal thinking in the model interpretability literature. Second, I think that the discussion points are a bit too vague and are not as thought out as the described logic failures. Especially in 3.3, I agree that we should want models that are robust to perturbations, but it is not a problem of evaluation methods, it’s a problem of model robustness. We do want our attribution methods to capture such inconsistency if it exists, this is exactly why we use it. The paper is generally very well-written, but may benefit from an additional quick round of editing. Some typos for example: Line 616: “make it more robustness”
Does the review include a summary of the strengths of the paper?
yes
This paper presents several logic failures in the way that attribution methods are evaluated. For each failure they identify, the authors discuss why it will prevent proper identification of attribution quality, and design experiments that crystallize this potential failure. The authors survey the attribution evaluation literature, and argue that such logic failures hinder the advancements of better, more reliable attribution methods. While I have a generally favorable opinion of this paper, I have two issues with it that if addressed would improve the paper in my opinion. First, I agree with the reasoning and premise, but I think the paper is missing a quite obvious connection to causal inference. Another way of explaining the current limitation of attribution evaluation is that these methods are almost never compared against counterfactuals (the only counterexamples I can think of are the causal model explanation papers). That is, we need benchmarks that include counterfactual examples where we know what is the type of manipulation that was done, and compare model predictions between the original examples and the counterfactual examples. More broadly, we can think of model explanation as a causal estimation problem, and evaluate methods as we do in causal inference (sensitivity analysis and the likes). While this paper is not the place to invent new evaluation methods, I think that it’s worthwhile to highlight the potential connections between all these failures and the lack of causal thinking in the model interpretability literature. Second, I think that the discussion points are a bit too vague and are not as thought out as the described logic failures. Especially in 3.3, I agree that we should want models that are robust to perturbations, but it is not a problem of evaluation methods, it’s a problem of model robustness. We do want our attribution methods to capture such inconsistency if it exists, this is exactly why we use it. The paper is generally very well-written, but may benefit from an additional quick round of editing. Some typos for example: Line 616: “make it more robustness”
Does the review include a summary of the strengths of the paper?
no
This paper presents several logic failures in the way that attribution methods are evaluated. For each failure they identify, the authors discuss why it will prevent proper identification of attribution quality, and design experiments that crystallize this potential failure. The authors survey the attribution evaluation literature, and argue that such logic failures hinder the advancements of better, more reliable attribution methods. This is a well written paper that discusses an important and very timely subject. It highlights important failures that are often overlooked in the model interpretability literature. Each failure the authors identify is clearly explained and motivated, and the experiments that demonstrate it are neat and clear. I believe that this paper can improve the way we evaluate attribution methods, and can be beneficial for many in the community. While I have a generally favorable opinion of this paper, I have two issues with it that if addressed would improve the paper in my opinion. First, I agree with the reasoning and premise, but I think the paper is missing a quite obvious connection to causal inference. Another way of explaining the current limitation of attribution evaluation is that these methods are almost never compared against counterfactuals (the only counterexamples I can think of are the causal model explanation papers). That is, we need benchmarks that include counterfactual examples where we know what is the type of manipulation that was done, and compare model predictions between the original examples and the counterfactual examples. More broadly, we can think of model explanation as a causal estimation problem, and evaluate methods as we do in causal inference (sensitivity analysis and the likes). While this paper is not the place to invent new evaluation methods, I think that it’s worthwhile to highlight the potential connections between all these failures and the lack of causal thinking in the model interpretability literature. Second, I think that the discussion points are a bit too vague and are not as thought out as the described logic failures. Especially in 3.3, I agree that we should want models that are robust to perturbations, but it is not a problem of evaluation methods, it’s a problem of model robustness. We do want our attribution methods to capture such inconsistency if it exists, this is exactly why we use it. The paper is generally very well-written, but may benefit from an additional quick round of editing. Some typos for example: Line 616: “make it more robustness”
Does the review include a summary of the weaknesses of the paper?
yes
This paper presents several logic failures in the way that attribution methods are evaluated. For each failure they identify, the authors discuss why it will prevent proper identification of attribution quality, and design experiments that crystallize this potential failure. The authors survey the attribution evaluation literature, and argue that such logic failures hinder the advancement of better, more reliable attribution methods. This is a well-written paper that discusses an important and very timely subject. It highlights important failures that are often overlooked in the model interpretability literature. Each failure the authors identify is clearly explained and motivated, and the experiments that demonstrate it are neat and clear. I believe that this paper can improve the way we evaluate attribution methods, and can be beneficial for many in the community. The paper is generally very well-written, but may benefit from an additional quick round of editing. Some typos, for example: Line 616: “make it more robustness”
Does the review include a summary of the weaknesses of the paper?
no
This paper presents several logic failures in the way that attribution methods are evaluated. For each failure they identify, the authors discuss why it will prevent proper identification of attribution quality, and design experiments that crystallize this potential failure. The authors survey the attribution evaluation literature, and argue that such logic failures hinder the advancement of better, more reliable attribution methods. This is a well-written paper that discusses an important and very timely subject. It highlights important failures that are often overlooked in the model interpretability literature. Each failure the authors identify is clearly explained and motivated, and the experiments that demonstrate it are neat and clear. I believe that this paper can improve the way we evaluate attribution methods, and can be beneficial for many in the community. While I have a generally favorable opinion of this paper, I have two issues with it that, if addressed, would improve the paper in my opinion. First, I agree with the reasoning and premise, but I think the paper is missing a quite obvious connection to causal inference. Another way of explaining the current limitation of attribution evaluation is that these methods are almost never compared against counterfactuals (the only counterexamples I can think of are the causal model explanation papers). That is, we need benchmarks that include counterfactual examples where we know what type of manipulation was done, and compare model predictions between the original examples and the counterfactual examples. More broadly, we can think of model explanation as a causal estimation problem, and evaluate methods as we do in causal inference (sensitivity analysis and the like). While this paper is not the place to invent new evaluation methods, I think that it’s worthwhile to highlight the potential connections between all these failures and the lack of causal thinking in the model interpretability literature. Second, I think that the discussion points are a bit too vague and are not as thought out as the described logic failures. Especially in 3.3, I agree that we should want models that are robust to perturbations, but it is not a problem of evaluation methods; it’s a problem of model robustness. We do want our attribution methods to capture such inconsistency if it exists; this is exactly why we use them. The paper is generally very well-written, but may benefit from an additional quick round of editing. Some typos, for example: Line 616: “make it more robustness”
Does the review mention any comments, suggestions or typos that the author should address?
yes
This paper presents several logic failures in the way that attribution methods are evaluated. For each failure they identify, the authors discuss why it will prevent proper identification of attribution quality, and design experiments that crystallize this potential failure. The authors survey the attribution evaluation literature, and argue that such logic failures hinder the advancement of better, more reliable attribution methods. This is a well-written paper that discusses an important and very timely subject. It highlights important failures that are often overlooked in the model interpretability literature. Each failure the authors identify is clearly explained and motivated, and the experiments that demonstrate it are neat and clear. I believe that this paper can improve the way we evaluate attribution methods, and can be beneficial for many in the community. While I have a generally favorable opinion of this paper, I have two issues with it that, if addressed, would improve the paper in my opinion. First, I agree with the reasoning and premise, but I think the paper is missing a quite obvious connection to causal inference. Another way of explaining the current limitation of attribution evaluation is that these methods are almost never compared against counterfactuals (the only counterexamples I can think of are the causal model explanation papers). That is, we need benchmarks that include counterfactual examples where we know what type of manipulation was done, and compare model predictions between the original examples and the counterfactual examples. More broadly, we can think of model explanation as a causal estimation problem, and evaluate methods as we do in causal inference (sensitivity analysis and the like). While this paper is not the place to invent new evaluation methods, I think that it’s worthwhile to highlight the potential connections between all these failures and the lack of causal thinking in the model interpretability literature. Second, I think that the discussion points are a bit too vague and are not as thought out as the described logic failures. Especially in 3.3, I agree that we should want models that are robust to perturbations, but it is not a problem of evaluation methods; it’s a problem of model robustness. We do want our attribution methods to capture such inconsistency if it exists; this is exactly why we use them.
Does the review mention any comments, suggestions or typos that the author should address?
no
The paper uncovers critical issues around interpretability and explainability of the reasoning and performance of modern deep learning models. Using several theoretical and experimental analyses, it specifically reveals the weaknesses of existing attribution methods designed to qualify and support research propositions by assessing model predictions. Because of their deceptive nature, i.e. easily disregarded or even missed, the paper presents these weaknesses as logic traps. For example, if input features responsible for certain predictions are perturbed, scores provided by attribution methods should reflect the significant change or difference in the model’s predictions given the perturbations. The paper highlights the potential damage of neglecting these traps, such as inaccurate evaluation, unfair comparison, unmerited pressure to re-use unreliable evaluation techniques, etc. In its analysis, the paper shows factual information about how attribution methods can be misleading when they approve of a model’s prediction countering the actual norm or expectation. Two helpful examples of logic traps include: (1) the assumption that a human’s decision-making process is equivalent to that of a neural network. In a question answering task, BERT base maintained its test set performance despite replacing development set samples with empty strings. Ideally, the performance would drop because of the perturbations, but it did not, thereby clearly misleading human decision making. Moreover, the paper empirically proves this error by showing a drop in the model's confidence in its prediction on unchanged samples. (2) A second example of a logic trap is using attribution methods as ground truth to evaluate a target attribution method. An example is an evaluation based on meaningful perturbation, such as the Area over the perturbation curve (AOPC), which is precisely the average difference between probabilities predicted with the original input features and those predicted with perturbed input features. While the norm expects this AOPC to be significant or at least representative of the degree of perturbation, the paper illustrates that the perturbation (modification) strategy will dictate the eventual AOPC value, i.e. it varies with respect to the modification strategy. Scores of different attribution methods when inputs are perturbed by token deletion are inconsistent with their scores when inputs are perturbed by token replacement. The paper concludes with suggestions for limiting the impact of logic traps such as the ones discussed: enhancing the target model and excluding predictions with lower confidence. Precise and concise abstract. The paper is organised and well written. The subject addressed is crucial for the deep learning community. Redirecting attention to appropriately selecting attribution methods (during research task evaluation stages) can subtly reduce the immense effort and focus researchers have in outperforming previous works, which in any case can potentially be premised on unreliable evaluation methods. A sufficient number of examples is used to illustrate the logic traps, which is very helpful, especially because they are deceptively obvious. Lines 179-182: Surprisingly, the trained MRC model maintained the original prediction on 64.0% of the test set samples (68.4% on correctly answered samples and 55.4% on wrongly answered samples).
It’s clear that the MRC model surprisingly maintains prediction accuracy when evaluated on 64% of the test samples; however, what follows in the brackets, i.e. "68.4% on correctly answered samples and 55.4% on wrongly answered samples", is an unclear statement. Lines 183-185: Moreover, we analyze the model confidence change in these unchanged samples, where the probability on the predicted label is used as the confidence score. What unchanged samples are you referring to? Did you replace all the development samples with empty strings, or was it just a portion that you replaced with empty strings, hence retaining a few that you refer to as unchanged? If not, are the unchanged samples in the test set or? Evaluation 3 and Logic trap 3: You use a hard-to-follow example to illustrate the logic trap you define as “the change in attribution scores is brought about by the model reasoning process rather than the attribution method unreliability”. You indicate that deep models are vulnerable to adversarial samples, which indeed is right, and therefore you would expect attribution scores to be faithful to the shift caused by the attack. The argument feels more like the change in attribution scores is with respect to the change in samples, which eventually will meet a different model reasoning process? For the results you discuss and summarize, is the claim that the original evaluation method correctly obtains a low similarity in the F1 scores of the adversarial sample subset and the original sample subset, whereas the attribution method says otherwise? A rewrite of this section, particularly the experiment and its results, to add clarity, or rather the use of a different example, would improve the work. Lines 088-092: Last, the over-belief in existing evaluation metrics encourages efforts to propose more accurate attribution methods, notwithstanding the evaluation system is unreliable. The statement above looks more like you intended to say discourages rather than encourages. Please have a look. Lines 276 and 328: AOPC rather than APOC. Line 528: With no overlap between the two subsets, there is no way we can hypothesis the adversarial samples share similar model reasoning to the original samples. “Hypothesise” rather than “hypothesis” in the above sentence. You can probably do away with some repetitions of long sentences, such as what is in the introduction as well as in the conclusion. “Though strictly accurate evaluation metrics for attribution methods might be a “unicorn” which will likely never be found, we should not just ignore logic traps in existing evaluation methods and draw conclusions recklessly.”
Does the review include a short summary of the paper?
yes
Precise and concise abstract. The paper is organised and well written. The subject addressed is crucial for the deep learning community. Redirecting attention to appropriately selecting attribution methods (during research task evaluation stages) can subtly reduce the immense effort and focus researchers have in outperforming previous works, which in any case can potentially be premised on unreliable evaluation methods. A sufficient number of examples is used to illustrate the logic traps, which is very helpful, especially because they are deceptively obvious. Lines 179-182: Surprisingly, the trained MRC model maintained the original prediction on 64.0% of the test set samples (68.4% on correctly answered samples and 55.4% on wrongly answered samples). It’s clear that the MRC model surprisingly maintains prediction accuracy when evaluated on 64% of the test samples; however, what follows in the brackets, i.e. "68.4% on correctly answered samples and 55.4% on wrongly answered samples", is an unclear statement. Lines 183-185: Moreover, we analyze the model confidence change in these unchanged samples, where the probability on the predicted label is used as the confidence score. What unchanged samples are you referring to? Did you replace all the development samples with empty strings, or was it just a portion that you replaced with empty strings, hence retaining a few that you refer to as unchanged? If not, are the unchanged samples in the test set or? Evaluation 3 and Logic trap 3: You use a hard-to-follow example to illustrate the logic trap you define as “the change in attribution scores is brought about by the model reasoning process rather than the attribution method unreliability”. You indicate that deep models are vulnerable to adversarial samples, which indeed is right, and therefore you would expect attribution scores to be faithful to the shift caused by the attack. The argument feels more like the change in attribution scores is with respect to the change in samples, which eventually will meet a different model reasoning process? For the results you discuss and summarize, is the claim that the original evaluation method correctly obtains a low similarity in the F1 scores of the adversarial sample subset and the original sample subset, whereas the attribution method says otherwise? A rewrite of this section, particularly the experiment and its results, to add clarity, or rather the use of a different example, would improve the work. Lines 088-092: Last, the over-belief in existing evaluation metrics encourages efforts to propose more accurate attribution methods, notwithstanding the evaluation system is unreliable. The statement above looks more like you intended to say discourages rather than encourages. Please have a look. Lines 276 and 328: AOPC rather than APOC. Line 528: With no overlap between the two subsets, there is no way we can hypothesis the adversarial samples share similar model reasoning to the original samples. “Hypothesise” rather than “hypothesis” in the above sentence. You can probably do away with some repetitions of long sentences, such as what is in the introduction as well as in the conclusion. “Though strictly accurate evaluation metrics for attribution methods might be a “unicorn” which will likely never be found, we should not just ignore logic traps in existing evaluation methods and draw conclusions recklessly.”
Does the review include a short summary of the paper?
no