Dataset schema (24 columns; length ranges are min/max string lengths, class counts are distinct values):

  Unnamed: 0                  int64 (values 2 to 9.3k)
  sentence                    string, lengths 30 to 941
  aspect_term_1               string, lengths 1 to 32
  aspect_term_2               string, lengths 2 to 27
  aspect_term_3               string, lengths 2 to 23
  aspect_term_4               string, 25 classes
  aspect_term_5               string, 7 classes
  aspect_term_6               string, 1 class
  aspect_category_1           string, 9 classes
  aspect_category_2           string, 9 classes
  aspect_category_3           string, 9 classes
  aspect_category_4           string, 2 classes
  aspect_category_5           string, 1 class
  aspect_term_1_polarity      string, 3 classes
  aspect_term_2_polarity      string, 3 classes
  aspect_term_3_polarity      string, 3 classes
  aspect_term_4_polarity      string, 3 classes
  aspect_term_5_polarity      string, 3 classes
  aspect_term_6_polarity      string, 1 class
  aspect_category_1_polarity  string, 3 classes
  aspect_category_2_polarity  string, 3 classes
  aspect_category_3_polarity  string, 3 classes
  aspect_category_4_polarity  string, 1 class
  aspect_category_5_polarity  string, 1 class
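Each sentence field carries its annotation inline, in the form "...text...[term-POLARITY, term-POLARITY], [CATEGORY-POLARITY, ...]", with "[null]" when no aspect term is marked and an occasional stray trailing "]". A minimal parser sketch for this convention (the function name and return shape are my own, not part of the dataset; it also assumes terms contain no commas, which holds for the rows shown):

```python
import re

# Trailing annotation convention used in the sentence fields:
#   "...sentence text...[term-POL, term-POL], [CAT-POL, CAT-POL]"
# with "[null]" for empty slots and an occasional stray trailing "]".
_ANN = re.compile(r"\[([^\]]*)\],\s*\[([^\]]*)\]\]?\s*$")

def parse_annotation(sentence):
    """Return (text, [(aspect_term, polarity)], [(aspect_category, polarity)])."""
    m = _ANN.search(sentence)
    if not m:
        return sentence, [], []

    def pairs(chunk):
        out = []
        for item in chunk.split(","):
            item = item.strip()
            if not item or item == "null":
                continue
            label, _, pol = item.rpartition("-")  # polarity follows the last '-'
            out.append((label, pol))
        return out

    return sentence[:m.start()].strip(), pairs(m.group(1)), pairs(m.group(2))
```

For example, record 8,672 below parses into the text, terms [("method", "POS"), ("performance", "POS")], and categories [("EMP", "POS")].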
Rows (id, annotated sentence, then the non-null aspect fields; empty slots omitted):

8,671: The main difference from previous approaches is that the model is that the embeddings are trained end-to-end for a specific task, rather than trying to produce generically useful embeddings.[approaches-NEU, task-NEU], [CMP-NEU]
    terms: approaches (NEU), task (NEU); categories: CMP (NEU)

8,672: The method leads to better performance than using no external resources,[method-POS, performance-POS], [EMP-POS]
    terms: method (POS), performance (POS); categories: EMP (POS)

8,673: but not as high performance as using Glove embeddings.[performance-NEG], [CMP-NEG]
    terms: performance (NEG); categories: CMP (NEG)

8,674: The paper is clearly written, and has useful ablation experiments.[paper-POS, experiments-POS], [CLA-POS, EMP-POS]
    terms: paper (POS), experiments (POS); categories: CLA (POS), EMP (POS)

8,675: However, I have a couple of questions/concerns: - Most of the gains seem to come from using the spelling of the word.[null], [EMP-NEG]
    terms: none; categories: EMP (NEG)

8,676: As the authors note, this kind of character level modelling has been used in many previous works.[modelling-NEG, works-NEG], [CMP-NEG]
    terms: modelling (NEG), works (NEG); categories: CMP (NEG)

8,677: - I would be slightly surprised if no previous work has used external resources for training word representations using an end-task loss,[previous work-NEU], [CMP-NEU]
    terms: previous work (NEU); categories: CMP (NEU)

8,679: - I'm a little skeptical about how often this method would really be useful in practice.[method-NEG], [EMP-NEG]
    terms: method (NEG); categories: EMP (NEG)

8,680: It seems to assume that you don't have much unlabelled text (or you'd use Glove), but you probably need a large labelled dataset to learn how to read dictionary definitions well.[labelled dataset-NEG, unlabelled text-NEG], [SUB-NEG]
    terms: labelled dataset (NEG), unlabelled text (NEG); categories: SUB (NEG)

8,681: All the experiments use large tasks - it would be helpful to have an experiment showing an improvement over character-level modelling on a smaller task.[experiment-NEG], [EMP-NEG]
    terms: experiment (NEG); categories: EMP (NEG)
8,682: - The results on SQUAD seem pretty weak - 52-64%, compared to the SOTA of 81.[results-NEG], [CMP-NEG, EMP-NEG]
    terms: results (NEG); categories: CMP (NEG), EMP (NEG)

8,683: It seems like the proposed method is quite generic, so why not apply it to a stronger baseline? [method-NEG, baseline-NEG], [EMP-NEG]]
    terms: method (NEG), baseline (NEG); categories: EMP (NEG)

8,688: Main comments: - The idea of building 3D adversarial objects is novel so the study is interesting.[idea-POS], [EMP-POS]
    terms: idea (POS); categories: EMP (POS)

8,689: However, the paper is incomplete, with a very low number of references, only 2 conference papers if we assume the list is up to date.[references-NEG], [SUB-NEG, CMP-NEG]
    terms: references (NEG); categories: SUB (NEG), CMP (NEG)

8,691: - The presentation of the results is not very clear.[presentation-NEG, results-NEG], [PNF-NEG]
    terms: presentation (NEG), results (NEG); categories: PNF (NEG)

8,692: See specific comments below. - It would be nice to include insights to improve neural nets to become less sensitive to these attacks.[null], [EMP-NEG]
    terms: none; categories: EMP (NEG)

8,693: Minor comments: Fig1 : a bug with color seems to have been fixed Model section: be consistent with the notations.[Fig1-NEU, notations-NEG], [EMP-NEG, PNF-NEG]
    terms: Fig1 (NEU), notations (NEG); categories: EMP (NEG), PNF (NEG)

8,694: Bold everywhere or nowhere[null], [PNF-NEG]
    terms: none; categories: PNF (NEG)

8,695: Results: The tables are difficult to read and should be clarified:[tables-NEG], [PNF-NEG]
    terms: tables (NEG); categories: PNF (NEG)

8,696: What does the l2 metric stands for ? [null], [PNF-NEU]
    terms: none; categories: PNF (NEU)
8,697: How about min, max ?[null], [PNF-NEU]
    terms: none; categories: PNF (NEU)

8,698: Accuracy -> classification accuracy[null], [PNF-NEU]
    terms: none; categories: PNF (NEU)

8,699: Models -> 3D models Describe each metric (Adversarial, Miss-classified, Correct) [null], [PNF-NEU, SUB-NEU]
    terms: none; categories: PNF (NEU), SUB (NEU)

8,701: The paper falls far short of the standard expected of an ICLR submission.[paper-NEG], [APR-NEG]
    terms: paper (NEG); categories: APR (NEG)

8,702: The paper has little to no content.[paper-NEG], [SUB-NEG]
    terms: paper (NEG); categories: SUB (NEG)

8,703: There are large sections of blank page throughout.[null], [PNF-NEG]
    terms: none; categories: PNF (NEG)

8,704: The algorithm, iterative temporal differencing, is introduced in a figure -- there is no formal description.[description-NEG, figure-NEU], [CLA-NEG, SUB-NEG]
    terms: description (NEG), figure (NEU); categories: CLA (NEG), SUB (NEG)

8,707: The paper over-uses acronyms; sentences like "In this figure, VBP, VBP with FBA, and ITD using FBA for VBP..." are painful to read.[paper-NEG], [PNF-NEG]
    terms: paper (NEG); categories: PNF (NEG)

8,712: The experimental results show that the propped model outperforms tree-lstm using external parsers.[experimental results-POS, propped model-POS], [EMP-POS]
    terms: experimental results (POS), propped model (POS); categories: EMP (POS)

8,713: Comment: I kinda like the idea of using chart, and the attention over chart cells.[chart-POS], [PNF-POS]
    terms: chart (POS); categories: PNF (POS)
8,714: The paper is very well written.[paper-POS], [CLA-POS]
    terms: paper (POS); categories: CLA (POS)

8,715: - My only concern about the novelty of the paper is that the idea of using CYK chart-based mechanism is already explored in Le and Zuidema (2015).[paper-NEG], [NOV-NEG]
    terms: paper (NEG); categories: NOV (NEG)

8,716: - Le and Zudema use pooling and this paper uses weighted sum.[paper-NEU], [CMP-NEU]
    terms: paper (NEU); categories: CMP (NEU)

8,717: Any differences in terms of theory and experiment?[theory-NEU, experiment-NEU], [EMP-NEU]
    terms: theory (NEU), experiment (NEU); categories: EMP (NEU)

8,718: - I like the new attention over chart cells.[chart-POS], [EMP-POS]
    terms: chart (POS); categories: EMP (POS)

8,719: But I was surprised that the authors didn't use it in the second experiment (reverse dictionary).[experiment-NEG], [EMP-NEG]
    terms: experiment (NEG); categories: EMP (NEG)

8,720: - In table 2, it is difficult for me to see if the difference between unsupervised tree-lstm and right-branching tree-lstm (0.3%) is "good enough".[table-NEG], [PNF-NEG]
    terms: table (NEG); categories: PNF (NEG)

8,721: In which cases the former did correctly but the latter didn't?[cases-NEG], [EMP-NEG]
    terms: cases (NEG); categories: EMP (NEG)

8,722: - In table 3, what if we use the right-branching tree-lstm with attention?[table-NEU], [EMP-NEU]
    terms: table (NEU); categories: EMP (NEU)

8,723: - In table 4, why do Hill et al lstm and bow perform much better than the others?[table-NEU], [EMP-NEU]]
    terms: table (NEU); categories: EMP (NEU)
8,726: In some domains this can be a much better approach and this is supported by experimentation.[approach-POS], [EMP-POS]
    terms: approach (POS); categories: EMP (POS)

8,728: - Efficient exploration is a big problem for deep reinforcement learning (epsilon-greedy or Boltzmann is the de-facto baseline) and there are clearly some examples where this approach does much better.[approach-POS], [EMP-POS]
    terms: approach (POS); categories: EMP (POS)

8,729: - The noise-scaling approach is (to my knowledge) novel, good and in my view the most valuable part of the paper.[approach-POS], [NOV-POS]
    terms: approach (POS); categories: NOV (POS)

8,730: - This is clearly a very practical and extensible idea... the authors present good results on a whole suite of tasks.[idea-POS, results-POS], [EMP-POS]
    terms: idea (POS), results (POS); categories: EMP (POS)

8,731: - The paper is clear and well written, it has a narrative and the plots/experiments tend to back this up.[paper-POS], [CLA-POS, EMP-POS]
    terms: paper (POS); categories: CLA (POS), EMP (POS)

8,732: - I like the algorithm, it's pretty simple/clean and there's something obviously *right* about it (in SOME circumstances).[algorithm-POS], [EMP-POS]
    terms: algorithm (POS); categories: EMP (POS)

8,734: - At many points in the paper the claims are quite overstated.[claims-NEG], [EMP-NEG]
    terms: claims (NEG); categories: EMP (NEG)

8,735: Parameter noise on the policy won't necessarily get you efficient exploration... and in some cases it can even be *worse* than epsilon-greedy... if you just read this paper you might think that this was a truly general statistically efficient method for exploration (in the style of UCRL or even E^3/Rmax etc).[null], [CMP-NEG]
    terms: none; categories: CMP (NEG)

8,736: - For instance, the example in 4.2 only works because the optimal solution is to go right in every timestep... if you had the network parameterized in a different way (or the actions left/right were relabelled) then this parameter noise approach would *not* work...[example-NEG], [EMP-NEG]
    terms: example (NEG); categories: EMP (NEG)

8,738: I think the claim/motivation for this example in the bootstrapped DQN paper is more along the lines of deep exploration and you should be clear that your parameter noise does *not* address this issue.[claim-NEU], [CLA-NEU]
    terms: claim (NEU); categories: CLA (NEU)
8,739: - That said I think that the example in 4.2 is *great* to include... you just need to be more upfront about how/why it works and what you are banking on with the parameter-space exploration.[example-POS], [EMP-NEU]
    terms: example (POS); categories: EMP (NEU)

8,740: Essentially you perform a local exploration rule in parameter space... and sometimes this is great -[null], [EMP-POS]
    terms: none; categories: EMP (POS)

8,741: but you should be careful to distinguish this type of method from other approaches.[method-NEU], [EMP-NEU]
    terms: method (NEU); categories: EMP (NEU)

8,742: This must be mentioned in section 4.2 does parameter space noise explore efficiently because the answer you seem to imply is yes ... when the answer is clearly NOT IN GENERAL... but it can still be good sometimes ;D[section-NEU], [PNF-NEU]
    terms: section (NEU); categories: PNF (NEU)

8,744: I can't really support the conclusion RL with parameter noise exploration learns more efficiently than both RL and evolutionary strategies individually.[conclusion-NEG], [EMP-NEG]
    terms: conclusion (NEG); categories: EMP (NEG)

8,745: This sort of sentence is clearly wrong and for many separate reasons: - Parameter noise exploration is not a separate/new thing from RL... it's even been around for ages! It feels like you are talking about DQN/A3C/(whatever algorithm got good scores in Atari last year) as RL and that's just really not a good way to think about it.[sentence-NEG], [CMP-NEG, EMP-NEG]
    terms: sentence (NEG); categories: CMP (NEG), EMP (NEG)

8,746: - Parameter noise exploration can be *extremely* bad relative to efficient exploration methods (see section 2.4.3 https://searchworks.stanford.edu/view/11891201)[section-NEG], [CMP-NEG]
    terms: section (NEG); categories: CMP (NEG)

8,747: Overall, I like the paper, I like the algorithm and I think it is a valuable contribution.[contribution-POS], [EMP-POS]
    terms: contribution (POS); categories: EMP (POS)

8,749: In some (maybe even many of the ones you actually care about) settings this can be a really great approach, especially when compared to epsilon-greedy.[approach-POS], [CMP-POS, EMP-POS]
    terms: approach (POS); categories: CMP (POS), EMP (POS)

8,751: You shouldn't claim such a universal revolution to exploration / RL / evolution because I don't think that it's correct.[null], [EMP-NEG]
    terms: none; categories: EMP (NEG)
8,752: Further, I don't think that clarifying that this method is *not* universal/general really hurts the paper... you could just add a section in 4.2 pointing out that the chain example wouldn't work if you needed to do different actions at each timestep (this algorithm does *not* perform deep exploration).[method-NEU, section-NEU], [EMP-NEU]
    terms: method (NEU), section (NEU); categories: EMP (NEU)

8,756: Review: The paper is clearly written.[paper-POS], [CLA-POS]
    terms: paper (POS); categories: CLA (POS)

8,757: It is sometimes difficult to communicate ideas in this area, so I appreciate the author's effort in choosing good notation.[notation-POS], [PNF-POS]
    terms: notation (POS); categories: PNF (POS)

8,758: Using an architecture to learn how to split the input, find solutions, then merge these is novel.[architecture-POS, solutions-POS, novel-POS], [NOV-POS]
    terms: architecture (POS), solutions (POS), novel (POS); categories: NOV (POS)

8,760: The ideas and formalism of the merge and partition operations are valuable contributions.[ideas-POS, contributions-POS], [EMP-POS, IMP-POS]
    terms: ideas (POS), contributions (POS); categories: EMP (POS), IMP (POS)

8,761: The experimental side of the paper is less strong.[experimental side-NEG], [EMP-NEG]
    terms: experimental side (NEG); categories: EMP (NEG)

8,762: There are good results on the convex hull problem, which is promising.[results-POS], [EMP-POS]
    terms: results (POS); categories: EMP (POS)

8,763: There should also be a comparison to a k-means solver in the k-means section as an additional baseline.[comparison-NEU], [SUB-NEG, CMP-NEG]
    terms: comparison (NEU); categories: SUB (NEG), CMP (NEG)

8,764: I'm also not sure TSP is an appropriate problem to demonstrate the method's effectiveness.[problem-NEU], [EMP-POS]
    terms: problem (NEU); categories: EMP (POS)

8,765: Perhaps another problem that has an explicit divide and conquer strategy could be used instead.[problem-NEU], [SUB-NEU]
    terms: problem (NEU); categories: SUB (NEU)
8,766: It would also be nice to observe failure cases of the model.[model-NEU], [SUB-NEU]
    terms: model (NEU); categories: SUB (NEU)

8,767: This could be done by visually showing the partition constructed or seeing how the model learned to merge solutions..[model-NEU, solutions-NEU], [EMP-NEU]
    terms: model (NEU), solutions (NEU); categories: EMP (NEU)

8,768: This is a relatively new area to tackle, so while the experiments section could be strengthened, I think the ideas present in the paper are important and worth publishing.[experiments section-NEU, ideas-POS, paper-POS], [EMP-POS]
    terms: experiments section (NEU), ideas (POS), paper (POS); categories: EMP (POS)

8,773: Typos: 1. Author's names should be enclosed in parentheses unless part of the sentence.[Typos-NEG], [CLA-NEG]
    terms: Typos (NEG); categories: CLA (NEG)

8,774: 2. I believe then should be removed in the sentence ...scale invariance, then exploiting... on page 2.[page-NEG], [CLA-NEG]
    terms: page (NEG); categories: CLA (NEG)

8,776: The topic is interesting however the description in the paper is lacking clarity.[topic-POS, description-NEG], [CLA-NEG]
    terms: topic (POS), description (NEG); categories: CLA (NEG)

8,777: The paper is written in a procedural fashion - I first did that, then I did that and after that I did third.[paper-NEU], [PNF-NEU]
    terms: paper (NEU); categories: PNF (NEU)

8,778: Having proper mathematical description and good diagrams of what you doing would have immensely helped.[description-NEU], [EMP-NEU]
    terms: description (NEU); categories: EMP (NEU)

8,779: Another big issue is the lack of proper validation in Section 3.4.[issue-NEG, validation-NEG, Section-NEU], [EMP-NEG]
    terms: issue (NEG), validation (NEG), Section (NEU); categories: EMP (NEG)

8,780: Even if you do not know what metric to use to objectively compare your approach versus baseline there are plenty of fields suffering from a similar problem yet doing subjective evaluations, such as listening tests in speech synthesis.[approach-NEU], [CMP-NEU]
    terms: approach (NEU); categories: CMP (NEU)
8,781: Given that I see only one example I can not objectively know if your model produces examples like that 'each' time so having just one example is as good as having none. [example-NEG], [SUB-NEG]
    terms: example (NEG); categories: SUB (NEG)

8,785: Such simple trick alleviates the effort in tuning stepsize, and can be incorporated with popular stochastic first-order optimization algorithms, including SGD, SGD with Nestrov momentum, and Adam. Surprisingly, it works well in practice.[null], [EMP-POS]
    terms: none; categories: EMP (POS)

8,786: Although the theoretical analysis is weak that theorem 1 does not reveal the main reason for the benefits of such trick, considering their performance, I vote for acceptance.[theoretical analysis-NEG, acceptance-POS], [REC-POS, EMP-NEG]
    terms: theoretical analysis (NEG), acceptance (POS); categories: REC (POS), EMP (NEG)

8,788: 1, the derivation of the update of alpha relies on the expectation formulation.[null], [EMP-NEU]
    terms: none; categories: EMP (NEU)

8,789: I would like to see the investigation of the effect of the size of minibatch to reveal the variance of the gradient in the algorithm combined with such trick.[investigation-NEU], [EMP-NEU]
    terms: investigation (NEU); categories: EMP (NEU)

8,790: 2, The derivation of the multiplicative rule of HD relies on a reference I cannot find. Please include this part for self-containing.[reference-NEU], [SUB-NEU]
    terms: reference (NEU); categories: SUB (NEU)

8,791: 3, As the authors claimed, the Maclaurin et.al. 2015 is the most related work, however, they are not compared in the experiments.[related work-NEU, experiments-NEG], [CMP-NEG]
    terms: related work (NEU), experiments (NEG); categories: CMP (NEG)

8,792: Moreover, the empirical comparisons are only conducted on MNIST.[empirical comparisons-NEG], [CMP-NEG, EMP-NEU]
    terms: empirical comparisons (NEG); categories: CMP (NEG), EMP (NEU)

8,793: To be more convincing, it will be good to include such competitor and comparing on practical applications on CIFAR10/100 and ImageNet.[null], [CMP-NEU]
    terms: none; categories: CMP (NEU)

8,794: Minors: In the experiments results figures, after adding the new trick, the SGD algorithms become more stable, i.e., the variance diminishes.[experiments results-POS], [EMP-POS]
    terms: experiments results (POS); categories: EMP (POS)
8,795: Could you please explain why such phenomenon happens?[null], [EMP-NEU]
    terms: none; categories: EMP (NEU)

8,801: The main issue I am having is what are the applicable insight from the analysis:[analysis-NEU], [IMP-NEU]
    terms: analysis (NEU); categories: IMP (NEU)

8,803: 2. Does the result implies that we should make the decision boundary more flat, or curved but on different directions? And how to achieve that?[result-NEU], [EMP-NEU]
    terms: result (NEU); categories: EMP (NEU)

8,804: It might be my mis-understanding but from my reading a prescriptive procedure for universal perturbation seems not attained from the results presented.[results-NEU], [EMP-NEG]
    terms: results (NEU); categories: EMP (NEG)

8,808: However the corpus the authors choose are quite small,[corpus-NEG], [SUB-NEG]
    terms: corpus (NEG); categories: SUB (NEG)

8,809: the variance of the estimate will be quite high, I suspect whether the same conclusions could be drawn[null], [EMP-NEU]
    terms: none; categories: EMP (NEU)

8,810: . It would be more convincing if there are experiments on the billion word corpus or other larger datasets, or at least on a corpus with 50 million tokens.[experiments-NEU], [SUB-NEU]
    terms: experiments (NEU); categories: SUB (NEU)

8,811: This will use significant resources and is much more difficult,[null], [EMP-NEU]
    terms: none; categories: EMP (NEU)

8,812: but it's also really valuable, because it's much more close to real world usage of language models.[null], [IMP-POS]
    terms: none; categories: IMP (POS)

8,813: And less tuning is needed for these larger datasets.[datasets-NEU], [EMP-NEU]
    terms: datasets (NEU); categories: EMP (NEU)
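The wide layout in the schema (six aspect_term slots and five aspect_category slots, each paired with a *_polarity column, with nulls as padding) is usually melted into (aspect, polarity) pairs before analysis. A minimal sketch, assuming each row is available as a dict keyed by the schema's column names (the helper name and the dict transcription of record 8,674 are illustrative, not part of the dataset):

```python
def melt_aspects(row):
    """Collect non-null (aspect, polarity) pairs from one wide-format row.

    Assumes the column naming from the schema above: aspect_term_1..6 with
    aspect_term_1..6_polarity, and aspect_category_1..5 likewise. Missing
    keys and "null" strings are both treated as empty slots.
    """
    pairs = []
    for prefix, n_slots in (("aspect_term", 6), ("aspect_category", 5)):
        for i in range(1, n_slots + 1):
            aspect = row.get(f"{prefix}_{i}")
            polarity = row.get(f"{prefix}_{i}_polarity")
            if aspect is not None and aspect != "null":
                pairs.append((aspect, polarity))
    return pairs

# Record 8,674 transcribed as a dict, with the null slots omitted:
row = {
    "aspect_term_1": "paper", "aspect_term_1_polarity": "POS",
    "aspect_term_2": "experiments", "aspect_term_2_polarity": "POS",
    "aspect_category_1": "CLA", "aspect_category_1_polarity": "POS",
    "aspect_category_2": "EMP", "aspect_category_2_polarity": "POS",
}
print(melt_aspects(row))
# [('paper', 'POS'), ('experiments', 'POS'), ('CLA', 'POS'), ('EMP', 'POS')]
```

Counting the melted pairs across all rows gives the polarity distribution per aspect category directly, without any per-column handling.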