Schema (one row per review sentence; up to six aspect terms and five aspect categories, each with its own polarity column):

| Column | Type | Values |
|---|---|---|
| Unnamed: 0 | int64 | 2 – 9.3k |
| sentence | string | lengths 30 – 941 |
| aspect_term_1 | string | lengths 1 – 32 |
| aspect_term_2 | string | lengths 2 – 27 |
| aspect_term_3 | string | lengths 2 – 23 |
| aspect_term_4 | string | 25 classes |
| aspect_term_5 | string | 7 classes |
| aspect_term_6 | string | 1 class |
| aspect_category_1 | string | 9 classes |
| aspect_category_2 | string | 9 classes |
| aspect_category_3 | string | 9 classes |
| aspect_category_4 | string | 2 classes |
| aspect_category_5 | string | 1 class |
| aspect_term_1_polarity | string | 3 classes |
| aspect_term_2_polarity | string | 3 classes |
| aspect_term_3_polarity | string | 3 classes |
| aspect_term_4_polarity | string | 3 classes |
| aspect_term_5_polarity | string | 3 classes |
| aspect_term_6_polarity | string | 1 class |
| aspect_category_1_polarity | string | 3 classes |
| aspect_category_2_polarity | string | 3 classes |
| aspect_category_3_polarity | string | 3 classes |
| aspect_category_4_polarity | string | 1 class |
| aspect_category_5_polarity | string | 1 class |
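Each `sentence` value carries its gold annotations inline as two bracketed suffixes, `[term-POLARITY, ...], [CATEGORY-POLARITY, ...]`, mirroring the `aspect_term_*` and `aspect_category_*` columns; `null` marks an empty group, and polarities in the preview are POS / NEU / NEG. A minimal parsing sketch, assuming that suffix format holds for every row (the function name and fallback behavior are illustrative, not part of the dataset):

```python
import re

# Trailing annotation: [term-POLARITY, ...], [CATEGORY-POLARITY, ...]
ANNOT = re.compile(r"\[(?P<terms>[^\]]*)\],\s*\[(?P<cats>[^\]]*)\]\s*$")

def parse_sentence(sentence):
    match = ANNOT.search(sentence)
    if match is None:  # no trailing annotation found; treat as unannotated
        return sentence, [], []
    text = sentence[: match.start()].rstrip()

    def split_group(group):
        pairs = []
        for item in group.split(","):
            item = item.strip()
            if not item or item == "null":
                continue
            # polarity is the suffix after the LAST hyphen, so targets that
            # themselves contain hyphens are still split correctly
            target, _, polarity = item.rpartition("-")
            pairs.append((target, polarity))
        return pairs

    return text, split_group(match.group("terms")), split_group(match.group("cats"))

text, terms, cats = parse_sentence(
    "In general I found this paper clearly written and technically sound."
    "[paper-POS], [CLA-POS, EMP-POS]"
)
assert terms == [("paper", "POS")]
assert cats == [("CLA", "POS"), ("EMP", "POS")]
```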
Preview rows (null fields omitted; "—" marks a row with no aspect term):

| Unnamed: 0 | sentence | aspect terms (polarity) | aspect categories (polarity) |
|---|---|---|---|
| 2 | Based on the results by Lee et al, which shows that first order methods converge to local minimum solution (instead of saddle points), it can be concluded that the global minima of this problem can be found by any manifold descent techniques, including standard gradient descent methods.[problem-NEU], [CMP-POS, EMP-POS] | problem (NEU) | CMP (POS), EMP (POS) |
| 3 | In general I found this paper clearly written and technically sound.[paper-POS], [CLA-POS, EMP-POS] | paper (POS) | CLA (POS), EMP (POS) |
| 4 | I also appreciate the effort of developing theoretical results for deep learning, even though the current results are restrictive to very simple NN architectures.[theoretical results-POS], [EMP-POS] | theoretical results (POS) | EMP (POS) |
| 5 | Contribution: As discussed in the literature review section, apart from previous results that studied the theoretical convergence properties for problems that involves a single hidden unit NN, this paper extends the convergence results to problems that involves NN with two hidden units.[literature review section-NEU, previous results-NEU, paper-NEU], [CMP-NEU] | literature review section (NEU), previous results (NEU), paper (NEU) | CMP (NEU) |
| 6 | The analysis becomes considerably more complicated,[analysis-NEG], [EMP-NEG] | analysis (NEG) | EMP (NEG) |
| 7 | and the contribution seems to be novel and significant.[contribution-POS], [NOV-POS, IMP-POS] | contribution (POS) | NOV (POS), IMP (POS) |
| 8 | I am not sure why did the authors mentioned the work on over-parameterization though.[null], [EMP-NEG] | — | EMP (NEG) |
| 9 | It doesn't seem to be relevant to the results of this paper (because the NN architecture proposed in this paper is rather small).[results-NEU], [EMP-NEG] | results (NEU) | EMP (NEG) |
| 10 | Comments on the Assumptions: - Please explain the motivation behind the standard Gaussian assumption of the input vector x.[motivations-NEU], [EMP-NEU] | motivations (NEU) | EMP (NEU) |
| 11 | - Please also provide more motivations regarding the assumption of the orthogonality of weights: w_1^top w_2 = 0 (or the acute angle assumption in Section 6).[motivations-NEU, assumption-NEU, Section-NEU], [EMP-NEU] | motivations (NEU), assumption (NEU), Section (NEU) | EMP (NEU) |
| 12 | Without extra justifications, it seems that the theoretical result only holds for an artificial problem setting.[theoretical result-NEG], [EMP-NEG] | theoretical result (NEG) | EMP (NEG) |
| 13 | While the ReLU activation is very common in NN architecture, without more motivations I am not sure what are the impacts of these results.[motivations-NEU, impacts-NEU], [EMP-NEG, IMP-NEU] | motivations (NEU), impacts (NEU) | EMP (NEG), IMP (NEU) |
| 14 | General Comment: The technical section is quite lengthy, and unfortunately I am not available to go over every single detail of the proofs.[technical section-NEG], [SUB-NEG] | technical section (NEG) | SUB (NEG) |
| 15 | From the analysis in the main paper, I believe the theoretical contribution is correct and sound.[analysis-NEU, theoretical contribution-POS], [EMP-POS] | analysis (NEU), theoretical contribution (POS) | EMP (POS) |
| 16 | While I appreciate the technical contributions,[technical contributions-POS], [EMP-POS] | technical contributions (POS) | EMP (POS) |
| 17 | in order to improve the readability of this paper, it would be great to see more motivations of the problem studied in this paper (even with simple examples).[motivations-NEU], [SUB-NEG] | motivations (NEU) | SUB (NEG) |
| 18 | Furthermore, it is important to discuss the technical assumptions on the 1) standard Gaussianity of the input vector,[assumptions-NEU], [SUB-NEU, EMP-NEU] | assumptions (NEU) | SUB (NEU), EMP (NEU) |
| 19 | and 2) the orthogonality of the weights (and the acute angle assumption in Section 6) on top of the discussions in Section 8.1, as they are critical to the derivations of the main theorems. [Section-NEU], [SUB-NEU, EMP-NEU] | Section (NEU) | SUB (NEU), EMP (NEU) |
| 21 | The propose data augmentation and BC learning is relevant, much robust than frequency jitter or simple data augmentation.[null], [EMP-POS] | — | EMP (POS) |
| 22 | In equation 2, please check the measure of the mixture.[equation-NEU], [EMP-NEU] | equation (NEU) | EMP (NEU) |
| 23 | Why not simply use a dB criteria ?[null], [EMP-NEG] | — | EMP (NEG) |
| 24 | The comments about applying a CNN to local features or novel approach to increase sound recognition could be completed with some ICLR 2017 work towards injected priors using Chirplet Transform.[comments-NEU, novel approach-NEU], [NOV-NEU, CMP-NEU] | comments (NEU), novel approach (NEU) | NOV (NEU), CMP (NEU) |
| 25 | The authors might discuss more how to extend their model to image recognition, or at least of other modalities as suggested.[discuss-NEU, model-NEU], [EMP-NEU] | discuss (NEU), model (NEU) | EMP (NEU) |
| 26 | Section 3.2.2 shall be placed later on, and clarified.[Section-NEU], [CLA-NEU, PNF-NEU] | Section (NEU) | CLA (NEU), PNF (NEU) |
| 27 | Discussion on mixing more than two sounds leads could be completed by associative properties, we think... ? [Discussion-NEU], [EMP-NEU] | Discussion (NEU) | EMP (NEU) |
| 31 | I am overall a fan of the general idea of this paper; scaling up to huge inputs is definitely a necessary research direction for QA.[idea-POS], [EMP-POS] | idea (POS) | EMP (POS) |
| 32 | However, I have some concerns about the specific implementation and model discussed here.[model-NEU], [EMP-NEU] | model (NEU) | EMP (NEU) |
| 33 | How much of the proposed approach is specific to getting good results on bAbI (e.g., conditioning the knowledge encoder on only the previous sentence, time stamps in the knowledge tuple, super small RNNs, four simple functions in the n-gram machine, structure tweaking) versus having a general-purpose QA model for natural language?[proposed approach-NEU], [EMP-NEU] | proposed approach (NEU) | EMP (NEU) |
| 34 | Addressing some of these issues would likely prevent scaling to millions of (real) sentences, as the scalability is reliant on programs being efficiently executed (by simple string matching) against a knowledge storage.[issues-NEU], [SUB-NEG, EMP-NEG] | issues (NEU) | SUB (NEG), EMP (NEG) |
| 35 | The paper is missing a clear analysis of NGM's limitations...[analysis-NEG], [EMP-NEG] | analysis (NEG) | EMP (NEG) |
| 36 | the examples of knowledge storage from bAbI in the supplementary material are also underwhelming as the model essentially just has to learn to ignore stopwords since the sentences are so simple.[null], [EMP-NEG] | — | EMP (NEG) |
| 37 | In its current form, I am borderline but leaning towards rejecting this paper.[paper-NEG], [REC-NEG] | paper (NEG) | REC (NEG) |
| 38 | Other questions: - is n-gram really the most appropriate term to use for the symbolic representation?[null], [PNF-NEU] | — | PNF (NEU) |
| 39 | N-grams are by definition contiguous sequences... The authors may want to consider alternatives.[null], [EMP-NEU] | — | EMP (NEU) |
| 41 | The evaluations are only conducted on 5 of the 20 bAbI tasks, so it is hard to draw any conclusions from the results as to the validity of this approach.[evaluations-NEG], [SUB-NEG] | evaluations (NEG) | SUB (NEG) |
| 42 | Can the authors comment on how difficult it will be to add functions to the list in Table 2 to handle the other 15 tasks? Or is NGM strictly for extractive QA?[Table-NEU], [EMP-NEU] | Table (NEU) | EMP (NEU) |
| 43 | - beam search is performed on each sentence in the input story to obtain knowledge tuples... while the answering time may not change (as shown in Figure 4) as the input story grows, the time to encode the story into knowledge tuples certainly grows, which likely necessitates the tiny RNN sizes used in the paper.[null], [EMP-NEU] | — | EMP (NEU) |
| 44 | How long does the encoding time take with 10 million sentences?[null], [EMP-NEU] | — | EMP (NEU) |
| 45 | - Need more detail on the programmer architecture, is it identical to the one used in Liang et al., 2017? [detail-NEU], [SUB-NEU, EMP-NEU] | detail (NEU) | SUB (NEU), EMP (NEU) |
| 51 | None of these ideas are new before but I haven't seen them combined in this way before.[ideas-NEU], [NOV-NEG] | ideas (NEU) | NOV (NEG) |
| 52 | This is a very practical idea, well-explained with a thorough set of experiments across three different tasks.[idea-POS, paper-POS], [EMP-POS] | idea (POS), paper (POS) | EMP (POS) |
| 53 | The paper is not surprising[paper-NEG], [NOV-NEG] | paper (NEG) | NOV (NEG) |
| 54 | but this seems like an effective technique for people who want to build effective systems with whatever data they've got. [technique-POS], [EMP-POS] | technique (POS) | EMP (POS) |
| 57 | The exposition of the model architecture could use some additional detail to clarify some steps and possibly fix some minor errors (see below).[model architecture-NEG, detail-NEG], [SUB-NEG] | model architecture (NEG), detail (NEG) | SUB (NEG) |
| 58 | I would prefer less material but better explained.[material-NEU], [EMP-NEU] | material (NEU) | EMP (NEU) |
| 60 | The paper could be more focused around a single scientific question: does the PATH function as formulated help?[paper-NEU], [EMP-NEU] | paper (NEU) | EMP (NEU) |
| 61 | The authors do provide a novel formulation and demonstrate the gains on a variety of concrete problems taken form the literature.[experiments-POS, problems-POS], [NOV-POS] | experiments (POS), problems (POS) | NOV (POS) |
| 62 | I also like that they try to design experiments to understand the role of specific parts of the proposed architecture.[experiments-POS, proposed architecture-POS], [EMP-POS] | experiments (POS), proposed architecture (POS) | EMP (POS) |
| 63 | The graphs are WAY TOO SMALL to read.[graphs-NEG], [PNF-NEG] | graphs (NEG) | PNF (NEG) |
| 64 | Figure #s are missing off several figures.[Figure-NEG], [PNF-NEG] | Figure (NEG) | PNF (NEG) |
| 65 | MODEL & ARCHITECTURE The PATH function given a current state s and a goal state s', returns a distribution over the best first action to take to get to the goal P(A).[null], [EMP-POS] | — | EMP (POS) |
| 66 | ( If the goal state s' was just the next state, then this would just be a dynamics model and this would be model-based learning?[null], [EMP-NEU] | — | EMP (NEU) |
| 67 | So I assume there are multiple steps between s and s'?).[null], [EMP-NEU] | — | EMP (NEU) |
| 68 | At the beginning of section 2.1, I think the authors suggest the PATH function could be pre-trained independently by sampling a random state in the state space to be the initial state and a second random state to be the goal state and then using an RL algorithm to find a path.[section-NEU], [EMP-NEU] | section (NEU) | EMP (NEU) |
| 69 | Presumably, once one had found a path ( (s, a0), (s1, a1), (s2, a2), ..., (sn-1,an-1), s' ) one could then train the PATH policy on the triple (s, s', a0) ?[null], [EMP-NEU] | — | EMP (NEU) |
| 70 | This seems like a pretty intense process: solving some representative subset of all possible RL problems for a particular environment ... Maybe one choses s and s' so they are not too far away from each other (the experimental section later confirms this distance is > 7.[section-NEU], [EMP-NEU] | section (NEU) | EMP (NEU) |
| 71 | Maybe bring this detail forward)?[detail-NEU], [EMP-NEU] | detail (NEU) | EMP (NEU) |
| 72 | The expression Trans'( (s,s), a)' = (Trans(s,a), s') was confusing.[expression-NEG], [CLA-NEG] | expression (NEG) | CLA (NEG) |
| 73 | I think the idea here is that the expression Trans'( (s,s) , a )' represents the n-step transition function and 'a' represents the first action?[expression-NEU], [EMP-NEU] | expression (NEU) | EMP (NEU) |
| 74 | The second step is to train the goal function for a specific task.[task-NEU], [EMP-NEU] | task (NEU) | EMP (NEU) |
| 75 | So I gather our policy takes the form of a composed function and the chain rule gives close to their expression in 2.2[expression-NEU], [EMP-NEU] | expression (NEU) | EMP (NEU) |
| 78 | What is confusing is that they define A( s, a, th^p, th^g, th^v ) = sum_i gamma^i r_{t+i} + gamma^k V( s_{t+k} ; th^v ) - V( s_t ; th^v )[null], [CLA-NEG] | — | CLA (NEG) |
| 79 | The left side contains th^p and th^g, but the right side does not.[null], [EMP-NEG] | — | EMP (NEG) |
| 80 | Should these parameters be take out of the n-step advantage function A?[null], [EMP-NEU] | — | EMP (NEU) |
| 81 | The second alternative for training the goal function tau seems confusing.[null], [EMP-NEG] | — | EMP (NEG) |
| 82 | I get that tau is going to be constrained by whatever representation PATH function was trained on and that this representation might affect the overall performance - performance.[performance-NEU], [EMP-NEU] | performance (NEU) | EMP (NEU) |
| 83 | I didn't get the contrast with method one.[method-NEG], [EMP-NEG] | method (NEG) | EMP (NEG) |
| 84 | How do we treat the output of Tau as an action?[output-NEU], [EMP-NEU] | output (NEU) | EMP (NEU) |
| 85 | Are you thinking of the gradient coming back through PATH as a reward signal?[null], [EMP-NEU] | — | EMP (NEU) |
| 86 | More detail here would be helpful.[detail-NEG], [SUB-NEG] | detail (NEG) | SUB (NEG) |
| 87 | EXPERIMENTS: Lavaworld: authors show that pretraining the PATH function on longer 7-11 step policies leads to better performance when given a specific Lava world problem to solve.[performance-POS, problem-POS], [EMP-POS] | performance (POS), problem (POS) | EMP (POS) |
| 88 | So the PATH function helps and longer paths are better.[null], [EMP-POS] | — | EMP (POS) |
| 90 | What is the upper bound on the size of PATH lengths you can train?[null], [EMP-NEU] | — | EMP (NEU) |
| 92 | From a scientific point of view, this seems orthogonal to the point of the paper, though is relevant if you were trying to build a system.[paper-POS], [EMP-POS] | paper (POS) | EMP (POS) |
| 94 | This isn't too surprising.[null], [EMP-NEG] | — | EMP (NEG) |
| 95 | Both picking up the passenger (reachability) and dropping them off somewhere are essentially the same task: moving to a point.[null], [EMP-NEU] | — | EMP (NEU) |
| 96 | It is interesting that the Task function is able to encode the higher level structure of the TAXI problem's two phases.[Task function-POS], [EMP-POS] | Task function (POS) | EMP (POS) |
| 97 | Another task you could try is to learn to perform the same task in two different environments.[task-POS], [EMP-POS] | task (POS) | EMP (POS) |
| 98 | Perhaps the TAXI problem, but you have two different taxis that require different actions in order to execute the same path in state space.[null], [EMP-NEU] | — | EMP (NEU) |
| 99 | This would require a phi(s) function that is trained in a way that doesn't depend on the action a.[null], [EMP-NEU] | — | EMP (NEU) |
| 101 | Is this where you artificially return an agent to a state that would normally be hard to reach?[null], [EMP-NEU] | — | EMP (NEU) |
| 102 | The authors show that UA results in gains on several of the games.[null], [EMP-POS] | — | EMP (POS) |
| 103 | The authors also demonstrate that using multiple agents with different policies can be used to collect training examples for the PATH function that improve its utility over training examples collected by a single agent policy.[training examples-NEU], [EMP-NEU] | training examples (NEU) | EMP (NEU) |
| 104 | RELATED WORK: Good contrast to hierarchical learning: we don't have switching regimes here between high-level options[regimes-POS], [CMP-POS] | regimes (POS) | CMP (POS) |
| 105 | I don't understand why the authors say the PATH function can be viewed as an inverse?[null], [CLA-NEG] | — | CLA (NEG) |
| 106 | Oh - now I get it. Because it takes an extended n-step transition and generates an action.[null], [CLA-POS] | — | CLA (POS) |
| 108 | -I think title is misleading, as the more concise results in this paper is about linear networks I recommend adding linear in the title i.e. changing the title to ... deep LINEAR networks[title-NEG, results-NEU], [EMP-NEU, PNF-NEG] | title (NEG), results (NEU) | EMP (NEU), PNF (NEG) |
| 109 | - Theorems 2.1, 2.2 and the observation (2) are nice![Theorems-POS, observation-POS], [EMP-POS] | Theorems (POS), observation (POS) | EMP (POS) |
| 110 | - Theorem 2.2 there is no discussion about the nature of the saddle point is it strict?[Theorem-NEU, discussion-NEG], [SUB-NEG] | Theorem (NEU), discussion (NEG) | SUB (NEG) |
| 111 | Does this theorem imply that the global optima can be reached from a random initialization?[theorem-NEU], [EMP-NEU] | theorem (NEU) | EMP (NEU) |
| 112 | Regardless of if this theorem can deal with these issues, a discussion of the computational implications of this theorem is necessary.[theorem-NEU, issues-NEU, discussion-NEU], [SUB-NEU] | theorem (NEU), issues (NEU), discussion (NEU) | SUB (NEU) |
| 113 | - I'm a bit puzzled by Theorems 4.1 and 4.2 and why they are useful.[Theorems-NEU], [EMP-NEU] | Theorems (NEU) | EMP (NEU) |
| 114 | Since these results do not seem to have any computational implications about training the neural nets what insights do we gain about the problem by knowing this result? [results-NEG, insights-NEU, problem-NEU], [EMP-NEG] | results (NEG), insights (NEU), problem (NEU) | EMP (NEG) |
| 115 | Further discussion would be helpful. [discussion-NEU], [SUB-NEU] | discussion (NEU) | SUB (NEU) |
| 120 | The performance improvement is expected and validated by experiments.[performance-POS, experiments-POS], [EMP-POS] | performance (POS), experiments (POS) | EMP (POS) |
| 121 | But I am not sure if the novelty is strong enough for an ICLR paper. [novelty-NEU], [APR-NEU, NOV-NEU] | novelty (NEU) | APR (NEU), NOV (NEU) |
| 125 | The suggested techniques are nice and show promising results.[techniques-POS, results-POS], [EMP-POS] | techniques (POS), results (POS) | EMP (POS) |
| 126 | But I feel a lot can still be done to justify them, even just one of them.[null], [EMP-NEU] | — | EMP (NEU) |
| 127 | For instance, the authors manipulate the objective of G using a new parameter alpha_new and divide heuristically the range of its values.[null], [EMP-NEU] | — | EMP (NEU) |
| 128 | But, in the experimental section results are shown only for a single value, alpha_new = 0.9. The authors also suggest early stopping but again (as far as I understand) only a single value for the number of iterations was tested.[results-NEU], [EMP-NEU] | results (NEU) | EMP (NEU) |
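For analysis, the wide `aspect_term_{i}` / `aspect_term_{i}_polarity` columns are often easier to work with in long form. A sketch of that reshape with pandas, assuming the split above is available as a CSV (the file name below is a placeholder, not part of the dataset):

```python
import pandas as pd

# Reshape the wide aspect_term_{i} / aspect_term_{i}_polarity columns into one
# (sentence, term, polarity) record per annotated term.
df = pd.read_csv("aspect_annotations.csv")  # placeholder path

records = []
for _, row in df.iterrows():
    for i in range(1, 7):  # the schema allows up to six aspect terms
        term = row.get(f"aspect_term_{i}")
        if pd.isna(term) or term == "null":
            continue
        records.append({
            "sentence": row["sentence"],
            "term": term,
            "polarity": row.get(f"aspect_term_{i}_polarity"),
        })

long_df = pd.DataFrame(records)
print(long_df["polarity"].value_counts())  # POS / NEU / NEG counts
```

The same loop extended to `aspect_category_{i}` (five columns instead of six) yields the category-level annotations.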