Dataset columns:

  Unnamed: 0                    int64           2-9.3k
  sentence                      stringlengths   30-941
  aspect_term_1                 stringlengths   1-32
  aspect_term_2                 stringlengths   2-27
  aspect_term_3                 stringlengths   2-23
  aspect_term_4                 stringclasses   25 values
  aspect_term_5                 stringclasses   7 values
  aspect_term_6                 stringclasses   1 value
  aspect_category_1             stringclasses   9 values
  aspect_category_2             stringclasses   9 values
  aspect_category_3             stringclasses   9 values
  aspect_category_4             stringclasses   2 values
  aspect_category_5             stringclasses   1 value
  aspect_term_1_polarity        stringclasses   3 values
  aspect_term_2_polarity        stringclasses   3 values
  aspect_term_3_polarity        stringclasses   3 values
  aspect_term_4_polarity        stringclasses   3 values
  aspect_term_5_polarity        stringclasses   3 values
  aspect_term_6_polarity        stringclasses   1 value
  aspect_category_1_polarity    stringclasses   3 values
  aspect_category_2_polarity    stringclasses   3 values
  aspect_category_3_polarity    stringclasses   3 values
  aspect_category_4_polarity    stringclasses   1 value
  aspect_category_5_polarity    stringclasses   1 value
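The schema stores up to six aspect terms and five aspect categories per sentence as numbered wide columns, each paired with a `_polarity` column, with unused slots left null. A minimal pandas sketch of collapsing those slots into (aspect, polarity) pairs; the inline two-row frame and the two-slot limit are stand-ins for illustration, since the actual file name is not given here.

```python
import pandas as pd

# Toy frame mirroring the wide schema above (only 2 of the 6 term slots shown).
df = pd.DataFrame({
    "sentence": [
        "Survey papers are not very suitable for publication at conferences.",
        "I think these contributions warrant publishing the paper.",
    ],
    "aspect_term_1": ["survey papers", "contributions"],
    "aspect_term_1_polarity": ["NEG", "POS"],
    "aspect_term_2": ["conferences", None],
    "aspect_term_2_polarity": ["NEG", None],
})

def collect_aspects(row, prefix="aspect_term_", n_slots=2):
    """Gather the non-null (aspect, polarity) pairs from the numbered columns."""
    pairs = []
    for i in range(1, n_slots + 1):
        term = row.get(f"{prefix}{i}")
        if pd.notna(term):  # skip unused slots, which the dataset leaves null
            pairs.append((term, row.get(f"{prefix}{i}_polarity")))
    return pairs

df["term_pairs"] = df.apply(collect_aspects, axis=1)
print(df["term_pairs"].tolist())
```

The same helper applies unchanged to the `aspect_category_` slots by swapping the prefix and slot count.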
9,115 | terms: advantages and disadvantages (NEG) | categories: EMP (NEG)
  In particular, the advantages and disadvantages of different categories are not systematically compared, and hence the readers cannot get insightful comments and suggestions from this survey.[advantages and disadvantages-NEG], [EMP-NEG]

9,116 | terms: survey papers (NEG), conferences (NEG) | categories: APR (NEG)
  In general, survey papers are not very suitable for publication at conferences.[survey papers-NEG, conferences-NEG], [APR-NEG]

9,118 | terms: contributions (POS) | categories: EMP (POS)
  It makes several important contributions, including extending the previously published bounds by Telgarsky et al. to tighter bounds for the special case of ReLU DNNs, giving a construction for a family of hard functions whose affine pieces scale exponentially with the dimensionality of the inputs, and giving a procedure for searching for globally optimal solution of a 1-hidden layer ReLU DNN with linear output layer and convex loss.[contributions-POS], [EMP-POS]

9,119 | terms: contributions (POS), paper (POS) | categories: APR (POS), REC (POS)
  I think these contributions warrant publishing the paper at ICLR 2018.[contributions-POS, paper-POS], [APR-POS, REC-POS]

9,120 | terms: paper (POS) | categories: CLA (POS), PNF (POS)
  The paper is also well written, a bit dense in places, but overall well organized and easy to follow.[paper-POS], [CLA-POS, PNF-POS]

9,121 | terms: limitation (NEG), paper (NEG) | categories: EMP (NEG)
  A key limitation of the paper in my opinion is that typically DNNs do not contain a linear final layer.[limitation-NEG, paper-NEG], [EMP-NEG]

9,122 | terms: representation analysis (NEU), results (NEU) | categories: EMP (NEU)
  It will be valuable to note what, if any, of the representation analysis and global convergence results carry over to networks with non-linear (Softmax, e.g.) final layer.[representation analysis-NEU, results-NEU], [EMP-NEU]

9,123 | terms: algorithm (NEG) | categories: EMP (NEG)
  I also think that the global convergence algorithm is practically unfeasible for all but trivial use cases due to terms like D^nw, would like hearing authors' comments in case I'm missing some simplification.[algorithm-NEG], [EMP-NEG]

9,124 | terms: none | categories: SUB (NEU), EMP (NEU)
  One minor suggestion for improving readability is to explicitly state, whenever applicable, that functions under consideration are PWL.[null], [SUB-NEU, EMP-NEU]

9,125 | terms: Theorems (NEU), Section (NEU) | categories: EMP (NEU)
  For example, adding PWL to Theorems and Corollaries in Section 3.1 will help. [Theorems-NEU, Section-NEU], [EMP-NEU]
9,126 | terms: none | categories: SUB (NEU)
  Similarly would be good to state, wherever applicable, the DNN being discussed is a ReLU DNN.[null], [SUB-NEU]

9,130 | terms: claims (NEG) | categories: EMP (NEG)
  I have two problems with these claims: 1) Modern ConvNet architectures (Inception, ResNeXt, SqueezeNet, BottleNeck-DenseNets and ShuffleNets) don't have large fully connected layers.[claims-NEG], [EMP-NEG]

9,131 | terms: technique (NEU) | categories: EMP (NEU)
  2) The authors reject the technique of 'Deep compression' as being impractical.[technique-NEU], [EMP-NEU]

9,132 | terms: none | categories: EMP (NEG)
  I suspect it is actually much easier to use in practice as you don't have to a-priori know the correct level of sparsity for every level of the network.[null], [EMP-NEG]

9,133 | terms: none | categories: EMP (NEU)
  p3. What does 'normalized' mean?[null], [EMP-NEU]

9,135 | terms: none | categories: EMP (NEU)
  p3. Are you using an L2 weight penalty?[null], [EMP-NEU]

9,136 | terms: baseline (NEG) | categories: EMP (NEG)
  If not, your fully-connected baseline may be unnecessarily overfitting the training data.[baseline-NEG], [EMP-NEG]

9,137 | terms: Table (NEU) | categories: EMP (NEU)
  p3. Table 1. Where do the choice of CL Junction densities come from?[Table-NEU], [EMP-NEU]

9,138 | terms: none | categories: EMP (NEU)
  Did you do a grid search to find the optimal level of sparsity at each level?[null], [EMP-NEU]

9,139 | terms: p (NEU) | categories: PNF (NEG)
  p7-8. I had trouble following the left/right & front/back notation.[p-NEU], [PNF-NEG]
9,140 | terms: p (NEU), Figure (NEU) | categories: EMP (NEU)
  p8. Figure 7. How did you decide which data points to include in the plots?[p-NEU, Figure-NEU], [EMP-NEU]

9,142 | terms: paper (POS) | categories: CLA (POS)
  Congratulations on a very interesting and clear paper.[paper-POS], [CLA-POS]

9,143 | terms: paper (POS) | categories: APR (POS)
  While ICLR is not focused on neuroscientific studies, this paper clearly belongs here as it shows what representations develop in recurrent networks that are trained on spatial navigation.[paper-POS], [APR-POS]

9,145 | terms: representations (POS) | categories: EMP (POS)
  I found it is very interesting that the emergence of these representations was contingent on some regularization constraint.[representations-POS], [EMP-POS]

9,146 | terms: none | categories: EMP (POS)
  This seems similar to the visual domain where edge detectors emerge easily when trained on natural images with sparseness constraints as in Olshausen&Field and later reproduced with many other models that incorporate sparseness constraints.[null], [EMP-POS]

9,147 | terms: training (NEU) | categories: EMP (NEU)
  I do have some questions about the training itself.[training-NEU], [EMP-NEU]

9,148 | terms: paper (NEG) | categories: SUB (NEG)
  The paper mentions a metabolic cost that is not specified in the paper.[paper-NEG], [SUB-NEG]

9,149 | terms: none | categories: SUB (NEG)
  This should be added.[null], [SUB-NEG]

9,151 | terms: error (NEU) | categories: EMP (NEU)
  I am puzzled why is the error is coming down before the boundary interaction?[error-NEU], [EMP-NEU]

9,152 | terms: error (NEU) | categories: EMP (NEU)
  Even more puzzling, why does this error go up again for the blue curve (no interaction)? Shouldn't at least this curve be smooth? [error-NEU], [EMP-NEU]
9,155 | terms: paper (POS), results (POS) | categories: CLA (POS), EMP (POS)
  On the positive side, the paper is mostly well-written, seems technically correct, and there are some results that indicate that the MSA is working quite well on relatively complex tasks.[paper-POS, results-POS], [CLA-POS, EMP-POS]

9,156 | terms: novelty (NEG) | categories: NOV (NEG)
  On the negative side, there seems to be relatively limited novelty: we can think of MSA as one particular communication (i.e, star) configuration one could use is a multiagent system.[novelty-NEG], [NOV-NEG]

9,157 | terms: gated composition module (POS) | categories: NOV (POS)
  One aspect does does strike me as novel is the gated composition module, which allows differentiation of messages to other agents based on the receivers internal state.[gated composition module-POS], [NOV-POS]

9,158 | terms: idea (POS) | categories: EMP (POS)
  (So, the *interpretation* of the message is learned). I like this idea,[idea-POS], [EMP-POS]

9,159 | terms: results (NEU), explanation (NEU) | categories: EMP (NEG)
  however, the results are mixed, and the explanation given is plausible, but far from a clearly demonstrated answer.[results-NEU, explanation-NEU], [EMP-NEG]

9,161 | terms: issues (NEU) | categories: SUB (NEU)
  however the summed global signal is hand crafted information and does not facilitate an independently reasoning master agent.[issues-NEU], [SUB-NEU]

9,162 | terms: figure (NEU), modules (NEU) | categories: PNF (NEU)
  -Please explain what is meant here by 'hand crafted information', my understanding is that the f^i in figure 1 of that paper are learned modules?[figure-NEU, modules-NEU], [PNF-NEU]

9,163 | terms: none | categories: EMP (NEU)
  -Please explain what would be the differences with CommNet with 1 extra agent that takes in the same information as your 'master'.[null], [EMP-NEU]

9,164 | terms: none | categories: EMP (NEU)
  *This relates also to this: Later we empirically verify that, even when the overall information revealed does not increase per se, an independent master agent tend to absorb the same information within a big picture and effectively helps to make decision in a global manner.[null], [EMP-NEU]

9,167 | terms: performance (NEU) | categories: EMP (NEU)
  Specifically, we compare the performance among the CommNet model, our MS-MARL model without explicit master state (e.g. the occupancy map of controlled agents in this case), and our full model with an explicit occupancy map as a state to the master agent.[performance-NEU], [EMP-NEU]
9,168 | terms: model (POS) | categories: EMP (NEU)
  As shown in Figure 7 (a)(b), by only allowed an independently thinking master agent and communication among agents, our model already outperforms the plain CommNet model which only supports broadcasting communication of the sum of the signals.[model-POS], [EMP-NEU]

9,169 | terms: statement (NEG) | categories: EMP (NEG)
  -Minor: I think that the statement which only supports broadcasting communication of the sum of the signals is not quite fair: surely they have used a 1-channel communication structure, but it would be easy to generalize that.[statement-NEG], [EMP-NEG]

9,170 | terms: proposed approach (NEG), figure (NEU) | categories: EMP (NEU)
  -Major: When I look at figure 4D, I see that the proposed approach *also* only provides the master with the sum (or really mean) with of the individual messages...? So it is not quite clear to me what explains the difference. *In 4.4, it is not quite clear exactly how the figure of master and slave actions is created.[proposed approach-NEG, figure-NEU], [EMP-NEU]

9,171 | terms: none | categories: EMP (NEU)
  This seems to suggest that the only thing that the master can communicate is action information?[null], [EMP-NEU]

9,173 | terms: table (NEG) | categories: PNF (NEG)
  * In table 2, it is not clear how significant these differences are.[table-NEG], [PNF-NEG]

9,174 | terms: standard errors (NEU) | categories: EMP (NEG)
  What are the standard errors?[standard errors-NEU], [EMP-NEG]

9,175 | terms: section (NEG) | categories: SUB (NEG)
  * The section 3.2 explains standard things (policy gradient), but the details are a bit unclear.[section-NEG], [SUB-NEG]

9,176 | terms: figure (NEG) | categories: SUB (NEG)
  In particular, I do not see how the Gaussian/softmax layers are integrated; they do not seem to appear in figure 4?[figure-NEG], [SUB-NEG]

9,177 | terms: figure (NEG), explanation (NEG) | categories: SUB (NEG)
  * I cannot understand figure 7 without more explanation.[figure-NEG, explanation-NEG], [SUB-NEG]

9,178 | terms: background (NEG) | categories: PNF (NEG)
  (The background is all black - did something go wrong with the pdf?)[background-NEG], [PNF-NEG]
9,179 | terms: references (NEG) | categories: PNF (NEG)
  Details: * references are wrongly formatted throughout.[references-NEG], [PNF-NEG]

9,180 | terms: none | categories: NOV (NEG)
  * In this regard, we are among the first to combine both the centralized perspective and the decentralized perspective This is a weak statement (E.g., I suppose that in the greater scheme of things all of us will be amongst the first people that have walked this earth...)[null], [NOV-NEG]

9,183 | terms: none | categories: PNF (NEU)
  Can it be made crisper?[null], [PNF-NEU]

9,184 | terms: information (NEU) | categories: EMP (NEU)
  * Note here that, although we explicitly input an occupancy map to the master agent, the actual information of the whole system remains the same.[information-NEU], [EMP-NEU]

9,185 | terms: statement (NEG) | categories: PNF (NEG)
  This is a somewhat peculiar statement.[statement-NEG], [PNF-NEG]

9,186 | terms: information (NEU) | categories: EMP (NEU)
  Clearly, the distribution of information over the agents is crucial.[information-NEU], [EMP-NEU]

9,191 | terms: variable (NEU), update (NEU) | categories: CMP (NEU)
  This works because each variable (of the state space) is modified in turn, so that the resulting update is invertible, with a tractable transformation inspired by Dinh et al 2016.[variable-NEU, update-NEU], [CMP-NEU]

9,192 | terms: paper (POS) | categories: CLA (POS), EMP (POS)
  Overall, I believe this paper is of good quality, clearly and carefully written, and potentially accelerates mixing in a state-of-the-art MCMC method, HMC, in many practical cases.[paper-POS], [CLA-POS, EMP-POS]

9,194 | terms: section (POS), method (POS), sec (POS) | categories: EMP (POS)
  The experimental section proves the usefulness of the method on a range of relevant test cases; in addition, an application to a latent variable model is provided sec5.2.[section-POS, method-POS, sec-POS], [EMP-POS]

9,195 | terms: Fig (NEG), results (NEG), paper (NEU) | categories: SUB (NEG), EMP (NEG)
  Fig 1a presents results in terms of numbers of gradient evaluations, but I couldn't find much in the way of computational cost of L2HMC in the paper. [Fig-NEG, results-NEG, paper-NEU], [SUB-NEG, EMP-NEG]
9,196 | terms: sec (NEG) | categories: CLA (NEG)
  I can't see where the number 124x in sec 5.1 stems from.[sec-NEG], [CLA-NEG]

9,197 | terms: competing methods (NEU) | categories: SUB (NEU)
  As a user, I would be interested in the typical computational cost of both MCMC sampler training and MCMC sampler usage (inference?), compared to competing methods.[competing methods-NEU], [SUB-NEU]

9,198 | terms: none | categories: SUB (NEU)
  This is admittedly hard to quantify objectively, but just an order of magnitude would be helpful for orientation.[null], [SUB-NEU]

9,199 | terms: sec (NEG) | categories: CMP (NEG), SUB (NEG)
  Would it be relevant, in sec5.1, to compare to other methods than just HMC, eg LAHMC?[sec-NEG], [CMP-NEG, SUB-NEG]

9,200 | terms: eq (NEG), Appendix (NEG), Fig (NEG), figure (NEG) | categories: PNF (NEG), CLA (NEG)
  I am missing an intuition for several things: eq7, the time encoding defined in Appendix C Appendix Fig5, I cannot quite see how the caption claim is supported by the figure (just hardly for VAE, but not for HMC).[eq-NEG, Appendix-NEG, Fig-NEG, figure-NEG], [PNF-NEG, CLA-NEG]

9,202 | terms: sec (NEG) | categories: EMP (NEG)
  # Minor errors - sec1: The sampler is trained to minimize a variation: should be maximize as well as on a the real-world[sec-NEG], [EMP-NEG]

9,203 | terms: sec (NEG) | categories: SUB (NEG)
  - sec3.2 and 1/2 v^T v the kinetic: energy missing[sec-NEG], [SUB-NEG]

9,204 | terms: sec (NEG) | categories: CLA (NEG), PNF (NEG)
  - sec4: the acronym L2HMC is not expanded anywhere in the paper[sec-NEG], [CLA-NEG, PNF-NEG]

9,205 | terms: sentence (NEU), paragraph (NEU) | categories: PNF (NEU)
  The sentence We will denote the complete augmented...p(d) might be moved to after from a uniform distribution in the same paragraph.[sentence-NEU, paragraph-NEU], [PNF-NEU]

9,206 | terms: paragraph (NEG) | categories: CLA (NEG), PNF (NEG)
  In paragraph starting We now update x: - specify for clarity: the first update, which yields x' / the second update, which yields x'' [paragraph-NEG], [CLA-NEG, PNF-NEG]
9,207 | terms: none | categories: PNF (NEG)
  - only affects $x_{bar{m}^t}$: should be $x'_{bar{m}^t}$ (prime missing) [null], [PNF-NEG]

9,208 | terms: syntax (NEG) | categories: PNF (NEG), CLA (NEG)
  - the syntax using subscript m^t is confusing to read; wouldn't it be clearer to write this as a function, eg mask(x',m^t)?[syntax-NEG], [PNF-NEG, CLA-NEG]

9,209 | terms: none | categories: PNF (NEG)
  - inside zeta_2 and zeta_3, do you not mean $m^t and $bar{m}^t$ ?[null], [PNF-NEG]

9,210 | terms: sec (NEG), reference (NEG) | categories: PNF (NEG)
  - sec5: add reference for first mention of A NICE MC[sec-NEG, reference-NEG], [PNF-NEG]

9,211 | terms: Appendix (NEG) | categories: PNF (NEG)
  - Appendix A: - Let's -> Let [Appendix-NEG], [PNF-NEG]

9,212 | terms: eq (NEG) | categories: PNF (NEG)
  - eq12 should be x'' ... -[eq-NEG], [PNF-NEG]

9,213 | terms: Appendix (NEG), Section (NEG) | categories: PNF (NEG)
  Appendix C: space missing after Section 5.1[Appendix-NEG, Section-NEG], [PNF-NEG]

9,214 | terms: Appendix (NEG), section (NEG) | categories: PNF (NEG)
  - Appendix D1: In this section is presented : sounds odd[Appendix-NEG, section-NEG], [PNF-NEG]

9,215 | terms: Appendix (NEG), figure (NEG) | categories: PNF (NEG)
  - Appendix D3: presumably this should consist of the figure 5 ? Maybe specify.[Appendix-NEG, figure-NEG], [PNF-NEG]

9,218 | terms: proposed method (POS) | categories: EMP (POS)
  Strengths: The proposed method has achieved a better convergence rate in different tasks than all other hand-engineered algorithms.[proposed method-POS], [EMP-POS]
9,219 | terms: proposed method (POS) | categories: EMP (POS)
  The proposed method has better robustess in different tasks and different batch size setting.[proposed method-POS], [EMP-POS]

9,220 | terms: none | categories: EMP (POS)
  The invariant of coordinate permutation and the use of block-diagonal structure improve the efficiency of LQG.[null], [EMP-POS]

9,221 | terms: none | categories: EMP (NEU)
  Weaknesses: 1. Since the batch size is small in each experiment, it is hard to compare convergence rate within one epoch.[null], [EMP-NEU]

9,222 | terms: none | categories: EMP (NEU)
  More iterations should be taken and the log-scale style figure is suggested.[null], [EMP-NEU]

9,223 | terms: Figure (NEU), experiments (NEG) | categories: CMP (NEG)
  2. In Figure 1b, L2LBGDBGD converges to a lower objective value, while the other figures are difficult to compare, the convergence value should be reported in all experiments.[Figure-NEU, experiments-NEG], [CMP-NEG]

9,224 | terms: section (NEU) | categories: EMP (NEU)
  3. "The average recent iterate" described in section 3.6 uses recent 3 iterations to compute the average, the reason to choose "3", and the effectiveness of different choices should be discussed, as well as the "24" used in state features.[section-NEU], [EMP-NEU]

9,225 | terms: none | categories: EMP (NEU)
  4. Since the block-diagonal structure imposed on A_t, B_t, and F_t, how to choose a proper block size?[null], [EMP-NEU]

9,226 | terms: none | categories: EMP (NEU)
  Or how to figure out a coordinate group?[null], [EMP-NEU]

9,227 | terms: Figure (NEG) | categories: CLA (NEG)
  5. The caption in Figure 1,3, "with 48 input and hidden units" should clarify clearly.[Figure-NEG], [CLA-NEG]

9,228 | terms: none | categories: EMP (NEU)
  The curves of different methods are suggested to use different lines (e.g., dashed lines) to denote different algorithms rather than colors only.[null], [EMP-NEU]
9,229 | terms: typo (NEG) | categories: CLA (NEG)
  6. typo: sec 1 parg 5, "current iterate" -> "current iteration".[typo-NEG], [CLA-NEG]

9,231 | terms: paper (NEU) | categories: CMP (NEU)
  by Li & Malik, this paper tends to solve the high-dimensional problem.[paper-NEU], [CMP-NEU]

9,232 | terms: paper (NEU), model (NEU) | categories: EMP (NEU)
  With the new observation of invariant in coordinates permutation in neural networks, this paper imposes the block-diagonal structure in the model to reduce the complexity of LQG algorithm.[paper-NEU, model-NEU], [EMP-NEU]

9,239 | terms: technical contribution (NEG) | categories: APR (NEG)
  I could not find any technical contribution or something sufficiently mature and interesting for presenting in ICLR.[technical contribution-NEG], [APR-NEG]

9,240 | terms: section (NEG) | categories: PNF (NEG)
  Some issues: - submission is supposed to be double blind but authors reveal their identity at the start of section 2.1.[section-NEG], [PNF-NEG]

9,241 | terms: implementation details (NEG), section (NEG) | categories: PNF (NEG), EMP (NEG)
  - implementation details all over the place (section 3. is called Implementation, but at that point no concrete idea has been proposed, so it seems too early for talking about tensorflow and keras).[implementation details-NEG, section-NEG], [PNF-NEG, EMP-NEG]

9,245 | terms: none | categories: EMP (NEU)
  2) though the non-saturating variant (see Eq. 3) of ``standard GAN'' may converge towards a minimum of the Jensen-Shannon divergence, it does not mean that the minimization process follows gradients of the Jensen-Shannon divergence (and conversely, following gradient paths of the Jensen-Shannon divergence may not converge towards a minimum, but this was rather the point of the previous critiques about ``standard GAN''). [null], [EMP-NEU]

9,246 | terms: none | categories: EMP (NEU)
  3) the penalization strategies introduced for ``non-standard GAN'' with specific motivations, may also apply successfully to the ``standard GAN'', improving robustness, thereby helping to set hyperparameters.[null], [EMP-NEU]

9,248 | terms: claims (NEU) | categories: SUB (POS)
  Overall, I believe that the paper provides enough material to substantiate these claims, even if the message could be better delivered.[claims-NEU], [SUB-POS]

9,249 | terms: writing (NEG) | categories: CLA (NEG)
  In particular, the writing is sometimes ambiguous (e.g. in Section 2.3, the reader who did not follow the recent developments on the subject on arXiv will have difficulties to rebuild the cross-references between authors, acronyms and formulae).[writing-NEG], [CLA-NEG]
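Each sentence value carries its own labels inline as a trailing "[term-POL, ...], [CATEGORY-POL, ...]" suffix. A hedged sketch of a parser for that convention; the function name and the exact grammar (comma-separated items, a "-" before the polarity, "null" for empty groups) are inferred from the rows above, not from a documented spec, and terms that themselves contain commas would need a smarter split.

```python
import re

def parse_annotation(sentence):
    """Split a row's sentence into text plus (aspect, polarity) pair lists.

    Expects the trailing '[a-POL, b-POL], [CAT-POL]' convention seen in the
    rows above; 'null' means the bracket group is empty.
    """
    m = re.search(r"\[([^\]]*)\]\s*,\s*\[([^\]]*)\]\s*$", sentence)
    if not m:
        return sentence, [], []
    text = sentence[: m.start()].rstrip()

    def split_pairs(group):
        pairs = []
        for item in group.split(","):
            item = item.strip()
            if not item or item == "null":
                continue
            # rpartition keeps hyphens inside the aspect itself intact
            aspect, _, pol = item.rpartition("-")
            pairs.append((aspect, pol))
        return pairs

    return text, split_pairs(m.group(1)), split_pairs(m.group(2))

text, terms, cats = parse_annotation(
    "In general, survey papers are not very suitable for publication at "
    "conferences.[survey papers-NEG, conferences-NEG], [APR-NEG]"
)
print(terms)  # [('survey papers', 'NEG'), ('conferences', 'NEG')]
print(cats)   # [('APR', 'NEG')]
```

Applying this to the `sentence` column should reproduce the wide `aspect_term_*` and `aspect_category_*` slots for the rows shown here.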