Dataset column schema (dtype, plus the observed value range, string-length range, or number of distinct classes):

| Column | Dtype | Lengths / classes |
| --- | --- | --- |
| Unnamed: 0 | int64 | values 2 to 9.3k |
| sentence | string | lengths 30 to 941 |
| aspect_term_1 | string | lengths 1 to 32 |
| aspect_term_2 | string | lengths 2 to 27 |
| aspect_term_3 | string | lengths 2 to 23 |
| aspect_term_4 | string | 25 classes |
| aspect_term_5 | string | 7 classes |
| aspect_term_6 | string | 1 class |
| aspect_category_1 | string | 9 classes |
| aspect_category_2 | string | 9 classes |
| aspect_category_3 | string | 9 classes |
| aspect_category_4 | string | 2 classes |
| aspect_category_5 | string | 1 class |
| aspect_term_1_polarity | string | 3 classes |
| aspect_term_2_polarity | string | 3 classes |
| aspect_term_3_polarity | string | 3 classes |
| aspect_term_4_polarity | string | 3 classes |
| aspect_term_5_polarity | string | 3 classes |
| aspect_term_6_polarity | string | 1 class |
| aspect_category_1_polarity | string | 3 classes |
| aspect_category_2_polarity | string | 3 classes |
| aspect_category_3_polarity | string | 3 classes |
| aspect_category_4_polarity | string | 1 class |
| aspect_category_5_polarity | string | 1 class |
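Because the aspect slots are fixed-width and null-padded, the wide layout above is easiest to work with after melting each row's slots into (aspect, polarity) pairs. Below is a minimal sketch, assuming the split has been exported to a CSV with exactly the columns listed above; the path `review_aspects.csv` is a placeholder, not the dataset's actual location.

```python
import pandas as pd

# Placeholder path: swap in however you actually obtain the split
# (e.g. datasets.load_dataset(...).to_pandas()).
df = pd.read_csv("review_aspects.csv")

def collect_terms(row: pd.Series) -> list[tuple[str, str]]:
    """Pair each non-null aspect_term_i with aspect_term_i_polarity (six slots)."""
    pairs = []
    for i in range(1, 7):
        term = row[f"aspect_term_{i}"]
        if pd.notna(term) and term != "null":  # slots may hold NaN or the literal string "null"
            pairs.append((term, row[f"aspect_term_{i}_polarity"]))
    return pairs

def collect_categories(row: pd.Series) -> list[tuple[str, str]]:
    """Pair each non-null aspect_category_i with its polarity (five slots)."""
    pairs = []
    for i in range(1, 6):
        cat = row[f"aspect_category_{i}"]
        if pd.notna(cat) and cat != "null":
            pairs.append((cat, row[f"aspect_category_{i}_polarity"]))
    return pairs

df["term_pairs"] = df.apply(collect_terms, axis=1)
df["category_pairs"] = df.apply(collect_categories, axis=1)
print(df[["sentence", "term_pairs", "category_pairs"]].head())
```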
Sample rows, with the null-padded aspect slots omitted and each aspect term and category shown with its polarity:

| Unnamed: 0 | sentence | aspect terms (polarity) | aspect categories (polarity) |
| --- | --- | --- | --- |
| 9,250 | The answers to the critiques referenced in the paper are convincing, though I must admit that I don't know how crucial it is to answer these critics, since it is difficult to assess whether they reached or will reach a large audience. [answers-POS], [IMP-NEU] | answers (POS) | IMP (NEU) |
| 9,251 | Details: - p. 4 please do not qualify KL as a distance metric [null], [EMP-NEG] | (none) | EMP (NEG) |
| 9,252 | - Section 4.3: "Every GAN variant was trained for 200000 iterations, and 5 discriminator updates were done for each generator update" is ambiguous: what is exactly meant by iteration (and sometimes step elsewhere)? [Section-NEU], [EMP-NEU] | Section (NEU) | EMP (NEU) |
| 9,253 | - Section 4.3: the performance measure is not relevant regarding distributions. The l2 distance is somewhat OK for means, but it makes little sense for covariance matrices. [Section-NEU], [EMP-NEU] | Section (NEU) | EMP (NEU) |
| 9,260 | For the JCP-S model, the loss function is unclear to me. [model-NEG], [EMP-NEG] | model (NEG) | EMP (NEG) |
| 9,261 | L is defined for 3rd order tensors only; how is this extended to n > 3? [null], [EMP-NEU] | (none) | EMP (NEU) |
| 9,262 | Intuitively it seems that L is redefined, and for, say, n = 4, the model is M(i, j, k, n) = Σ_{r=1}^{R} u_{ir} u_{jr} u_{kr} u_{nr}. [null], [EMP-NEU] | (none) | EMP (NEU) |
| 9,263 | However, given the statement "since we are using at most third order tensors in this work", I am further confused. [statement-NEG], [EMP-NEG] | statement (NEG) | EMP (NEG) |
| 9,264 | Is it just that JCP-S also incorporates 2nd order embeddings? [null], [EMP-NEU] | (none) | EMP (NEU) |
| 9,265 | I believe this requires clarification in the manuscript itself. [manuscript-NEU], [EMP-NEG] | manuscript (NEU) | EMP (NEG) |
| 9,266 | For the evaluations, no other tensor-based methods are evaluated, although several well-known tensor-based word embedding models exist: Pengfei Liu, Xipeng Qiu and Xuanjing Huang, Learning Context-Sensitive Word Embeddings with Neural Tensor Skip-Gram Model, IJCAI 2015; Jingwei Zhang and Jeremy Salwen, Michael Glass and Alfio Gliozzo. [evaluations-NEG], [CMP-NEU] | evaluations (NEG) | CMP (NEU) |
| 9,268 | Additionally, since it seems the main benefit of using a tensor-based method is that you can use 3rd order cooccurrence information, multi-sense embedding methods should also be evaluated. [methods-NEU], [EMP-NEU] | methods (NEU) | EMP (NEU) |
| 9,269 | There are many such methods, see for example Jiwei Li, Dan Jurafsky, Do Multi-Sense Embeddings Improve Natural Language Understanding? [methods-NEU], [EMP-NEU] | methods (NEU) | EMP (NEU) |
| 9,270 | and citations within, plus quick googling for more recent works. [citations-NEU], [EMP-NEU] | citations (NEU) | EMP (NEU) |
| 9,271 | I am not saying that these works are equivalent to what the authors are doing, or that there is no novelty, but the evaluations seem extremely unfair to only compare against matrix factorization techniques, when in fact many higher order extensions have been proposed and evaluated, and especially so on the tasks proposed (in particular the 3-way outlier detection). [novelty-NEU, evaluations-NEG], [CMP-NEG, EMP-NEG] | novelty (NEU), evaluations (NEG) | CMP (NEG), EMP (NEG) |
| 9,272 | Observe also that in table 2, NNSE gets the highest performance in both MEN and MTurk. [table-NEU], [EMP-NEU] | table (NEU) | EMP (NEU) |
| 9,273 | Frankly this is not very surprising; matrix factorization is very powerful, and these simple word similarity tasks are well-suited for matrix factorization. [null], [EMP-NEG] | (none) | EMP (NEG) |
| 9,274 | So, statements like "as we can see, our embeddings very clearly outperform the random embedding at this task" are an unnecessary inflation of a result that 1) is not good [statements-NEG, result-NEG], [EMP-NEG] | statements (NEG), result (NEG) | EMP (NEG) |
| 9,275 | and 2) is reasonable to not be good. [null], [EMP-NEG] | (none) | EMP (NEG) |
| 9,276 | Overall, I think for a more sincere evaluation, the authors need to better pick tasks that clearly exploit 3-way information and compare against other methods proposed to do the same. [evaluation-NEU], [EMP-NEG] | evaluation (NEU) | EMP (NEG) |
| 9,277 | The multiplicative relation analysis is interesting, [analysis-POS], [EMP-POS] | analysis (POS) | EMP (POS) |
| 9,278 | but at this point it is not clear to me why multiplicative is better than additive in either performance or in giving meaningful interpretations of the model. [performance-NEU, model-NEU], [EMP-NEG] | performance (NEU), model (NEU) | EMP (NEG) |
| 9,279 | In conclusion, because the novelty is also not that big (CP decomposition for word embeddings is a very natural idea), I believe the evaluation and analysis must be significantly strengthened for acceptance. [novelty-NEG], [NOV-NEG, IMP-NEG, REC-NEG] | novelty (NEG) | NOV (NEG), IMP (NEG), REC (NEG) |
| 9,281 | Summary: The authors take two pages to describe the data they eventually analyze - Chinese license plates (sections 1, 2) - with the aim of predicting auction price based on the luckiness of the license plate number. [null], [EMP-NEU] | (none) | EMP (NEU) |
| 9,282 | The authors mention other papers that use NNs to predict prices, contrasting them with the proposed model by saying they are usually shallow, not deep, and only focus on numerical data, not strings. [papers-NEU, proposed model-NEU], [CMP-NEU] | papers (NEU), proposed model (NEU) | CMP (NEU) |
| 9,288 | In section 7, the RNN is combined with a handcrafted feature model the author criticized in an earlier section for being too simple, to create an ensemble model that predicts the prices marginally better. [section-NEU], [CMP-NEU] | section (NEU) | CMP (NEU) |
| 9,290 | Sec 3 The author does not mention the following reference: "Deep learning for stock prediction using numerical and textual information" by Akita et al., which does incorporate non-numerical info to predict stock prices with deep networks. [Sec-NEG], [PNF-NEG] | Sec (NEG) | PNF (NEG) |
| 9,291 | Sec 4 What are the characters embedded with? This is important to specify. [Sec-NEU], [EMP-NEU] | Sec (NEU) | EMP (NEU) |
| 9,292 | Is it Word2vec or something else? [null], [EMP-NEU] | (none) | EMP (NEU) |
| 9,293 | What does the lookup table consist of? [table-NEU], [EMP-NEU] | table (NEU) | EMP (NEU) |
| 9,294 | References should be added to the relevant methods. [References-NEU], [EMP-NEU] | References (NEU) | EMP (NEU) |
| 9,295 | Sec 5 I feel like there are many regression models that could have been tried here with word2vec embeddings that would have been an interesting comparison. [Sec-NEU], [SUB-NEU, CMP-NEU] | Sec (NEU) | SUB (NEU), CMP (NEU) |
| 9,296 | LSTMs as well could have been a point of comparison. [null], [EMP-NEU] | (none) | EMP (NEU) |
| 9,297 | Sec 6 Nothing too insightful is said about the RNN Model. [Sec-NEG], [SUB-NEG] | Sec (NEG) | SUB (NEG) |
| 9,298 | Sec 7 The ensembling was a strange extension, especially with the Woo model, given that the other MLP architecture gave way better results in their table. [Sec-NEG, results-NEG], [CMP-NEG] | Sec (NEG), results (NEG) | CMP (NEG) |
| 9,299 | Overall: This is a unique NLP problem, and it seems to make a lot of sense to apply an RNN here, considering that word2vec is an RNN. [problem-NEU], [EMP-NEU] | problem (NEU) | EMP (NEU) |
| 9,300 | However comparisons are lacking and the paper is not presented very scientifically. [comparisons-NEG, paper-NEG], [SUB-NEG, CMP-NEG, PNF-NEG] | comparisons (NEG), paper (NEG) | SUB (NEG), CMP (NEG), PNF (NEG) |
| 9,301 | The lack of comparisons made it feel like the author cherry-picked the RNN to outperform other approaches that obviously would not do well. [comparisons-NEG, approaches-NEG], [SUB-NEG, CMP-NEG] | comparisons (NEG), approaches (NEG) | SUB (NEG), CMP (NEG) |
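As the samples show, each sentence also carries its gold labels inline as a trailing bracketed suffix, e.g. "[answers-POS], [IMP-NEU]": the first block lists aspect terms with polarities, the second lists aspect categories, and "null" marks an empty block. If only the raw sentence strings are available, a small regex helper like the following (my own sketch, not part of the dataset's tooling, and assuming the only bracketed spans in a sentence are these label blocks) recovers the same pairs:

```python
import re

# Matches each bracketed block in the trailing "[a-X, b-Y], [C-Z]" annotation.
BLOCK = re.compile(r"\[([^\[\]]+)\]")

def parse_inline_labels(sentence: str) -> list[list[tuple[str, str]]]:
    """Split a sentence's bracketed suffix into lists of (label, polarity) pairs.

    Returns one list per bracket block: terms first, then categories.
    The literal string 'null' marks an empty block.
    """
    parsed = []
    for block in BLOCK.findall(sentence):
        pairs = []
        for item in block.split(","):
            item = item.strip()
            if item == "null":
                continue
            # rpartition keeps multi-word labels like "proposed model" intact.
            label, _, polarity = item.rpartition("-")
            pairs.append((label, polarity))
        parsed.append(pairs)
    return parsed

# Example from row 9,264 above:
s = "Is it just that JCP-S also incorporates 2nd order embeddings? [null], [EMP-NEU]"
print(parse_inline_labels(s))  # [[], [('EMP', 'NEU')]]
```

Splitting on the last hyphen via `rpartition` matters because aspect terms themselves may contain spaces or hyphens, while the polarity code (POS, NEU, NEG) is always the final hyphen-separated token.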