paper_id (stringlengths 10-19) | venue (stringclasses 15 values) | focused_review (stringlengths 7-10.2k) | point (stringlengths 45-643) |
---|---|---|---|
NIPS_2020_350 | NIPS_2020 | -The equations (3) and (4) are, however, very similar to [3] and [A, B] in the way that they force the minor-class examples to have larger decision values (i.e., \exp \eta_j) in training. The proposed softmax seems particularly similar to eq. (11) in [B]. The authors should have cited these papers and provided further discussion and comparison. This point limits the novelty/significance of the paper. [A] Han-Jia Ye et al., Identifying and Compensating for Feature Deviation in Imbalanced Deep Learning, arXiv 2020. [B] S.H. Khan et al., Cost-Sensitive Learning of Deep Feature Representations from Imbalanced Data, IEEE Transactions on Neural Networks and Learning Systems, 2017. -The proposed meta sampler has a similar idea to [12,24,27], but the authors didn't differentiate the proposed method from theirs. It is hard for me to judge the novelty of the proposed meta sampler. (It is indeed quite similar to [24].) These methods are also not compared in the experiments. From Table 5, I don't quite see the benefit of the meta sampler over the meta reweighter. -The loss function derived from the theoretical analysis does not seem to directly imply the proposed softmax. In eq. (9), the "margin" term \gamma^\star_j is a class-dependent constant, and adding such terms into the overall loss won't affect the learning of the network parameters. Nevertheless, in equation (10), such a "margin" term does affect the network training. -I don't quite follow why the authors want to bring in the class-balanced sampler or meta sampler. The authors argued that re-sampling techniques can be harmful to model training (see Lines 23-26, 50-56), but ultimately still apply them. I would suggest that the authors provide more discussion about why it is needed in extremely imbalanced cases. Moreover, the description of the meta sampler is a bit hard to follow: 1) Is the sample distribution updated in the inner loop or the outer? From Lines 171-175, it seems that the outer loop updates the sample distribution. 2) Do the authors only apply the meta sampler in a decoupled way? That is, to update the linear classifier when the features are fixed? If so, please provide more discussion on this and when (at which epoch) the authors start applying the meta sampler. 3) The addition of the meta sampler makes the contributions of the paper a bit vague: please include the balanced softmax results both w/ and w/o the meta sampler in the experimental results. If the meta sampler is used in a decoupled way, then when to start the meta sampler introduces another hyper-parameter. Also, the authors mentioned that on LVIS they used the meta reweighter rather than the meta sampler, which is confusing. -Last but not least, in the experiments the authors' baseline softmax results (in Table 2) are much higher than those reported in other papers. The baseline results are even better than or on par with existing approaches. I thus wonder whether the superb performance reported by the authors is partly due to a better baseline method. | 2) Do the authors only apply the meta sampler in a decoupled way? That is, to update the linear classifier when the features are fixed? If so, please provide more discussion on this and when (at which epoch) the authors start applying the meta sampler. |
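For background on the class-prior-adjusted softmax family this review refers to, below is a generic, hedged sketch of the common "balanced softmax" idea (shift each logit by the log class frequency, so minority classes need larger raw decision values); it is not a reproduction of the reviewed paper's equations (3)-(4) or of eq. (11) in [B], and the function and variable names are illustrative only.

```python
import numpy as np

def prior_adjusted_softmax_loss(logits, label, class_counts):
    """Generic class-prior-adjusted cross-entropy: logits are shifted by the log
    class frequency, so a rare class must receive a larger raw logit to obtain
    high probability (the 'larger decision values for minor classes' behaviour)."""
    shifted = logits + np.log(np.asarray(class_counts, dtype=float))
    shifted = shifted - shifted.max()                 # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

# Toy usage on a 3-class problem with a heavily imbalanced label distribution.
loss = prior_adjusted_softmax_loss(np.array([2.0, 1.0, 1.5]), label=2,
                                   class_counts=[900, 90, 10])
```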
YLJs4mKJCF | ICLR_2024 | - The authors use their own defined vanilla metric, and lack related fairness-aware metrics like Equalized Odds (EO).
- The authors are encouraged to conduct more experiments on more datasets like COMPAS and Drug Consumption; please kindly follow this AAAI paper which the authors have cited: Exacerbating Algorithmic Bias through Fairness Attacks.
- Personally, I reckon the authors are encouraged to conduct experiments on deeper NNs (I think a simple MLP is not deep enough to be called a "DNN"), though the datasets are relatively simple. I'm curious about these experiments to investigate ENG. The authors are encouraged to conduct more analysis in a future version of this work, which would be good for the community :) | - The authors use their own defined vanilla metric, and lack related fairness-aware metrics like Equalized Odds (EO). - The authors are encouraged to conduct more experiments on more datasets like COMPAS and Drug Consumption; please kindly follow this AAAI paper which the authors have cited: Exacerbating Algorithmic Bias through Fairness Attacks. |
NIPS_2018_260 | NIPS_2018 | 1. The parameterizations considered of the value functions at the end of the day belong to discrete time, due to the need to discretize the SDEs and sample the state-action-reward triples. Given this discrete implementation, and the fact that experimentally the authors run into the conventional difficulties of discrete-time algorithms with continuous state-action function approximation, I am a little bewildered as to what the actual benefit is of this problem formulation, especially since it requires a redefinition of the value function as one that is compatible with SDEs (eqn. (4)). That is, the intrinsic theoretical benefits of this perspective are not clear, especially since the main theorem is expressed in terms of RKHS only. 2. In the experiments, the authors mention kernel adaptive filters (aka kernel LMS) or Gaussian processes as potential avenues of pursuit for addressing the function estimation in continuous domains. However, these methods are fundamentally limited by their sample complexity bottleneck, i.e., the quadratic complexity in the sample size. There's some experimental reference to forgetting factors, but this issue can be addressed in a rigorous manner that preserves convergence while breaking the bottleneck, see, e.g., A. Koppel, G. Warnell, E. Stump, and A. Ribeiro, "Parsimonious online learning with kernels via sparse projections in function space," arXiv preprint arXiv:1612.04111, 2016. Simply applying these methods without consideration for the fact that the sample size conceptually is approaching infinity makes an update of the form (16) inapplicable to RL in general. Evaluating the Bellman operator requires computing an expected value. 3. Moreover, the limited complexity of the numerical evaluation is reflective of this complexity bottleneck, in my opinion. There are far more effective RKHS value function estimation methods than GPTD in terms of value function estimation quality and memory efficiency: A. Koppel, G. Warnell, E. Stump, P. Stone, and A. Ribeiro, "Policy Evaluation in Continuous MDPs with Efficient Kernelized Gradient Temporal Difference," in IEEE Trans. Automatic Control (submitted), Dec. 2017. It's strange that the authors only compare against a mediocre benchmark rather than the state of the art. 4. The discussion at the beginning of section 3 doesn't make sense or is written in a somewhat self-contradictory manner. The authors should take greater care to explain the difference between value function estimation challenges due to unobservability, and value function estimation problems that come up directly from trying to solve Bellman's evaluation equation. I'm not sure what is meant in this discussion. 5. Also, regarding L87-88: value function estimation is NOT akin to supervised learning unless one does Monte Carlo rollouts to do empirical approximations of one of the expectations, due to the double sampling problem, as discussed in R. S. Sutton, H. R. Maei, and C. Szepesvari, "A convergent O(n) temporal-difference algorithm for off-policy learning with linear function approximation," in Advances in Neural Information Processing Systems, 2009, pp. 1609-1616, and analyzed in great detail in: V. R. Konda and J. N. Tsitsiklis, "Convergence rate of linear two-timescale stochastic approximation," Annals of Applied Probability, pp. 796-819, 2004. 6. The Algorithm 1 pseudo-code is strangely broad so as to be hand-waving.
There are no specifics of a method that could actually be implemented, or even computed in the abstract. Algorithm 1 could just as well say "train a deep network" in the inner loop of an algorithm, which is unacceptable, and not how pseudo-code works. Specifically, one can't simply "choose at random" an RKHS function estimation algorithm and plug it in and assume it works, since the lion's share of methods for doing so either require infinite memory in the limit or employ memory reduction that causes divergence. 7. L107-114 seems speculative or overly opinionated. This should be stated as a remark, or an aside in a Discussion section, or removed. 8. A general comment: there are no transitions between sections, which is not good for readability. 9. Again, the experiments are overly limited so as to not be convincing. GPTD is a very simplistic algorithm which is not even guaranteed to preserve posterior consistency, aka it is a divergent Bayesian method. Therefore, it seems like a straw man comparison. And this comparison is conducted on a synthetic example, whereas most RL works at least consider a rudimentary OpenAI problem such as Mountain Car, if not a real robotics, power systems, or financial application. | 7. L107-114 seems speculative or overly opinionated. This should be stated as a remark, or an aside in a Discussion section, or removed. |
g3VOQpuqlF | EMNLP_2023 | * The result that randomly concatenating passages from an open-domain corpus gives better performance than natural long form text and semantically linked passages is counter-intuitive. I would like to understand if this conclusion is an artifact of the datasets for long form training or the tasks considered. I would like to see some qualitative analysis/human evaluations done on a small subset of the generations between the different models to make sure that the conclusion is valid.
* It would have been nice to consider baselines such as RoPE and ALiBi relative positional embeddings to verify the performance improvement obtained by making the changes suggested in the paper. | * It would have been nice to consider baselines such as RoPE and ALiBi relative positional embeddings to verify the performance improvement obtained by making the changes suggested in the paper. |
NIPS_2020_491 | NIPS_2020 | - The main weakness of the work relates to the computational complexity of 1) computing the local subgraphs (are shortest paths computed ahead of the training process?), 2) evaluating each node's label individually. Can the authors comment on the impact on training/evaluation time? - Another important missing element from the paper is the value of the neighborhood size h, as well as an analysis of its influence over the model's performance. This is the key parameter of the proposed strategy, and providing readers with intuitive knowledge of the value of h to use, and of the robustness of the method with respect to larger or smaller neighborhoods, is essential. Similarly, different hyperparameter sets are used per dataset, which is not ideal. Can the authors provide insights into how performance varies with a constant set of parameters? - Certain aspects of the training set-up need clarifying, mainly the task generation process (what constitutes a task, can one task contain multiple graphs, are local substructures randomly sampled regardless of the original graph, are all nodes labelled in the training set, etc.). - Certain sections are too condensed and would be much clearer and more informative if expanded (e.g. related work, training setup, testing setup, baseline methods). In the interest of space, Table 1 could be moved to the supplementary material. | - Another important missing element from the paper is the value of the neighborhood size h, as well as an analysis of its influence over the model's performance. This is the key parameter of the proposed strategy, and providing readers with intuitive knowledge of the value of h to use, and of the robustness of the method with respect to larger or smaller neighborhoods, is essential. Similarly, different hyperparameter sets are used per dataset, which is not ideal. Can the authors provide insights into how performance varies with a constant set of parameters? |
NIPS_2019_1089 | NIPS_2019 | - The paper can be seen as an incremental improvement on previous work that has used simple tensor products to represent multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - The paper performs good empirical analysis. They have been thorough in comparing with some of the existing state-of-the-art models for multimodal fusion, including those from 2018 and 2019. Their model shows consistent improvements across 2 multimodal datasets. - The authors provide a nice study of the effect of polynomial tensor order on prediction performance and show that accuracy increases up to a point. Weaknesses: - There are a few baselines that could also be worth comparing to, such as "Strong and Simple Baselines for Multimodal Utterance Embeddings", NAACL 2019. - Since the model has connections to convolutional arithmetic units, ConvACs can also be a baseline for comparison. Given that you mention "resulting in a correspondence of our HPFN to an even deeper ConAC", it would be interesting to see a comparison table of depth with respect to performance. What depth is needed to learn "flexible and higher-order local and global intercorrelations"? - With respect to Figure 5, why do you think accuracy starts to drop after a certain order of around 4-5? Is it due to overfitting? - Do you think it is possible to dynamically determine the optimal order for fusion? It seems that the order corresponding to the best performance is different for different datasets and metrics, without a clear pattern or explanation. - The model does seem to perform well, but there seem to be many more parameters in the model, especially as the model consists of more layers. Could you comment on these tradeoffs, including time and space complexity? - What are the impacts on the model when multimodal data is imperfect, such as when certain modalities are missing? Since the model builds higher-order interactions, does missing data at the input level lead to compounding effects that further affect the polynomial tensors being constructed, or is the model able to leverage additional modalities to help infer the missing ones? - How can the model be modified to remain useful when there are noisy or missing modalities? - Some more qualitative evaluation would be nice. Where does the improvement in performance come from? What exactly does the model pick up on? Are informative features compounded and highlighted across modalities? Are features being emphasized within a modality (i.e. better unimodal representations), or are better features being learned across modalities? ****************************Clarity**************************** Strengths: - The paper is well written with very informative figures, especially Figures 1 and 2. - The paper gives a good introduction to tensors for those who are unfamiliar with the literature. Weaknesses: - The concept of local interactions is not as clear as the rest of the paper. Is it local in that it refers to the interactions within a time window, or is it local in that it is within the same modality? - It is unclear whether the improved results in Table 1 with respect to existing methods are due to higher-order interactions or due to more parameters. A column indicating the number of parameters for each model would be useful.
- More experimental details, such as the neural networks and hyperparameters used, should be included in the appendix. - Results should be averaged over multiple runs to determine statistical significance. - There are a few typos and stylistic issues: 1. line 2: "Despite of being compact" -> "Despite being compact" 2. line 56: "We refer multiway arrays" -> "We refer to multiway arrays" 3. line 158: "HPFN to a even deeper ConAC" -> "HPFN to an even deeper ConAC" 4. line 265: "Effect of the modelling mixed temporal-modality features." -> I'm not sure what this means; it's not grammatically correct. 5. equations (4) and (5) should use \left( and \right) for parentheses. 6. and so on... ****************************Significance**************************** Strengths: - This paper will likely be a nice addition to the current models we have for processing multimodal data, especially since the results are quite promising. Weaknesses: - Not really a weakness, but there is a paper at ACL 2019 on "Learning Representations from Imperfect Time Series Data via Tensor Rank Regularization" which uses low-rank tensor representations as a method to regularize against noisy or imperfect multimodal time-series data. Could your method be combined with their regularization methods to ensure more robust multimodal predictions in the presence of noisy or imperfect multimodal data? - The paper in its current form presents a specific model for learning multimodal representations. To make it more significant, the polynomial pooling layer could be added to existing models, with experiments showing consistent improvement over different model architectures. To be more concrete, the yellow, red, and green multimodal data in Figure 2a) can be raw time-series inputs, or they can be the outputs of recurrent units, transformer units, etc. Demonstrating that this layer can improve performance on top of different layers would make this work more significant for the research community. ****************************Post Rebuttal**************************** I appreciate the effort the authors have put into the rebuttal. Since I already liked the paper and the results are quite good, I am maintaining my score. I am not willing to give a higher score since the tasks are rather straightforward with well-studied baselines and tensor methods have already been used to some extent in multimodal learning, so this method is an improvement on top of existing ones. | - What are the impacts on the model when multimodal data is imperfect, such as when certain modalities are missing? Since the model builds higher-order interactions, does missing data at the input level lead to compounding effects that further affect the polynomial tensors being constructed, or is the model able to leverage additional modalities to help infer the missing ones? |
ACL_2017_33_review | ACL_2017 | A similar idea has also been used in (Teng et al., 2016). Though this work is more elegant in the framework design and mathematical representation, the experimental comparison with (Teng et al., 2016) is not as convincing as the comparisons with the other methods. The authors only reported the re-implementation results on the sentence-level experiment of SST and did not report their own phrase-level results.
Some details are not well explained, see discussions below.
- General Discussion: The reviewer has the following questions/suggestions about this work, 1. Since the SST dataset has phrase-level annotations, it is better to show the statistics of the times that negation or intensity words actually take effect.
For example, how many times the word "nothing" appears and how many times it changes the polarity of the context.
2. In section 4.5, the bi-LSTM is used for the regularizers. Is bi-LSTM used to predict the sentiment label?
3. The authors claimed that "we only use the sentence-level annotation since one of our goals is to avoid expensive phrase-level annotation". However, the reviewer still suggests adding the results. Please report them in the rebuttal phase if possible.
4. " s_c is a parameter to be optimized but could also be set fixed with prior knowledge." The reviewer didn't find the specific definition of s_c in the experiment section, is it learned or set fixed? What is the learned or fixed value?
5. In sections 5.4 and 5.5, it is suggested to conduct an additional experiment with part of the SST dataset where only phrases with negation/intensity words are included. Reporting the results on this sub-dataset with and without the corresponding regularizer would be more convincing. | -General Discussion: The reviewer has the following questions/suggestions about this work, 1. Since the SST dataset has phrase-level annotations, it is better to show the statistics of the times that negation or intensity words actually take effect. For example, how many times the word "nothing" appears and how many times it changes the polarity of the context. |
KmphHE92wU | ICLR_2025 | 1. The novelty of the method is somewhat limited, especially the direct application of existing invariant point cloud networks over the eigenvectors without any transfer challenge.
2. The paper doesn't provide any detail about the concrete instantiations of $\rho$, $\phi$, and $\psi$ (in Equations 6 and 7) implemented in the experiments, and doesn't discuss the Lipschitz continuity of these practical models.
3. There is no detail about how OGE-Aug deals with the sign ambiguity of eigenvectors.
4. The authors omit some essential experiments.
- The authors don't verify the stability of OGE-Aug on OOD benchmarks such as DrugOOD [1], on which SPE [2] is validated.
- There is no ablation study on the effectiveness of the proposed “soft splitting” strategy. I think it’ll be better to conduct experiments on Vanilla OGE-Aug as a control group.
- There is no experiment studying the sensitivity of hyperparameters. For example, the authors don’t explore how different shapes of ρ influence the effectiveness and stability of the positional encodings.
- The paper doesn’t report the running time of different positional encoding methods.
5. As mentioned in Line 346, insisting on the universality of the invariant representation function f may hurt the stability of the positional encoder; however, there is no quantitative analysis or experimental guidance about how to trade off universality and stability.
[1] Ji, Yuanfeng, et al. "Drugood: Out-of-distribution dataset curator and benchmark for ai-aided drug discovery–a focus on affinity prediction problems with noise annotations." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 7. 2023.
[2] Huang, Yinan, et al. "On the stability of expressive positional encodings for graph neural networks." arXiv preprint arXiv:2310.02579 (2023). | - Authors don’t verify the stability of the OGE-Aug on OOD benchmarks such as DrugOOD [1], where SPE [2] is validated on this dataset. |
SzWvRzyk6h | ICLR_2025 | * Phrasing like "the relationship between traditional style (LIWC-style) and sensorial style" may not be accurate. The two are not totally independent, and LIWC-style includes categories that can capture aspects of sensory style.
* Apart from applying SVD to the BERT embedding, have the authors considered freezing some layers of the model while only training a few layers? Or other parameter-efficient methods such as LoRA? These methods are natural to think about and could provide a valuable basis for experimental comparison.
* This paper lacks recognition of other works that would provide a better academic background for the study. There has been some relevant work on using sensorial style together with other dimensions of linguistic style for text analysis (https://doi.org/10.1080/09296174.2017.1405719, https://dl.acm.org/doi/pdf/10.1145/1979742.1979614), and there are also other recent works on improving LLMs' understanding of diverse linguistic styles using lexicons (https://aclanthology.org/2024.acl-long.740/). It would be beneficial to cite and discuss these works to highlight the differences from them and to contextualize the findings.
* The structure of the sections, particularly Section 4, appears disorganized and rushed. It may be beneficial for the author to consider reorganizing them for better clarity.
* The analysis in Section 4 seems superficial. It mainly focuses on the grouping of LIWC features and how these groups manifest in different text genres. However, it does not show how these categories might affect the personal engagement with sensorial description. | * Apart from applying SVD to the BERT embedding, have the authors considered freezing some layers of the model while only training a few layers? Or other parameter-efficient methods such as LoRA? These methods are natural to think about and could provide a valuable basis for experimental comparison. |
ICLR_2021_512 | ICLR_2021 | - Important pieces of prior work are missing from the related work section. The paper seems to be strongly related to Tensor Field Networks (TFN) (Thomas et al. 2018), as both define Euclidean and permutation equivariant convolutions on point clouds / graphs. Furthermore, there are several other methods that operate on graphs that are embedded in a Euclidean space, such as SchNet (Schütt et al. 2017). The graph network methods currently discussed all do not include the point coordinates in their operations. Lastly, the proposed method operates globally linearly on features on a graph, equivariantly to permutations, which is done in prior work, e.g. Maron 2018. - The experimental section only compares to methods that in their convolution are unaware of the point coordinates (except for in the input features). A comparison to coordinate-aware methods, such as TFN or SchNet, seems appropriate. - The core object, the isometric adjacency matrix G, is ill-defined. In Eq 1 it is defined through the embedding coordinates and "the transformation invariant rank-2 tensor" T. This object is not defined in the paper, which makes section 3 very confusing to read. In section 3, it appears that the defined objects D take the role of the object G in the above, so what is the role of eq 1? - In section 3, the authors speak of "collections of rank-p tensors". However, these objects seem to actually be tensors of the shape N^a x d^p, where N is the number of nodes, d is the dimensionality of the embedding, and a and p are natural numbers. These objects transform under both permutations and Euclidean transformations in the obvious way. Why not make this fact explicit? That would make section 3 much easier to read. It seems that when p=0, then a=1, and when p>0, then a=2. Except for in sec 3.2.2, in which a p=3 tensor has a=1. - In Sec 3.2, what are f_in and f_out? Are these the dimensionalities of the tensor product representation? Or do they denote the number of copies of the representation? If it's the former, I don't see how the network is equivariant. If it's the latter, I don't understand the last paragraph of 3.2.2, which says 1H \in R^{N x f_in}, which looks like a 0-tensor. - Can the authors clarify "To achieve translation equivariance, a constant tensor can be added to the output collection of tensors."? The proposed method seems to only lead to translation invariant features. I do not follow how adding a constant tensor leads to translation equivariance that is not invariance. - Am I correct in understanding that the method scales cubically with the number of vertices (e.g. eqs 4, 6)? Or is there some sparsity used in the implementation, but not mentioned? Should we expect a method of cubic complexity to scale to 1M vertices? In a naïve implementation, a fast modern GPU with 14.2E12 flops would need 20h for a single 1Mx1M matrix-matrix multiplication (1E18 floating point operations). - The authors claim the method scales to 1M vertices, but I cannot find this in the experiments. Table 4 speaks of 155k vertices. How did the authors determine that the method scales to 1M vertices?
Recommendation: In its current form, I recommend rejection of this paper. Section 3 is insufficiently clearly written, the related work lacks important references to prior work, and the experiments lack a comparison to potentially strong other methods. This is a shame, because I'd like to see this paper succeed, as the core idea is very strong. Significant improvements on the above criticisms could improve my score.
Suggestions for improvement: - Be clear about what the G object is and what eq 1 means. - Be explicit about the types of the objects: be more explicit about the indices that refer to the permutation representation, the indices that refer to the Euclidean representation, and the indices that refer to copies of the same representation. I think there is an opportunity to be more clear and more explicit, while reducing notational clutter. - Expand the related work section - Compare to the strong baselines that use the coordinates. - Provide argumentation for the claim of scaling to 1M vertices.
Minor points: - Eq 7, \times should be \otimes? - Eq 14, what is j? - The authors write: “A, B and C are X, Y and Z respectively”. Perhaps this could be re-written to the easier to read “A=X, B=Y and C=Z”. This happens each time the word “respectively” is used. - Table 3 typo, gluster -> cluster
Post rebuttal
The authors addressed all my concerns and strongly improved their paper. I think it is now a good candidate for acceptance, as it provides an interesting alternative to / variation on tensor field networks. I raise my rating from 4 to 7. | - Expand the related work section - Compare to the strong baselines that use the coordinates. |
WYsLU5TEEo | ICLR_2024 | - **Limited to Binary Tasks**: A major limitation of the paper is that it only addresses binary classification tasks. It would be interesting to expand its applicability to multiclass problems to demonstrate broader utility, as mentioned in the discussion section.
- **Single Seed Experiments**: The experiments in the paper are limited to training on a single seed, making it difficult to assess the significance of performance differences and the true impact of the proposed cycle consistency loss on convergence. Multiple seed experiments would provide a more robust evaluation.
- **Experiment Clarity**: The presentation of experiments can be confusing and should be more detailed. For instance, the "Hybrid D" model is never introduced in the paper. The explanation of the computation of performance when using D is also presented *after* showing results. The description of Table 2 is also unclear, making it challenging for readers to understand the methodology and the comparison.
- **Misleading Introduction**: The paper introduces the approach as "combining classifier and discriminator in a single model" (in the abstract), which is incorrect since the generator and discriminator are fundamentally different.
- **Lack of Comparative Analysis**: The paper lacks a comparison with other counterfactual approaches, which could provide insights into the quality of the counterfactuals produced and help position the proposed method within the broader context of counterfactual research. | - **Single Seed Experiments**: The experiments in the paper are limited to training on a single seed, making it difficult to assess the significance of performance differences and the true impact of the proposed cycle consistency loss on convergence. Multiple seed experiments would provide a more robust evaluation. |
UK7Hs7f0So | ICLR_2024 | 1. The current version of the paper solely presents the average value obtained from five trials without including information about the standard deviation. It is highly recommended to include error bars.
2. Why use the VMF distribution and the truncated normal distribution to characterize the angle and magnitude of the target vector? The motivation behind this is unclear to me.
3. Metrics used to evaluate uncertainty are not sufficiently convincing; a more commonly used metric, CRPS [1], was not used in the experiments.
4. Some probabilistic time series baselines are not compared with the proposed method in the experiments, such as TransMAF [2] and [3]. References:
[1] Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007.
[2] Binh Tang and David S Matteson. Probabilistic transformer for time series analysis. Advances in Neural Information Processing Systems, 34:23592–23608, 2021.
[3] Kashif Rasul, Abdul-Saboor Sheikh, Ingmar Schuster, Urs Bergmann, and Roland Vollgraf. Multivariate probabilistic time series forecasting via conditioned normalizing flows. arXiv preprint arXiv:2002.06103, 2020. | 2. Why use the VMF distribution and the truncated normal distribution to characterize the angle and magnitude of the target vector? The motivation behind this is unclear to me. |
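For reference, the CRPS mentioned in point 3 scores a predictive CDF F against an observed value y; lower is better, and for a point forecast it reduces to the absolute error (definition as in Gneiting & Raftery [1]):

```latex
\mathrm{CRPS}(F, y) = \int_{-\infty}^{\infty} \bigl( F(x) - \mathbf{1}\{x \ge y\} \bigr)^{2} \, dx
```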
NIPS_2022_836 | NIPS_2022 | 1. The application of this method seems to be in a very limited field, namely differentiable simulation of optical encoders. 2. The authors could have shown results on 1-2 more datasets. 3. UNets have been around for a while. Are they indeed the best baseline method to compare the presented method against? 4. There is no theory or citation to back the claim made in line 136. 5. An entire multi-GPU setup is required for the optimizations in the proposed method, which makes it not very accessible for many potential users. | 5. An entire multi-GPU setup is required for the optimizations in the proposed method, which makes it not very accessible for many potential users. |
ACL_2017_239_review | ACL_2017 | The overall result is not very useful for ML practitioners in this field, because it merely confirms what has been known or suspected, i.e. it depends on the task at hand, the labeled data set size, the type of the model, etc. So, the result in this paper is not very actionable. The reviewer noted that this comprehensive analysis deepens the understanding of this topic.
- General Discussion: The paper's presentation can be improved. Specifically: 1) The order of the figures/tables in the paper should match the order in which they are mentioned in the paper. Right now their order seems quite random.
2) Several typos (L250, 579, etc). Please use a spell checker.
3) Equation 1 is not very useful, and its exposition looks strange. It can be removed, leaving just the text explanations.
4) L164 mentions the "Appendix", but it is not available in the paper.
5) Missing citation for the public skip-gram data set in L425.
6) The claim in L591-593 is too strong. It must be explained more clearly, i.e. when it is useful and when it is not.
7) The observation in L642-645 is very interesting and important. It will be good to follow up on this and provide concrete evidence or example from some embedding. Some visualization may help too.
8) In L672, the authors should provide examples of such "specialized word embeddings" and how they are different from the general-purpose embeddings.
9) Figure 3 is too small to read. | 5) Missing citation for the public skip-gram data set in L425. |
ACL_2017_699_review | ACL_2017 | 1. Some discussion is required on the convergence of the proposed joint learning process (for RNN and CopyRNN), so that readers can understand how the stable points in the probabilistic metric space are obtained. Otherwise, it may be tough to reproduce the results.
2. The evaluation process shows that the current system (which extracts both 1. present and 2. absent kinds of keyphrases) is evaluated against baselines (which contain only the "present" type of keyphrases). There is no direct comparison of the performance of the current system w.r.t. other state-of-the-art/benchmark systems on only the "present" type of keyphrases. It is important to note that local phrases (keyphrases) are also important for the document. The experiments do not discuss this explicitly. It will be interesting to see the impact of the RNN- and CopyRNN-based models on the automatic extraction of local or "present"-type keyphrases.
3. The impact of document size on keyphrase extraction is also an important point. It is found that the published results of [1] (see reference below) perform better (by a sufficiently large margin) than the current system on the Inspec (Hulth, 2003) abstracts dataset. 4. It is reported that the current system uses 527,830 documents for training, while 40,000 publications are held out for training baselines. Why are all publications not used in training the baselines? Additionally, the topical details of the dataset (527,830 scientific documents) used in training RNN and CopyRNN are also missing. This may affect the chances of reproducing the results.
5. As the current system captures semantics through RNN-based models, it would be better to compare it against systems that also capture semantics. Ref. [2] can be a strong baseline to compare the performance of the current system against.
Suggestions to improve: 1. As per the example given in Figure 1, it seems that all the "absent"-type keyphrases are actually topical phrases. For example: "video search", "video retrieval", "video indexing" and "relevance ranking", etc.
These all define the domain/sub-domain/topics of the document. So, in this case, it will be interesting to see the results (which would also be helpful in evaluating "absent"-type keyphrases) if we identify all the topical phrases of the entire corpus by using tf-idf and relate the document to the high-ranked extracted topical phrases (by using Normalized Google Distance, PMI, etc.), as similar efforts are already applied in several query expansion techniques (with the aim of relating the document to the query when matching terms are absent from the document).
References: 1. Liu, Zhiyuan, Peng Li, Yabin Zheng, and Maosong Sun. 2009b. Clustering to find exemplar terms for keyphrase extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 257–266.
2. Zhang, Q., Wang, Y., Gong, Y., & Huang, X. (2016). Keyphrase extraction using deep recurrent neural networks on Twitter. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (pp. 836-845). | 5. As the current system captures semantics through RNN-based models, it would be better to compare it against systems that also capture semantics. Ref. [2] can be a strong baseline to compare the performance of the current system against. Suggestions to improve: |
NIPS_2018_810 | NIPS_2018 | of the approach, the proposed message passing scheme, relying on the hierarchical representation, does not seem very principled. It would be nice to know more precisely what has inspired the authors to make these design choices. Perhaps as a consequence, in the qualitative results objects sometimes seem to rip apart or smoothly deform into independent pieces. Qualitatively, some results are impressive, while some do not conform to our expectations of what would happen; cf. my previous comment on unexpected deformations of objects, but also some motions: cubes moving slightly on the ground although they should be still, etc. More importantly, the applicability of the method is questionable, as it requires ground truth for the local and global deltas of each particle for training. Furthermore, even for inference, it would require the initial states of the particles. This would be a huge challenge in computer vision applications, and in any other applications relying on real videos, rather than synthetic videos. This greatly limits the applicability of the proposed representation, and, as a consequence, of the proposed architecture. Therefore I have doubts at this point that the proposed architecture has "the potential to form the basis of next generation physics predictors for use in computer vision, robotics, and quantitative cognitive science". To claim this, the authors should be able to show preliminary results of how their method could be used, even in very simple, but realistic settings. Nevertheless I believe that it is worthwhile sharing these results with the community, as well as the code for the method and the environment, as this is an important problem, and the authors attack it with an ambitious and novel (although slightly ad hoc) approach. To be fully confident that this paper should be accepted, I would like the authors to clarify the following points: - it is not clear how the quantitative results are obtained: what data exactly is used for training, validating and testing? - ternary input to the phi network, encoding the type of pairwise relationship -> is this effective? What happens otherwise, if this is not given, or if weights are not shared? Also wouldn't it make sense to have categorical input in the form of a one-hot vector instead? - the algorithm described for the creation of the graph G_H is inconsistent with the description of the remaining edges (l.122 to l.131). According to the algorithm, there should only be edges between siblings, from the root to the leaves and back, and from the root to all intermediate nodes. Maybe this is due to l.127: "each new node taking the place for its child leaves" should be "each new node taking the place of the root"? This would then also be more consistent with Algorithm 1. Please clarify. In Figure 3b) it is not clear what the red links represent, if the edges are supposed to be directed. - I don't understand the process for learning the r_ij. If it can be different for each pair of particles (or even for each source particle i), how can it be learned using the losses described in 4.3? We can obtain their values on the objects of the training set, but then for each new object, we need to gather synthetic videos of the object moving, in order to learn its properties? Wouldn't it be simpler to have fixed parameters? Furthermore, if the stiffness of the material is to be learned, why not learn the mass as well?
- in the algorithm described for the creation of the graph G_H, it is said that one should "add edges between cluster nodes if the clusters are connected by edges between leaves" - how is there a presence or an absence of an edge in the first place? Nitpicking: - are humans really able to predict subtle deformations of masses at different scales? Or can they simply perceive them and say whether they are plausible or not? (l.39-44) Typos: - par(i) should be par(p) l.128 - Figure 3: "is constraint" should be "is constrained" - l.152; l should be p_l? - l 155 repetition with the following: "Effects are summed together" -------------------------------- Final decision after rebuttal: The authors have addressed all of my concerns, except for one which I share with R3, that the experimental protocol for the quantitative results is not described. Overall, however, I think that the paper presents an ambitious and novel method for a very important problem, which they convincingly test in a challenging experimental setup. I agree with R3 that the qualitative evaluation should include the weaknesses in the main paper, and I think it is beneficial that the authors have agreed to do so. I remain convinced that it would be worth sharing these results with the community. I can only urge the authors to respect their commitment to share the code and environment, and to add the missing details about the experimental protocol, to ease further research in this area and comparison to their work. I stand by my initial decision that the paper should be accepted, as the approach is a very well-executed stepping stone for others to improve on in the extremely challenging setting proposed by the authors. | - it is not clear how the quantitative results are obtained: what data exactly is used for training, validating and testing? |
ICLR_2022_488 | ICLR_2022 | - The authors claim that a volume-preserving mixing function is a natural restriction and is easily satisfied. I would like to see a stronger argument for why this is true, as it seems easy to think of non-volume-preserving mixing functions. Such an argument should include why the triangle dataset and MNIST would be generated by volume-preserving mixing functions. - The experiments find that none of the assumptions in the main theorem are necessary for identifiability to hold. More discussion about what necessary conditions would look like would improve the paper. - The experiments are not very strong. They only quantitatively analyze the simple triangle dataset and do not include a non-trivial dataset with known sources for which identifiability can be quantified. Also, the experiments show large overlap with the experiments of [Sorrenson et al. 2020]. An additional experiment with more complicated images with known sources would improve the paper. - It is unclear why the model does not fully succeed in identifying the true sources in the triangle dataset. Is one of the assumptions not satisfied? Are there learning difficulties?
Further comments: - It seems that there is a constraint that q(z|u) must be equal to the push-forward of p(s|u) through g ∘ f, in other words, that Figure 1 can be interpreted as a commuting diagram. However, this is never explicitly stated in the definition of the estimating model. Could the authors please clarify this? - I believe the appendix should be separately provided as supplementary material.
Typos: - In (18) in the appendix, z_0 should be s_0? - Above (23) in the appendix, should positive-defined be positive definite? Or positive semi-definite? | - It is unclear why the model does not fully succeed in identifying the true sources in the triangle dataset. Is one of the assumptions not satisfied? Are there learning difficulties? Further comments: |
NIPS_2021_1360 | NIPS_2021 | and questions:
The paper lacks an introduction of the Laplacian matrix but directly uses it. A paper should be self-contained.
The motivation is not strong. The authors stated that "... the transformer architecture ... outperformed many SOTA models ... motivates us". This sounds like "A tool is powerful, then I try the tool on my task, and it works well! Then I publish a paper". However, it lacks analysis of why the Transformer works well on this task, which would bring more insights to the community.
Section 3.3 needs to be polished more in its writing.
1 In Eqn. (8), the e^{(t)} is proposed without further explanation. What is it? Why is it needed? What is the motivation for proposing it?
2 In Eqn. (8), are f_\theta(v_j) and \hat{y}_j^{(t)} the same thing, since they both are the predictions of v_j by the predictor?
3 In lines 177-178, if the y_j^{true} is "user-unknown", then how do you compute e^{(t)} in Eqn. (8)?
4 Why does this SE framework help to improve performance, and how does it help? Similar to 2, please DO NOT just show me what you have done and achieved, but also show me why and how you manage to do so.
I would consider increasing the rating based on the authors' response.
Reference: [1] Luo, et al. "Neural architecture search with GBDT." arXiv preprint arXiv:2007.04785 (2020). https://arxiv.org/abs/2007.04785 | 4 Why does this SE framework help to improve performance, and how does it help? Similar to 2, please DO NOT just show me what you have done and achieved, but also show me why and how you manage to do so. I would consider increasing the rating based on the authors' response. Reference: [1] Luo, et al. "Neural architecture search with GBDT." arXiv preprint arXiv:2007.04785 (2020). https://arxiv.org/abs/2007.04785 |
NIPS_2021_815 | NIPS_2021 | - In my opinion, the paper is a bit hard to follow. Although this is expected when discussing more involved concepts, I think it would be beneficial for the exposition of the manuscript, and in order to reach a larger audience, to try to make it more didactic. Some suggestions: - A visualization showing a counting of homomorphisms vs subgraph isomorphism counting. - It might be a good idea to include a formal or intuitive definition of the treewidth since it is central to all the proofs in the paper. - The authors define rooted patterns (in a similar way to the orbit counting in GSN), but do not elaborate on why it is important for the patterns to be rooted, nor how they choose the roots. A brief discussion is expected, or if non-rooted patterns are sufficient, it might be better for the sake of exposition to discuss this case only in the supplementary material. - The authors do not adequately discuss the computational complexity of counting homomorphisms. They make brief statements (e.g., L145 "Better still, homomorphism counts of small graph patterns can be efficiently computed even on large datasets"), but I think it will be beneficial for the paper to explicitly add the upper bounds of counting and potentially elaborate on empirical runtimes. - Comparison with GSN: The authors mention in section 2 that F-MPNNs are a unifying framework that includes GSNs. From my perspective, given that GSN is a quite similar framework to this work, this is an important claim that should be more formally stated. In particular, as shown by Curticapean et al., 2017, in order to obtain isomorphism counts of a pattern P, one needs not only to compute P-homomorphisms, but also those of the graphs that arise when doing "non-edge contractions" (the spasm of P). Hence a spasm(P)-MPNN would require one extra layer to simulate a P-GSN. I think formally stating this will give the interested reader intuition on the expressive power of GSNs, albeit not an exact characterisation (we can only say that P-GSN is at most as powerful as a spasm(P)-MPNN, but we cannot exactly characterise it; is that correct?) - Also, since the concept of homomorphisms is not entirely new in graph ML, a more elaborate comparison with the paper by NT and Maehara, "Graph Homomorphism Convolution", ICML'20 would be beneficial. This paper can be perceived as the kernel analogue to F-MPNNs. Moreover, in this paper, a universality result is provided, which might turn out to be beneficial for the authors as well.
Additional comments:
I think that something is missing from Proposition 3. In particular, if I understood correctly, the proof is based on the fact that we can always construct a counterexample such that F-MPNNs will not be as strong as 2-WL (which, by the way, is a stronger claim). However, if the graphs are of bounded size, a counterexample is not guaranteed to exist (this would imply that the reconstruction conjecture is false). Maybe it would help to mention in Proposition 3 that graphs are of unbounded size?
Moreover, there is a detail in the proof of Proposition 3 that I am not sure is that obvious. I understand why the subgraph counts of C_{m+1} are unequal between the two compared graphs, but I am not sure why this is also true for homomorphism counts.
Theorem 3: The definition of the core of a graph is unclear to me (e.g., what if P contains cliques of multiple sizes?)
In the appendix, the authors mention they used 16 layers for their dataset. That is an unusually large number of layers for GNNs. Could the authors comment on this choice?
In the same context as above, the experiments on the ZINC benchmark are usually performed with either ~100K or 500K parameters. Although I doubt that changing the number of parameters will lead to a dramatic change in performance, I suggest that the authors repeat their experiments, simply for consistency with the baselines.
The method of Bouritsas et al., arxiv’20 is called “Graph Substructure Networks” (instead of “Structure”). I encourage the authors to correct this.
After rebuttal
The authors have adequately addressed all my concerns. Enhancing MPNNs with structural features is a family of well-performing techniques that have recently gained traction. This paper introduces a unifying framework, in the context of which many open theoretical questions can be answered, hence significantly improving our understanding. Therefore, I will keep my initial recommendation and vote for acceptance. Please see my comment below for my final suggestions which, along with some improvements on the presentation, I hope will increase the impact of the paper.
Limitations: The limitations are clearly stated in section 1, by mainly referring to the fact that the patterns need to be selected by hand. I would also add a discussion on the computational complexity of homomorphism counting.
Negative societal impact: A satisfactory discussion is included in the end of the experimental section. | - A visualization showing a counting of homomorphisms vs subgraph isomorphism counting. |
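To make the suggested homomorphism-vs-subgraph-isomorphism contrast concrete, here is a generic brute-force sketch (illustrative only, with made-up function and variable names; it is not the reviewed paper's counting procedure): a homomorphism maps pattern vertices to graph vertices so that edges go to edges, possibly collapsing vertices, while an injective homomorphism corresponds to a labelled subgraph occurrence.

```python
from itertools import product, permutations

def count_maps(pattern_edges, k, graph_edges, n, injective=False):
    """Count maps from pattern vertices {0..k-1} to graph vertices {0..n-1} that
    send every pattern edge to a graph edge. Plain maps give homomorphism counts;
    injective=True gives injective homomorphisms, i.e. labelled subgraph copies."""
    adj = {(u, v) for u, v in graph_edges} | {(v, u) for u, v in graph_edges}
    candidates = permutations(range(n), k) if injective else product(range(n), repeat=k)
    return sum(all((m[u], m[v]) in adj for u, v in pattern_edges) for m in candidates)

path = [(0, 1), (1, 2)]   # pattern: path on 3 vertices
edge = [(0, 1)]           # host graph: a single edge
print(count_maps(path, 3, edge, 2))                  # 2 homomorphisms (the walks a-b-a and b-a-b)
print(count_maps(path, 3, edge, 2, injective=True))  # 0 subgraph occurrences: no 3-vertex path exists
```

The two counts diverge precisely because homomorphisms may fold the pattern onto fewer vertices, which is what separates homomorphism-count features from subgraph-count features in the comparison discussed above.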
NIPS_2019_82 | NIPS_2019 | 1. One major risk of methods that exploit relationships between action units is that the relationships can be very different across datasets (e.g. AU6 can occur both in an expression of pain and in happiness, and this co-occurrence will be very different in a positive salience dataset such as SEMAINE compared to something like the UNBC pain dataset). This difference in correlation can already be seen in Figure 1 with quite different co-occurrences of AU1 and AU12. A good way to test the generalization of such work is by performing cross-dataset experiments, which this paper is lacking. 2. The language in the paper is sometimes conversational and not scientific (use of terms like massive), and there are several opinions and claims that are not substantiated (e.g. "... facial landmarks, which are helpful for the recognition of AUs defined in small regions"); the paper could benefit from copy-editing. 3. Why are two instances of the same network (ResNet) used as different views? Would using a different architecture instead be considered a more differing view? It would be great to see a justification for using two ResNet networks. 4. Why is the approach limited to two views? It feels like the system should be able to generalize to more views without too much difficulty. Minor comments: - What is a PCA-style guarantee? - What is v in equation 2? - Why are different numbers of unlabeled images used in training the BP4D and EmotioNet models? Trivia: massive face images -> large datasets; donates -> denotes (x2); adjacent -> adjacency | 4. Why is the approach limited to two views? It feels like the system should be able to generalize to more views without too much difficulty. Minor comments: |
ICLR_2021_2196 | ICLR_2021 | weakness.
Other comments • The proposed method, plastic gates, which performs best amongst the baselines used when combined with product-of-experts models, seems simple and effective, but I am inclined to question how novel it is, since it just amounts to multi-step online gradient descent on the mixture weights. • The metrics used for evaluating continual learning, loss after switch and recovery time after switch, which are one of the main selling points of the paper, are suitable for the datasets provided, but would not be applicable in a setting where either the task boundaries are not known or there are no hard task boundaries to be identified. • Typo Section 2 Paragraph 2: "MNNIST" -> "MNIST" | • The metrics used for evaluating continual learning, loss after switch and recovery time after switch, which are one of the main selling points of the paper, are suitable for the datasets provided, but would not be applicable in a setting where either the task boundaries are not known or there are no hard task boundaries to be identified. |
ARR_2022_268_review | ARR_2022 | • It is not clear why the user decoder at time step t uses only the information up to time step t from the agent decoder; why not use the information from all the time steps?
• The motivation for applying the attention divergence loss to force attention similarity is still not clear to me. What happens if att^a_u is made equal to att^u_u? I also couldn't find any model ablation which justifies this loss. • The human evaluation is not clearly able to identify if the model improvements actually help, as the results are close and not consistent across models (although reasoning has been provided).
The paper is well written and organized. Additional experiments to justify the attention divergence loss can help the paper. More examples in the case studies can better help in understanding the contributions. | • It is not clear why the user decoder at time step t uses only the information up to time step t from the agent decoder; why not use the information from all the time steps? |
NIPS_2021_1852 | NIPS_2021 | W1: The design of extending SGC (from Equation 1) to EIGNN (from Equation 3) is somewhat implicit and ad hoc, without clear justification. The authors should explain this in more detail for better understanding by general audiences that are not very familiar with implicit models.
W2: In the time complexity analysis, only the complexity of training is analyzed, but it seems that the computation of the eigendecomposition of S, the normalized adjacency matrix with self-loops (Line 176), is not included, which usually requires a cost of O(n^3). If this is true, a full eigendecomposition of a large sparse S could make EIGNN an impractical approach, prohibiting scalability in terms of the number of nodes n for huge real-world graphs.
W3: Several concerns upon experiments include: 1) The discussion on arbitrary hyperparameter γ is missing, including how to set it in practice for a given graph and analyzing on the sensitivity of this hyperparameter, otherwise it will be hard for the researchers to follow. 2) As the weakness on the analysis of complexity, why the author chooses not to evaluate the long-range dependency on the standard dataset Amazon Co-purchase as used in IGNN. Amazon Co-purchase dataset has another benefit that it can also reflect the scalability of proposed method since it is a large dataset with ~33k nodes, while the experiments on real-world dataset are all conducted on graphs that less than 10k. 3) For the evaluation on over-smoothing, it would be interesting to see how the EIGNN performs with respect to over-smoothing under standard setting on real-world datasets, especially in comparison with variants focusing on dealing with over-smoothing, such as the setting used in GCNII. 4) The evaluation on robustness is not very convincing since structural attack is known to be more powerful and appreciative when we attack on graph-structured data. Thus, the authors are suggested to defend their proposed model against several popular structural attack methods such as Nettack for better demonstration rather than attacks on features used in experiments. | 1) The discussion on arbitrary hyperparameter γ is missing, including how to set it in practice for a given graph and analyzing on the sensitivity of this hyperparameter, otherwise it will be hard for the researchers to follow.
KadOFOsUpQ | ICLR_2025 | - I am not very convinced by the ablation method used in section 4.1, i.e., by replacing output vector by mean values. It seems a bit ad-hoc for me without further justification. Why use mean but not other statistics? How robust are the results, or is it specific only to the ablation method used here?
- Given that induction heads and FV heads appear at different locations (layers) within the model, head "location" can be one confounding factor that contributes to the difference in ICL performance when ablating induction heads vs. FV heads. There should perhaps be a controlled baseline that ablates heads at different locations in the model.
- The empirical results presented in the paper appear a bit weak. It is not clear how many tasks are evaluated (Is Figure 4 showing averaged results?), and which ICL tasks are used exactly? How well do the tasks represent real-world ICL/few-shot use cases?
- Some conclusions made from the observations seem more like conjectures than actual proof. The paper could be made more sound by clearly distinguishing conjectures from conclusions backed by substantiated results. E.g., Line 252: "This suggests that induction and FV heads may not fully overlap, and that FV heads may implement more complex or abstract computations than induction heads".
- Minor: The paper presentation can be improved with clearer background introduction of induction heads and FV heads. | - Given that induction heads and FV heads appear at different locations (layers) within the model, head "location" can be one confounding factor that contributes to the difference in ICL performance when ablating induction heads vs. FV heads. There should perhaps be a controlled baseline that ablates heads at different locations in the model. |
ACL_2017_792_review | ACL_2017 | 1. Unfortunately, the results are rather inconsistent and one is not left entirely convinced that the proposed models are better than the alternatives, especially given the added complexity. Negative results are fine, but there is insufficient analysis to learn from them. Moreover, no results are reported on the word analogy task, besides being told that the proposed models were not competitive - this could have been interesting and analyzed further.
2. Some aspects of the experimental setup were unclear or poorly motivated, for instance w.r.t. corpora and datasets (see details below).
3. Unfortunately, the quality of the paper deteriorates towards the end and the reader is left a little disappointed, not only w.r.t. the results but also with the quality of the presentation and the argumentation.
- General Discussion: 1. The authors aim "to learn representations for both words and senses in a shared emerging space". This is only done in the LSTMEmbed_SW version, which rather consisently performs worse than the alternatives. In any case, what is the motivation for learning representations for words and senses in a shared semantic space? This is not entirely clear and never really discussed in the paper.
2. The motivation for, or intuition behind, predicting pre-trained embeddings is not explicitly stated. Also, are the pre-trained embeddings in the LSTMEmbed_SW model representations for words or senses, or is a sum of these used again? If different alternatives are possible, which setup is used in the experiments?
3. The importance of learning sense embeddings is well recognized and also stressed by the authors. Unfortunately, however, it seems that these are never really evaluated; if they are, this remains unclear. Most or all of the word similarity datasets considers words independent of context.
4. What is the size of the training corpora? For instance, using different proportions of BabelWiki and SEW is shown in Figure 4; however, the comparison is somewhat problematic if the sizes are substantially different. The size of SemCor is moreover really small and one would typically not use such a small corpus for learning embeddings with, e.g., word2vec. If the proposed models favor small corpora, this should be stated and evaluated.
5. Some of the test sets are not independent, i.e. WS353, WSSim and WSRel, which makes comparisons problematic, in this case giving three "wins" as opposed to one.
6. The proposed models are said to be faster to train by using pre-trained embeddings in the output layer. However, no evidence to support this claim is provided. This would strengthen the paper.
7. Table 4: why not use the same dimensionality for a fair(er) comparison?
8. A section on synonym identification is missing under similarity measurement that would describe how the multiple-choice task is approached.
9. A reference to Table 2 is missing.
10. There is no description of any training for the word analogy task, which is mentioned when describing the corresponding dataset. | 8. A section on synonym identification is missing under similarity measurement that would describe how the multiple-choice task is approached. |
ARR_2022_201_review | ARR_2022 | 1. The Methodology section is very hard to follow. The model architecture description is rather confusing and sometimes uses inconsistent notation. For example, Section 2.2 introduces $v^p_{t-1}$ in the description which does not appear in the equations. Some of the notation pertaining to the labels ($l_0$, $l_{t-1}$) initially gives the impression that a sequence of tokens are being generated as the label, which is not the case.
2. It is not clear why the baseline model considered for the NLI task is based on an older work (Conneau et al., 2017) and not the work by Kumar and Talukdar (2020), which seems more relevant to this work and is also cited in the paper. In addition, all the comparisons in the result section (and the delta numbers in Table 1) are made against a baseline vanilla Transformer model.
3. Some of the numbers presented in Table 1 are confusing. It is not clear how a BLEU score of 22.51 is obtained by evaluating the ground truth dataset against itself for the NLI task. It is also not explained how the perplexity results are obtained. Are the numbers averaged across the dataset? Would that be a valid way to present these results?
4. Some details like whether the results are averaged across multiple runs, and what the inter-annotator agreement was in the human evaluation are missing.
- In Section 2.1, the Transformer model is declared to have been established as the dominant approach for test generation. The next sentence immediately proceeds to the model description. This paragraph would flow better if there were a sentence linking the two parts and making it explicit that the Transformer model is being used. For example, "We therefore adopt the Transformer model as our base model" or something to that effect.
- The description of the model architecture needs to be rephrased for clarity. See (1) under Weaknesses for some notation inconsistencies that can be improved as well.
- In Section 2.3, Line 266 the description states $L$, $R$ are conditioned on $X$. Presumably it is meant to be $L$, $E$ since no $R$ is introduced anywhere.
- In Section 3.2, the description of the Transformer baseline states that an MLP layer is added for generating sentence-level interpretations. This was perhaps meant to be predicted labels and not the interpretations, unless the decoder here is generating the labels - this needs clarification or correction.
- In the results under "Interpretation Promotion", it is stated that "most of the explanations generated by our method are reasonable even though the BLEU scores are low". It might help to add an example here to make this more convincing.
- Line 043: comprehensibleness -> comprehensibility - Line 044: human -> humans - Line 046: which nevertheless -> which are nevertheless - Line 059: With the annotated -> With annotated - Line 135: method achieve -> method achieves - Line 163: dataset annotated by human -> human-annotated dataset - Line 169: a MLP -> an MLP - Line 233: to better inference -> to do better inference/to infer better - Line 264: two distribution -> two distributions - Line 303: maximum the negative likelihood -> maximize the negative likelihood - Line 493--494: the opening quotes are incorrectly formatted - Line 509: exapmle -> example | 1. The Methodology section is very hard to follow. The model architecture description is rather confusing and sometimes uses inconsistent notation. For example, Section 2.2 introduces $v^p_{t-1}$ in the description which does not appear in the equations. Some of the notation pertaining to the labels ($l_0$, $l_{t-1}$) initially gives the impression that a sequence of tokens are being generated as the label, which is not the case. |
aVqGqTyky7 | EMNLP_2023 | 1. Lack of explanation and analysis of the model's deep mechanism. Why the dynamic update of the confidence score works and why the model can outperform supervised models so significantly need further detailed explanation. Providing only description without explanation leaves the work short on interpretability.
2. Inconsistent symbol usage. The w_x in Eq. 5-7 and the w_i^t in Eq. 9 have different formats, which might lead to confusion.
3. An overview of the workflow and the model, which can make it easier to get the whole picture of the work, is needed.
4. Missing Ethics Statement (e.g., the reproducibility of the work).
5. Typo on Line 496 (wrong serial number '(2)'). | 3. An overview of the workflow and the model, which can make it easier to get the whole picture of the work, is needed. |
NIPS_2020_560 | NIPS_2020 | I had a few concerns/confusions about the algorithmic motivations. 1. To debias the sketch, it seems that one needs to know the statistical dimension d_lambda of the design matrix A. This can't be computed accurately without basically the same runtime as required to solve the ridge regression problem in the first place. Thus it seems there will be some bias, possibly defeating the purpose of the approach. I couldn't find this issue discussed in the paper. A similar issue arises when actually computing the surrogate sketch. 2. The cost to hat H in Theorem 2 seems slower than known runtimes for just fully solving the system? E.g. via sparse sketching, Clarkson and Woodruff 2013, you can compute a constant factor preconditioner in nnz(A)+d^3 time and then solve the system to epsilon accuracy in log(1/epsilon) iterations each costing O(nnz(A)+d^2) time. So why doesn't a single server just directly solve the system and make an exact Newton step instead of computing the Hessian sketch? | 1. To debias the sketch, it seems that one needs to know the statistical dimension d_lambda of the design matrix A. This can't be computed accurately without basically the same runtime as required to solve the ridge regression problem in the first place. Thus it seems there will be some bias, possibly defeating the purpose of the approach. I couldn't find this issue discussed in the paper. A similar issue arises when actually computing the surrogate sketch.
Kr7KpDm8MO | ICLR_2024 | Here are some of my major concerns:
1) I doubt comparing the dynamics of random walk (with zero mean gradients) with neural network training with true objective is meaningful. In particular, it is not clear how a random walk (drawing gradients from a normal 0 mean distribution) can trace the dynamic of neural network trained with a true loss.
2) The authors state *"Although the noise component can easily depend on the the progress on the underlying objective function, we can view the random walk as an approximation of this noise component."* but the random walk is not a function of the true loss, hence making it unclear how it is close to real training dynamics. It may be possible that I misunderstood, so I will wait for further clarification from the authors.
3) "This causes the expected rotation of the vector in each update to remain constant along with its magnitude.": Although the GD iterates converge in some sense, for moderate lr GD oscillates in the edge-of-stability regime (https://arxiv.org/abs/2103.00065). Hence, it is not necessarily true that the expected rotation in each update remains constant. Minor:
4) In figures 2-3, the weight norm equilibrium is a scalar, but it is denoted as a vector in the figure; do the authors mean the weight vector at equilibrium?
5) Similarly for figure-3, please redefine the figure as the expected quantities are scalars but shown as a vector.
6) Define what is the expectation over, when defining the angular update? | 5) Similarly for figure-3, please redefine the figure as the expected quantities are scalars but shown as a vector. |
S8VFVe6MWL | ICLR_2025 | Overall, the authors tend to trust subjective metrics (that they've done) over objective metrics and draw conclusions based on them, but the more I think about it, the more questions I have about the subjective evaluation process.
Also for all three of the main contributions: not enough logical connections were explained or consensus was reached to support the claims.
1. There is no analysis of the dataset they propose and the authors' description is misleading.
- Throughout the paper, the TUT dataset is described as anechoic (L138, L221, L449), and I believed that for a while, but after actually listening to the GTs in the supplementary material, I realized that they are very echoic. Please clarify this.
2. In fact, whether the TUT dataset is reverberant or anechoic, both cases raise questions about the novelty of this work.
1. If the TUT dataset is anechoic (and the GT samples in the supplementary material are incorrectly attached), it seems self-evident that BinauralZero will perform better on anechoic datasets.
- The baselines were trained on binaural 'room' recordings, so they will always contain room reverb. BinauralZero, on the other hand, will follow the recording environments of the training data of its vocoder (WaveFit), and although it is not stated in the WaveFit paper, given that the test data is LibriTTS, it seems likely that there is little room characterization in the training data. (In fact, if one listens to BinauralZero's output on the TUT data, the reverb is very weak.) Based on this fact alone, it is not surprising that BinauralZero outperforms the baselines on the TUT data, given that TUT is an 'anechoic binaural rendering'. How can this result be related to the claim that “the existing neural models seem to highly overfit to non-spatial acoustic features”? Please provide any specific examples of ‘non-spatial acoustic features’ that the existing neural models seem to highly overfit.
2. Conversely, if the TUT dataset is reverberant and the GT samples in the supplementary material are all properly attached, how could the subjective assessment result in a high MUSHRA score despite BinauralZero being anechoic? The fact that the model rated most similar to the (reverberant) TUT GT is the (anechoic) BinauralZero does not resonate with my consensus. Other than this:
1. One can clearly hear out-of-phase artifacts from most of BinauralZero’s output for the Binaural Speech dataset (especially for the sibilant sounds). On the other hand, those artifacts are hard to find from the outputs for the TUT dataset. Please elaborate on this discrepancy.
2. NFS outputs for the TUT dataset are slightly detuned. As far as I know, NFS only applies multichannel linear phased filters (so it cannot change a pitch by its design.) Please expand on this.
3. All of the baseline models were trained on a 48 kHz sampling rate, WaveFit is trained on a 24 kHz sampling rate, and the TUT dataset seems to have a 44.1 kHz sampling rate. Nothing is mentioned about the conversions between these differences.
4. All of the baseline models require orientation to be input. How were the orientations of the sources in the TUT dataset synthesized? What, if any, differences in the distribution of orientation and location coordinates are there from the Binaural Speech dataset, and what are the implications?
5. Tests primarily using MUSHRA will also include results for hidden anchors. What are the anchors in this paper? How were they scored?
3. The ablations seem to deserve better experiment setup, as so many questions arise:
1. If one is to insist that the four “automated metrics” mentioned in the paper decorrelate to perceptual metrics, why not report other metrics (e.g., SDR, SAQAM [1], NORD [2], AODG [3])? NORD is even open-sourced.
2. Provide the configurations for the mel-spectrogram conversion (window/hop size, number of FFT points, number of Mels, …). Then, please clarify how BinauralZero can model the accurate delay within the hop size (12.5 ms).
- A typical maximum ITD is considered 0.66 ms (when a sound source is positioned at 90° azimuth to one ear), but let's say the ITD for the GTW's output was 1 ms for the sake of brevity (this amounts to approximately for the maximal difference in time of arrival for the subject with 40 cm distance between ears). In the BinauralZero framework, the waveform with 1ms ITD will be converted into a Mel spectrogram to be generated as a 'natural-sounding waveform' via vocoder. How will this 1ms ITD be preserved throughout this process?
[1] Manocha, P., Kumar, A., Xu, B., Menon, A., Gebru, I. D., Ithapu, V. K., & Calamia, P. (2022). SAQAM: Spatial audio quality assessment metric. *arXiv preprint arXiv:2206.12297*.
[2] Manocha, P., Gebru, I. D., Kumar, A., Markovic, D., & Richard, A. (2023, June). Nord: Non-matching reference based relative depth estimation from binaural speech. In *ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)* (pp. 1-5). IEEE.
[3] Schäfer, M., Bahram, M., & Vary, P. (2013, May). An extension of the PEAQ measure by a binaural hearing model. In *2013 IEEE International Conference on Acoustics, Speech and Signal Processing* (pp. 8164-8168). IEEE. | 3. The ablations seem to deserve better experiment setup, as so many questions arise: |
ACL_2017_318_review | ACL_2017 | 1. Presentation and clarity: important details with respect to the proposed models are left out or poorly described (more details below). Otherwise, the paper generally reads fairly well; however, the manuscript would need to be improved if accepted.
2. The evaluation on the word analogy task seems a bit unfair given that the semantic relations are explicitly encoded by the sememes, as the authors themselves point out (more details below).
- General Discussion: 1. The authors stress the importance of accounting for polysemy and learning sense-specific representations. While polysemy is taken into account by calculating sense distributions for words in particular contexts in the learning procedure, the evaluation tasks are entirely context-independent, which means that, ultimately, there is only one vector per word -- or at least this is what is evaluated. Instead, word sense disambiguation and sememe information are used for improving the learning of word representations. This needs to be clarified in the paper.
2. It is not clear how the sememe embeddings are learned and the description of the SSA model seems to assume the pre-existence of sememe embeddings. This is important for understanding the subsequent models. Do the SAC and SAT models require pre-training of sememe embeddings?
3. It is unclear how the proposed models compare to models that only consider different senses but not sememes. Perhaps the MST baseline is an example of such a model? If so, this is not sufficiently described (emphasis is instead put on soft vs. hard word sense disambiguation). The paper would be stronger with the inclusion of more baselines based on related work.
4. A reasonable argument is made that the proposed models are particularly useful for learning representations for low-frequency words (by mapping words to a smaller set of sememes that are shared by sets of words). Unfortunately, no empirical evidence is provided to test the hypothesis. It would have been interesting for the authors to look deeper into this. This aspect also does not seem to explain the improvements much since, e.g., the word similarity data sets contain frequent word pairs.
5. Related to the above point, the improvement gains seem more attributable to the incorporation of sememe information than word sense disambiguation in the learning procedure. As mentioned earlier, the evaluation involves only the use of context-independent word representations. Even if the method allows for learning sememe- and sense-specific representations, they would have to be aggregated to carry out the evaluation task.
6. The example illustrating HowNet (Figure 1) is not entirely clear, especially the modifiers of "computer".
7. It says that the models are trained using their best parameters. How exactly are these determined? It is also unclear how K is set -- is it optimized for each model or is it randomly chosen for each target word observation? Finally, what is the motivation for setting K' to 2? | 4. A reasonable argument is made that the proposed models are particularly useful for learning representations for low-frequency words (by mapping words to a smaller set of sememes that are shared by sets of words). Unfortunately, no empirical evidence is provided to test the hypothesis. It would have been interesting for the authors to look deeper into this. This aspect also does not seem to explain the improvements much since, e.g., the word similarity data sets contain frequent word pairs. |
NIPS_2020_777 | NIPS_2020 | 1. Data preparation. As the authors pointed out, data serve a very important role in the whole work. However, the authors did not describe clearly how a) training images are rendered b) query points are sampled during training c) normalizations are applied for 2D and 3D data. Are they the same as PiFu? In implicit function network e.g. PiFu and DeepSDF, both b) and c) are extremely important and could greatly affect the quality of results. 2. Study of global feature. Methods like PiFu purposely avoid using voxel-like feature because of their high computational and memory cost. What is the resolution of the 3D voxel, and does it introduce unnecessary overhead to the whole network? It would be more convincing to study the importance of the global feature in Sec4.2 by comparing with different resolutions of voxel features. Notice when the resolution is reduced to 1x1x1, this is actually the case of using a single global feature. | 2. Study of global feature. Methods like PiFu purposely avoid using voxel-like feature because of their high computational and memory cost. What is the resolution of the 3D voxel, and does it introduce unnecessary overhead to the whole network? It would be more convincing to study the importance of the global feature in Sec4.2 by comparing with different resolutions of voxel features. Notice when the resolution is reduced to 1x1x1, this is actually the case of using a single global feature. |
NIPS_2016_93 | NIPS_2016 | - The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone-down the intro and not call this language learning. It is rather a feedback driven QA in the form of a dialog. - With a fixed policy, this setting is a subset of reinforcement learning. Can tasks get more complicated (like what explained in the last paragraph of the paper) so that the policy is not fixed. Then, the authors can compare with a reinforcement learning algorithm baseline. - The details of the forward-prediction model is not well explained. In particular, Figure 2(b) does not really show the schematic representation of the forward prediction model; the figure should be redrawn. It was hard to connect the pieces of the text with the figure as well as the equations. - Overall, the writing quality of the paper should be improved; e.g., the authors spend the same space on explaining basic memory networks and then the forward model. The related work has missing pieces on more reinforcement learning tasks in the literature. - The 10 sub-tasks are rather simplistic for bAbi. They could solve all the sub-tasks with their final model. More discussions are required here. - The error analysis on the movie dataset is missing. In order for other researchers to continue on this task, they need to know what are the cases that such model fails. | - The error analysis on the movie dataset is missing. In order for other researchers to continue on this task, they need to know what are the cases that such model fails. |
ACL_2017_178_review | ACL_2017 | - The evaluation reported in this paper includes only intrinsic tasks, mainly on similarity/relatedness datasets. As the authors note, such evaluations are known to have very limited power in predicting the utility of embeddings in extrinsic tasks. Accordingly, it has become recently much more common to include at least one or two extrinsic tasks as part of the evaluation of embedding models.
- The similarity/relatedness evaluation datasets used in the paper are presented as datasets recording human judgements of similarity between concepts. However, if I understand correctly, the actual judgements were made based on presenting phrases to the human annotators, and therefore they should be considered as phrase similarity datasets, and analyzed as such.
- The medical concept evaluation dataset, ‘mini MayoSRS’ is extremely small (29 pairs), and its larger superset ‘MayoSRS’ is only a little larger (101 pairs) and was reported to have a relatively low human annotator agreement. The other medical concept evaluation dataset, ‘UMNSRS’, is more reasonable in size, but is based only on concepts that can be represented as single words, and were represented as such to the human annotators. This should be mentioned in the paper and makes the relevance of this dataset questionable with respect to representations of phrases and general concepts. - As the authors themselves note, they (quite extensively) fine tune their hyperparameters on the very same datasets for which they report their results and compare them with prior work. This makes all the reported results and analyses questionable.
- The authors suggest that their method is superior to prior work, as it achieved comparable results while prior work required much more manual annotation. I don't think this argument is very strong because the authors also use large manually-constructed ontologies, and also because the manually annotated dataset used in prior work comes from existing clinical records that did not require dedicated annotations.
- In general, I was missing more useful insights into what is going on behind the reported numbers. The authors try to treat the relation between a phrase and its component words on one hand, and a concept and its alternative phrases on the other, as similar types of a compositional relation. However, they are different in nature and in my mind each deserves a dedicated analysis. For example, around line 588, I would expect an NLP analysis specific to the relation between phrases and their component words. Perhaps the reason for the reported behavior is dominant phrase headwords, etc. Another aspect that was absent but could strengthen the work, is an investigation of the effect of the hyperparameters that control the tradeoff between the atomic and compositional views of phrases and concepts.
General Discussion: Due to the above mentioned weaknesses, I recommend to reject this submission. I encourage the authors to consider improving their evaluation datasets and methodology before re-submitting this paper.
Minor comments: - Line 069: contexts -> concepts - Line 202: how are phrase overlaps handled?
- Line 220: I believe the dimensions should be |W| x d. Also, the terminology ‘negative sampling matrix’ is confusing as the model uses these embeddings to represent contexts in positive instances as well.
- Line 250: regarding ‘the observed phrase just completed’, it not clear to me how words are trained in the joint model. The text may imply that only the last words of a phrase are considered as target words, but that doesn’t make sense. - Notation in Equation 1 is confusing (using c instead of o) - Line 361: Pedersen et al 2007 is missing in the reference section.
- Line 388: I find it odd to use such a fine-grained similarity scale (1-100) for human annotations.
- Line 430: The newly introduced term ‘strings’ here is confusing. I suggest to keep using ‘phrases’ instead.
- Line 496: Which task exactly was used for the hyper-parameter tuning?
That’s important. I couldn’t find that even in the appendix.
- Table 3: It’s hard to see trends here, for instance PM+CL behaves rather differently than either PM or CL alone. It would be interesting to see development set trends with respect to these hyper-parameters.
- Line 535: missing reference to Table 5. | - Table 3: It’s hard to see trends here, for instance PM+CL behaves rather differently than either PM or CL alone. It would be interesting to see development set trends with respect to these hyper-parameters. |
NIPS_2020_1710 | NIPS_2020 | - there are very few experimental details about distillation - is this distillation only on the training set, or is there data augmentation? - it is difficult to understand e.g. figure 5, there are a lot of lines on top of each other - the main metrics reported are performance compared to remaining weights, but the authors could report flops or model size, to make this much more concrete | - it is difficult to understand e.g. figure 5, there are a lot of lines on top of each other - the main metrics reported are performance compared to remaining weights, but the authors could report flops or model size, to make this much more concrete |
ICLR_2021_1213 | ICLR_2021 | weakness of the paper. Then, I present my additional comments which are related to specific expressions in the main text, proof steps in the appendix etc. I would appreciate it very much if authors could address my questions/concerns under “Additional Comments” as well, since they affect my assessment and understanding of the paper; consequently my score for the paper. Summary:
• The paper focuses on convergence of two newly-proposed versions of AdaGrad, namely AdaGrad-window and AdaGrad-truncation, for finite sum setting where each component is smooth and possibly nonconvex.
• The authors prove convergence rate with respect to number of epochs T, where in each epoch one full pass over the data is performed with respect to well-known “random shuffling” sampling strategy.
• Specifically, AdaGrad-window is shown to achieve an Õ(T^{-1/2}) rate of convergence, whereas AdaGrad-truncation attains O(T^{-1/2}) convergence, under component-wise smoothness and bounded gradient assumptions. Additionally, the authors introduce a new condition/assumption called consistency ratio, which is an essential element of their analysis.
• The paper explains the proposed modification to AdaGrad and provides their intuition for such adjustments. Then, the main results are presented, followed by a proof sketch, which demonstrates the main steps of the theoretical approach.
• In order to evaluate the practical performance of the modified adaptive methods in a comparative fashion, two set of experiments were provided: training logistic regression model on MNIST dataset and Resnet-18 model on CIFAR-10 dataset. In these experiments; SGD, SGD with random shuffling, AdaGrad and AdaGrad-window were compared. Additionally, authors plot the behavior of their proposed condition “consistency ratio” over epochs. Strengths:
• I think epoch-wise analysis, especially for finite sum settings, could help provide insights into behaviors of optimization algorithms. For instance, it may enable to further investigate effect of batch size or different sampling strategies with respect to progress of the algorithms after every full pass of data. This may also help with comparative analysis of deterministic and stochastic methods.
• I have checked the proof of Theorem 1 in detail and had a less detailed look at Theorems 2 and 3. I appreciate some of the technically rigorous sections of the analysis as the authors bring together analytical tools from different resources and re-prove certain results with respect to their adjustments.
• Performance comparison in the paper is rather simple but the authors try to provide a perspective of their consistency condition through numerical evidence. It gives some rough idea about how to interpret this condition.
• The main text is written clearly; the authors highlight their modification to AdaGrad and also highlight what their new “consistency condition” is. The proposed contributions of the paper are stated clearly, although I do not totally agree with certain claims. One of the main theorems has a proof sketch which gives an overall idea about the authors’ approach to proving the results. Weaknesses:
• Although numerically the paper provides an insight into the consistency condition, it is not verifiable ahead of time. One needs to run a simulation to get some idea about this condition, although it still wouldn’t verify the correctness. Since the authors did not provide any theoretical motivation for their condition, I am not fully convinced about this assumption. For instance, the authors could argue about a specific problem setting in which this condition holds.
• Theorem 3 (AdaGrad-truncation) sets the stepsize depending on knowledge of r. I couldn’t figure out how it is possible to compute the value r ahead of time. Therefore, I do not think this selection is practically applicable. Although I appreciate the theoretical rigor that goes into proving Theorem 3, I believe the concerns about computing r weaken the importance of this result. If I am missing some important point, I would like to kindly ask the authors to clarify it for me.
• The related work listed in Table 1 within the group “Adaptive Gradient Methods” proves \emph{iteration-wise} convergence rates for variants of Adam and AdaGrad, which I would call the usual practice. This paper argues about \emph{epoch-wise} convergence. The authors claim improvement over those prior papers although the convergence rate quantifications are not based on the same grounds. All of those methods consider the more general expectation minimization setting. I would suggest the authors to make this distinction clear and highlight iteration complexities of such methods while comparing previous results with theirs. In my opinion, total complexity comparison is more important than rate comparison for the setting that this paper considers.
• As a follow-up to the previous comment, the related work could have highlighted related results in the finite-sum setting. Total complexity comparisons with respect to the finite-sum setting are also important. There exist results for finite-sum nonconvex optimization with variance reduction, e.g., Stochastic Variance Reduction for Nonconvex Optimization, 2016, Reddi et al. I believe it is important to comparatively evaluate the results of this paper with that of such prior work.
• Numerically, authors only compare against AdaGrad and SGD. I would say this paper is a rather theory paper, but it claims rate improvements, for which I previously stated my doubts. Therefore, I would expect comparisons against other methods as well, which is of interest to ICLR community in my opinion.
• This is a minor comment that should be easy to address. For ICLR, supplementary material is not mandatory to check, however, this is a rather theoretical paper and the correctness/clarity of proofs is important. I would say authors could have explained some of the steps of their proof in a more open way. There are some crucial expressions which were obtained without enough explanations. Please refer to my additional comments in the following part.
Additional Comments:
• I haven’t seen the definition that x_{t,m+1} = x_{t+1,1} in the main text. It appears in the supplements. Could you please highlight this in the main text as it is important for indexing in the analysis?
• The second bullet point of your contributions claims that “[consistency] condition is easy to verify”. I do not agree with this, as I cannot see how someone could guarantee/compute the value r ahead of time or even after observing any sequence of gradients. Could you please clearly define what verification means in this context?
• In Assumption A3, I understand that G_t e_i = g_{t,i} and G_t e = ∑_{i=1}^m g_{t,i}. I believe the existing notation makes it complicated for the reader to understand the implications of this condition.
• In the paragraph right above Section 4.2, the authors state that the presence of the second moments V_{t,i} enables adaptive methods to have improved rates over SGD through Lemma 3. Could the authors please explain this in detail?
• In Corollary 1, authors state that “the computational complexity is nearly Õ(m^{5/2} n d^2 ϵ^{-2})”. A similar statement exists in Corollary 2. Could you please explain what “nearly” means in this context?
• In Lemma 8 in the supplements, aa^T and bb^T in the main expression of the lemma are rank-1 matrices. This lemma has been used in the proof of Lemma 4. As far as I understood, Lemma 8 is used in such a way that aa^T or bb^T correspond to something like g_{t,j}^2 − g_{t−1,j}^2. I am not sure if this construction fits into Lemma 8 because, for instance, the expression g_{t,j}^2 − g_{t−1,j}^2 is difference of two rank-1 matrices, which could have rank \leq 2. Hence, there may not exist some vector a such that aa^T = g_{t,j}^2 − g_{t−1,j}^2, hence Lemma 8 may not be applied. If I am mistaken in my judgment I am 100% open for a discussion with the authors.
• In the supplements, in section “A.1.7 PROOF OF MAIN THEOREM 1”, in the expression following the first line, I didn’t understand how you obtained the last upper bound to ∇f(x_{t,i}). Could you please explain how this is obtained? Score:
I would like to vote for rejecting the paper. I praise the analytically rigorous proofs for the main theorems and the use of a range of tools for proving the key lemmas. Epoch-wise analysis for stochastic methods could provide insight into behavior of algorithms, especially with respect to real-life experimental setting. However, I have some concerns:
I am not convinced about the importance of consistency ratio and that it is a verifiable condition.
Related work in Table 1 has iteration-wise convergence in the general expectation-minimization setting whereas this paper considers finite sum structure with epoch-wise convergence rates. The comparison with related work is not sufficient/convincing in this perspective.
(Minor) I would suggest the authors to have a more comprehensive experimental study with comparisons against multiple adaptive/stochastic optimizers. More experimental insight might be better for demonstrating consistency ratio.
Overall, due to the reasons and concerns stated in my review, I vote for rejecting this paper. I am open for further discussions with the authors regarding my comments and their future clarifications.
======================================= Post-Discussions =======================================
I would like to thank the authors for their clarifications. After exchanging several responses with the authors and regarding other reviews, I decide to keep my score.
Although the authors come up with a more meaningful assumption, i.e., SGC, compared to their initial condition, I am not fully convinced about the contributions with respect to prior work: SGC assumption is a major factor in the improved rates and it is a very restrictive assumption to make in practice.
Although this paper proposes theoretical contributions regarding adaptive gradient methods, the experiments could have been a bit more detailed. I am not sure whether the experimental setup fully displays improvements of the proposed variants of AdaGrad. | • In order to evaluate the practical performance of the modified adaptive methods in a comparative fashion, two set of experiments were provided: training logistic regression model on MNIST dataset and Resnet-18 model on CIFAR-10 dataset. In these experiments; SGD, SGD with random shuffling, AdaGrad and AdaGrad-window were compared. Additionally, authors plot the behavior of their proposed condition “consistency ratio” over epochs. Strengths: |
SDV7Y6Dhx9 | ICLR_2025 | - Some details of the proposed method are missing, as noted in the questions section below.
- This work introduces many hyperparameters, i.e. target supervision dropout ratio ($\alpha$), activation map update frequencies ($K$), and enhancement factor ($\beta$). A more in-depth analysis of the hyperparameter space and its influence on performance would improve the method's usability and adaptability. | - Some details of the proposed method are missing, as noted in the questions section below. |
NIPS_2017_369 | NIPS_2017 | * (Primary concern) Paper is too dense and is not very easy to follow; multiple reads were required to grasp the concepts and contribution. I would strongly recommend simplifying the description and explaining the architecture and computations better; Figure 7, Section 8 as well as lines 39-64 can be reduced to gain more space.
* While the MNIST and CIFAR experiments are promising, they are not close to the state-of-the-art methods. It is not obvious if such explicit dynamic routing is required to address the problem OR if recent advances such as residual units that have enabled significantly deeper networks can implicitly capture routing even with simple schemes such as max-pooling. It would be good if the authors can share their insights on this. | * (Primary concern) Paper is too dense and is not very easy to follow; multiple reads were required to grasp the concepts and contribution. I would strongly recommend simplifying the description and explaining the architecture and computations better; Figure 7, Section 8 as well as lines 39-64 can be reduced to gain more space.
NIPS_2021_1852 | NIPS_2021 | W1: The design of extending SGC (from Equation 1) to EIGNN (from Equation 3) is somewhat implicit and ad hoc, without clear justifications. The authors should explain this in more detail for a better understanding by general audiences that are not very familiar with implicit models.
W2: During the time complexity analysis, only the complexity of training is analyzed, but it seems like the computation of the eigendecomposition of S, the normalized adjacency matrix with self-loops (Line 176), is not added, which usually requires the cost of O(n^3). If this is true, a full eigendecomposition of a large sparse S could make EIGNN an impractical approach, prohibiting scalability to a large number of nodes n for huge real-world graphs.
W3: Several concerns upon experiments include: 1) The discussion on arbitrary hyperparameter γ is missing, including how to set it in practice for a given graph and analyzing on the sensitivity of this hyperparameter, otherwise it will be hard for the researchers to follow. 2) As the weakness on the analysis of complexity, why the author chooses not to evaluate the long-range dependency on the standard dataset Amazon Co-purchase as used in IGNN. Amazon Co-purchase dataset has another benefit that it can also reflect the scalability of proposed method since it is a large dataset with ~33k nodes, while the experiments on real-world dataset are all conducted on graphs that less than 10k. 3) For the evaluation on over-smoothing, it would be interesting to see how the EIGNN performs with respect to over-smoothing under standard setting on real-world datasets, especially in comparison with variants focusing on dealing with over-smoothing, such as the setting used in GCNII. 4) The evaluation on robustness is not very convincing since structural attack is known to be more powerful and appreciative when we attack on graph-structured data. Thus, the authors are suggested to defend their proposed model against several popular structural attack methods such as Nettack for better demonstration rather than attacks on features used in experiments. | 3) For the evaluation on over-smoothing, it would be interesting to see how the EIGNN performs with respect to over-smoothing under standard setting on real-world datasets, especially in comparison with variants focusing on dealing with over-smoothing, such as the setting used in GCNII.
NIPS_2022_2813 | NIPS_2022 | weakness (insight and contribution), my initial rating is borderline. Strengths:
+ The problem of adapting CLIP under few-shot setting is recent. Compared to the baseline method CoOp, the improvement of the proposed method is significant.
+ The ablation studies and analysis in Section 4.4 are well organized and clearly written. It is easy to follow the analysis and figure out the contribution of each component. Also, Figure 2 is well designed and clearly illustrates the pipeline.
+ The experimental analysis is comprehensive. The analysis on computation time and inference speed is also provided. Weakness:
- (major concern) The contribution is somewhat limited. The main contribution is applying optimal transport for few-shot adaptation of CLIP. After reading the paper, it is not clear enough to me why Optimal Transport is better than other distances. Especially, the insight behind the application of Optimal Transport is not clear. I would like to see more analysis and explanation on why Optimal Transport works well. Otherwise, it seems that this work is just an application work on a specific model and a specific task, which limits the contribution.
- The recent related work CoCoOp [1] is not compared in the experiments. Although it is a CVPR'22 work that is officially published after the NeurIPS deadline, as the extended version of CoOp, it is necessary to compare with CoCoOp in the experiments.
- In the approach method, there lacks a separate part or subsection to introduce the inference strategy, i.e., how to use the multiple prompts in the test stage.
- Table 2 mixed different ablation studies (number of prompts, visual feature map, constraint). It would be great if the table can be split into several tables according to the analyzed component.
- The visualization in Figure 4 is not clear. It is not easy to see the attention as it is transparent. References
[1] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In CVPR, 2022.
After reading the authors' response and the revised version, my concerns (especially the contribution of introducing the optimal transport distance for fine-tuning vision-language models) are well addressed and I am happy to increase my rating. | - In the approach method, there lacks a separate part or subsection to introduce the inference strategy, i.e., how to use the multiple prompts in the test stage. |
ICLR_2021_2330 | ICLR_2021 | Weakness
- The method on Fourier-domain supervision needs more analysis and intuition. It's unclear how the size of the grid used to perform the FFT is defined; from my understanding, the size is critical, as the local frequency will change with different grid sizes. Is it fixed throughout training? What is the effect of having different sizes?
- The generator has a recurrent structure that supports 10 frame generation, but the discriminator looks at three frames (from figure 1) at a time, which seems to limit the power of temporal consistency.
- In the Figure 7 result and the supplemental video result, SurfGAN produces smoother results (the MSE seems closer to the red ground truth in Figure 7). This seems to contradict the use of Fourier components for supervision -- what causes this discrepancy?
- Figure 4 is confusing. It's not clear what the columns mean -- it is not explained in the text or caption.
- Notation is confusing. M and N are used without definition. Suggestion
- Spell out F.L.T.R in figure 4
- Figure 1 text is too small to see
- It is recommended to have notation and figure cross-referenced (e.g. M and N are not shown in the figure) | - Figure 4 is confusing. It's not clear what the columns mean -- it is not explained in the text or caption. |
NIPS_2019_991 | NIPS_2019 | [Clarity] * What is the value of the c constant (MaxGapUCB algorithm) used in experiments? How was it determined? How does it impact the performance of MaxGapUCB? * The experiment results could be discussed more. For example, should we conclude from the Streetview experiment that MaxGapTop2UCB is better than the other ones? [Significance] * The real-world applications of this new problem setting are not clear. The authors mention applicability to sorting/ranking. It seems like this would require a recursive application of proposed algorithms to recover partial ordering. However, the procedure to find the upper bounds on gaps (Alg. 4) has complexity K^2, where K is the number of arms. How would that translate in computational complexity when solving a ranking problem? Minor details: * T_a(t) is used in Section 3.1, but only defined in Section 4. * The placement of Figure 2 is confusing. --------------------------------------------------------------------------- I have read the rebuttal. Though the theoretical contribution seems rather low given existing work on pure exploration, the authors have convinced me of the potential impacts of this work. | * The experiment results could be discussed more. For example, should we conclude from the Streetview experiment that MaxGapTop2UCB is better than the other ones? [Significance] * The real-world applications of this new problem setting are not clear. The authors mention applicability to sorting/ranking. It seems like this would require a recursive application of proposed algorithms to recover partial ordering. However, the procedure to find the upper bounds on gaps (Alg. 4) has complexity K^2, where K is the number of arms. How would that translate in computational complexity when solving a ranking problem? Minor details: |
ICLR_2023_3780 | ICLR_2023 | 1. The motivation is unclear. The authors consider that semantics used in both synthesizing visual features and learning embedding functions will introduce bias toward seen classes. However, some methods [1][2] using semantics seem to get better results. Please compare with them and give more explanations for the motivation of the proposed paper. Moreover, it would be more convincing to add comparative experiments that use semantics but do not decouple. 2. The comparison on the results of the CUB is unfair. Most of the methods, such as FREE and f-VAEGAN-D2, utilize 312-dimensional attributes as the auxiliary semantic information. You need to experiment with attributes rather than 1024-dimensional semantic descriptors if you want to compare with these methods. 3. Some recent methods like [2] and [3] are ignored to be compared. 4. I wonder why the results are so low using only ML in the ablation experiments. The results are even lower than some simple early methods like f-CLSWGAN [4] and f-VAEGAN-D2 [5]. More explanations can be given. 5. A minor problem. In section 3.4, the authors said that synthesized features including both seen and unseen classes are used to train the final classifier. However, it seems that only the synthesized unseen features are used.
[1] Generative Dual Adversarial Network for Generalized Zero-shot Learning. CVPR 2019. [2] Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification. ECCV 2020. [3] Contrastive Embedding for Generalized Zero-Shot Learning. CVPR 2021. [4] Feature Generating Networks for Zero-Shot Learning. CVPR 2018. [5] F-VAEGAN-D2: A feature generating framework for any-shot learning. CVPR 2019. | 4. I wonder why the results are so low using only ML in the ablation experiments. The results are even lower than some simple early methods like f-CLSWGAN [4] and f-VAEGAN-D2 [5]. More explanations can be given. |
NIPS_2018_265 | NIPS_2018 | in the paper. I will list them as follows. Major comments: =============== - Since face recognition/verification methods are already performing well, the great motivation for face frontalization is for applications in the wild and difficult conditions such as surveillance images where pose, resolution, lighting conditions, etc. vary wildly. To this effect, the paper lacks sufficient motivation for these applications. - The major drawback is that the method is a collection of many existing methods and as such it is hard to draw the major technical contribution of the paper. Although a list of contributions was provided at the end of the introduction, none of them are convincing enough to set this paper aside technically. Most of the techniques combined are standard existing methods and/or models. If the combination of these existing methods was carefully analyzed and there were convincing results, it could be a good application paper. But, read on below for the remaining issues I see to consider it as an application paper. - The loss functions are all standard L1 and L2 losses with the exception of the adversarial loss which is also a standard in training GANs. The rationale for the formulation of these losses is little or nonexistent. - The results for face verification/recognition were mostly less than 1% and even outperformed by as much as 7% on pose angles 75 and 90 (see Table 1). The exception dataset evaluated is IJB-A, in which the proposed model performed by as much as 2% and that is not surprising given IJB-A is collected in well constrained conditions unlike LFW. These points are not discussed really well. - The visual quality of the generated images also has a significant flaw in that it tends to produce more warped, bulged regions (see Fig. 3) than the normal side. Although the identity preservation is better than other methods, the distortions are significant. This lack of symmetry is interesting given the dense correspondence is estimated. - Moreover, the lack of ablation analysis (in the main paper) makes it very difficult to pinpoint from which component the small performance gain is coming from. - In conclusion, due to the collection of so many existing methods to constitute the proposed methods and its lack of convincing results, the computational implications do not seem warranted. Minor comments: =============== - Minor grammatical issues need to be checked here and there. - The use of the symbol \hat for the ground truth is technically not appealing. Ground truth variables are usually represented by normal symbols while estimated variables are represented with \hat. This needs to be corrected throughout for clarity. | - Moreover, the lack of ablation analysis (in the main paper) makes it very difficult to pinpoint from which component the small performance gain is coming from.
ICLR_2021_1906 | ICLR_2021 | & Questions:
I think the analysis is a bit problematic. Th. 2 shows that when the number of classes is large (>8), the noise rate of similarity labels is less than class labels. And the authors use Th. 3 to prove that if the noise rate of the transition matrix decreases, the model will have better generalization. However, as far as I understand, the supervision effect of the pairwise label differs a lot between positive and negative labels. In fact, negative pairwise supervision is not very meaningful as there are a lot of gradient directions that can minimize the loss. Thus I think evaluating the noise ratio of the whole pairwise similarity matrix is not very meaningful. And since the supervision effect of class-level and similarity-level labels is so different, it casts doubt on the whole theoretical analysis in my understanding.
The baselines on CIFAR seem too low compared with the SOTAs, e.g. [1], and the improvement of the proposed method is limited. The final results are not competitive either. For example, under the setting of 0.5 symmetric noise on CIFAR-10, the best result of the proposed method is 81.15, while [1] reports 84.78.
[1] Chen, Pengfei, et al. "Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels." International Conference on Machine Learning. 2019.
Post Rebuttal Modification
Regarding A1: I agree with R2 that the theory has major concerns and the authors were not able to fix it during rebuttal. I think we need to be clear that whether the method can work empirically and whether the provided theory can explain it are two problems. Now it seems to me that it is clear that the theory is wrong, and the problem is that the authors did not take into account the difference of the class-wise labels and pairwise labels. I suggest the authors to change the theory completely or remove the theory before next submission.
Regarding A2: I don't chase SOTAs, and I can certainly appreciate works that give nice theoretical insight but limited improvement. Now that the theory is wrong, I have to be critical about the experiments. Since the performance is much worse than SOTA, it is no longer clear whether the proposed algorithm works or it is just that the baselines are too weak.
I adjusted my rating from 5 to 1.
Regarding the authors' 2nd and 3rd responses
First, please allow me to clarify that my wording "the theory is wrong" means the theoretical justification on why the proposed algorithm can benefit from the transformation and achieve better performances is wrong, as the authors wrote “This theoretically justifies why the proposed method works well” in their submission. The major flaw/concern has been raised by R1(Q1) and myself(Q1), and the authors’ responses on these two questions are not convincing. This is the concern that I have been asking, so I assume it is not “vague”. I don’t see any potential way to fix this major concern in the current theoretical justification sketch, so I think this submission needs a major revision. I want to give my apology if my wording "the theory is wrong" leads to misunderstanding to the authors or other reviewers.
Second, I would like to see ACs or PCs to step in and let me know if I could rate the submission as 1 in this case. I have temporarily increased my rating from 1 to 3 as it has been questioned by the authors, especially the author who “have served as a reviewer 100+ times and as an area chair 10+ times for top conferences like NeurIPS/ICML/ICLR”.
What’s more, I would also like to request apologies from the authors. The wording “angry” is unpleasant and misleading. As the author asked, “what are you angry for?”, I’m not angry at all. I simply adjusted my post-rebuttal rating with my expertise after reading the authors’ responses and other reviewers’ comments.
Finally, I would like to remind the author who “have served as a reviewer 100+ times and as an area chair 10+ times for top conferences like NeurIPS/ICML/ICLR”, one of the main rules of academic writing is to avoid using second person. I hope this will be helpful. Best | 2 shows that when the number of classes is large (>8), the noise rate of similarity labels is less than class labels. And the authors use Th. |
ARR_2022_16_review | ARR_2022 | 1). Although the hypothesis is quite interesting, it is not well verified by the designed experiment. As pointed out in Section 3.1, models in conventional methods are trained on the original training set in addition to the generated adversarial examples. In contrast, the base model is trained on the adversarial set only. It is better to compare the model trained on the original dataset with that trained on the mixture so as to highlight the impact of the augmented adversarial examples. Since this experiment serves as the motivation throughout this work, it is critical to make it more convincing. 2). The statement (title or scope) needs more extensive experiments to support. The overall goal of this work is to improve the robustness of pre-trained language models, while RoBERT and DeBERTa are only examined on SST-2, and large versions are not touched. Evaluation on only SST-2 and IMDb ( two classification tasks) seems weak. 3). It is encouraged to introduce the motivation by a lightweight experiment, but the Introduction is not the appropriate place for it. It is said "achieved with ADA, FADA, and the original training set" in the caption of Figure 1, but I can only see two pairs of learning curves. FADA should be defined formally in Section 3.1. The experiments conducted in line 207-216 is unclear without proper reference to either tables or figures. Is it the same as the one in the Introduction? I also try to find the definition of evaluation metrics in Section 4, but finally, they are in the caption of Table 2. I believe huge efforts have been made in this work, but the paper structure needs reorganization to underline the motivation and then highlight the contributions with detailed settings.
Also see the comments above.
+ An appendix is recommended to include more details regarding the experimental setup (e.g., hyper-parameter exploration) and case studies (e.g., “friendly” examples).
+ Please avoid using words like "friendly" and FGM in the Abstract without further definition.
+ Line 021 – less steps + Line 228 – Take … | 1). Although the hypothesis is quite interesting, it is not well verified by the designed experiment. As pointed out in Section 3.1, models in conventional methods are trained on the original training set in addition to the generated adversarial examples. In contrast, the base model is trained on the adversarial set only. It is better to compare the model trained on the original dataset with that trained on the mixture so as to highlight the impact of the augmented adversarial examples. Since this experiment serves as the motivation throughout this work, it is critical to make it more convincing. |
NIPS_2020_810 | NIPS_2020 | - The CNN experiments are not fully convincing (see below). - Some related work is not properly addressed (see below). | - The CNN experiments are not fully convincing (see below). |
ACL_2017_150_review | ACL_2017 | I have some doubts about the interpretation of the results. In addition, I think that some of the claims regarding the capability of the proposed method to learn morphology are not properly backed by scientific evidence.
- General Discussion: This paper explores a complex architecture for character-level neural machine translation (NMT). The proposed architecture extends a classical encoder-decoder architecture by adding a new deep word-encoding layer capable of encoding the character-level input into sub-word representations of the source-language sentence. In the same way, a deep word-decoding layer is added to the output to transform the target-language sub-word representations into a character sequence as the final output of the NMT system. The objective of such architecture is to take advantage of the benefits of character-level NMT (reduction of the size of the vocabulary and flexibility to deal with unseen words) and, at the same time, improving the performance of the whole system by using an intermediate representation of sub-words to reduce the size of the input sequence of characters. In addition, the authors claim that their deep word-encoding model is able to learn morphology better than other state-of-the-art approaches.
I have some concerns regarding the evaluation. The authors compare their approach to other state-of-the-art systems taking into account two parameters: training time and BLEU score. However, I do not clearly see the advantage of the model proposed (DCNMT) in front of other approaches such as bpe2char. The difference between both approaches as regards BLEU score is very small (0.04 in Cs-En and 0.1 in En-Cs) and it is hard to say if one of them is outperforming the other one without statistical significance information: has statistical significance been evaluated? As regards the training time, it is worth mentioning that the bpe2char for Cs-En takes 8 days less than DCNMT. For En-Cs training time is not provided (why not?) and for En-Fr bpe2char is not evaluated. I think that a more complete comparison with this system should be carried out to prove the advantages of the model proposed.
My second concern is about Section 5.2, where the authors claim that they investigated the ability of their system to learn morphology. However, the section only contains a few examples and some comments on them. Even though these examples are very well chosen and explained in a very didactic way, it is worth noting that no experiments or formal evaluation seem to have been carried out to support the claims of the authors. I would definitely encourage the authors to extend this very interesting part of the paper, which could even become a different paper itself. On the other hand, this section does not seem to be a critical point of the paper, so for the current work I would suggest moving it to an appendix and softening some of the claims made regarding the capabilities of the system to learn morphology.
Other comments, doubts and suggestions: - There are many acronyms that are used but not defined (such as LSTM, HGRU, CNN or PCA) or that are defined only after they are first used (such as RNN or BPE). Even though some of these acronyms are well known in the field of deep learning, I would encourage the authors to define them to improve clarity.
- The concept of energy is mentioned for the first time in Section 3.1. Even though the explanation provided is enough at that point, it would be nice to refresh the idea of energy in Section 5.2 (where it is used several times) and to provide some hints about how to interpret it: does a high energy on a character indicate that the current morpheme should be split at that point? In addition, the concept of a peak (in Figure 5) is not described.
- When the acronym BPE is defined, capital letters are used, but then, for the rest of mentions it is lower cased; is there a reason for this?
- I am not sure if it is necessary to say that no monolingual corpus is used in Section 4.1.
- It seems that there is something wrong with Figure 4a since the colours for the energy values are not shown for every character.
- In Table 1, the results for model (3) (Chung et al. 2016) for Cs-En were not taken from the papers, since they are not reported. If the authors computed these results by themselves (as it seems) they should mention it.
- I would not say that French is morphologically poor, but rather that it is not that rich as Slavic languages such as Czech.
- Why a link is provided for WMT'15 training corpora but not for WMT'14?
- Several references are incomplete Typos: - "..is the bilingual, parallel corpora provided..." -> "..are the bilingual, parallel corpora provided..." - "Luong and Manning (2016) uses" -> "Luong and Manning (2016) use" - "HGRU (It is" -> "HGRU (it is" - "coveres" -> "covers" - "both consists of two-layer RNN, each has 1024" -> "both consist of two-layer RNN, each have 1024" - "the only difference between CNMT and DCNMT is CNMT" -> "the only difference between CNMT and DCNMT is that CNMT" | - In Table 1, the results for model (3) (Chung et al. 2016) for Cs-En were not taken from the papers, since they are not reported. If the authors computed these results by themselves (as it seems) they should mention it. |
vexCLJO7vo | EMNLP_2023 | 1. This paper aims to evaluate the performance of current LLMs on different temporal factors and selects three types of factors: scope, order, and counterfactual. What is the rationale behind selecting these three types of factors, and how do they relate to each other?
2. More emphasis should be placed on prompt design. This paper introduces several prompting methods to address issues in MenatQA. Since different prompts may result in varying performance outcomes, it is essential to discuss how to design prompts effectively.
3. The analysis of experimental results is insufficient. For instance, the authors only mention that the scope prompting method shows poor performance on GPT-3.5-turbo, but they do not provide any analysis of the underlying reasons behind this outcome. | 2. More emphasis should be placed on prompt design. This paper introduces several prompting methods to address issues in MenatQA. Since different prompts may result in varying performance outcomes, it is essential to discuss how to design prompts effectively. |
KEH6Cqjdw2 | EMNLP_2023 | - How do we extend the approaches to other countries' legal documents?
- Data collection and annotation are not clear
- The Enforceable Annotation might have ethical issues. What will be the reward for the 10 law experts? Why did they volunteer? Does it count toward their study (credit), or will they co-author the paper? This is a serious issue, which might lead to low quality of the data.
- The concept of "editing samples" is not clear
- Majority voting from 11 definitions is not clear?
- You could compare your result with SoTA approaches, for example with HateXplain models. | - You could compare your result with SoTA approaches, for example with HateXplain models. |
FVhmnvqnsI | ICLR_2024 | 1. It's not clear what the purpose of baseline B is. It looks like the results are only compared to baselines A and C.
2. It's not clear why freezing is used in MLS selection. If the adaptive approach is good, why not just use the adaptive method to choose the subset?
3. Will the additional loss bring extra computational cost? | 2. It's not clear why the freezing is used in MLS selection. If adaptive is good, why not just use adaptive method to choose the subset? |
OGdl9d3BEC | EMNLP_2023 | 1. The authors highlight that they have not implemented the quantisation methods on GPU systems to demonstrate real speedups due to a lack of CUDA kernel implementation.
2. The paper also mentions that the search algorithm does not include arithmetic density due to a lack of hardware models.
3. Although the authors have mentioned the limitations in the paper, they should provide a more detailed plan on how they plan to address these drawbacks in their future work.
4. The evaluation is limited to language modeling and downstream NLP tasks. Testing the proposed quantisation methods on other modalities like computer vision could benefit the paper.
5. The authors should provide detailed ablation studies to isolate the impact of block-based quantisation. | 3. Although the authors have mentioned the limitations in the paper, they should provide a more detailed plan on how they plan to address these drawbacks in their future work. |
NIPS_2022_765 | NIPS_2022 | While the authors show improved numbers on benchmark datasets, it would be nice to also show and discuss how the proposed knowledge-CLIP model is qualitatively improving over the baseline CLIP. For example, in Intro and Figure 1, the authors motivates this paper by arguing that the baseline CLIP only captures text-image co-occurrence and fails to adjust for negation in text, etc. - is this issue solved in the proposed knowledge-CLIP model? Some existing work that combines text and KG (e.g. https://arxiv.org/abs/2104.06378) has done closely-related analyses such as adding negation or changing entities in text to see if the KG-augmented method can robustly handle them. It would be very interesting if the authors perform such analysis on the proposed knowledge-CLIP model that combines image, text and KGs. | - is this issue solved in the proposed knowledge-CLIP model? Some existing work that combines text and KG (e.g. https://arxiv.org/abs/2104.06378) has done closely-related analyses such as adding negation or changing entities in text to see if the KG-augmented method can robustly handle them. It would be very interesting if the authors perform such analysis on the proposed knowledge-CLIP model that combines image, text and KGs. |
ICLR_2023_2217 | ICLR_2023 | The main idea is to propose a new method to rectify the classical prototype network, similar to the previous work 'Prototype Rectification for Few-Shot Learning', i.e. BD-CSPN (Liu et al. (2020)). However, the authors do not provide sufficient analysis of the differences, so it is hard for readers to understand the advantages over BD-CSPN.
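For readers unfamiliar with the baseline being rectified, here is a minimal sketch of the classical prototype-network step; it is a generic illustration only, not the paper's method and not BD-CSPN's rectification rule:

```python
import numpy as np

def prototypes(support_feats, support_labels, num_classes):
    """Class prototype = mean of that class's support features."""
    return np.stack([support_feats[support_labels == c].mean(axis=0)
                     for c in range(num_classes)])

def classify(query_feats, protos):
    """Assign each query to its nearest prototype (Euclidean distance)."""
    d = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)                       # toy 5-way, 5-shot episode, 16-d features
sup_x, sup_y = rng.normal(size=(25, 16)), np.repeat(np.arange(5), 5)
pred = classify(rng.normal(size=(10, 16)), prototypes(sup_x, sup_y, num_classes=5))
```

Rectification methods such as BD-CSPN then shift these prototypes using pseudo-labeled query features; the reviewer's question is essentially how the proposed fusion differs from, and improves on, that kind of shift.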
The experimental results are not complete. For example, BD-CSPN provided the results on tieredImageNet. However, the authors do not make the comparison in the experimental section. This leads to a question: does the proposed method truly outperform the BD-CSPN based on the rectification prototype?
In Fig. 2, the authors should add the results of FP_AINet to better highlight the benefits of the proposed method.
In Alg. 1, the adaptive induction network outputs c_i, but there is no c_i in Fig. 2.
In Eq. 3, it is confusing to use p_m in the numerator but use p_c in the denominator. What is the reason?
In Alg. 2, only the mean μ_f is used for the fusion prototype. Have the authors considered adding the variance for further improvement? By the way, it is better to use μ_g to replace μ_f, which is consistent with Eq. 2. | 2. In Eq. 3, it is confusing to use p m in the numerator but use p c in the denominator. What is the reason? In Alg. 2, only the mean μ f is used for the fusion prototype. Have the authors considered adding the variance for further improvement? By the way, it is better to use μ g to replace μ f , which is consistent with Eq.
NIPS_2019_465 | NIPS_2019 | - Demonstrating that an agent trained with a human model performs better than an agent assuming an optimal human is not necessarily a new idea and is quite well-studied in HRI and human-AI collaboration. While the work considers the idea from the perspective of techniques, such as self-play and population-based training, the authors need to justify how this is significantly different from prior work. - The idea and execution is simple. The model of the human is basic, which is fine if the idea itself is very novel, but there are many works on incorporating human models into AI learning systems. Originality: While the work is set in the context of more recent algorithms, the idea of modelling humans and not assuming humans are optimal in training is not a new concept. There are several works in a similar area, so it would be important to differentiate the work with many prior works. - Koppula, Hema S., Ashesh Jain, and Ashutosh Saxena. "Anticipatory planning for human-robot teams." Experimental Robotics. Springer, Cham, 2016. - Nikolaidis, Stefanos, et al. "Efficient model learning from joint-action demonstrations for human-robot collaborative tasks." Proceedings of the tenth annual ACM/IEEE international conference on human-robot interaction. ACM, 2015. - Freedman, Richard G., and Shlomo Zilberstein. "Integration of planning with recognition for responsive interaction using classical planners." Thirty-First AAAI Conference on Artificial Intelligence. 2017. Quality: The paper had overall high quality. The authors paid attention to details about the approach and included them in the text, which helped to understand the full procedure. It was unclear what the imitation learning condition was. Is that an agent that acts exactly as if it were a human based on the trained human BC model? If so, it seems like an inappropriate baseline since the premise of the work is that an agent is collaborating with a human rather than acting like the human acts. Clarity: The paper was written clearly. In terms of terminology: It seemed like BC and H_proxy were both trained using behavior cloning, which made the names a bit of a misnomer. In the Figure 3 caption, the hyphens made the explanation confusing. There were a few typos, included below, but overall, the approach and results were explained well. - Pg 6: taking the huaman into account â taking the human into account - Pg 6: but this times â but this time - Pg 7: simulat failure modes â similar failure modes Significance: Modelling humans when training AI systems is an important topic for the community, as many of our trained models will have to work with people while current algorithms do not always handle this. So, the general idea is definitely significant. The main concern is the originality of the work compared to prior work on modelling humans in collaborative tasks for better team performance. Other comments: - What does the planning baseline add to the story? - Was the data collection for the 5 layouts randomized? It sounds like the data was always collected in the same order, which means there may be learning effects across the different layouts. - How did you pick 400 timesteps? ----------------------- I have read the author response, and the authors make good points about how the work's contributions still provide value to the HRI and related communities. 
Specifically, the authors discuss the importance of considering humans in more recent deep learning frameworks and how this provides new value compared to prior works that focus on modelling humans in planning-based frameworks, which is reasonable. I additionally appreciate the experiment that the authors conduct in order to compare their method to a noisy optimality condition used in prior work. | - Koppula, Hema S., Ashesh Jain, and Ashutosh Saxena. "Anticipatory planning for human-robot teams." Experimental Robotics. Springer, Cham, 2016. |
BkR4QG4azn | ICLR_2025 | - **Computational cost**: While the paper mentions the additional cost didn't lead to "significant delays in computation", it is not clear why. I believe the paper deserves a more comprehensive discussion about the computational complexity of the proposal. Also, I wonder if the proposed approach becomes prohibitive in some settings.
- **Experiments**: The theoretical analysis does not seem to support the claimed gains on real-world datasets. What are the implications of correctness to top-k diversity/reward? Also, although the paper cites ZINC250K in the Introduction, the experiments only include the QM9 dataset.
- **Technical novelty**: The theoretical contributions of the paper are straightforward. I wonder if the GFlowNet community already knows about the equivalent action problem.
- **Notation**: I found the notation overloaded, which may confuse readers unfamiliar with GFlowNets. For instance, the paper uses the same $P_F$ to refer to the graph-level, state-level policies, and the marginal distribution over terminal states (i.e., $P_F(x)$).
- **Limitations**: The paper does not discuss limitations. | - **Computational cost**: While the paper mentions the additional cost didn't lead to "significant delays in computation", it is not clear why. I believe the paper deserves a more comprehensive discussion about the computational complexity of the proposal. Also, I wonder if the proposed approach becomes prohibitive in some settings. |
NIPS_2018_612 | NIPS_2018 | weakness is not including baselines that address the overfitting in boosting with heuristics. Ordered boosting is non-trivial, and it would be good to know how far simpler (heuristic) fixes go towards mitigating the problem. Overall, I think this paper will spur new research. As I read it, I easily came up with variations and alternatives that I wanted to see tried and compared. DETAILED COMMENTS The paper is already full of content, so the ideas for additional comparisons are really suggestions to consider. * For both model estimations, why start at example 1? Why not start at an example that is 1% of the way into the training data, to help reduce the risk of high variance estimates for early examples? * The best alternative I've seen for fixing TS leakage, while reusing the data sample, uses tools from differential privacy [1, 2]. How does this compare to Ordered TS? * Does importance-sampled voting [3] have the same target leakage problem as gradient boosting? This algorithm has a similar property of only using part of the sequence of examples for a given model. (I was very impressed by this algorithm when I used it; beat random forests hands down for our situation.) * How does ordered boosting compare to the subsampling trick mentioned in l. 150? * Yes, fixes that involve bagging (e.g., BagBoo [4]) add computational time, but so does having multiple permuted sequences. Seems worth a (future?) comparison. * Why not consider multiple permutations, and for each, split into required data subsets to avoid or mitigate leakage? Seems like it would have the same computational cost as ordered boosting. * Recommend checking out the Wilcoxon signed rank test for testing if two algorithms are significantly different over a range of data sets. See [6]. * l. 61: "A categorical feature..." * l. 73: "for each categorical *value*" ? * l. 97: For clarity, consider explaining a bit more how novel values in the test set are handled. * The approach here reminds me a bit of Dawid's prequential analysis, e.g., [5]. Could be worth checking those old papers to see if there is a useful connection. * l. 129: "we reveal" => "we describe" ? * l. 131: "called ordered boosting" * l. 135-137: The "shift" terminology seems less understandable than talking about biased estimates. * l. 174: "remind" => "recall" ? * l. 203-204: "using one tree structure"; do you mean shared \sigma? * Algorithm 1: only one random permutation? * l. 237: Don't really understand what is meant by right hand side of equality. What is 2^j subscript denoting? * l. 257: "tunning" => "tuning" * l. 268: ", what is expected." This reads awkwardly. * l. 311: This reference is incomplete. REFERENCES [1] https://www.slideshare.net/SessionsEvents/misha-bilenko-principal-researcher-microsoft [2] https://www.youtube.com/watch?v=7sZeTxIrnxs [3] Breiman (1999). Pasting small votes for classification in large databases and on-line. Machine Learning 36(1):85--103. [4] Pavlov et al. (2010). BagBoo: A scalable hybrid bagging-the-boosting model. In CIKM. [5] Dawid (1984). Present position and potential developments: Some personal views: Statistical Theory: The Prequential Approach. Journal of the Royal Stastical Society, Series A, 147(2). [6] Demsar (2006). Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7:1--30. | * l.97: For clarity, consider explaining a bit more how novel values in the test set are handled. |
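A minimal sketch contrasting a greedy target statistic with an ordered one computed only from preceding examples in a random permutation, the leakage issue this review keeps returning to. This is a simplified illustration with a smoothing prior, not CatBoost's actual implementation:

```python
import numpy as np

def greedy_ts(cats, y, prior=0.5, strength=1.0):
    """Leaky: every row's statistic uses the whole dataset, including its own label."""
    out = np.empty(len(y), dtype=float)
    for c in np.unique(cats):
        m = cats == c
        out[m] = (y[m].sum() + strength * prior) / (m.sum() + strength)
    return out

def ordered_ts(cats, y, prior=0.5, strength=1.0, seed=0):
    """Ordered: each row only sees the labels of earlier rows in a permutation."""
    out = np.empty(len(y), dtype=float)
    sums, counts = {}, {}
    for i in np.random.default_rng(seed).permutation(len(y)):
        c = cats[i]
        out[i] = (sums.get(c, 0.0) + strength * prior) / (counts.get(c, 0) + strength)
        sums[c] = sums.get(c, 0.0) + y[i]
        counts[c] = counts.get(c, 0) + 1
    return out

cats, y = np.array(list("aabbbc")), np.array([1, 0, 1, 1, 0, 1])
print(greedy_ts(cats, y))
print(ordered_ts(cats, y))
```

The simpler heuristic fixes the review asks about (holdout statistics, added noise, or differentially private estimates) drop into the same place as these two encoders, which is what would make the suggested comparison cheap to run.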
ICLR_2023_2237 | ICLR_2023 | 1. Similar methods have already been proposed for multi-task learning and have not been discussed in this paper [1].
1. When sampling on the convex hull parameterization, the authors choose to adopt the Dirichlet distribution since its support is the T-dimensional simplex. Does this distribution have other properties? Why use this distribution? If p ≫ 1, how will the ensemble change? (A small sampling sketch follows after the reference below.)
2. When training, a monotonic relationship is imposed between the degree of a single-task predictor's participation and the weight of the corresponding task loss. As a result, the ensemble engenders a subspace that explicitly encodes tradeoffs and results in a continuous parameterization of the Pareto Front. Can the monotonic relationship be replaced by other relationships? Explaining this point would be better.
[1] Navon A, Shamsian A, Fetaya E, et al. Learning the Pareto Front with Hypernetworks. International Conference on Learning Representations, 2020. | 1.Similar methods have already been proposed for multi-task learning and has not been disccussed in this paper [1].
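A minimal sketch of the sampling step questioned in point 1: drawing weights on the T-dimensional simplex from a symmetric Dirichlet and mixing single-task predictors. The predictors and the concentration values here are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 3                                       # number of single-task predictors
preds = rng.normal(size=(T, 8))             # stand-in outputs of the T predictors on 8 inputs

def mixed_prediction(p):
    """Weights ~ Dirichlet(p,...,p) lie on the simplex, so the mixture is convex."""
    w = rng.dirichlet(np.full(T, p))        # w >= 0 and w.sum() == 1 by construction
    return w, w @ preds

for p in (0.2, 1.0, 20.0):                  # larger p concentrates mass near uniform weights
    w, _ = mixed_prediction(p)
    print(p, np.round(w, 3))
```

With p ≫ 1 the sampled weights cluster around the uniform point of the simplex, so training would mostly see near-equal trade-offs; this is presumably what the question about large p is probing.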
be0sdRYSlH | ICLR_2025 | - It is thought to be a rather peripheral study, but the approach is novel.
- It is expected that the amount of computation of FedMITR is higher than other methods. Have you compared this?
- The results of the IID case need to be shared. It is necessary to share results of experiments using higher values of the Dirichlet distribution parameter: with $\alpha=0.5$ the data is still too heterogeneous to represent the IID case (a short partition sketch follows after this list).
- It is necessary to use a larger dataset for experiments. However, this is a chronic problem in federated learning studies, and it is not just a weakness of FedMITR. | - It is expected that the amount of computation of FedMITR is higher than other methods. Have you compared this? |
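A minimal sketch of the common Dirichlet label-partition recipe the IID point above refers to, showing how the concentration parameter controls client heterogeneity; this is the generic benchmark convention, not the paper's code:

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Per class, split its sample indices across clients with shares ~ Dirichlet(alpha)."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        shares = rng.dirichlet(np.full(num_clients, alpha))
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients

labels = np.repeat(np.arange(10), 100)               # balanced 10-class toy dataset
for alpha in (0.5, 100.0):                           # larger alpha -> closer to IID
    parts = dirichlet_partition(labels, num_clients=5, alpha=alpha)
    print(alpha, [len(p) for p in parts])
```

At alpha = 0.5 each client is dominated by a few classes, while alpha on the order of 100 gives nearly identical class mixtures across clients, which is why results at alpha = 0.5 alone do not cover the IID case.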
ICLR_2023_341 | ICLR_2023 | weakness :
The proposed method may not be entirely novel. People have been adding symbolic reasoning to neural models for a while, and the finding has always been: "If we can successfully 'hack' the underlying DSL that represents the set of tasks, adding symbolic reasoning will perform well". For instance, these works tend to follow the steps of: 1) identify a set of tasks that can be easily represented with symbolic execution, and 2) devote significant engineering effort to constructing the DSL and a symbolic interpreter to help the neural/LLM model make better inferences/plans.
This work would be a significant contribution if it could show that steps 1) and 2) can be avoided by using a generic external knowledge base (as shown in Figure 3). However, the writing is too confusing for me to be sure whether that is the case. | 1) and2) can be avoided by using a generic external knowledge base (as shown in figure 3). however the writing is too confusing I cannot be sure if that is the case or not.
NIPS_2018_600 | NIPS_2018 | weakness of the non-local (NL) module [31] that the correlations across channels are less taken into account, and then formulate the compact generalized non-local (CGNL) module to remedy the issue through summarizing the previous methods of NL and bilinear pooling [14] in a unified manner. The CGNL is evaluated on thorough experiments for action and fine-grained classification tasks, exhibiting promising performance competitive to the state-of-the-arts. Positives: + The paper is well organized and easy to follow. + The generalized formulation (8,9) to unify bilinear pooling and non-local module is theoretically sound. + Good performance. Negatives: - Less discussion on the linear version of CGNL using dot product for f. - Missing fundamental comparison to the simple ResBlock. The authors nicely present the generalized formulation toward CGNL by unifying the two previous works of bilinear pooling and non-local module. Though the kernelized (non-linear) correlation function f is well theoretically motivated, the actual form of f that achieves the better empirical performance is a âlinearâ form (dot product). In this regard, the reviewer has the following concerns. - Less discussion about the linear form. If the reviewer correctly understands the CGNL formulation, the linear function f of dot product f (line 204) can greatly simplify the CGNL into Y = X * W_theta * tr[(X*W_phi)â * (X*W_g)] = X * W_theta * tr[(XâX)* W_g* W_phiâ] = s * X * W_theta, where s = tr[(XâX) * W_g * W_phiâ]= tr[(XâX)* W] is just a scalar and W = W_g*W_phiâ. This reformulation would be beneficial from the following viewpoints. > It reduces the parameters from {W_theta, W_phi, W_g} to {W_theta, W}, which facilitates the implementation. > It is closely related to squeeze-and-excitation (SE) module [9]. The above formulation can be regarded as a bilinear extension of SE from âsqueezeâ viewpoint since it âsqueezesâ the feature map X into the bilinear form of XâX while SE simply employs an average-pooling. Such discussions as above would help the readers to further understand the methods and to further extend the method. - Missing comparison. Based on the above discussion, one can think that the baseline for the linear CGNL is a simple ResBlock of Z = BatchNorm( X * W_z ) + X, while the linear CGNL is Z = BatchNorm( s * X * W_theta * W_z ) + X = BatchNorm( s * X * W_tz ) + X. The only difference is the scaling factor s that is also build on X. Through batch normalization, such a scaling might be less effective (during the training) and thus by comparing these closely-related methods, the authors have to clarify its effectiveness of CGNL empirically. Due to this concern, the reviewer can not fairly evaluate the impact of the method on classification performance. [After Rebuttal] The reviewer appreciates the authorsâ efforts to perform the comparison experiments in such a short rebuttal period. The comparison with the standard ResBlock clarifies the effectiveness of the proposed method as well as helps us to further understand how it works. | + Good performance. Negatives:- Less discussion on the linear version of CGNL using dot product for f. |
NIPS_2019_772 | NIPS_2019 | of this approach (e.g., it does not take into account language compositionally). I appreciate that the authors used different methods to extract influential objects: Human attention (in line with previous works), text explanation (to rely on another modality), and question parsing (to remove the need of extra annotation). As a complementary analysis, I would have compared object sets (Jaccard Distance) which are extracted with visual cues and text description. Indeed, the VQA-X dataset contains both information for each question/answer pairs. The method is more or less correctly explained. The training details seems complete and allow for reproducibility. The authors do not provide code source although they mentioned it in the reproducibility checklist. The empirical results are quite convincing and the necessary baselines and ablation studies are correctly provided. The formatting is simple and clear! It would have been perfect to provide the error bar as the number of experiments remains low (and over a small number of epochs) The cherry on the cake would be to run similar experiments on VQAv1 / VQA-CP1? To increase the impact of the paper, I would recommend extending the setting to either dense image captioning, or question answering (if possible). I feel that the discussion section raise some excellent points: - I really like table 4, that clearly show that the method perform as expected (I would have add HINT for being exhaustive) - the ablation study is convincing But, a lot of open-questions are still left open and could have been discussed. For instance, I would have appreciated a more in-depth analysis of model errors. What about the model complexity? Why only reweighting L_{crit}. How does evolve L_crit and L_infl at training time? On a more general note, I think the overall writing and paper architecture can be greatly improved. For instance, - the introduction and related work can be partially merged and summarized. - 4.2 starts by providing high-level intuition while 4.1 does not. - Training details incorporate some result discussion Generic questions (sorted by impact): - What is the impact of |I|, do you have the performance ration according to the number of top |I| influential objects - Eq1 is a modified version of GardCAM, however, the modifications are not highlighted (neither explained). For instance, why did the authors remove the ReLU - Even if the weight sensitivity in equation 5 is well motivated, it is not supported by previous works. Thus, did you perform an ablation study? It would be very have been nice in the discussion section. - What is the actual computation cost of the two losses? What is the relative additional time required? +5%, +20%, +200%? - As you used heuristics to retrieve influential objects, did you try to estimate the impact of false negatives in the loss. - How did you pick 0.6 for glove embedding similarity? Did you perform k-cross-validation? What is the potential impact - Have you tried other influential loss (Eq3)? For instance, replacing the min with a mean or NDCG? Remarks: - I would use a different notation for SV(.,.,.) as it is not symmetric. For instance SV_{a}(v_i || v_j) would avoid confusion (I am using KL notation here) - Non-formal expression should be avoided: Ex: "l32 What's worse" - The references section is full of format inconsistencies. Besides, some papers are published with proceeding but are referred to arxiv papers. - 3.1 introduces non-important notation, e.g., function h(.) or f(.) 
that are never used in the paper. - Several subsections could be gathered together, or define as a paragraph: 2.1/2.2/2.3 ; 5.1/5.2/5.3, etc. It would have save space for more experiments Conclusion: The paper introduces two losses to better tie influential objects and potential answers. The method is convincing, and the experimental results are diverse and good. However, I still think that the paper requires further polishing to improve the readability. I would also advocate providing more element to support the proposed method and to analyze the strengths and weaknesses. Although the current experiences are quite convincing, I would advocate adding more analysis to definitely conclude the efficiency of the method. ---------------------------------- The rebuttal was clearly written and insightful. it answered most of my questions, and the authors demonstrate their ability to update the paper accordingly. Therefore, I am happy to increase my score, and accept the paper | - How did you pick 0.6 for glove embedding similarity? Did you perform k-cross-validation? What is the potential impact - Have you tried other influential loss (Eq3)? For instance, replacing the min with a mean or NDCG? Remarks: |
NIPS_2019_651 | NIPS_2019 | (large relative error compared to AA on full dataset) are reported. - Clarity: The submission is well written and easy to follow, the concept of coresets is well motivated and explained. While some more implementation details could be provided (source code is intended to be provided with camera-ready version), a re-implementation of the method appears feasible. - Significance: The submission provides a method to perform (approximate) AA on large datasets by making use of coresets and therefore might be potentially useful for a variety of applications. Detailed remarks/questions: 1. Algorithm 2 provides the coreset C and the query Q consists of the archetypes z_1, â¦, z_k which are initialised with the FurthestSum procedure. However, it is not quite clear to me how the archetype positions are updated after initialisation. Could the authors please comment on that? 2. The presented theorems provide guarantees for the objective functions phi on data X and coreset C for a query Q. Table 1 reporting the relative errors suggests that there might be a substantial deviation between coreset and full dataset archetypes. However, the interpretation of archetypes in a particular application is when AA proves particularly useful (as for example in [1] or [2]). Is the archetypal interpretation of identifying (more or less) stable prototypes whose convex combinations describe the data still applicable? 3. Practically, the number of archetypes k is of interest. In the presented framework, is there a way to perform model selection in order to identify an appropriate k? 4. The work in [3] might be worth to mention as a related approach. There, the edacious nature of AA is approached by learning latent representation of the dataset as a convex combination of (learnt) archetypes and can be viewed as a non-linear AA approach. [1] Shoval et al., Evolutionary Trade-Offs, Pareto Optimality, and the Geometry of Phenotype Space, Science 2012. [2] Hart et al., Inferring biological tasks using Pareto analysis of high-dimensional data, Nature Methods 2015. [3] Keller et al., Deep Archetypal Analysis, arxiv preprint 2019. ---------------------------------------------------------------------------------------------------------------------- I appreciate the authorsâ response and the additional experimental results. I consider the plot of the coreset archetypes on a toy experiment insightful and it might be a relevant addition to the appendix. In my opinion, the submission constitutes a relevant contribution to archetypal analysis which makes it more feasible in real-world applications and provides some theoretical guarantees. Therefore, I raise my assessment to accept. | - Significance: The submission provides a method to perform (approximate) AA on large datasets by making use of coresets and therefore might be potentially useful for a variety of applications. Detailed remarks/questions: |
NIPS_2019_1089 | NIPS_2019 | - The paper can be seen as incremental improvements on previous work that has used simple tensor products to representation multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - The paper performs good empirical analysis. They have been thorough in comparing with some of the existing state-of-the-art models for multimodal fusion including those from 2018 and 2019. Their model shows consistent improvements across 2 multimodal datasets. - The authors provide a nice study of the effect of polynomial tensor order on prediction performance and show that accuracy increases up to a point. Weaknesses: - There are a few baselines that could also be worth comparing to such as âStrong and Simple Baselines for Multimodal Utterance Embeddings, NAACL 2019â - Since the model has connections to convolutional arithmetic units then ConvACs can also be a baseline for comparison. Given that you mention that âresulting in a correspondence of our HPFN to an even deeper ConACâ, it would be interesting to see a comparison table of depth with respect to performance. What depth is needed to learning âflexible and higher-order local and global intercorrelationsâ? - With respect to Figure 5, why do you think accuracy starts to drop after a certain order of around 4-5? Is it due to overfitting? - Do you think it is possible to dynamically determine the optimal order for fusion? It seems that the order corresponding to the best performance is different for different datasets and metrics, without a clear pattern or explanation. - The model does seem to perform well but there seem to be much more parameters in the model especially as the model consists of more layers. Could you comment on these tradeoffs including time and space complexity? - What are the impacts on the model when multimodal data is imperfect, such as when certain modalities are missing? Since the model builds higher-order interactions, does missing data at the input level lead to compounding effects that further affect the polynomial tensors being constructed, or is the model able to leverage additional modalities to help infer the missing ones? - How can the model be modified to remain useful when there are noisy or missing modalities? - Some more qualitative evaluation would be nice. Where does the improvement in performance come from? What exactly does the model pick up on? Are informative features compounded and highlighted across modalities? Are features being emphasized within a modality (i.e. better unimodal representations), or are better features being learned across modalities? ****************************Clarity**************************** Strengths: - The paper is well written with very informative Figures, especially Figures 1 and 2. - The paper gives a good introduction to tensors for those who are unfamiliar with the literature. Weaknesses: - The concept of local interactions is not as clear as the rest of the paper. Is it local in that it refers to the interactions within a time window, or is it local in that it is within the same modality? - It is unclear whether the improved results in Table 1 with respect to existing methods is due to higher-order interactions or due to more parameters. A column indicating the number of parameters for each model would be useful. 
- More experimental details such as neural networks and hyperparameters used should be included in the appendix. - Results should be averaged over multiple runs to determine statistical significance. - There are a few typos and stylistic issues: 1. line 2: "Despite of being compactâ -> âDespite being compactâ 2. line 56: âWe refer multiway arraysâ -> âWe refer to multiway arraysâ 3. line 158: âHPFN to a even deeper ConACâ -> âHPFN to an even deeper ConACâ 4. line 265: "Effect of the modelling mixed temporal-modality features." -> I'm not sure what this means, it's not grammatically correct. 5. equations (4) and (5) should use \left( and \right) for parenthesis. 6. and so on⦠****************************Significance**************************** Strengths: - This paper will likely be a nice addition to the current models we have for processing multimodal data, especially since the results are quite promising. Weaknesses: - Not really a weakness, but there is a paper at ACL 2019 on "Learning Representations from Imperfect Time Series Data via Tensor Rank Regularizationâ which uses low-rank tensor representations as a method to regularize against noisy or imperfect multimodal time-series data. Could your method be combined with their regularization methods to ensure more robust multimodal predictions in the presence of noisy or imperfect multimodal data? - The paper in its current form presents a specific model for learning multimodal representations. To make it more significant, the polynomial pooling layer could be added to existing models and experiments showing consistent improvement over different model architectures. To be more concrete, the yellow, red, and green multimodal data in Figure 2a) can be raw time-series inputs, or they can be the outputs of recurrent units, transformer units, etc. Demonstrating that this layer can improve performance on top of different layers would be this work more significant for the research community. ****************************Post Rebuttal**************************** I appreciate the effort the authors have put into the rebuttal. Since I already liked the paper and the results are quite good, I am maintaining my score. I am not willing to give a higher score since the tasks are rather straightforward with well-studied baselines and tensor methods have already been used to some extent in multimodal learning, so this method is an improvement on top of existing ones. | - The paper gives a good introduction to tensors for those who are unfamiliar with the literature. Weaknesses: |
ARR_2022_287_review | ARR_2022 | 1. The authors presented a fine-grained evaluation set in this paper. However, the anti-stereotype that appears in previous datasets is missing in the constructed dataset. In addition, details of annotations are missing in this paper. Since stereotype detection is quite challenging, it would be important to discuss how to guarantee the annotation quality and whether annotators can reach an agreement on collected corpus.
2. Missing related baselines. Only PLMs are considered in this paper and other task-related baselines are missing.
3. Missing in-depth analysis of experimental results. For example, why are the improvements of the models limited on the offense detection dataset but significant on the coarse stereotype set?
1. More discussions about dataset construction should be provided, e.g., the time range of data collection, preprocessing strategies and quality control.
2. All "table" and "figure" in the context should be capitalized. | 3. Missing in-depth analysis on experimental results. For example, why the improvements of models are limited on offense detection dataset and are significant on coarse stereotype set? |
ICLR_2023_1587 | ICLR_2023 | The main issue with this work is that the evaluation setup is not realistic at all. For an experimental paper like this, verifying its applicability on real-world datasets is important. Yet, 2 datasets are synthetically generated and only 1 is of real birds. This birds dataset, too, is very simple, in that the feature is very easily identifiable (the beak), and it is not clear if this method scales to more realistic distributions where the features are not as simple.
Another huge issue is that experiments are only conducted at the extremely small-sample regime, up to 500 samples on the synthetic datasets of shapes and up to 60 examples on the bird dataset. No one is deploying machine learning trained on 60 samples. If the method was to train on all labeled data, and only incorporate some additional explanations, then that would be much more reasonable. But that is not what is happening here. Advice:
The idea of leveraging a few human annotations to increase performance is interesting, but the rest of the paper needs to be completely reworked. Here's what a great version of this paper would look like:
Consider a suite of real-world datasets, such as those in the WILDS benchmark. Do not include any synthetic data experiments (they add no value) and report performance on the specific metric for each dataset. Another benefit of this is that experiments are run on non-binary tasks as well.
Train on all available labeled data. The WILDS dataset contains training data splits. You should compare two main methods primarily: 1) the baseline of training on the labeled data, and 2) the new method of training on the labeled data, plus incorporating input mask explanation annotations for a few (say, 60) examples.
Use modern backbone baselines (say, Resnet50 or DenseNet121) for the feature extraction layer - 3 conv layers is definitely too small for anything non-synthetic.
I have to say that even given this version of the idea, I am skeptical this would work (lots of such robustness/domain invariance interventions have been proposed and have failed). But this is just my opinion, my advice, and the rest of this review is independent of this viewpoint. | 2) the new method of training on the labeled data, plus incorporating input mask explanation annotations for a few (say, 60) examples. Use modern backbone baselines (say, Resnet50 or DenseNet121) for the feature extraction layer - 3 conv layers is definitely too small for anything non-synthetic. I have to say that even given this version of the idea, I am skeptical this would work (lots of such robustness/domain invariance interventions have been proposed and have failed). But this is just my opinion, my advice, and the rest of this review is independent of this viewpoint. |
GSBHKiw19c | ICLR_2024 | - The interpretation of dynamics model as an agent and introducing the concept of reward adds unnecessary complexity to the method and makes it a bit difficult to easily understand the method. Simply formulating the main idea with adversarial generative training and using the score D as a reward could make the paper be more simple and easy to understand.
- The paper introduces multiple hyperparameters and performs quite an extensive hyperparameter search (e.g., temperature, penalty, and threshold). Making sure that the baseline is fully tuned with a similar budget to that given to the proposed method is important for a fair comparison.
- Figure 2 is very difficult to parse, and it is not clear how it leads to the conclusion that the dynamics reward is superior to the dynamics model.
- The additional complexity of learning dynamics reward and implementation complexity might limit the wide adoption of the proposed method. | - The paper introduces multiple hyperparameters and did quite extensive hyperparameters search (e.g., temperature, penalty, and threshold, ..). Making sure that the baseline is fully tuned with the similar resource given to the proposed method could be important for a fair comparison. |
tauoKi9IWO | EMNLP_2023 | - Missing performance comparison with other approaches. Missing performance comparison on out-of-domain data. *Edit:* This concern seems to be partially addressed during the rebuttal phase and the authors provided some additional baseline results.
- L259 "Perplexity is the probability that the model generates the current sentence". This is not what perplexity is. Eq. 1 does not look like perplexity either; it looks like cross-entropy (see the short note after this list).
- The writing of the paper seems rushed and there are many grammatical mistakes and unfinished sentences. E.g., in the abstract, L016 "tool that can sourcing text", L026 "achieving x3.7 faster for recognizing text". GPT-Zero (or GPT-zero) has a footnote with a link to the FAQ on the first three pages. Etc.
- I don't like the inclusion of GPT-Zero in the efficiency comparison. It is an API that you have no control over. I don't find the comparison fair.
- I don't like the mapping between the four requirements and the four RQs. Especially the first two seem to be forced. How does the fact that the true perplexity can be used for classification say anything about the specificity of your method? | - L259 "Perplexity is the probability that the model generates the current sentence". This is not what perplexity is. Eq1 - This does not look like perplexity either, this looks like cross-entropy.
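A short numerical illustration of the perplexity point flagged in the list above: token-level perplexity is the exponential of the average negative log-likelihood, i.e. exp of the cross-entropy, not a probability. The token probabilities below are toy values chosen for illustration:

```python
import math

# toy per-token probabilities a model assigns to the observed tokens
token_probs = [0.2, 0.5, 0.1, 0.4]

cross_entropy = -sum(math.log(p) for p in token_probs) / len(token_probs)  # nats per token
perplexity = math.exp(cross_entropy)

print(round(cross_entropy, 2), round(perplexity, 2))   # ~1.38 nats/token, perplexity ~3.98
```

So if Eq. 1 averages negative log-probabilities it is a cross-entropy, and exponentiating it would give the perplexity, consistent with the reviewer's remark.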
NIPS_2019_431 | NIPS_2019 | Weakness: 1. A special case of the proposed model is the Gaussian mixture model. Can the authors discuss the proved convergence rates and sample complexity bounds with that established in GMM (Balakrishnan et al., 2017)? It is interesting to see if there is any accuracy loss by using a different proof technique. Sivaraman Balakrishnan, Martin J. Wainwright, and Bin Yu. Statistical guarantees for the EM algorithm: From population to sample-based analysis. The Annals of Statistics, 45(1):77â120, 2017 2. The finite sample analysis (sample complexity bounds) is only derived for the 1-dimensional case. This largely limits the popularity of the proposed theoretical framework. Can the authors extend the finite sample analysis to a general d-dimensional case, or at least provide some numerical study to show the convergence performance? 3. In Proposition 6.1, the condition \eta \ge C_0 for some constant C_0 seems to be strong. Typically, the signal-to-noise ratio \eta is a small value. It would be great if the authors can further clarify this condition and compare it with that in Section 4 (correct model case). | 3. In Proposition 6.1, the condition \eta \ge C_0 for some constant C_0 seems to be strong. Typically, the signal-to-noise ratio \eta is a small value. It would be great if the authors can further clarify this condition and compare it with that in Section 4 (correct model case). |
NIPS_2021_2338 | NIPS_2021 | Weakness: 1. Regarding the adaptive masking part, the authors' work is incremental, and there have been many papers on how to do feature augmentation, such as GraphCL[1], GCA[2]. The authors do not experiment with widely used datasets such as Cora, Citeseer, ArXiv, etc. And they did not compare with better baselines for node classification, such as GRACE[3], GCA[2], MVGRL[4], etc. I think this part of the work is shallow and not enough to constitute a contribution. The authors should focus on the main contribution, i.e., graph-level contrastive learning, and need to improve the node-level augmentation scheme. 2. In the graph classification task, the compared baseline is not sufficient, such as MVGRL[4], gpt-gnn[5] are missing. I hope the authors could add more baselines of graph contrastive learning and test them on some common datasets. 3. I am concerned whether the similarity-aware positive sample selection will accelerate GNN-based encoder over-smoothing, i.e., similar nodes or graphs will be trained with features that converge excessively and discard their own unique features. In addition, whether selecting positive samples in the same dataset without introducing some perturbation noise would lead to lower generalization performance. The authors experimented with the transfer performance of the model on the graph classification task, though it still did not allay my concerns about the model generalization. I hope there will be more experiments on different downstream tasks and across different domains. Remarks: 1. The authors seem to have over-compressed the line spacing and abused vspace. 2. Table 5 is collapsed.
[1] Y. You, T. Chen, Y. Sui, T. Chen, Z. Wang, and Y. Shen, “Graph contrastive learning with augmentations,” Advances in Neural Information Processing Systems, vol. 33, 2020. [2] Y. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang, “Graph contrastive learning with adaptive augmentation,” arXiv preprint arXiv:2010.14945, 2020. [3] Y. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang, “Deep graph contrastive representation learning,” arXiv preprint arXiv:2006.04131, 2020. [4] Hassani, Kaveh, and Amir Hosein Khasahmadi. "Contrastive multi-view representation learning on graphs." International Conference on Machine Learning. PMLR, 2020. [5] Hu, Ziniu, et al. "Gpt-gnn: Generative pre-training of graph neural networks." Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020. | 2. In the graph classification task, the compared baseline is not sufficient, such as MVGRL[4], gpt-gnn[5] are missing. I hope the authors could add more baselines of graph contrastive learning and test them on some common datasets. |
ICLR_2021_2562 | ICLR_2021 | - The major concern lies in the evaluation of the proposed strategies. Here, the authors consider that their method purifies the input image before passing it to the model and that an adaptive attack against their edge map based defense strategies will likely result in structural damage to the edge map. However, it is crucial to evaluate the proposed defense against an adversarial attack which crafts the adversarial examples to produce minimal structural alterations to the edge map but mislead the model predictions. An adversary could potentially optimize the perturbation in such a manner and may remain successful in attacking the model. - Results on CIFAR-10 and Icons-50 of the GAN-based shape defense depicted in Figure 4 do not provide solid evidence on the model robustness. Here, the performance on clean inputs degraded significantly and the improvement seen in perturbed samples might be the result of the trade-off between model robustness and generalization as noted in the literature. - The claims on robustness against natural image corruptions using the edge maps seem to be valid only on the GTSRB dataset and do not hold true for TinyImageNet and Icons-50 as seen in Figure 6. The robustness of the model with edge maps is similar or on par with the model without edge maps on these two datasets. These results suggest that the additional usage of edge maps does not improve model robustness and the improved performance seen on GTSRB could be attributed to the simple nature of the objects in the dataset.
Final thoughts: The proposed method is clearly motivated. Although the performance gains on adversarial robustness is significant, there are critical points yet to be addressed. Therefore, I marginally accept this paper.
Post rebuttal: The authors have devised an adaptive attack to craft the adversarial examples against edge maps and shown that the proposed technique is still remain robust. However, the essence of robustness in this work lies in the BINARIZATION of the input (i.e., binarized edge maps) which is shown in the previous work [1] and need not necessarily attribute to the shape information obtained through edge maps. I recently came across state-of-the-art deep edge detector [2] that produces non-binarized edge maps, which could be interesting for authors to validate their approach using such non-binary inputs. Hence, I maintain my initial rating and marginally accept this paper.
[1] ON THE SENSITIVITY OF ADVERSARIAL ROBUSTNESS TO INPUT DATA DISTRIBUTIONS, ICLR 2019
[2] Richer Convolutional Features for Edge Detection, CVPR 2017 | - The major concern lies in the evaluation of the proposed strategies. Here, the authors consider that their method purifies the input image before passing it to the model and that an adaptive attack against their edge map based defense strategies will likely result in structural damage to the edge map. However, it is crucial to evaluate the proposed defense against an adversarial attack which crafts the adversarial examples to produce minimal structural alterations to the edge map but mislead the model predictions. An adversary could potentially optimize the perturbation in such a manner and may remain successful in attacking the model.
NIPS_2016_117 | NIPS_2016 | weakness of this work is impact. The idea of "direct feedback alignment" follows fairly straightforwardly from the original FA alignment work. It's notable that it is useful in training very deep networks (e.g. 100 layers) but it's not clear that this results in an advantage for function approximation (the error rate is higher for these deep networks). If the authors could demonstrate that DFA allows one to train and make use of such deep networks where BP and FA struggle on a larger dataset this would significantly enhance the impact of the paper. In terms of biological understanding, FA seems more supported by biological observations (which typically show reciprocal forward and backward connections between hierarchical brain areas, not direct connections back from one region to all others as might be expected in DFA). The paper doesn't provide support for their claim, in the final paragraph, that DFA is more biologically plausible than FA. Minor issues: - A few typos, there are no line numbers in the draft so I haven't itemized them. - Table 1, 2, 3 the legends should be longer and clarify whether the numbers are % errors, or % correct (MNIST and CIFAR respectively presumably). - Figure 2 right. I found it difficult to distinguish between the different curves. Maybe make use of styles (e.g. dashed lines) or add color. - Figure 3: it is very hard to read anything on the figure. - I think this manuscript is not following the NIPS style. The citations are not by number and there are no line numbers or an "Anonymous Author" placeholder. - It might be helpful to quantify and clarify the claim "ReLU does not work very well in very deep or in convolutional networks." ReLUs were used in the AlexNet paper which, at the time, was considered deep and makes use of convolution (with pooling rather than ReLUs for the convolutional layers). | - Table 1, 2, 3 the legends should be longer and clarify whether the numbers are % errors, or % correct (MNIST and CIFAR respectively presumably).
NIPS_2017_567 | NIPS_2017 | Weakness:
1. I find the first two sections of the paper hard to read. The author stacked a number of previous approaches but failed to explain each method clearly.
Here are some examples:
(1) In line 43, I do not understand why the stacked LSTM in Fig 2(a) is "trivial" to convert to the sequential LSTM Fig2(b). Where are the h_{t-1}^{1..5} in Fig2(b)? What is h_{t-1} in Figure2(b)?
(2) In line 96, I do not understand the sentence "our lower hierarchical layers zoom in time" and the sentence following that.
2. It seems to me that the multi-scale statement is a bit misleading, because the slow and fast RNNs do not operate on different physical time scales, but rather on the logical time scale when the stacks are sequentialized in the graph. Therefore, the only benefit here seems to be the reduction of the gradient path by the slow RNN.
3. To reduce the gradient path on stacked RNN, a simpler approach is to use the Residual Units or simply fully connect the stacked cells. However, there is no comparison or mention in the paper.
4. The experimental results do not contain standard deviations and therefore it is hard to judge the significance of the results. | 4. The experimental results do not contain standard deviations and therefore it is hard to judge the significance of the results. |
pUKps5dL4s | ICLR_2024 | 1. The momentum method is usually used for acceleration. However, the theoretical advantage, such as an improved convergence rate over PGD, has not been discussed. Indeed, we cannot see the benefit of the convergence analysis (Proposition 4.1) compared to the PGD method.
2. In light of the theoretical work on sampling and particle-based optimization methods, the provided analysis seems somewhat weak. For instance, the existence and smoothness of the solution of SDE (2a)-(2d), and any guarantees of the discretization (in time and space), are not provided.
3. The important assumptions should be exposed in the main text, and their reasonability should be discussed. However, Assumptions 1 and 2 (Lipschitz smoothness and LSI) are hidden in the Appendix. It is better that the authors provide an example that satisfies all assumptions to convince the readers that the theory is not vacuous.
4. The authors missed the series of mean-field optimization works. For instance, see the following papers and references therein:
- [Mei, Montanari, and Nguyen (2018)] A mean-field view of the landscape of two-layer neural networks.
- [Chizat and Bach (2018)] On the global convergence of gradient descent for over-parameterized models using optimal transport.
- [Nitanda and Suzuki (2017)] Stochastic particle gradient descent for infinite ensembles.
- [Hu, Ren, Siska, and Szpruch (2019)] Mean-field Langevin dynamics and energy landscape of neural networks.
- [Nitanda, Wu, and Suzuki (2022)] Convex Analysis of the Mean Field Langevin Dynamics.
- [Chizat (2022)] Mean-field langevin dynamics: Exponential convergence and annealing.
- [Chen, Ren, and Wang (2023)] Uniform-in-time propagation of chaos for mean field Langevin dynamics.
- [Suzuki, Wu, and Nitanda (2023)] Convergence of mean-field Langevin dynamics: Time and space discretization, stochastic gradient, and variance reduction
Minor comments:
- Page 4: notation $\ell$ is undefined.
- Equation (1) is missing. | 2. In light of the theoretical work on sampling and particle-based optimization methods, the provided analysis seems somewhat weak. For instance, the existence and smoothness of the solution of SDE (2a)-(2d), and any guarantees of the discretization (in time and space), are not provided. |
ICLR_2021_1783 | ICLR_2021 | 1. The main contribution of this paper is introducing an adversarial learning process between the generator and the ranker. The innovation of this paper is a concern. 2. Quality of generated images by the proposed method is limited. While good continuous control is achieved, the realism of generated results shown in the paper and supplemental material is limited. 3. Visual comparisons and ablation studies are insufficient.
Comments/Questions: 1. Could you elaborate more on why the proposed method achieves better fine-grained control over the attribute of interest? Was it crucial to change the formula of the ranker's loss function from classification to regression? 2. Could you provide more visual comparisons between the proposed method and prior works? 3. There are also some other works focusing on semantic face editing and they show the ability to achieve continuous control over different attributes, like [1]. Could you elaborate on the difference between your work and these papers? 4. Statements in Section 4.2 are somewhat redundant.
Minor: 1. Missing proper expression for the third face image in Figure 2. 2. Missing close parenthesis at the bottom of Page 4. 3. Inconsistent statement and reference for Celeb Faces Attributes Dataset in experiment section.
[1] Shen, Yujun and Gu, Jinjin and Tang, Xiaoou and Zhou, Bolei. "Interpreting the Latent Space of GANs for Semantic Face Editing", In CVPR, 2020. https://dblp.org/rec/conf/cvpr/ShenGTZ20 | 2. Quality of generated images by the proposed method is limited. While good continuous control is achieved, the realism of generated results shown in the paper and supplemental material is limited.
NIPS_2021_2191 | NIPS_2021 | of the paper: [Strengths]
The problem is relevant.
Good ablation study.
[Weaknesses] - The statement in the intro about bottom up methods is not necessarily true (Line 28). Bottom-up methods do have receptive fields that can infer from all the information in the scene and can still predict invisible keypoints. - Several parts of the methodology are not clear. - PPG outputs a complete pose relative to every part's center. Thus O_{up} should contain the offset for every keypoint with respect to the center of the upper part. In Eq.2 of the supplementary material, it seems that O_{up} is trained to output the offset for the keypoints that are not farther than a distance \textit{r} to the center of the corresponding part. How are the groundtruths actually built? If it is the latter, how can the network parts responsible for each part predict all the keypoints of the pose? - Line 179, what did the authors mean by saying that the fully connected layers predict the ground-truth in addition to the offsets? - Is \delta P_{j} a single offset for the center of that part or does it contain distinct offsets for every keypoint? - In Section 3.3, how is G built using the human skeleton? It is better to describe the size and elements of G. Also, add the dimensions of G,X, and W to better understand what DGCN is doing. - Experiments can be improved: - For instance, the bottom-up method [9] has reported results on the crowdpose dataset outperforming all methods in Table 4 with a ResNet-50 (including the paper one). It will be nice to include it in the tables - It will be nice to evaluate the performance of their method on the standard MS COCO dataset to see if there is a drop in performance in easy (non occluded) settings. - No study of inference time. Since this is a pose estimation method that is direct and does not require detection or keypoint grouping, it is worth comparing its inference speed to previous top-down and bottom-up pose estimation methods. - Can we visualize G, the dynamic graph, as it changes through DGCN? It might give an insight on what the network used to predict keypoints, especially the invisible ones.
[Minor comments]
In Algorithm 1 line 8 in Suppl Material, did the authors mean Eq 11 instead of Eq.4?
Fig1 and Fig2 in supplementary are the same
Spelling Mistake line 93: It it requires…
What does ‘… updated as model parameters’ mean in line 176
Do the authors mean Equation 7 in line 212?
The authors have talked about limitations in Section 5 and have mentioned that there are not negative societal impacts. | - In Section 3.3, how is G built using the human skeleton? It is better to describe the size and elements of G. Also, add the dimensions of G,X, and W to better understand what DGCN is doing. |
ARR_2022_28_review | ARR_2022 | The main concern with this paper is that it doesn't fully explain some choices in the model (see comments/questions section). Moreover, some parts of the paper are actually not fully clear. Finally, some details are missing, making the paper incomplete.
- Algorithm 1 is not really explained. For example, at each step (1, 2, 2a, 3, 3a) are you sampling a different batch from S and T? Is the notation L(X) meaning that you optimize only the parameters X of the architecture?
- Line 232: When you say you "mine", what do you exactly mean? Does this mean you sample P sentences from the set of sentences of S and T with similar constraints?
- Lines 237-238 and Line 262: Why would you want to use the representation from the critic last layer? - Line 239: "Ci are a set of constraints for a sentence" should be moved before.
- Table 1: It seems that the results for DRG and ARAE are not averaged over 5 runs (they're exactly the same of the previous paper version) - Table 1: How did you choose the p=0.6?
- Table 1: row=ARAE, column=POLITICAL-FL It seems this value should be the one in bold.
- Lines 349-353: It seems you're comparing results for ARAE + CONTRA, ARAE + CLF and ARAE + CONTRA + CL with respect to simple ARAE, while in the text you mention only ARAE + CONTRA and ARAE + CLF.
- Line 361: and SIM to -> and SIM with respect to - Figure 3: Please, rephrase the caption of the errors bars (or explain it in the text). It is not clear what do you mean.
- Line 389: You mention here you used different p values as in Table 1. This table doesn't report results with different values for p. - Lines 422-423: why using nucleous sampling when the best results were with greedy decoding? Where does 0.9 come from?
- In general, in the experiments, what are the source and target domains?
- Line 426-Table4: What do you want to demonstrate here? Could you add an explanation? What constraints/attributes are preserved? What is the source domain? What is the target domain?
- Lines 559-560: This is not entirely true. In Cycle Consistency loss you can iterate between two phases of the reconstructions (A-B-A and B-A-B) with two separate standard backpropagation processes.
- Line 573: works focuses -> works focus | - Lines 559-560: This is not entirely true. In Cycle Consistency loss you can iterate between two phases of the reconstructions (A-B-A and B-A-B) with two separate standard backpropagation processes. |
ICLR_2021_2674 | ICLR_2021 | Though the training procedure is novel, a part of the algorithm is not well-justified to follow the physics and optics nature of this problem. A few key challenges in depth from defocus are missing, and the results lack a full analysis. See details below:
- the authors leverage multiple datasets, including building their own to train the model. However, different dataset is captured by different cameras, and thus the focusing distance, aperture settings, and native image resolution all affect the circle of confusion, how are those ambiguities taken into consideration during training?
- related to the point above, the paper doesn't describe the pre-processing stage, neither did it mention how the image is passed into the network. Is the native resolution preserved, or is it downsampled?
- According to Held et al "Using Blur to Affect Perceived Distance and Size", disparity and defocus can be approximated by a scalar that is related to the aperture and the focus plane distance. In the focal stack synthesis stage, how is the estimated depth map converted to a defocus map to synthesize the blur?
- the paper doesn't describe how is the focal stack synthesized, what's the forward model of using a defocus map and an image to synthesize defocused image? how do you handle the edges where depth discontinuities happen?
- in 3.4, what does “Make the original in-focus region to be more clear” mean? in-focus is defined to be sharpest region an optical system can resolve, how can it be more clear?
- the paper doesn't address handling textureless regions, which is a challenging scenario in depth from defocus. Related to this point, how are the ArUco markers placed? is it random?
- fig 8 shows images with different focusing distance, but it only shows 1m and 5m, which both exist in the training data. How about focusing distance other than those appeared in training? does it generalize well?
- what is the limit of the amount of blur presented in the input that the proposed models would fail? Are there any efforts in testing on smartphone images where the defocus is *just* noticeable by human eyes? how do the model performances differ for different defocus levels?
Minor suggestions
- figure text should be rasterized, and figures should maintain its aspect ratio.
- figure 3 is confusing as if the two nets are drawn to be independent from each other -- CNN layers are represented differently, one has output labeled while the other doesn't. It's not labeled as the notation written in the text so it's hard to reference the figure from the text, or vice versa.
- the results shown in the paper are low-resolution, it'd be helpful to have zoomed in regions of the rendered focal stack or all-in-focus images to inspect the quality.
- the sensor plane notation 's' introduced in 3.1 should be consistent in format with the other notations.
- calling 'hyper-spectral' is confusing. Hyperspectral imaging is defined as the imaging technique that obtains the spectrum for each pixel in the image of a scene. | - calling 'hyper-spectral' is confusing. Hyperspectral imaging is defined as the imaging technique that obtains the spectrum for each pixel in the image of a scene. |
ACL_2017_150_review | ACL_2017 | I have some doubts about the interpretation of the results. In addition, I think that some of the claims regarding the capability of the method proposed to learn morphology are not properly backed by scientific evidence.
- General Discussion: This paper explores a complex architecture for character-level neural machine translation (NMT). The proposed architecture extends a classical encoder-decoder architecture by adding a new deep word-encoding layer capable of encoding the character-level input into sub-word representations of the source-language sentence. In the same way, a deep word-decoding layer is added to the output to transform the target-language sub-word representations into a character sequence as the final output of the NMT system. The objective of such architecture is to take advantage of the benefits of character-level NMT (reduction of the size of the vocabulary and flexibility to deal with unseen words) and, at the same time, improving the performance of the whole system by using an intermediate representation of sub-words to reduce the size of the input sequence of characters. In addition, the authors claim that their deep word-encoding model is able to learn morphology better than other state-of-the-art approaches.
I have some concerns regarding the evaluation. The authors compare their approach to other state-of-the-art systems taking into account two parameters: training time and BLEU score. However, I do not clearly see the advantage of the model proposed (DCNMT) in front of other approaches such as bpe2char. The difference between both approaches as regards BLEU score is very small (0.04 in Cs-En and 0.1 in En-Cs) and it is hard to say if one of them is outperforming the other one without statistical significance information: has statistical significance been evaluated? As regards the training time, it is worth mentioning that the bpe2char for Cs-En takes 8 days less than DCNMT. For En-Cs training time is not provided (why not?) and for En-Fr bpe2char is not evaluated. I think that a more complete comparison with this system should be carried out to prove the advantages of the model proposed.
My second concern is about Section 5.2, where the authors start by claiming that they investigated the ability of their system to learn morphology. However, the section only contains examples and some comments on them. Even though these examples are very well chosen and explained in a very didactic way, it is worth noting that no experiments or formal evaluation seem to have been carried out to support the claims of the authors. I would definitely encourage the authors to extend this very interesting part of the paper that could even become a different paper itself. On the other hand, this Section does not seem to be a critical point of the paper, so for the current work I may suggest just to move this section to an appendix and soften some of the claims made regarding the capabilities of the system to learn morphology.
Other comments, doubts and suggestions: - There are many acronyms that are used but are not defined (such as LSTM, HGRU, CNN or PCA) or which are defined after starting to use them (such as RNN or BPE). Even though some of these acronyms are well known in the field of deep learning, I would encourage the authors to define them to improve clearness.
- The concept of energy is mentioned for the first time in Section 3.1. Even though the explanation provided is enough at that point, it would be nice to refresh the idea of energy in Section 5.2 (where it is used several times) and providing some hints about how to interpret it: a high energy on a character would be indicating that the current morpheme should be split at that point? In addition, the concept of peak (in Figure 5) is not described.
- When the acronym BPE is defined, capital letters are used, but then, for the rest of mentions it is lower cased; is there a reason for this?
- I am not sure if it is necessary to say that no monolingual corpus is used in Section 4.1.
- It seems that there is something wrong with Figure 4a since the colours for the energy values are not shown for every character.
- In Table 1, the results for model (3) (Chung et al. 2016) for Cs-En were not taken from the papers, since they are not reported. If the authors computed these results by themselves (as it seems) they should mention it.
- I would not say that French is morphologically poor, but rather that it is not as rich as Slavic languages such as Czech.
- Why a link is provided for WMT'15 training corpora but not for WMT'14?
- Several references are incomplete Typos: - "..is the bilingual, parallel corpora provided..." -> "..are the bilingual, parallel corpora provided..." - "Luong and Manning (2016) uses" -> "Luong and Manning (2016) use" - "HGRU (It is" -> "HGRU (it is" - "coveres" -> "covers" - "both consists of two-layer RNN, each has 1024" -> "both consist of two-layer RNN, each have 1024" - "the only difference between CNMT and DCNMT is CNMT" -> "the only difference between CNMT and DCNMT is that CNMT" | - The concept of energy is mentioned for the first time in Section 3.1. Even though the explanation provided is enough at that point, it would be nice to refresh the idea of energy in Section 5.2 (where it is used several times) and providing some hints about how to interpret it: a high energy on a character would be indicating that the current morpheme should be split at that point? In addition, the concept of peak (in Figure 5) is not described. |
NIPS_2021_1727 | NIPS_2021 | of their work?
A better performance breakdown (ablation studies) on the contributions 1), 2) and 3) mentioned above: Although some ablation studies about the effectiveness of each component are given in Sections 3 and 4. It would be better if how each of them contributes to the final performance improvements is given, e.g., how the performance of simply combining the Linformer and the window attention in Big Bird using contributions 1) and 2)? Will the benefit of DualLN (Figure 1) and Dynamic projection still exist in CV tasks, which usually have relatively shorter sequences?
Some confusing details: What's the f function in Line 162, is it a Linear layer? In Table 3, CvT*-LS-21 seems to have comparable or even better accuracy than CvT*-LS-17 and CvT*-LS-21S with much less FLOPs, any insight or analysis about it?
Clarity: Is the submission clearly written? Is it well organized? (If not, please make constructive suggestions for improving its clarity.) Does it adequately inform the reader? (Note that a superbly written paper provides enough information for an expert reader to reproduce its results.)
This submission is clearly written and well organized.
Significance: Are the results important? Are others (researchers or practitioners) likely to use the ideas or build on them? Does the submission address a difficult task in a better way than previous work? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach?
The results can help the researchers to build more efficient Transformer models w/ the proposed Transformer-LS as the building block.
The authors addressed the limitations and potential negative societal impact of their work. | 1), 2) and 3) mentioned above: Although some ablation studies about the effectiveness of each component are given in Sections 3 and 4. It would be better if how each of them contributes to the final performance improvements is given, e.g., how the performance of simply combining the Linformer and the window attention in Big Bird using contributions
NIPS_2016_417 | NIPS_2016 | 1. Most of the human function learning literature has used tasks in which people never visualize data or functions. This is also the case in naturalistic settings where function learning takes place, where we have to form a continuous mapping between variables from experience. All of the tasks that were used in this paper involved presenting people with data in the form of a scatterplot or functional relationship, and asking them to evaluate lines applied to those axes. This task is more akin to data analysis than the traditional function learning task, and much less naturalistic. This distinction matters because performance in the two tasks is likely to be quite different. In the standard function learning task, it is quite hard to get people to learn periodic functions without other cues to periodicity. Many of the effects in this paper seem to be driven by periodic functions, suggesting that they may not hold if traditional tasks were used. I don't think this is a major problem if it is clearly acknowledged and it is made clear that the goal is to evaluate whether data-analysis systems using compositional functions match human intuitions about data analysis. But it is important if the paper is intended to be primarily about function learning in relation to the psychological literature, which has focused on a very different task. 2. I'm curious to what extent the results are due to being able to capture periodicity, rather than compositionality more generally. The comparison model is one that cannot capture periodic relationships, and in all of the experiments except Experiment 1b the relationships that people were learning involved periodicity. Would adding periodicity to the spectral kernel be enough to allow it to capture all of these results at a similar level to the explicitly compositional model? 3. Some of the details of the models are missing. In particular the grammar over kernels is not explained in any detail, making it hard to understand how this approach is applied in practice. Presumably there are also probabilities associated with the grammar that define a hypothesis space of kernels? How is inference performed? | 3. Some of the details of the models are missing. In particular the grammar over kernels is not explained in any detail, making it hard to understand how this approach is applied in practice. Presumably there are also probabilities associated with the grammar that define a hypothesis space of kernels? How is inference performed? |
rv9c1BqY0L | ICLR_2025 | 1. The contributions of this paper are marginal and incremental. The framework of SimUSER is similar to Agent4Rec [1] and the novelty lies in the incorporation of visual information and memory design, which is incremental.
2. The role of visual information is unknown. While the main contribution of this paper comes from the knowledge-graph memory and visual-driven reasoning, the ablation study does not explicitly verify the effectiveness. In Table 10, w/o perception module and w perception exhibit similar performance, and the implementation detail of w/o perception is unknown. More importantly, given the sample number of 1000 users, the improvements are impossible to be significant (i.e., p < 0.05). These experiment results are questionable.
3. Some concepts are wrongly used. For example, the knowledge graph in this paper is actually the widely used user-item interaction graph [2] in collaborative filtering, which is significantly different from KG [3]. Please carefully check this concept and refine the writing.
4. The link to the code is expired.
[1] Zhang, An, et al. "On generative agents in recommendation." *Proceedings of the 47th international ACM SIGIR conference on research and development in Information Retrieval*. 2024.
[2] He, Xiangnan, et al. "Lightgcn: Simplifying and powering graph convolution network for recommendation." *Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval*. 2020.
[3] Wang, Xiang, et al. "Kgat: Knowledge graph attention network for recommendation." *Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining*. 2019. | 2. The role of visual information is unknown. While the main contribution of this paper comes from the knowledge-graph memory and visual-driven reasoning, the ablation study does not explicitly verify the effectiveness. In Table 10, w/o perception module and w perception exhibit similar performance, and the implementation detail of w/o perception is unknown. More importantly, given the sample number of 1000 users, the improvements are impossible to be significant (i.e., p < 0.05). These experiment results are questionable. |
NIPS_2020_1195 | NIPS_2020 | 1. While the presented model can be posed as a transfer learning problem, this paper is more about concept drift. Therefore, the title Transfer learning via l1 Regularization is a bit too broad and can be misleading for some readers. 2. In the experiments on concept drift and "transfer learning", Lasso is the only method studied and compared. However, Lasso is not considered a state-of-the-art (SOTA) method for concept drift and transfer learning. Lasso is designed for neither of these problems. On the other hand, there are many other methods for concept drift and transfer learning, with some discussed in the Related Work section but none is compared against in the experiment. 3. There are many existing works on concept drift (e.g. twitter activities, anomaly detection), as the authors have cited. However, this paper studies only synthetic concept drift problems. It is not clear whether the proposed solution can deal with concept drift in real data successfully. 4. Similar to the above, there are many transfer learning benchmarks and methods. However, this paper studies only a synthetic example without comparing to any transfer learning methods (as said above, Lasso is not designed for transfer learning). 5. The end of Sec. 4.2 states that Transfer Lasso showed the best accuracy in feature screening. However, previous works on Lasso screening are not cited or compared, e.g. Ren et al. "Safe feature screening for generalized LASSO." TPAMI 40.12 (2017): 2992-3006. 6. Section 4.3 follows the experiments in [17]. However, the presented results did not include [17] (and related works on the same data) in comparison. 7. Line 253: how was the data divided into 30 batches? 8. Line 258: What is the cause of such computational instability for binary features? What are the ways to mitigate this problem? 9. Figure 5-right: annotations on the colours used are missing. 10. Minor issues. Typo. Line 179: unchaing | 5. The end of Sec. 4.2 states that Transfer Lasso showed the best accuracy in feature screening. However, previous works on Lasso screening are not cited or compared, e.g. Ren et al. "Safe feature screening for generalized LASSO." TPAMI 40.12 (2017): 2992-3006.
NIPS_2020_1490 | NIPS_2020 | 1) the authors do not compare with the model in [15]: "Modeling long- and short-term temporal patterns with deep neural networks." This restricts the potential impact of the model. 2) the model has many components whose hyper parameters are not fully provided (someone may have to trace them in the source code) 3) the paper doesn't propose a conceptual/computational novelty. It combines existing modules to achieve its results. | 2) the model has many components whose hyper parameters are not fully provided (someone may have to trace them in the source code)
ICLR_2023_4079 | ICLR_2023 | • There are considerable similarities with another paper [1] (see references below). The work in this paper is not novel and there is no citation given to [1]. The EM approach and the regeneration approach are mentioned in [1]. • The experimental results are not convincing. The paper reports that joint learning on the CIFAR-100 dataset gives 39.97% accuracy when tested on class incremental learning. However, there seem to be more accurate results obtained with the CIFAR-100 dataset on class incremental learning. For example, the paper [2] obtains 58.4% accuracy. In addition, the memory size is 10 times lower than this setup. The experiments do not contain the paper [2]. Other relevant papers [3, 4] whose accuracies are listed higher for this dataset are not compared and referenced either. • Although it is provided that a 6-fold cross-validation is used for every dataset, the reason for cross-validation is not understood because other papers that this work compares to did not use cross-validation in their papers. Therefore, it is not clear why 6-fold cross-validation is required for this problem. • The notation for results is not clear. The paper claims the improvement for CIFAR-10 is 3%p but it is not clear what %p stands for. • Although there is a reference to [2] and other types of rehearsal based continual learning methods, the experiments do not contain any of the rehearsal methods. • The setup for the experiments is missing. The code is not provided. • The effect of memory size is ambiguous. An ablation study containing the effect of memory size should be added for justifying the memory size selection. • In Table-1, the experimental results for the CelebA dataset are written in the caption. However, there are no experiments with the CelebA dataset. [1] Overcoming Catastrophic Forgetting with Gaussian Mixture Replay (Pfülb and Geppert, 2021) [2] Gdumb: A simple approach that questions our progress in continual learning (Prabhu et al., 2020) [3] Supervised Contrastive Learning (Khosla et al., 2020) [4] Split-and-Bridge: Adaptable Class Incremental Learning within a Single Neural Network (Kim and Choi, 2021) | • The notation for results is not clear. The paper claims the improvement for CIFAR-10 is 3%p but it is not clear what %p stands for.
NIPS_2020_1454 | NIPS_2020 | - The novelty seems limited. The idea of building correlation in low-res and then refining or facilitating the high-res results might be new in the literature for correspondence search, but it is quite common and has been widely adopted in previous work doing stereo matching, where the main job is also to find correspondence but along the epipolar line. - The main contribution of this paper, IMO, is to run 4D convolution on low-res correlation volume, which saves computation and possibly achieves comparable performance. If so, the experiment showing the saving of computational resources, e.g. gpu runtime, flip-flop, memory, must be given. - A similar multi-scale approach in stereo matching often runs fast at the cost of losing accuracy, since the correlation volume in low-res is not as informative as the high-res one, and it is not easy to fix if some mistakes are made in low-res. However the experiments show that the result is even better than SOTA. It would be good to add more explanation and analysis. - It would be nice and inspiring to show some qualitative results, possibly with zoomed-in view, for cases where previous methods failed but okay with the proposed method. Also, it's good to show some failure cases and analyze the limitations. | - It would be nice and inspiring to show some qualitative results, possibly with zoomed-in view, for cases where previous methods failed but okay with the proposed method. Also, it's good to show some failure cases and analyze the limitations.
ACL_2017_148_review | ACL_2017 | - The goal of your paper is not entirely clear. I had to read the paper 4 times and I still do not understand what you are talking about!
- The article is highly ambiguous what it talks about - machine comprehension or text readability for humans - you miss important work in the readability field - Section 2.2. has completely unrelated discussion of theoretical topics.
- I have the feeling that this paper is trying to answer too many questions at the same time, thereby making itself quite weak. Questions such as "does text readability have impact on RC datasets" should be analyzed separately from all these prerequisite skills.
- General Discussion: - The title is a bit ambiguous, it would be good to clarify that you are referring to machine comprehension of text, and not human reading comprehension, because “reading comprehension” and “readability” usually mean that.
- You say that your “dataset analysis suggested that the readability of RC datasets does not directly affect the question difficulty”, but this depends on the method/features used for answer detection, e.g. if you use POS/dependency parse features.
- You need to proofread the English of your paper, there are some important omissions, like “the question is easy to solve simply look..” on page 1.
- How do you annotate datasets with “metrics”??
- Here you are mixing machine reading comprehension of texts and human reading comprehension of texts, which, although somewhat similar, are also quite different, and also large areas.
- “readability of text” is not “difficulty of reading contents”. Check this: DuBay, W.H. 2004. The Principles of Readability. Costa Mesa, CA: Impact information. - it would be good if you put more pointers distinguishing your work from readability of questions for humans, because this article is highly ambiguous.
E.g. on page 1 “These two examples show that the readability of the text does not necessarily correlate with the difficulty of the questions” you should add “for machine comprehension” - Section 3.1. - Again: are you referring to such skills for humans or for machines? If for machines, why are you citing papers for humans, and how sure are you they are referring to machines too?
- How many questions the annotators had to annotate? Were the annotators clear they annotate the questions keeping in mind machines and not people? | -General Discussion:- The title is a bit ambiguous, it would be good to clarify that you are referring to machine comprehension of text, and not human reading comprehension, because “reading comprehension” and “readability” usually mean that. |
ZWi6RpT4mJ | ICLR_2025 | The work suffers from severe weaknesses in its analysis and presentation and contains incorrect mathematical statements.
To begin with, the two grossest offenders:
1. On line 238, the authors claim that "According to the Central Limit Theorem (CLT), a normally distributed random variable can be produced through a finite linear combination of any random variables" - this statement makes multiple incorrect assertions. The CLT does not guarantee Gaussianity in a non-asymptotic regime and most certainly does not hold for a finite linear combination of arbitrary random variables.
2. On the same line (L238), the authors claim, "As we have confirmed, the weights are normally distributed". As far as I can tell, this statement is not even approximately correct from any sensible perspective. The assertion that the marginal weight distribution is Gaussian is disproved by the authors' own Figure 1b, which depict marginal weight distributions that are either significantly more concentrated (Fig 1b, left) or significantly more heavy-tailed (Fig 1b, right) than a Gaussian. In fact, it is not even clear what data the authors are plotting: are they plotting histograms of the weight distribution of a single INR (I believe this to be the case), or are they plotting the weight distribution by collecting the weights from the same INR architecture trained on different data points?
Unfortunately, based on these two points, I believe the paper is already potentially beyond repairability to be published at ICLR. However, should the authors wish to improve their paper to be publishable, beyond correcting the above issues, there are further matters that need attention:
- Ablation studies are missing: how does quantization affect reconstruction quality? How do different distributions for the entries of the sensing matrix impact the results?
- Missing related works: the authors should discuss the works of [1] and especially [2], as the latter includes a similar linear weight-sharing technique as the authors propose here.
- Clarifying the use of the term entropy coding: The authors use Brotli coding - a universal source coding method based on the Lempel-Ziv algorithm. Claiming that this is entropy coding is somewhat misleading, as in the literature, entropy coding usually implies that we build an explicit statistical model and use it in conjunction with arithmetic coding to encode the data.
- Most figures in the main text are illegible at 100% zoom. Please increase the font size of labels, legends and tick marks to match the caption font size.
- While I appreciated the explanation of the attempted but failed compression techniques the authors tried in Section 3.2, they should provide experimental evidence, as the discussion is currently based on anecdotal evidence only.
- More minor: The first paragraph of the introduction is very general and can be safely removed.
## References
- [1] Guo, Z., Flamich, G., He, J., Chen, Z., and Hernández-Lobato, J. M. (2023). Compression with Bayesian implicit neural representations. In NeurIPS 2023
- [2] He, J., Flamich, G., Guo, Z., and Hernández-Lobato, J. M. (2024). Recombiner: Robust and enhanced compression with bayesian implicit neural representations. In ICLR 2024 | 1. On line 238, the authors claim that "According to the Central Limit Theorem (CLT), a normally distributed random variable can be produced through a finite linear combination of any random variables" - this statement makes multiple incorrect assertions. The CLT does not guarantee Gaussianity in a non-asymptotic regime and most certainly does not hold for a finite linear combination of arbitrary random variables. |
NIPS_2020_1296 | NIPS_2020 | #ERROR! | - It is required to analyze the time complexity of the proposed policies mentioned in Section 4. |
NIPS_2017_53 | NIPS_2017 | Weakness
1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA.
2. Given that the paper uses a bilinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B] which uses bilinear pooling for learning joint question image representations. Right now, given the manner in which things are presented, a novice reader might think this is the first application of bilinear operations for question answering (based on reading till the related work section). Bilinear pooling is compared to later.
3. L151: Would be interesting to have some sort of a group norm in the final part of the model (g, Fig. 1) to encourage disentanglement further.
4. It is very interesting that the approach does not use an LSTM to encode the question. This is similar to the work on a simple baseline for VQA [C] which also uses a bag of words representation.
5. (*) Sec. 4.2: it is not clear how the question is being used to learn an attention on the image feature since the description under Sec. 4.2 does not match with the equation in the section. Specifically, the equation does not have any term for r^q which is the question representation. Would be good to clarify. Also it is not clear what \sigma means in the equation. Does it mean the sigmoid activation? If so, multiplying two sigmoid activations (which the \alpha_v computation seems to do) might be ill-conditioned and numerically unstable.
6. (*) Is the object detection based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions correspond to which spatial locations in the feature map?
7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences).
8. L290: it would be good to clarify how the implemented bilinear layer is different from other approaches which do bilinear pooling. Is the major difference the dimensionality of embeddings? How is the bilinear layer swapped out with the Hadamard product and MCB approaches? Is the compression of the representations using Equation (3) still done in this case?
Minor Points:
- L122: Assuming that we are multiplying in equation (1) by a dense projection matrix, it is unclear how the resulting matrix is expected to be sparse (aren't we multiplying by a nicely-conditioned matrix to make sure everything is dense?).
- Likewise, unclear why the attended image should be sparse. I can see this would happen if we did attention after the ReLU but if sparsity is an issue why not do it after the ReLU?
Preliminary Evaluation
The paper is a really nice contribution towards leveraging traditional vision tasks for visual question answering. Major points and clarifications for the rebuttal are marked with a (*).
[A] Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2015. "Neural Module Networks." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1511.02799.
[B] Fukui, Akira, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. "Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1606.01847.
[C] Zhou, Bolei, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. "Simple Baseline for Visual Question Answering." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1512.02167. | 4. It is very interesting that the approach does not use an LSTM to encode the question. This is similar to the work on a simple baseline for VQA [C] which also uses a bag of words representation.
XIHl40UylS | EMNLP_2023 | * This work still does not realize transferring from *arbitrary unknown* styles since data generation needs stylistic data as input. Ideally, the data is supposed to be constructed unsupervisedly. The performance on out-of-domain styles is questionable.
* I don’t understand why in the human evaluation the authors use an automatic metric TSS rather than a human metric to evaluate the style control. This weakens the convincingness of human evaluation.
* The trend of NLP is towards zero-shot and few-shot. The approaches requiring heavy finetuning are less convenient in practice. | * I don’t understand why in the human evaluation the authors use an automatic metric TSS rather than a human metric to evaluate the style control. This weakens the convincingness of human evaluation. |
i7jAYFYDcM | ICLR_2025 | 1. The expert imitation procedure introduces overhead into the training pipeline, as each training step requires replanning. Although this is sidestepped by Lazy Reanalyze, it remains a fundamental limitation of the method.
2. The experiments are run with a small number of seeds (3 seeds).
3. The experiments succinctly prove the point that the authors try to make. That said, it would strengthen the paper to include experiments across more diverse domains (those in TD-MPC 2). | 3. The experiments succinctly prove the point that the authors try to make. That said, it would strengthen the paper to include experiments across more diverse domains (those in TD-MPC 2). |
SdoSUDBWJY | ICLR_2024 | 1. The authors do not sufficiently show that this degeneration is an issue empirically in my opinion. To start, the authors should show a few real examples where vanilla RNP gives a nonsense justification while the predictor still outputs the correct label; and show that RNP + A2I fixes these cases. In addition, the authors could consider plotting a histogram of the length of the rationale (for RNP and RNP+A2I), and showing that samples with short justifications correspond to degenerate cases (e.g. the punctuation example). Overall, the sparsity of the A2I augmented models (in Table 1) do not seem significantly different from the sparsity of the base models, and so I am not convinced that A2I solves the issue presented.
2. The proposed method makes sense, but there are several much simpler solutions that the authors should try and compare with. First, it seems to me that the root cause of the problem is that the generator is overpowered -- it's able to internally detect the label, and then feed special tokens correlated with the label to the predictor which do not have semantic meaning. As such, some simple solutions would be to reduce the capacity of the generator; add regularization to the generator; or to train the generator and predictor in an alternating fashion, with more steps for the predictor. The final suggestion is similar to how GANs are trained. In addition, the problem examined is very similar to mode collapse in GANs, and some of the solutions there (e.g. a diversity regularizer [1]) could work as well.
3. There are a few edge cases which I am not convinced that A2I will be able to fix. Primarily, these deal with the circumstance where, in the toy example, the $t_+$ do not appear in the negative samples, and vice versa -- so $t_+$ is a token that appears almost exclusively in positive examples, and $t_-$ is a token that appears almost exclusively in negative examples. Such spurious correlations have been found in natural language tasks [3-4]. In these cases, it seems like the attacker would not be able to choose the corresponding token, and would thus still output random noise.
4. Another concern deals with the singular sentiment assumption. This seems like a strong assumption that is very dataset and task specific, and the authors already discuss its failure modes in the appendices. The presence of negation seems to be another case where the assumption would be violated. As such, I am not convinced in the generalizability of the method to other datasets and tasks. Regardless, the authors should formulate this assumption mathematically in the text.
5. Overall, the clarity of the paper could be improved. Some of the formulation sections are hard to parse. For example, the authors formulate the problem as one of sampling bias, which makes sense intuitively. However, the mathematical formulation and causal graphs for this section don't follow the prior work in sampling bias [2].
6. The utility of the method is limited in the era of large pre-trained LLMs, which would achieve very high _zero-shot_ accuracy on all of the sentiment tasks evaluated, likely even higher than the GloVe + GRU networks studied in the paper. Such LLMs also have the capability of explaining its own reasoning (as the authors have referenced). To improve the significance of the method, the authors should consider applying their method to finetune a large-scale LLM (though the authors mention that even finetuning BERT is challenging for RNP). They could also consider applying it to images and graphs, as described in the introduction.
7. The authors do not show any confidence intervals for their results, so it is unclear whether performance gains are statistically significant. They also only evaluate on two datasets, though these seem to be standard datasets in the RNP community.
[1] Diversity-Sensitive Conditional Generative Adversarial Networks. ICLR 2019.
[2] Controlling Selection Bias in Causal Inference. AISTATS 2012.
[3] An empirical study on robustness to spurious correlations using pre-trained language models. TACL 2020.
[4] On Feature Learning in the Presence of Spurious Correlations. NeurIPS 2022. | 7. The authors do not show any confidence intervals for their results, so it is unclear whether performance gains are statistically significant. They also only evaluate on two datasets, though these seem to be standard datasets in the RNP community. [1] Diversity-Sensitive Conditional Generative Adversarial Networks. ICLR 2019. [2] Controlling Selection Bias in Causal Inference. AISTATS 2012. [3] An empirical study on robustness to spurious correlations using pre-trained language models. TACL 2020. [4] On Feature Learning in the Presence of Spurious Correlations. NeurIPS 2022. |
7FXgefa9lU | EMNLP_2023 | I like the paper overall, and I think the contribution is probably sufficient for a short paper. The concerns below could be addressed in follow-up work.
- The method is only evaluated on a single encoder model, and on two classification datasets. Experiments on a larger range of models/datasets would be necessary for the claims to be fully convincing.
- The paper does not evaluate the magnitude of interpretability tax associated with the method.
- The baseline model which is chosen for comparisons is quite old, predating BERT by several years. It is not clear how to interpret the current method's improvement over this baseline. | - The paper does not evaluate the magnitude of interpretability tax associated with the method. |
ICLR_2023_1088 | ICLR_2023 | The novelty is somewhat thin: Until the second half of page 5, the paper is mostly presenting existing background. The novelty mainly falls in Sec. 4. But the LUQ itself is rather straightforward to design, once the goal of designing a logarithmic and unbiased quantizer is clear. The approaches in Sec. 5 are also rather standard and to some extent explored in previous literature. I'd say the main contribution of this paper is showing that such a simple combination of existing techniques is sufficient to achieve (surprisingly good) accuracy, rather than proposing novel techniques. | 4. But the LUQ itself is rather straightforward to design, once the goal of designing a logarithmic and unbiased quantizer is clear. The approaches in Sec. 5 are also rather standard and to some extent explored in previous literature. I'd say the main contribution of this paper is showing that such a simple combination of existing techniques is sufficient to achieve (surprisingly good) accuracy, rather than proposing novel techniques.
NIPS_2021_1374 | NIPS_2021 | Weakness: Major:
Lack of many important technical details:
1.1. Some important symbols are not explained, which makes it very difficult to understand what has been done. For example, what is y_j? And how is Ω_t sampled (sparsely sampled on the trajectory only, or randomly and densely covering the testing environment)? What is V, E in Figure 2?
1.2. The method did not mention at all how the map Φ can handle variations in Ω_s caused by camera rotations, which is a key question on whether the map can localize the camera when it takes two photos at the same position but with different orientations.
1.3. How are the positive and negative samples in the triplets generated? What's the influence of the hyperparameters in this module (3.3.1) on the whole method? 1.4. There is no experiment/ablation study to justify the need/importance for equation (6) instead of Euclidean distance in a local area.
2. Lack of discussions of the failure cases. Would this method fail in some cases? For example, the experiments never showed the trajectory in symmetric environments/maps. If Ω_t is a square or rectangle, how can this method handle the rotational ambiguities? How does it know that an image is taken at the top-left corner instead of the top-right corner (because the optimal transport cost could be the same in this case)?
3. Lack of comparisons to baseline methods, and related work discussions on other weakly supervised training of the localization function Φ (essentially a PoseNet [Kendall 2015]).
4. Lack of real-world image-based localization experiments.
Minor:
5. Section 3.2 may be shortened since many equations are NOT proposed by this work, but are well-known in the community already. This would save space to include the above missing important details.
6. A topological map has a special meaning in SLAM and localization (no global metric information), which is different from the one used in this paper. It is suggested to change the term to a metric map, because it seems that you do need to make metric measurements (even if it could be noisy) to construct Ω_t.
7. Is it stable to train the deep localization network with the differentiable Sinkhorn? I wish to see some training losses.
8. What if you use L2 instead of KL for L(.,.)? Any ablation study? Do you have any reference to support your claim that "choosing Euclidean distances for L is ill-posed"?
I like the raw idea of the paper, and the problem being addressed is an important one in robotics. However, the lack of details and the quality of the presentation make it very difficult for me to recommend this paper for acceptance. | 7. Is it stable to train the deep localization network with the differentiable Sinkhorn? I wish to see some training losses. |
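Point 7 of this review concerns training through a differentiable Sinkhorn layer; the snippet below is a generic log-domain Sinkhorn sketch of the kind typically used to keep such layers numerically stable, written in PyTorch and not taken from the reviewed paper.

```python
import torch

def log_sinkhorn(C, a, b, eps=0.1, n_iters=50):
    """Entropic OT plan between histograms a (n,) and b (m,) for cost matrix C (n, m).

    Runs in the log domain for numerical stability; fully differentiable w.r.t. C.
    """
    log_a, log_b = a.log(), b.log()
    f = torch.zeros_like(a)
    g = torch.zeros_like(b)
    for _ in range(n_iters):
        # M_ij = (-C_ij + f_i + g_j) / eps; enforce row marginals, then column marginals.
        M = (-C + f[:, None] + g[None, :]) / eps
        f = f + eps * (log_a - torch.logsumexp(M, dim=1))
        M = (-C + f[:, None] + g[None, :]) / eps
        g = g + eps * (log_b - torch.logsumexp(M, dim=0))
    P = torch.exp((-C + f[:, None] + g[None, :]) / eps)   # transport plan
    return P, (P * C).sum()                                # plan and OT cost

C = torch.rand(5, 4, requires_grad=True)
a = torch.full((5,), 1 / 5)
b = torch.full((4,), 1 / 4)
P, cost = log_sinkhorn(C, a, b)
cost.backward()                                            # gradients flow through the iterations
print(P.sum(dim=1), P.sum(dim=0))                          # approximately a and b
```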
NIPS_2022_1818 | NIPS_2022 | Weakness: 1. The technical contribution of this paper is limited. Compared to existing CF methods, it only proposes to employ an extra popularity-based predictor and combine the results with an existing CF model. 2. The paper overclaims the strength of the proposed BC loss in the theoretical analysis. The geometric interpretability, theorem 1, the high/low entropy representations, and the hard-negative mining ability are actually the same thing (i.e., applying stronger constraints for samples with higher popularity) from different viewpoints. | 2. The paper overclaims the strength of the proposed BC loss in the theoretical analysis. The geometric interpretability, theorem 1, the high/low entropy representations, and the hard-negative mining ability are actually the same thing (i.e., applying stronger constraints for samples with higher popularity) from different viewpoints. |
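For readers unfamiliar with the popularity-adjusted softmax losses debated in the row above, a generic logit-adjustment sketch follows; the class counts are hypothetical placeholders and the code is not the paper's BC loss.

```python
import torch
import torch.nn.functional as F

def prior_adjusted_cross_entropy(logits, targets, class_counts, tau=1.0):
    """Cross-entropy on logits shifted by tau * log(class prior).

    Popular classes receive a larger additive shift during training, so the model must
    produce a larger margin for them; at test time the shift is simply dropped.
    """
    prior = class_counts / class_counts.sum()
    adjusted = logits + tau * prior.log()          # broadcasts over the batch dimension
    return F.cross_entropy(adjusted, targets)

# Hypothetical long-tailed setup: 4 classes with very different frequencies.
class_counts = torch.tensor([1000., 300., 50., 5.])
logits = torch.randn(8, 4, requires_grad=True)
targets = torch.randint(0, 4, (8,))
loss = prior_adjusted_cross_entropy(logits, targets, class_counts)
loss.backward()
print(loss.item())
```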
0bderX6zwr | EMNLP_2023 | 1. The proposed method may be less relevant to the authors' motivations in the abstract (automatic scores are not effective and human evaluation scores are not affordable). Since the proposed framework FFAEVAL and similar frameworks such as Chatbot Arena are used to compare dialogue systems, I do not think it can be directly used to evaluate a single dialogue system, for example by assigning a fluency score. So these arena-based evaluation systems may not solve the problems of current score-based evaluation systems.
2. The claims of this paper lack solid experimental support. The dialogue systems evaluated in the experiments are somewhat outdated, such as DialoGPT and PLATO-XL. The poor performance of old models may influence the evaluation process. Using current LLM-based chatbots like Alpaca or Vicuna would be more convincing. Besides, the experiments section lacks some details, such as the number of test examples and the inference settings.
3. Although the authors state some differences between their framework and Chatbot Arena (a popular evaluation framework for comparing different dialogue systems), I think the difference between the two methods is not obvious. Chatbot Arena compares models two by two and then computes the Elo score. FFAEVAL compares all models in one conversation and then computes the TrueSkill score. | 1. The proposed method may be less relevant to the authors' motivations in the abstract (automatic scores are not effective and human evaluation scores are not affordable). Since the proposed framework FFAEVAL and similar frameworks such as Chatbot Arena are used to compare dialogue systems, I do not think it can be directly used to evaluate a single dialogue system, for example by assigning a fluency score. So these arena-based evaluation systems may not solve the problems of current score-based evaluation systems. |
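Since the row above contrasts Elo over pairwise battles with TrueSkill over free-for-all conversations, a minimal pairwise Elo update is sketched below; the outcomes are hypothetical, and a TrueSkill-style multiplayer update would normally come from a dedicated rating library rather than hand-rolled code.

```python
def elo_update(r_a, r_b, score_a, k=32.0):
    """One pairwise Elo update. score_a is 1.0 if system A wins, 0.5 for a tie, 0.0 if it loses."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

# Hypothetical head-to-head outcomes between two dialogue systems (1.0 = system A preferred).
ratings = {"A": 1000.0, "B": 1000.0}
for outcome in [1.0, 1.0, 0.0, 1.0, 0.5]:
    ratings["A"], ratings["B"] = elo_update(ratings["A"], ratings["B"], outcome)
print(ratings)
```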
NIPS_2022_572 | NIPS_2022 | 1: No experiments on CutMix (and other variants, but CutMix should be enough). Does RegMixup also help CutMix?
2: The training of RegMixup seems to see 2x samples per iteration. Thus the running speed is slower (as the authors claim, 1.5x slower). When compared to other methods, RegMixup seeing 2x samples may lead to an unfair comparison.
3: It might be good to validate the performance of RegMixup on transformers and other training recipes (those used in transformer models).
Limitations are discussed. Negative societal impacts are not discussed. A potential negative societal impact is that the interpretability of RegMixup is still limited, and hence it needs to be applied carefully in sensitive applications. | 2: The training of RegMixup seems to see 2x samples per iteration. Thus the running speed is slower (as the authors claim, 1.5x slower). When compared to other methods, RegMixup seeing 2x samples may lead to an unfair comparison. |
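To make the "2x samples per iteration" concern concrete, below is a rough sketch of a mixup-as-regularizer training step (standard cross-entropy on the clean batch plus a mixup term on an interpolated copy of the same batch); this is an assumption about the general recipe, not the paper's exact implementation, and the model and batch are toy placeholders.

```python
import torch
import torch.nn.functional as F

def mixup_regularized_step(model, x, y, alpha=10.0, eta=1.0):
    """One training step that touches each example twice: once clean, once mixed."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]          # interpolated copy of the batch

    clean_loss = F.cross_entropy(model(x), y)
    mixed_logits = model(x_mix)
    mix_loss = lam * F.cross_entropy(mixed_logits, y) + (1 - lam) * F.cross_entropy(mixed_logits, y[perm])
    return clean_loss + eta * mix_loss             # roughly 2x forward passes per optimizer step

# Toy model and batch, just to show the step runs end to end.
model = torch.nn.Linear(16, 10)
x, y = torch.randn(32, 16), torch.randint(0, 10, (32,))
loss = mixup_regularized_step(model, x, y)
loss.backward()
print(loss.item())
```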
NIPS_2017_110 | NIPS_2017 | of this work include that it is a not-too-distant variation of prior work (see Schiratti et al., NIPS 2015), the search for hyperparameters for the prior distributions and sampling method does not seem to be performed on a separate test set, the simulation demonstrated that the parameters that are perhaps most critical to the model's application show the greatest relative error, and the experiments are not described in adequate detail. This last issue is particularly important as the rupture time is what clinicians would be using to determine treatment choices. In the experiments with real data, a fully Bayesian approach would have been helpful to assess the uncertainty associated with the rupture times. Particularly, a probabilistic evaluation of the prospective performance is warranted if that is the setting in which the authors imagine it to be most useful. Lastly, the details of the experiment are lacking. In particular, the RECIST score is a categorical score but the authors evaluate a numerical score; the time scale is not defined in Figure 3a; no overall statistics are reported in the evaluation, only figures for a select set of examples; and there is no mention of out-of-sample evaluation.
Specific comments:
- l132: Consider introducing the aspects of the specific model that are specific to this example model. For example, it should be clear from the beginning that we are not operating in a setting with infinite subdivisions for \gamma^1 and \gamma^m and that certain parameters are bounded on one side (acceleration and scaling parameters).
- l81-82: Do you mean to write t_R^m or t_R^{m-1} in this unnumbered equation? If it is correct, please define t_R^m. It is used subsequently and its meaning is unclear.
- l111: Please define the bounds for \tau_i^l because it is important for understanding the time-warp function.
- Throughout, the authors use the term constrains and should change to constraints.
- l124: What is meant by the (*)?
- l134: Do the authors mean m=2?
- l148: known, instead of know
- l156: please define \gamma_0^{***}
- Figure 1: Please specify the meaning of the colors in the caption as well as the text.
- l280: "Then we made it explicit" instead of "Then we have explicit it" | - l81-82: Do you mean to write t_R^m or t_R^{m-1} in this unnumbered equation? If it is correct, please define t_R^m. It is used subsequently and its meaning is unclear. |
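Related to the call for a fully Bayesian treatment of rupture-time uncertainty, a minimal sketch of turning posterior samples into an equal-tailed credible interval follows; the posterior draws are synthetic placeholders, not output of the reviewed model.

```python
import numpy as np

def credible_interval(samples, level=0.95):
    """Equal-tailed credible interval from posterior samples of a quantity (e.g. a rupture time)."""
    lo = (1 - level) / 2
    return np.percentile(samples, [100 * lo, 100 * (1 - lo)])

# Synthetic posterior draws of a patient's rupture time (arbitrary time units).
rng = np.random.default_rng(0)
posterior_rupture_time = rng.normal(loc=14.0, scale=2.5, size=5000)
print(posterior_rupture_time.mean(), credible_interval(posterior_rupture_time))
```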