This work studies the question of to what extent a reparametrization of an optimization problem, i.e. representing the original parameters w to optimize for as a function of some other parameters theta, can accelerate the convergence of the gradient flow / gradient descent for nonconvex optimization problems. It studies the dynamics of the flow via eigenvectors of a matrix M formed as the expectation over the outer product of the gradient of the loss with itself to reveal 'slow' and 'fast' modes of the evolution. It subsequently derives sufficient conditions for the reparametrization (which is chosen to be linear but time varying) to balance the decay on all modes. After discussing an efficient approximation of the theoretically derived scheme, numerical results demonstrate the effectiveness of the proposed reparametrization in two exemplary applications.
In terms of its strengths, this paper contains interesting thoughts about the intriguing idea that a temporally varying linear reparametrization of the unknown can accelerate gradient-flow-based optimization. The general topic, the combination of theoretical analysis and numerical experiments, and the bridge between the two by using efficient numerical approximations of what the theory demands, are strengths of this paper. And although the numerical experiments are certainly not exhaustive, there is some proof of concept of the benefit in the particular applications considered here.
Unfortunately, the paper also has some clear drawbacks. In particular, I found the paper difficult to follow, and the main idea from an optimization perspective appears to be unnecessarily hidden in a framework of "neural" reparametrizations. Unless I misunderstood the main idea significantly, the "neural reparametrization" illustrated by a neural network in Fig. 1(b) later turns out to be a linear parametrization only, i.e., considering the gradient flow for theta in $w(t) = \kappa(t) \theta(t)$ instead of in the original variable w. Before considering this to be a graph neural network, I would have been interested in how this idea relates to other classical optimization methods: Has the idea of a temporally changing but linear reparametrization not been considered in the optimization literature before? As kappa turns out to be the square root of the inverse of the Hessian, is there a relation to Newton or quasi-Newton methods? For me, the paper would have been easier to follow from this more classical optimization perspective. In particular, the gradient flow resulting from the linear reparametrization seems to be $\partial_t \theta(t) = \kappa(t)^T \nabla L (\kappa(t) \theta(t))$, and it should be stated explicitly. If the change in $\kappa$ is now negligibly slow in comparison to the change in $\theta$, and if $\kappa$ represents the (scaled) square root of the inverse Hessian, isn't that the flow arising from Newton's method? I would much prefer a clear motivation and presentation of the paper from such a classical perspective before delving into graph neural networks.
Some minor aspects:
- In equation (2) there is an $\epsilon_{i,j}$, but I think the way eq. (1) is written it is unclear what 'off-diagonal' elements in $\epsilon$ mean. (Of course the delta ensures there are no off-diagonals, but then I would avoid the notation.)
- "Equation 1 is also an ordinary differential equation"; I would call it a partial differential equation.
- I am sometimes not sure which quantities are random variables and which ones are not. In eq. (4), for instance, random variables seem to have been dragged out of the expectation, which I do not understand.
- An example of why the paper was a little difficult for me to follow are sentences like "When running GD, the maximum change in w is bounded to ensure numerical stability." This sounds like a modification of GD (like gradient clipping), but it is actually meant as a condition to limit the step size you are using. Thus, isn't the reasoning flipped, i.e., in order to ensure numerical stability, we have to bound ...?
- In the entire analysis, it could be made clearer that M is time dependent. The first sentence of Section 2.3 is the first time where it is really prominent. The discretization for time-dependent matrices might of course make the behavior of the actual algorithm differ from the (continuous) gradient flow.
- Before eq. (10) it is exemplified that $w = \sigma(A\theta + b)$ would be a valid choice. $A$, $b$, and $\sigma$ are, however, not defined, and if $\sigma$ refers to a (nonlinear) activation function, I do not see how this is true.
- page 2, "abounded"
- page 6, "adaptove learning rates"
- If the numerical experiments are carried out with Adam, shouldn't the theory also consider effects like (adaptively scaled) momentum?
- In Fig. 2, why does GCN-1 seemingly start with a much lower loss function value than the other methods? Does it have a sharp drop at the beginning?
- "GCN with $A^2$ as the propagation rule achieves the highest speedup". What is $A^2$? Please define.
- "where difficult to separate the slow and fast modes" >> "where it is difficult ..."
- The numerical results are, to my mind, not a strong indication of the proposed approach being a universal way to accelerate gradient-based methods. In particular, I am wondering how specific the acceleration results are to the applications. Also, what amount of hyperparameter tuning is required for the proposed approach to work well?
Although I like the general idea and do believe that reparametrization can balance out different convergence speeds of different modes to some extent, I found the presentation to be a little confusing. The approach seems to reduce to a linear reparametrization, which seems to relate it to other (more classical) approaches. Along with the list of minor aspects that make the paper a little difficult to follow, I need some clarification on this aspect.

<doc-sep>This paper proposes a reparameterization of non-linear non-convex optimization problems. This reparameterization amounts to a linear map (i.e., "optimization params = linear operation of a different set of parameters"). These linear maps are interpreted as a graph convolution network. The experimental results are validated on "Kuramoto models" and "persistent homology models".
Strengths:
* The idea of reparameterization is nice.
Weaknesses:
* The experimental evaluation consists of two problems that are not of interest to the ICLR community. I have certainly never seen either of them used in an ML paper. I have no idea how they relate to actual optimization problems I care about (i.e., training deep neural networks).
* The experimental work doesn't look thorough -- where are the learning rate sweeps, comparisons to other optimizers, etc.?
* The paper spends a substantial amount of space (pg 2-4) deriving well-known results (under assumptions that amount to strong convexity, where the $\lambda_{\max}$ to $\lambda_{\min}$ ratio controls convergence). I strongly suggest that the authors use the results and language of optimization, rather than going from first principles for no good reason.
* The final reparameterization is not very interesting -- although much ado is made about "using a neural network parameterization", it's just a linear map at the end of the day.
* Since the reparameterization is linear, this makes the overall idea very similar to a preconditioner. This should be touched on, and compared to, e.g., KFAC, Shampoo, and the many other linear preconditioners that people use. As with the optimization comment above, I think this work needs to be grounded more in the literature.
* GCNs are tangentially relevant, but don't seem to be used in any really meaningful way.
Technical comment: right after eqn 15, it says that H is positive semidefinite. Where does this come from? Isn't the base problem meant to be non-convex, in which case by definition H should have some negative eigenvalues at some point?
This paper is clearly unready for publication. The main idea -- using a structured linear reparameterization -- is under-developed, and the experimental results are on problems that the ICLR audience don't really care about.

<doc-sep>The authors derive a neural reparameterization of non-convex optimization problems in order to accelerate their convergence. They do this by deriving how the slowest components of the optimization variables can have their convergence rate improved by preconditioning with an NTK-based matrix. They make connections between this approach and Graph Convolutional Networks. Experimentally, they show this approach improves upon baseline gradient-based optimization on two datasets.
**Main comments**
- Overall, I think the paper is quite novel and the experiments fairly convincing.
- I really enjoy how much the authors walk through the individual steps of the gradient math which derives their neural reparameterization in 2.1 and 2.2. It is easy to follow and clear.
- However, one drawback of this approach is that it seems to only help the early stages of optimization, as this is how it is used in the experiments. I think the authors should take more care to make this point clearer. In particular, what prevents one from using this Hessian approximation for $\bar{M}$ as in Section 2.3 in the early stages of training when using Adam? It would be nice to see an ablation of the different components of their method, to understand exactly what component of the approach is contributing to the improved performance.
- How does this approach compare to gradient-based optimization in terms of memory consumption? How would this scale to large-scale datasets with larger parameter spaces, e.g. deep network training?
**Minor Points**
- The authors seem to pose the title and introduction to refer to any non-convex optimization problem, but in some parts of the paper they seem only focused on neural network optimization (e.g. Fig 1). It would be good to smooth out these inconsistencies.
- The abstract on OpenReview and the abstract in the article do not match.
- In the experiments, why is the term "linear" used to refer to the gradient-based baselines? I am not sure this is the best term to use, and it was confusing to me upon my first read.
Overall, I lean slightly towards acceptance. This is due to the clarity and novelty of the paper, as well as encouraging experimental results.
However, I think some more experimental verification is needed for ablating the different components of the proposed approach and for demonstrating its applicability to a broader range of problems.

<doc-sep>This work proposes a neural reparametrization scheme to accelerate a large class of nonconvex nonlinear optimization problems. The proposed method is grounded in the analysis that the dynamics of the gradient flow are related to the condition number of the system. More specifically, by reparametrizing the optimization problem with a graph convolutional network (GCN), the proposed method can modify the condition number and obtain a convergence speed-up; the acceleration is demonstrated on optimizing synchronization problems and the persistent homology of point clouds. The paper introduces a new network reparametrization method for accelerating optimization for nonlinear problems. Overall, the reviewer finds the paper a bit hard to follow, and the presentation of the paper can be significantly improved. The experiments are interesting but the comparison is not quite comprehensive. First, the reviewer is not fully convinced by the benefits of reparametrization. The reparameterization using a neural network can improve convergence speed, but on the other hand, the memory cost could be higher. Second, it is a bit unclear to the reviewer why, in Section 2.2, the authors considered the NTK. The NTK regime requires an ultrawide network in which the weights barely change. It is introduced a bit abruptly, without much explanation of the motivation behind it. Third, the speed-up in Figure 2 does not seem impressive. The authors only compared with a very basic baseline optimizer. More comprehensive comparisons are needed to draw the conclusion. Overall, the reviewer finds the paper a bit hard to follow, and the presentation of the paper can be significantly improved. The experiments are interesting but the comparison is not quite comprehensive.
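To make the relation to classical methods raised in the reviews above concrete, the linear-reparametrization calculation can be written out in a few lines (a sketch, assuming $w(t)=\kappa(t)\theta(t)$ with $\kappa$ changing negligibly slowly compared to $\theta$, and with the descent sign written explicitly):

$$\dot{\theta}(t) = -\,\kappa(t)^{\top}\,\nabla L\big(\kappa(t)\,\theta(t)\big) \quad\Longrightarrow\quad \dot{w}(t) \approx \kappa(t)\,\dot{\theta}(t) = -\,\kappa(t)\kappa(t)^{\top}\,\nabla L\big(w(t)\big),$$

$$\kappa\kappa^{\top} \approx H^{-1} \quad\Longrightarrow\quad \dot{w} \approx -\,H^{-1}\nabla L(w),$$

i.e., up to the quality of these approximations, the Newton flow, under which all linearized modes decay at the same rate instead of at rates spread between $\lambda_{\min}$ and $\lambda_{\max}$. This is the mode-balancing effect the paper aims for, and it is why several reviews read the method as a (time-varying) linear preconditioner.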
This paper proposes speeding up certain optimization problems common in physics by reparameterizing their parameters as the output of a graph neural network. The reviewers appreciate the idea, but are not convinced enough to recommend the paper for acceptance. They point out the following weaknesses:
* The method amounts to linear preconditioning, and hence it's reasonable to expect a fairly complete comparison to the many linear preconditioning approaches that have been proposed previously. The reviewers are not satisfied with the currently provided comparison.
* The main idea is not presented clearly enough. In particular, it's not obvious the proposed method is best described as neural reparameterization, since it seems to amount to linear preconditioning.
* The experiments are not persuasive enough: the presented problems may not be relevant to all of the target audience of ICLR, and the experimental evaluation does not seem sufficiently exhaustive.
The suggested areas of improvement provided by the reviewers seem reasonable to me; I therefore recommend not accepting the paper in its current form. To make the paper more accessible and appealing, the authors may consider rewriting the paper to more closely match the perspective taken by the reviewers, and providing a more thorough comparison to the previous approaches and the existing literature.
This paper addresses long-text generation, with a specific task of being given a prefix of a review and needing to add the next five sentences coherently. The paper proposes adding two discriminators, one trained to maximize a cosine similarity between source sentences and target sentences (D_{coherence}) and one trained to maximize a cosine similarity between two consecutive sentences. On some automatic metrics like BLEU and perplexity, an MLE model with these discriminators performs a little bit better than without. This paper does not include any manual evaluation, which is critical for evaluating the quality of generated output, especially for evaluating coherence and cohesion. This paper uses the task setup and dataset from "Learning to Write with Cooperative Discriminators", Holtzman et al., ACL 2018. That paper also includes many specified aspects to improve the coherence (from the abstract of that paper "Human evaluation demonstrates that text generated by our model is preferred over that of baselines by a large margin, significantly enhancing the overall coherence, style, and information of the generations."). But this paper: --Does not compare against the method described in Holtzman et al., or any other prior work --Does not include any human evaluations, even though they were the main measure of evaluation in prior work. This paper states that "To the best of our knowledge, this paper is the first attempt to explicitly capture cross-sentence linguistic properties, i.e., coherence and cohesion, for long text generation." There is much past work in the NLP community on these. For example, see: "Modeling local coherence: An entity-based approach" by Barzilay and Lapata, 2005 (which has 500+ citations). It has been widely studied in the area of summarization, for example, "Using Cohesion and Coherence Models for Text Summarization", Mani et al., AAAI 1998, and follow-up work. And in more recent work, the "Learning to Write" paper that the dataset and task follow from addresses several linguistically informed cross-sentence issues like repetition and entailment. The cosine similarity metric in the model is not very well suited to the tasks of coherence and cohesion, as it is symmetric, while natural language isn't. The pair: "John went to the store to buy some milk." "When he got there, they were all out." and "When he got there, they were all out." "John went to the store to buy some milk." would have identical scores according to a cosine similarity metric, while the first ordering is much more coherent than the second. The conclusion says "we showed a significant improvement": how was significance determined here? <doc-sep>The paper proposes a method for improving the quality of text generation by optimizing for coherence and cohesion. The authors develop two discriminators--a "coherence discriminator" which takes as input all of the sentence embeddings (i.e. averaged word embeddings) of the document and assigns a score, and a "cohesion discriminator" which takes as input the word embeddings of two consecutive sentences and assigns a score. In the former, the score is the cosine similarity between the encodings of the first and second half of the document. In the latter, the score is the cosine similarity between the encodings of the two sentences. Both discriminators use CNNs to encode the inputs. 
The discriminators are trained to rank true text over randomly drawn negative samples, which consist of randomly permuted sentence orderings and/or random combinations of first/second half of documents. This discriminators are then used to train a text generation model. The output of the text generation model is scored by various automatic metrics, including NLL, PPL, BLEU, and number of unique ngrams in the outputs. The improvements over a generically-trained generation model are very small. Overall, I did not find this paper to be convincing. The initial motivation is good--we need to find a way to capture richer linguistic properties of text and to encourage NLG to produce such properties. However, the discriminators presented do not actually capture the nuances that they purport to capture. As I understand it, these models are just being trained to incentivize high cosine similarity between the words in the first/second half of a document (or sentence/following sentence). That is not reflective of the definitions of coherence and cohesion, which should reflect deeper discourse and even syntactic structure. Rather, these are just models which capture topical similarity, and naively at that. Moreover, training this model to discriminate real text from randomly perturbed text seems problematic since 1) randomly shuffled text should be trivially easy to distinguish from real text in terms of topical similarity and 2) these negative samples are not (I don't think) at all reflective of the types of texts that the discriminators actually need to discriminate, i.e. automatically generated texts. Thus, even ignoring the fact that I disagree with the authors on exactly what the discriminators are/should be doing, it is still not clear to me that the discriminators are well trained to do the thing the authors want them to do. I have various other concerns about the claims, the approach, and the evaluation. A list of more specific questions/comments for the authors is below. - There are a *lot* of unsubstantiated claims and speculation about the linguistic properties that these discriminators capture, and no motivation of analysis as to how they are capturing it. Claims like the following definitely need to be removed: "learn to inspect the higher-level role of T, such as but not limited to, whether it supports the intent of S, transitions smoothly against S, or avoids redundancy", "such as grammar of each of the sentences and the logical flow between arbitrary two consecutive sentences" - You only use automated metrics, despite acknowledging that there is no good way to evaluate generation. Why not use human eval? This is not difficult to carry out, and when you are arguing about such subtle properties of language, human eval is essential. There is no reason that BLEU, for example, would be sensitive to coherence or cohesion, so why would this be a good way to evaluate a model aimed to capture exactly those things? - Also related to human eval, there should be an intrinsic evaluation of the discriminators. Do they correlate with human judgments of coherence and cohesion? You cannot take it for granted that they capture these things (I very much believe they do not), so present some evidence that the models do what you claim they do. - The reported improvements are minuscule, to the extent that I would read them as "no difference". 
The only metric where there is a real difference is on number of unique ngrams generated cross inputs, which is presumably because its just learning (being encouraged to) spit out words that were in the input. I'd like to see the baseline of just copying the input as the output. - You mention several times that these models will pick up on redundancy. It is not clear to me how they could do that. Aren't they simply using a cosine similarity between feature vectors? Perhaps I am missing something, but I don't see how this could learn to disincentivize redundancy but simultaneously encourage topical similarity. Could you explain this claim? <doc-sep>The idea of training discriminators to determine coherence and cohesion, and training those discriminators as part of an NLG system using policy gradients, is an interesting one. However, there are two major problems with the papers as it stands: 1) it completely ignores the decades of NLG literature on this topic before the "neural revolution" in NLP; 2) the presentation of the paper is confusing, in a number of respects (some details below). To claim that this is the first paper to capture cross-sentence linguistic properties for text generation is the sort of comment that is likely to make experienced NLG researchers very grumpy. A good place to start looking at the extensive literature on this topic is the following paper: Modeling Local Coherence: An Entity-Based Approach, Barzilay and Lapata (2007) One aspect in which the presentation is muddled is the order of the results tables. Table 2 is far too early in the paper. I had no idea at that point why the retrieval results were being presented (or what the numbers meant). You also have cohesion in the table before the cohesion section in 3.2. Likewise, Table 1, which is on p.2 and gives examples of system output, is far too early. Perhaps the biggest confusion for me was the difference between cohesion and coherence, and in particular how they are modeled. The intro does a good job of describing the two concepts, and making the contrast between local and global coherence, but when I was reading 3.1 I kept thinking this was describing cohesion ("T that follows S in the data" - sounds local, no?). And then 3.2 seems to suggest that coherence and cohesion essentially are being modeled in the same way, except shuffling happens on the word level? I suppose what I was expecting was some attempt at a global model for coherence which goes beyond just looking at consecutive sentence pairs. I wonder why you didn't try a sequence model of sentences (eg bidirectional LSTM). These are so standard now it seems odd not to have them. Do you describe the decoding procedure (greedy? beam?) at test time anywhere? I liked Table 4 and found the example pairs with the scores to be useful qualitative analysis. "Based on automated NLP metrics, we showed a significant improvement" - which metrics? not clear to me that the improvements in Table 3 are significant. Minor presentation points -- "followed by a logically sound sentence" - might want to rephrase this, since you don't mean logical soundness in a technical sense here (I don't think). The comment in the conclusion about being "convinced" the architecture generalizes well to unseen texts is irrelevant without some evidence.
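The symmetry point raised in the first review is easy to verify mechanically. A minimal sketch, using averaged word vectors (as in the discriminator inputs described above) in place of the CNN encoders; the vectors below are random stand-ins, not trained embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy word vectors (random, fixed seed) -- stand-ins for real embeddings.
vocab = "john went to the store buy some milk when he got there they were all out".split()
emb = {w: rng.normal(size=16) for w in vocab}

def encode(sentence):
    # Bag-of-words sentence embedding: mean of the word vectors.
    words = sentence.lower().replace(",", "").replace(".", "").split()
    return np.mean([emb[w] for w in words], axis=0)

def cohesion_score(s, t):
    # Cosine similarity between the encodings of two consecutive sentences.
    a, b = encode(s), encode(t)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

s1 = "John went to the store to buy some milk."
s2 = "When he got there, they were all out."

print(cohesion_score(s1, s2))  # identical ...
print(cohesion_score(s2, s1))  # ... to this: the score cannot prefer the coherent order
```

Any score of the form cos(enc(s), enc(t)) with a shared encoder assigns both orderings the same value, so by itself it cannot distinguish the coherent order from the incoherent one.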
This paper attempts to model the coherence of generated text, and proposes two kinds of discriminators that try to measure whether a piece of text is coherent or not. However, the paper misses several critical related references, and it also lacks extensive evaluation (especially manual evaluation). There is consensus among the reviewers that this paper needs more work before it can be accepted to a conference such as ICLR.
## Contributions This paper presents SBEVNet, a neural network architecture to estimate the bird's-eye view (BEV) layout of an urban driving scene. Given an image captured by a stereo camera, SBEVNet performs an inverse perspective mapping (IPM) to obtain an initial feature volume, which is further processed to generate the BEV layout. The system is trained end-to-end in a supervised learning setup. ## Strengths **S1** The problem considered here is very relevant to perception groups in the autonomous driving community. This area has only recently seen work crop up. Approaches like MonoLayout [A], MonoOccupancy [B], and PseudoLidar [C] are closely related to this submission. **S2** The paper is easy to follow, and provides a majority of the details needed to understand and assess the approach. **S3** The authors also seem to provide code (and promise a public release), which might help ensure reproducibility. ## Weaknesses I see a few major and a number of other minor concerns that impact my perception of this paper. I'm hoping the discussion period helps address some of these, and I'm open to revising my score in light of evidence contrary to the following claims. It appears that this paper uses MonoLayout [A], MonoOccupancy [B], and PseudoLidar [C] as primary baselines. Much of my review stems from my understanding of [A, B, C] (and my 'surprise' at a few contradictory trends observed in this paper.) **Problem setup** It is unclear from reading the paper and supplementary material if the problem setup is infact "amodal" layout estimation (i.e., if scene points outside of the camera view are predicted in the BEV layout). Approaches like (Schulter et al., 2016) and (Mani et al., 2020) operate in this "amodal" setup, while others such as PseudoLidar [C] and (Lu et al., 2019) only predict points that are visible in the input image. Does this approach, for instance, hallucianate hidden intersections and roads? (It seems not, since a visibility mask is explicitly employed in the loss function -- cf. Fig. 1 and Eq. 12, 13). **MonoLayout baseline** The primary baseline considered in this paper is "MonoLayout" (Mani et al., 2020). Upon examining the MonoLayout [A] paper, I find a surprising and troubling trend. This paper reports very poor performances of MonoLayout on the KITTI dataset (the original MonoLayout paper reports mIoU for the "car" class to be around 26.08, while the current submission reports 2.43 -- cf. Table 2). I've noted that MonoLayout makes its code and models publicly available (its publicly available pretrained models claim an mIoU of 30.18 for the "car" class), as highlighted on their GitHub page. Also, other baselines like "MonoOccupancy" have surprisingly low scores in this paper (an order of magnitude), compared to scores reported in the MonoLayout paper. I wonder if there is something different in the experiment and/or training protocols employed in the current work, as opposed to those in the MonoOccupancy and MonoLayout papers? For example, the MonoOccupancy baseline as reported in the MonoLayout paper achieves an mIoU of about 24.16 (for the car class) (MonoLayout paper - Table 1), while the same baseline has a dismal performance (mIoU of 7.11 for car class) in Table 2 of the current manuscript. The fact that this performance gap is not explained in the paper makes it hard to analyze the merits of the proposed approach. Save for a single sentence "The results of MonoLayout ... and MonoOccupancy ... 
are inferior due to lack of any camera geometry priors in the network", I've not found any other discussion of this performance gap/discrepancy. I also find it a tad weird (and unexplained) that the performance of various baselines do not seem to follow a set pattern/trend across the CARLA and KITTI datasets. In the MonoLayout paper, I notice that changing the dataset from KITTI to Argoverse does change absolute mIoU scores a bit, but preserves the ranking of various baselines (i.e., MonoLayout > OFT > MonoOccupancy on both KITTI and Argoverse). In the current submission, the trends seem to be changing across the two datasets (cf. Tables 1, 2). Yet another set of baselines that seem to underperform here are the PseudoLidar variants. In the MonoLayout paper (cf. supplementary material, Table 5), Pseudolidar is evaluated on the KITTI dataset, and the reported mIoU for vehicles is 59, whereas in this paper the best performance on this class achieved by a pseudolidar model is 45.64. Further, the MonoLayout paper's version of the (stereo) Pseudolidar baseline seems to perform quite competetively (mIoU 59.0) to SBEVNet Ensemble (mIoU 60.17 for "car", cf. Table 2). This seems to indicate that well-tuned baselines could perhaps achieve better performance? In Appendix A.2, the authors seem to indicate that they used a very different process to train MonoLayout (i.e., using random images from the train set as opposed to using OpenStreetMap and/or adversarial training). I suspect this might have resulted in a performance gap? I feel that OFT [D] could be cited and used as a baseline, particularly to measure layout estimation accuracy for the "car" class. **Qualitative results** Unfortunately, there seems to be a dearth of qualitative result figures to get a better sense of the approach. In particular MonoLayout and MonoOccupancy seem to obtain crisp reconstructions of cars (cf. MonoLayout paper), while in Figure 2., cars are splayed throughout the image in the SBEVNet results. This is also surprising; in my opinion, these results do not adequately substantiate the impressive reported mIoU. **Missing mAP metric** Other papers such as MonoLayout and OFT seem to report the mAP (mean average precision) metric in addition to the mIoU metric, because mAP often turns out to be a more accurate estimate of prediction performance (due to integrating over various recall values). In practice, this leads to less-than-perfect predictions being scored well (and this could explain the splayed-out results in Fig. 2 scoring a high mIoU). Evaluating mAP would be a stricter criteria, and will allow an additional point of comparison with prior art. ## Minor remarks The following remarks have had no impact on my assessment of the paper, and as such I don't expect the authors to respond to these. Concurrent approaches such as [F] can be cited and discussed. The paper could be structured better. For instance, input image sizes and baselines could be moved over to the main paper, rather than being listed in the appendix. ## References [A] Mani, Kaustubh, et al. "MonoLayout: Amodal scene layout from a single image." The IEEE Winter Conference on Applications of Computer Vision. 2020. [B] Lu, Chenyang, Marinus Jacobus Gerardus van de Molengraft, and Gijs Dubbelman. "Monocular semantic occupancy grid mapping with convolutional variational encoder–decoder networks." IEEE Robotics and Automation Letters 4.2 (2019): 445-452. [C] Wang, Yan, et al. 
"Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019. [D] Roddick, Thomas, Alex Kendall, and Roberto Cipolla. "Orthographic feature transform for monocular 3d object detection." arXiv preprint arXiv:1811.08188 (2018). [E] A Parametric Top-View Representation of Complex Road Scenes. CVPR 2019. [F] Lift, Splat, Shoot: Encoding Images From Arbitrary Camera Rigs by Implicitly Unprojecting to 3D. ECCV 2020.<doc-sep>The paper proposed to estimate the semantic layout in the bird eye's view from a pair of stereo images. The main novelty/contribution lies in how to organize and exploit the information from the stereo images. The proposed framework builds upon inverse perspective mapping, and projected stereo feature volume. The performance was evaluated on the KITTI and CARLA datasets. Given a pair of stereo images, there are various options to exploit the image information, where this paper provides a framework by exploiting the stereo information in the bird eye's view. - A principled question is what is the real superiority of estimating the layout in the bird eye's view. From the application' view, the semantic estimation from the camera'view already provide much information which the stereo images could further improve the performance. From the applications's perspective, I would like to see discussions and experiments in showing the superiority in using the bird eye's representation. - In the ablation studies, the paper already provide different variants of the network architecture in exploiting the stereo image information. I believe there are multi-task learning based framework form this task, where the semantic layout estimation and stereo estimation are jointly estimated and optimized. Whether that pipeline will provide extra benefit? - In section 3.4.4, the paper claimed that "We pass the concatenated stereo BEV feature map and IPM BEV feature map to a U-Net (Ronneberger et al., 2015) network to generate the semantic map C". However, the loss evaluation applies to the IPM features and Stereo features separately, namely, $\\mathcal{C}_i^{IPM}$ and $\\mathcal{C}_i^{Stereo}$. If two estimations are made as the network output, which one will be used for performance evaluation? The other following question is : If two separate estimations are made as the network outputs and compared with the ground truth for loss evaluation, whether a consistency loss between these two estimations will further constrain the network learning? - The paper conducted experiments on the KITTI and CARLA dataset. It is well understood that the CityScape dataset has been widely in evaluating semantic segmentation where the stereo images are available. I would to see more evaluation on these real-image dataset rather than synthetic dataset such as the CARLA dataset. - The paper title and abstract should highlight "semantic" and "bird eye's view" as the paper proposed to learn the semantic layout in the bird eye's view. The current title did reflect these properties. All in all, taking all the above comments into consideration, I would like to hear from the authors' response, which could lead to updated rating in either directions.<doc-sep>Interesting problem but the paper can be improved This work aims to directly estimate the world layout in front of the vehicle from a pair of stereo cameras. 
It is based on a cost volume, but it does not explicitly predict the depth value of each pixel. Instead, it warps the cost volume features to the bird's-eye view (BEV) and does semantic segmentation in BEV using a U-Net. I think the problem is interesting, and I believe it has never been mentioned and/or addressed before. Moreover, I think the motivation is also valid, as BEV semantic segmentation from camera sensors can be one important perception input for navigation and planning. I like the idea of skipping the explicit 3D reconstruction and directly shooting for the final goal; I believe we usually get better performance when we directly minimize the loss we want to minimize. Moreover, it could potentially provide some inspiration to other works, e.g., the direct extension - 3D (point/volume) semantic segmentation. Though I like this work, I also have several concerns:
1. Is the IPM feature really important? I only see that it is effective on the synthetic dataset CARLA but not on KITTI. What is the possible reason? My guess is that the ground estimation is very bad for the real-world data. I am also curious what the performance is if only the IPM feature is used.
2. In the introduction, this paper claims that estimating accurate depth is not sufficient due to occlusion. However, I don't see how this work could handle occlusion. Instead, the occluded part is masked out during training. Please explain this statement.
3. What is the range of the layout estimation? From CARLA, it is 39m, and from Figure 5 and Figure 6, it is 35m. If that is the case, the short range of the estimation makes it hard to act as a major component in the perception system; the best use case is for short-range detection and system redundancy. But actually I can imagine that it would not get very good results at long range, as there is always a trade-off between baseline (for accuracy) and camera overlap (for coverage) in stereo estimation.
4. What is the image resolution for the inference time test? It seems quite slow if the resolution is 512x288 (for CARLA) or 640x256 (for KITTI).
5. For the experiments, I think it is better to report mean $\pm$ std over multiple training runs, as there is training noise.
Other suggestions and clarifications:
1. When IPM is first introduced on page 2, it is better to explain it in a short sentence. The current version is not clear and there are typos.
2. I believe this work is based on binocular stereo pairs (correct me if I am wrong), so please explicitly say that in the paper. Also, using left/right image instead of reference/target image is less misleading.
3. For the disparity feature volume, it is better to use the prevalent name - cost volume. It is called a cost volume in the introduction but later called a disparity feature volume; I think it is better to be consistent.
4. It is unclear how the IPM features are obtained: from pre-determined parameters or from ground estimation? I think pre-determined parameters will not work very well because the ground is not always a perfect plane.
5. It is unclear what ensemble method is used here. If it just takes the best of several models, I will not be convinced.
After rebuttal: I still think this work has an interesting task setup, though it indeed has many faults (after reading the responses and the other reviews):
1. It seems that IPM is not really useful in practice.
2. It is also not robust to large occlusion, and thus there is no explanation of its advantage over `estimating accurate depth`.
3. The range is short and the latency is high.
4. After reading Reviewer 1's comments, I think it could use the same experimental setting as the existing methods for a fair comparison. The other methods might not be properly trained with the new setting.
5. It is still not clear how several models (with different trained weights) are ensembled in this work.
Thus I am changing my rating to 6, and I will not fight for this work.

<doc-sep>The paper proposes an end-to-end network for layout estimation from stereo images. The approach is built off previous stereo matching networks, which build and process a 3D disparity volume. The stereo estimate is used to project image features into a birds-eye-view representation, which is processed using a U-Net that predicts a semantic scene layout. The approach is evaluated on the KITTI and CARLA-generated datasets.
Strengths:
* This is the first work to attempt semantic layout estimation from stereo images
* The approach is geometrically grounded, and can properly leverage stereo information to improve layout estimation
* The approach performs well on the two datasets evaluated. Since this paper is focused on a new problem, there are no existing works to directly compare to. However, the paper provides reasonable baselines by modifying existing networks for this task
* Avoids the need for an intermediate representation (i.e. point cloud) by directly mapping features from the disparity volume into birds-eye-view coordinates
* Plots in the appendix are interesting
Weaknesses:
* While the task itself is new, closely related forms of the problem have been studied, for example 3D object detection from monocular/stereo images and monocular layout estimation. It would have been helpful to see results on the closely related task of 3D object detection to better compare against prior works.
* The IPM module appears to be very sensitive to the accuracy of the ground plane. In the synthetic CARLA dataset, where a ground plane can be accurately computed, there seems to be a large advantage to using the IPM module. On real-world data like KITTI, the IPM module gives very limited improvement over the stereo-only baseline.
* The task is closely related to 3D object detection, which has been using similar components. The core components of the approach have been used in various forms in prior work. The paper (Orthographic feature transform for monocular 3d object detection, Roddick 2019) uses a very similar method to project image features into a birds-eye-view representation.
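For reference, the flat-ground inverse perspective mapping that several reviews question is a small piece of camera geometry. A sketch under a pinhole model with zero pitch and roll; the intrinsics, camera height, and grid below are illustrative placeholders, not the paper's values:

```python
import numpy as np

def ipm_lookup(fx, fy, cx, cy, cam_height,
               x_range=(-10.0, 10.0), z_range=(3.0, 40.0), cells=(200, 370)):
    """Pixel coordinates at which to sample the image for every BEV ground cell.

    Flat-ground pinhole model with zero pitch/roll: a ground point at lateral
    offset X and forward distance Z lies at (X, cam_height, Z) in camera
    coordinates, so it projects to u = fx*X/Z + cx, v = fy*cam_height/Z + cy.
    """
    xs = np.linspace(*x_range, cells[0])
    zs = np.linspace(*z_range, cells[1])
    X, Z = np.meshgrid(xs, zs, indexing="ij")  # one (X, Z) pair per BEV cell
    u = fx * X / Z + cx
    v = fy * cam_height / Z + cy               # depends only on depth Z, not on lateral offset X
    return u, v                                # warp features with e.g. bilinear sampling at (u, v)

# Hypothetical KITTI-like intrinsics and camera height, for illustration only.
u, v = ipm_lookup(fx=721.5, fy=721.5, cx=609.6, cy=172.9, cam_height=1.65)
```

Because the sampled image row depends only on cam_height/Z, a small error in the assumed ground plane (height or pitch) shifts every BEV row, which is consistent with the observation above that the IPM branch helps on CARLA, where the ground plane is exact, but adds little on KITTI.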
This paper addresses the problem of estimating a “birds-eyed-view” overhead semantic layout estimate of a scene given an input pair of stereo images of the scene. The authors present an end-to-end trainable deep network that fuses features derived from the stereo images and projects these features into an overhead coordinate frame which is passed through a U-Net style model to generate the final top view semantic segmentation map. The model is trained in a fully supervised manner. Experiments are performed on the CARLA and KITTI datasets. While R2 was positive, they still had some concerns after reading the rebuttal and the other reviews. Specifically, they were not convinced about the value of the IPM module. This concern was also shared by R4, especially in light of the relationship to Roddick et al. BMVC 2019. R1 had concerns about the experiments, specifically the quantitative comparisons to MonoLayout. The authors addressed these comments, but it is still not clear if the differences can be attributed to the number of classes, how they are weighted, or the training split used? R3 had questions about the utility of BEV predictions in general. However, as stated by R2, there is a lot of value in approaching the problem in this way. In conclusion, while there were some positive comments from the reviewers, there were also several significant concerns. With no reviewer willing to champion the paper, there is not enough support to justify accepting the paper in its current form.
This paper considers the problem of DP isotonic regression. For general loss functions, an inefficient algorithm is proposed that achieves a utility with a logarithmic dependency on the alphabet size for pure DP. An efficient algorithm is provided for the $\ell_1$ and $\ell_2$ loss functions.
Strengths:
1. This paper considers the problem of DP isotonic regression. For general loss functions, an inefficient algorithm is proposed that achieves a utility with a logarithmic dependency on the alphabet size for pure DP. The result is tight if the paper considers the minimax setting. I think this is a very good result for an initial work in this area.
2. An efficient algorithm is provided for the $\ell_1$ and $\ell_2$ loss functions.
Weaknesses:
1. Most of my complaints are about the presentation. First, the paper does not distinguish between the minimax setting and the instance-based setting. Specifically, I was quite confused when I first looked at the discussion about the tightness. If the paper explicitly defined the two concepts, it could easily say the algorithm is minimax optimal but not instance optimal. Furthermore, it would be much better if the authors could define isotonic regression and the width in the introduction, making it easier for the readers to evaluate the results.
2. It remains interesting to see the effects of different loss functions, both on the sample complexity and on the computational complexity.
Minor:
1. No hyperlinks for theorems.
NA

<doc-sep>This paper considers the problem of private isotonic regression in which, given a dataset D consisting of n samples, the goal is to output a monotone function f that minimizes the empirical risk $L(f;D)=(1/n)\sum_i \ell(f(x_i),y_i)$. The authors give a pure DP algorithm for the most general version of the problem, which considers a poset X and a Lipschitz loss function $\ell$, and they obtain an expected excess empirical risk of $\mathrm{width}(X)\cdot\log|X|/n$. For a totally ordered set, this algorithm is efficient, and the idea is to privately choose a maximal point $\alpha$ via the exponential mechanism and then recursively obtain the final function f by gluing together the functions obtained from recursing on the two partitions of [m] created via $\alpha$. The authors note that a simple implementation of assigning the (unnormalized) empirical risk as the score function results in a large error, so instead they use a clipped version of the loss function as the score function, resulting in reasonably low sensitivity. The more general DP algorithm has a similar flavor, except that now one has to privately choose multiple maximal points $\alpha$, which leads to a less efficient algorithm (due to multiple calls to the exponential mechanism). The authors also obtain a near-matching lower bound of $(\mathrm{width}(X)+\log|X|)/n$. They achieve this by reducing to a known DP lower bound for DP algorithms that output a binary vector that is close to the input. They also show that while there is a gap between the demonstrated upper and lower bounds, there are posets that tightly realize each bound.
Originality: The contributions are original and would be of much interest to the overall machine learning community. It would be good to discuss existing DP algorithms on the closely related topic of simple linear regression in the Related Work section (see Daniel Alabi, Audra McMillan, Jayshree Sarathy, Adam D. Smith, Salil P. Vadhan: Differentially Private Simple Linear Regression. Proc. Priv. Enhancing Technol.
2022(2): 184-204 (2022)) Quality: The methods used are standard techniques in DP such as exponential mechanism, composition for the upper bounds, and reductions from known DP problems for the lower bounds. I am fairly confident that the work is sound, although I have not checked every single detail. Clarity: The paper is mostly well-written, just for completeness, it would be good to explicitly state the running time of the DP algorithm for the general posets. Significance: Isotonic regression is an important primitive in the machine learning toolbox. This work advances the state of the art on DP machine learning algorithms by adding this problem to the DP machine learning toolbox. The algorithms and proofs presented are relatively straightforward and easy to follow, and the authors also leave a set of intriguing open questions which may lead to further understanding of the complexity of machine learning tasks under DP constraints. See in comments above. No potential for negative societal impact. <doc-sep>The paper studies the problem of diferentially private isotonic regression. It first introduces an algorithm and its excess risk. It then studies a lower bound for solving this problem privately, and shows that the gap between the two bounds is tight, in the sense that for each bound there exist posets for which each bound is tight, and thus the gap cannot be closed. The algorithm runs in near linear time for totally ordered sets with $\\ell_1$ and $\\ell^2_2$ losses. Privacy is guaranteed by relying on the exponential mechanism to iteratively select threshold functions on smaller partitions of the domain. Then it applies standard composition to allocate the privacy budget across iterations. *Strengths* - The specific problem of isotonic regression has not been studied before in the privacy literature. - The paper provides a clear characterization of the problem introducing upper and lower bounds, and assumptions that allow for improvement or tightness of the results. - The paper is clearly written and well organized. *Weaknesses*: - The paper could provide more tangible intuition on the results, for example in what settings this would be a meaningful practical algorithm, and in what settings it is still a first attempt that needs improvements to be applied. Either a discussion or small synthetic experiments could help understand these results, especially given that there is no previous work on the area. This would give an intuition on the price of privacy, the easiness to tune clipping, etc. - Width(X) can be an extremely large quantity and make this algorithm impractical. <doc-sep>This paper is the first to deal with DP isotonic regression: where the domain is some *partially* ordered set X and the goal is to find a *monotonic* f:X\\to[0,1] that minimizes a certain empirical loss. The paper first discusses the totally-ordered set X case and then the partially ordered set case by implementing a generalization of the totally-ordered algorithm. This is a the first paper to tackle this problem. Strengths: * first to deal with this problem * poses upper- and lower-bounds that depend on both log(|X|) and width(X) (the max-length of an anti-chain in X). Weakness: * upper and lower bounds don't match yet fully That is of course, expected from a first paper. I think this is an interesting paper that is likely to instigate follow-up works on this version of ERM and many other variants of "constrained" ERMs. A clear accept. 
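To give the more tangible intuition asked for above, here is a schematic sketch of the divide-and-conquer use of the exponential mechanism described in these reviews for the totally ordered case. It is not the paper's algorithm: the surrogate score, the clipping, and the equal per-level budget split below are simplified placeholders; only the overall structure (exponential mechanism to pick where the monotone fit crosses the midpoint of the current value range, then recursion on both halves with composition across levels) follows the description.

```python
import numpy as np

def exp_mech(scores, eps, sensitivity, rng):
    """Exponential mechanism: sample index i with prob. proportional to exp(eps*score_i/(2*sensitivity))."""
    logits = eps * np.asarray(scores, dtype=float) / (2.0 * sensitivity)
    logits -= logits.max()                     # numerical stability
    p = np.exp(logits)
    p /= p.sum()
    return rng.choice(len(scores), p=p)

def private_isotonic(y, eps_total, depth=6, rng=None):
    """Toy DP isotonic fit on a totally ordered domain with labels y in [0, 1]."""
    if rng is None:
        rng = np.random.default_rng(0)
    y = np.asarray(y, dtype=float)
    out = np.empty(len(y))
    eps_level = eps_total / depth              # naive sequential composition across levels

    def solve(i, j, lo, hi, d):
        if i >= j:
            return
        if d == 0:
            out[i:j] = (lo + hi) / 2.0
            return
        mid = (lo + hi) / 2.0
        # score(k): negative surrogate loss if positions [i, k) stay <= mid and [k, j) stay >= mid.
        # Each label changes every score by at most 1, so sensitivity 1 for the unnormalised scores.
        left_pen = np.concatenate(([0.0], np.cumsum(np.maximum(y[i:j] - mid, 0.0))))
        right_pen = np.concatenate((np.cumsum(np.maximum(mid - y[i:j], 0.0)[::-1])[::-1], [0.0]))
        scores = -(left_pen + right_pen)
        k = i + exp_mech(scores, eps_level, sensitivity=1.0, rng=rng)
        solve(i, k, lo, mid, d - 1)
        solve(k, j, mid, hi, d - 1)

    solve(0, len(y), 0.0, 1.0, depth)
    return out  # monotone by construction: the left half stays below mid, the right half above
```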
First paper to tackle a new problem, and as such the painting isn't "complete" yet.
Most reviewers found the paper well written, with no serious doubts regarding correctness. We hope the authors incorporate the reviewers' comments in their final revision to improve the presentation.
This paper studies the problem of neural network compression using analysis in the high-dimensional NTK regime. Their main results show that under this regime, the spectral properties of both NTK and CK matrices are independent of the distribution of the weights up to normalization and centering. Instead they depend on a number of parameters that define the activation functions at each layer. This finding informs a new compression technique where a new (compressed) network can match the activation parameters at each layer to enjoy the same spectral properties as the original net. This NTK-LC technique is evaluated on synthetic data by qualitatively comparing the distribution of eigenvalues and on real data by comparing test accuracy with naive baselines. Strengths: The paper has a clear motivation in the field of neural network compression, a relevant problem that is lacking theory. It is clearly written with thorough theoretical results and experiments on both synthetic and real-world data. Weaknesses: 1. The claim in line 49 seems to be a central theme of the paper but has no follow-up discussion on its meaning and implications. 2. Theoretical claims are presented in the asymptotic regime of infinite n and p (Assumption 1). 3. A particular GMM distribution is chosen for the input data of the studied model without justification for why it is the relevant distribution to be analyzing. 4. The results in Figure 2 rely on a qualitative measure of "closeness" to evaluate the method instead of a metric that can be quantified and compared. Many of the markings in the top histograms are barely noticeable and require magnification to be seen. 5. The results in Figure 3 are compared with "naive" baselines instead of competitive state of the art methods. The authors have addressed the limitations of using NTK theory to explain the behavior of modern neural networks. <doc-sep>The authors showed asymptotic eigen-spectral equivalence conditions for fully-connected NTK given GMM data and certain assumptions, based thereon they proposed a net compression scheme with sparse and low-precision random weights, and demonstrated with examples. [+] Results linking NTK and random matrix theory with DNN compression is of timely interest to the field. [+] Though I cannot say I followed all proofs, the main ideas and motivations are well presented. [-] A lack of comprehensive experimental comparison with baseline approaches is limiting the practical significance of the findings. See above. <doc-sep>This paper characterizes the asymptotic spectral equivalence between NTKs of dense and quantized networks. It shows that under certain assumptions of data (high-dim, Gaussian mixture data) and network architectures (wide MLPs), quantized networks have the same NTK eigenspectra of unquantized ones. This finding allows the authors to perform model quantization with little performance degradation. The paper is very well written -- the authors crafted their paper with immense care and taste for mathematical detail. The main results of the paper (Theorem 1 and Theorem 2) are novel and subsume previous studies [2, 32] as special cases. Overall, I think this is a high-quality paper. One weakness of this paper is in its numerical evaluation. As I detailed below, the baselines used for the model pruning (randomly removing weights) seem to be too brutal and too weak. It is beneficial to incorporate more realistic baselines such as magnitude-based pruning. 
One limitation of the paper, as mentioned in my questions above, is its lack of natural baselines for model pruning in the experiment sections. I encourage the authors to consider incorporating them.
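The weight-distribution universality that the compression scheme relies on is straightforward to probe numerically. A small sketch for a single hidden layer and the conjugate-kernel Gram matrix only; the width, dimensions, mixture, and ternary quantizer below are arbitrary illustrative choices, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, width = 800, 600, 2000     # samples, input dim, hidden width (large, to mimic the asymptotics)

# Two-class Gaussian mixture input, as in the analyses discussed above.
means = rng.normal(size=(2, p)) / np.sqrt(p)
labels = rng.integers(0, 2, size=n)
X = means[labels] + rng.normal(size=(n, p)) / np.sqrt(p)

def ck_eigenvalues(W):
    """Eigenvalues of the conjugate-kernel Gram matrix (1/width) * f(X W^T) f(X W^T)^T with ReLU f."""
    F = np.maximum(X @ W.T, 0.0)
    K = F @ F.T / W.shape[0]
    return np.linalg.eigvalsh(K)

# Dense Gaussian weights vs. ternary weights, both with zero mean and unit variance.
W_dense = rng.normal(size=(width, p))
W_tern = rng.choice([-1.0, 0.0, 1.0], size=(width, p), p=[0.3, 0.4, 0.3])
W_tern /= W_tern.std()           # match the second moment of the dense weights

print(np.quantile(ck_eigenvalues(W_dense), [0.5, 0.9, 0.99]))
print(np.quantile(ck_eigenvalues(W_tern), [0.5, 0.9, 0.99]))  # expected to be close, per the universality claim
```

Matching the first two moments of the weight distribution is what the "up to normalization and centering" caveat in the summary above refers to; with the rescaling line removed, the two spectra differ (here by a constant factor, since ReLU is positively homogeneous).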
In the paper, the authors provide theorems that establish that for GMM input data, the NTK matrices of dense and quantized DNNs have the same eigenspectra in the asymptotic limit of high input data dimension and sample size. These results motivate network compression algorithms which demonstrate good empirical performance even outside the regime for which the proofs are established. The theorems provide a novel extension that contains previous studies as special cases. The baseline comparisons included in the paper are somewhat limited in nature, and the authors should re-evaluate their choice to use the word "lossless" with quotes, and instead use a more accurate term that does not require quotes.
In this paper, the authors propose a new approach to determine an optimized subset of weights to update instead of simply conducting a full weight update. In order to better update the weights, they measure each weight's contribution to the analytical upper bound on the loss reduction from two sides (globally and locally). After this evaluation, a weight is updated only if it has a large contribution to the loss reduction given the newly collected data samples. The experimental results show that their method can achieve high inference accuracy while updating a rather small number of weights.
Strengths: The idea is easy to follow and seems readily applicable. The paper is well structured and written in general.
Weaknesses:
1. Lack of explanations:
(1) From the reward measurement side (motivation side): in the introduction, the authors do not explain why they pick the loss as the weight measurement criterion instead of others (e.g., accuracy), while they report accuracy as one of the evaluation results.
(2) From the update algorithm side: the paper mentions that the weight updating method is determined via both global and local contributions, and states in 3.1 that 'It turns out experimentally, that a simple sum of both contributions leads to sufficiently good and robust final results'. However, it is not convincing that those two factors have equal impact on the final prediction.
(3) From the updating setting side: the defined updating ratio seems to be an important factor, as discussed in Section 2, but not enough content is provided in the paper to describe how to calculate this ratio.
(4) Re-initialization mechanism: re-initialization is another important factor in the weight updating, as discussed in Section 3.2 ('trained from the last round for a long sequence of rounds. Thus, we propose to re-initialize the weights after a certain number of rounds'). However, the way the number of rounds after which the network needs to be re-initialized is computed does not seem plausible.
2. Evaluation:
(1) Lack of comparison: it would be good if the authors could apply their method to some recent works (or models), which would also show how flexibly their method can be adopted or applied.
(2) There is no content in the paper showing how the authors decide their experiment settings, for example, why they always select k (the weight changing ratio) to be very small (0.01, 0.05, 0.1, 0.2) instead of 0.5.
(3) In Fig. 2, it is curious why the authors apply different settings on different datasets when comparing their methods.
(4) For Section 4.2, it would be good if the authors could also try other initialization schemes, for example using the average of the weights in each round window instead of directly using the latest round's weights.
(5) In Table 1, it seems full updating can still beat the combined method; however, in Fig. 2, the authors do not explain why DPU has better performance than the other settings even compared with the full update.
(6) In Fig. 3, while DPU with re-init achieves better performance than the others, there is no explanation of why it does not perform well in the first few rounds.
(7) The authors do not mention over how many runs they conducted their experiments to produce the results.
3.
Some parts need further improvement, for example: (1) in Fig. 3, it would be good if the authors could add some explanatory text for {1000, 5000}; (2) Section 3 is a little hard to follow and should be reorganized; (3) the related work could be expanded to better cover the most recent works.<doc-sep>Summary: The paper proposes a weight-wise partial updating paradigm which adaptively selects a subset of weights to update at each training iteration while achieving comparable performance to full training. Experimental results demonstrate the effectiveness of the proposed partial updating method. Strengths: 1. The paper is well written. 2. The process of upper-bounding the loss difference is clear. 3. Experiments are conducted on various datasets with various network structures to support the proposed method. Weakness: My major concern is about novelty and contribution. Although the paper shows some application scenarios of partial updating, I still think that pruning would be more appropriate for them. Furthermore, the metric combining global and local contributions amounts to choosing two similar weight norms to select the top-k weights, which is very similar to what is done in pruning. So I suggest rejecting this paper. ---- The authors' rebuttal and the revised version have not fully addressed my concerns. It is not surprising that partial updating outperforms pruning by a large margin, as inference after a small update still uses all of the network's weights. Compared to pruning, the technical contribution of this work is limited, so I would like to keep my original rating. <doc-sep>Summary: This paper presents a method to reduce the bandwidth required to update DNN models on edge devices. The key insight is that model updates typically incorporate new data (training samples), and that after doing so, a minority of weights capture the majority of change due to retraining. The authors propose a method by which to identify this weight subset, and compare the relative size (and test accuracy) of that update to that of other solutions (such as sending the entire network or sending a random subset of the weights on each retraining round). Experiments with a number of existing data sets and models illustrate that the approach reduces update size by more than 77% while maintaining reasonable test accuracy. === pros === + Paper is sufficiently motivated. As edge devices use more (and larger) models and as their count increases, the relevance of partial update techniques to accommodate model decay will remain. + The proposed technique provides a "sister" technique to pruning, not identifying nodes with the greatest weights to retain, but identifying weights with the greatest changes to retain. The policy is informed by choosing weights that minimize the difference in loss between the fully retrained network and its partially updated version. + The paper is rounded out by practical items, such as encoding weight ids, a policy to determine when to retrain the network from scratch (re-initialization), and avoiding sending updates when validation error does not significantly change. + The evaluation looks at a variety of angles, the ratio of initial training set to update size, different data sets and architectures, and compares to a random partial update strategy as well as a simplified version of their approach "GCPU". === cons === - The overall presentation is difficult to parse. - The technique owes much to pruning methods and methodologies. The technical approach (choosing weights, iterative rewinding) follows from recent work on pruning.
It would be great to have that discussion in related work, moving it out of Section 3.1 and Section 4. - Ultimately, existing pruning techniques can reduce networks by 90%. By Amdahl's law, this implies that these techniques reduce communication by 7-10%, not 70-99%. - Equally important, does the technique work well on pruned networks? Unimportant updates may not be as available in such networks. On the other hand, if you do the comparison and all updates are important, then over the course of the lifetime of the installed NN, using DPU instead of pruning would be the winner. - Experiments in key graphs aren't clear: is there re-initialization in Figure 2? Figure 3 performance never falls relative to full updating during re-initialization. While the text (S3.2) makes it seem that the nodes reset all weights, using only 1% of the weights would impact test accuracy relative to full updating. === suggestions / questions === Overall, I found the work interesting, useful, and complete (aside from eval sec questions above). It would be useful to introduce a metric that combines update size with accuracy loss at the beginning of the paper. The evaluation does this, but consider pulling it forward and defining it explicitly. Each round incurs a communication cost in bytes and experiences some accuracy, so, for example, one can capture changes in accuracy per byte, i.e. model improvement by update size. Since you are comparing to other techniques that can reduce the bandwidth similarly, we want to optimize this ratio. Some networks work very well with small k. But how low can you go? I.e., how does one choose k? Perhaps the accuracy/bytes metric could be informative. It would be interesting to discuss on why winning lottery ticket theory gets us 80-90% reductions, but this technique admits 99% reductions by retaining information in the rest of the network. The startup procedure is not clear. The graphs and discussion in S3.2 make it sound like the entire network re-initializes. Can we be clear about what the first network looks like? I'd assume all the weights. But if we start from random values (sending the seed), the first round only updates $kI$ weights. Can the test performance of the network with only 1% of its weights be 65% (Figure 2)? Similarly, if DPU is re-initializing, why is the test accuracy monotically increasing -- the installed network would go back to ground zero. Clearly I'm missing something, or your measuring the performance of the network at the server (w^f) and not the installed network (w^r). Similarly Table 2 should have a column for the number of rounds that required no communication. DPU won't send updates if validation error doesn't decrease significantly. It isn't clear whether you gave the same benefit to Full Updating. === writing / terminology / notation === Overall the presentation is difficult to get through. For instance, Section 3 has many awkward constructions. It seems like there's a simple picture here, similar to the Train, Prune, Retrain flow of pruning work. It seems deeply analogous, with the exception that rewinding replaces pruning. The evaluation section refers to Alg 1 and Alg 2, but Section 3.0 refers only to "step 1" and "step 2". Are there better words than step? This section also refers to the second step as an "optimization" step. You end P1 by saying you're optimizing eq 1 in step 2, then you say step 1 optimizes the same equation. The last sentence of the S3.0P3 re-iterates what was said in P1. I'm sorry, but it's a bit of a slog. 
The use of notation is consistent. There are a couple of things that felt like speed bumps. I kept wanting to parse \\delta W and \\delta D using \\delta as a variable, like $kI$. At the end of Section 2, introducing a new form of w^r as w~ was confusing. Do we need w~? Sometimes you use L0 norms (S2 eq 2) and other times you use summation (S3.1). You use curly braces in S4P1 for the sizes of the initial training set and the updates. It looks like a set, not a configuration. The text says the two sizes "represent the available data samples." But here it's just a configuration -- it's not the set of samples at all (and it wouldn't be, because not all updates R are present). ==== nits ==== Please learn the difference between that and which. Remove "in order to." Capitalize Figure and Section. Some references use name style, others use indices, but the bibliography is all by name. #confused. <doc-sep>This paper proposes a deep partial updating paradigm that reduces the computational and communication cost on edge devices by updating only the most important weights in each round instead of performing a full update. It also proposes metrics to select weights by global and local contributions, and the experimental results show the efficacy of the proposed methods. In summary, the method proposed in this paper looks practical and easy to implement, but the theoretical justification needs further clarification. I'm not sure about the significance of this paper as I'm not an expert in this area, so I prefer to leave this to other reviewers to decide. In general, the paper is well written and easy to follow, and the motivation is sound. However, the justification of the global and local contributions needs to be clarified further. The inequality of Eq. (3) can hold only if f is L-smooth and convex, which implies that the loss function is assumed to be L-smooth and convex. So what is the justification for the definitions of the global and local contributions when the loss is non-convex, which is the most common case in the experiments? Without the theoretical justification, the global contribution, which selects the weights with the largest values, is basically the same as pruning, and the local contribution basically measures the change in loss caused by updating a weight. They may still be practical, but the novelty is limited. The experimental results show that the proposed method can obtain performance similar to full updating at a much lower communication overhead. It seems a very practical method in this area and the paper provides an interesting empirical study. The simple combination of global and local contributions outperforms each individual contribution; I'm wondering whether the authors have tried other ways to combine them, and why this way is better. One minor comment regarding the structure of the paper: as the initialization strategy plays a role in this method, it would be better to move the experimental comparison of different initializations into the main text, and the appendix can be placed after the bibliography in one file. ################ Feedback to the authors' response ############### As the authors have addressed some of my main concerns and provided nice extra experimental results, I will raise my score to 6.
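For concreteness, here is a minimal sketch of the kind of top-k selection rule these reviews describe (a simple sum of a per-weight global and local contribution, followed by keeping the top-k fraction of weights). The concrete score definitions and names below are my assumptions for illustration only, not the paper's exact formulas.

```python
import numpy as np

def select_partial_update(w_old, w_new, grad_new, k):
    """Return a boolean mask of the weights to transmit/update.

    Sketch only: the 'global' and 'local' scores are stand-ins for the kind of
    quantities the paper combines (a weight-change magnitude and a first-order
    loss term); the paper's exact definitions may differ.
    """
    delta = w_new - w_old                         # increment a full update would apply
    global_contrib = np.abs(delta)                # assumed global contribution per weight
    local_contrib = np.abs(delta * grad_new)      # assumed local (gradient-based) contribution
    score = global_contrib + local_contrib        # "simple sum of both contributions"
    n_keep = int(np.ceil(k * w_old.size))         # k = weight changing ratio, e.g. 0.01-0.2
    mask = np.zeros(w_old.size, dtype=bool)
    mask[np.argsort(score)[-n_keep:]] = True
    return mask

# Example: update only 5% of the weights of a (flattened) model
rng = np.random.default_rng(0)
w_old, w_new, grad_new = rng.normal(size=(3, 1000))
mask = select_partial_update(w_old, w_new, grad_new, k=0.05)
w_deployed = np.where(mask, w_new, w_old)         # only the masked increments are sent
```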
The paper proposes an approach to selectively update the weights of neural networks in federated learning. This is an interesting and important problem. As several reviewers pointed out, this is highly related to pruning although with a different objective. It is an interesting paper but is a marginal case in the end due to the weakness on presentation and evaluation.
This paper uses CycleGAN to map neuronal activities of mice (as measured by Calcium traces) pre- and post-learning. The main contributions are (1) empirical results of using CycleGAN to learn the pre- and post-learning mapping look promising. (2) using both attention mask (which is for gating residual concatenations) and Grad-CAM to help with interpretation. (3) sorting neurons with an autoencoder’s reconstruction error, essentially sorting them based on their importance. Strength - Extensive ablation studies, although most are not in the main text but in the appendix. - Using an autoencoder to sort neurons without bias seem to work better than sorting them by firing rate. - Using paired synthetic data to show the effectiveness of applying CycleGAN. And for real data, cycled reconstruction and other distribution metrics are promising, interpretations from both attention masks and Grad-CAM comply well with experiment settings. Weakness - Figures are not explained clearly. What are 6-89 in the right columns of attention mask figures (like 3, 4, 5)? - The writing is not very effective: for example, the second to last paragraph in section 2.4 can be simplified to a much shorter one with the addition of an equation describing it. An equation, paired with the module figure, will also be easier to understand than this long paragraph. Question / suggestion - Is the model shared among all mice, or does each have its individual model? Namely, is this learned mapping more universal or more individual? - Formulation wise: self-attention can also be applied among different neurons (just like in graph attention networks), and this might eliminate the need for pre-sorting. Essentially disentangling spatial and temporal information, modeling the spatial relationship as a graph with a learned adjacency matrix. In this way, neurons will be permutation invariant/equivariant, and the sorting is not needed, and the whole model can be learned end to end. And lastly, it’s not a 9-page paper at all: there are too many places in the experiment section referring to appendix sections that **require** one to look into them for getting a full context. The same is true for the model architecture part. I feel I’m forced to read a 32-page paper… Empirically, applying CycleGAN to reveal the mapping of pre- and pos-learning neuronal activities shows good results! Although the architecture or method novelty is not significant, and there is some unclarity of the writing, it can be a good starting point for further explorations. <doc-sep>This paper presents a new method for learning the transformation in neural population activity that takes place during task learning. The method is based on CycleGAN, but includes additional modifications related to neural data and the manner in which it is collected. The paper also presents visual interrogations of the learned model to better understand details of the learning process. strengths --------- The paper proposes a novel and creative analysis method for understanding learning-related transformations in neural activity; I am not aware of work like this in the neuroscience literature. Inclusion of self-attention is a nice way get a better handle on these potentially complex transformations. Is there more that one can say about the masks extracted in Fig. 4? The GradCAM localization maps are also very useful for understanding what the model learns; in particular, the "positional attention maps" in Fig. 5 are really cool. 
It would be neat to investigate these neurons/positions in more detail, especially if you could show this model uncovered some aspect of the data that would have been difficult to find with traditional methods. concerns -------- Intro: "In other words, given the neural recordings of a novice animal, can we translate the neuronal activities that correspond to the animal with expertlevel performance, and vice versa?" This is a super interesting question, and the answer this paper appears to give is "yes". However, I'm left scratching my head about what *exactly* one can learn from this translation. I'm not suggesting a new analysis (though ultimately the usefulness of this method will depend on whether or not it can uncover new insights/guide new experiments), but a more thorough discussion on how these translations can be used would make this a more compelling introduction. 2.2: I found this section hard to follow; perhaps moving Fig. B.1 (or something similar) to the main text would help with this? Even more useful (but less general) would be an explanation of the CycleGAN in the context of the neural data. For example, "Let X and Y be the neural activity before and after learning, respectively. The GAN-based framework consists of a generator G: X->Y that maps novice neural activity to expert neural activity; and a second generator F: Y->X that maps expert neural activity to novice neural activity..." This would make it easier (for me at least) to have an intuitive understanding of what the different losses correspond to. 2.3: Again, describing MAE(X, F(X)) etc would be a lot clearer in the context of the neural activity 2.3.1: Why does this ordering process work? I understand that, in order to use 2D convolutions, there must be some non-random ordering to the cells, but it's a bit bizarre to me that reconstrutcion quality from an autoencoder would be meaningful in this way. Is it possible to motivate this choice better? Another useful baseline would be to just use 1D convolutions in time and remove the spatial structure. Of course this means you can no longer use architectures out of the box, but also removes a poorly understood aspect of the preprocessing. 2.3.2: What is the motivation for this spatiotemporal transformation? While useful to show that the model can handle this, it seems fundamentally different from the types of transformations present in the neural data. 2.4: There are a lot of details here that are important to document, but distract from the main point of the paper. Perhaps move most of this to supplemental and use the extra space for a model/process diagram? 3.1: The raw MAE numbers are difficult to interpret as presented in the text. Maybe one part of table E.1 could be moved to the main text? Also, I think presenting Figure 2 first is a faster way to get an intuition for what the model is doing, and how well it is working. minor ----- typo, second paragraph in introduction: Prince et al exteneded the framework *to* work with... table A2: day 4 -> day 1 rewards 2.3: where do the splits 3000, 200, 200 come from? Is this the total number of segments? how did you arrive at this number, is this related to the stride of the sliding window? seems this would result in highly correlated data samples, is that an issue? 3.2: Would be nice to see parts of Fig. 
F.1 in the main text; maybe show a single neuron, and present more in the supplemental The paper addresses an interesting neuroscience problem from a unique perspective; however, I am still unclear exactly what the method is learning, and how it can be used to gain additional insights into the data. My initial assessment is to not accept this paper. <doc-sep>The paper proposes to learn a mapping for neural activity in the mouse visual cortex: the mapping is from neural activity before learning to neural activity after learning, and this is achieved using CycleGAN. The paper also performs additional analysis to interpret the weights learned by the generator and discriminator networks, as well as assess the quality of the networks' reconstruction of the neural activity. The approach mapping pre-learning neural activity to post-learning activity using GANs is interesting, as is the methodology to sort neurons based on an autoencoder reconstruction loss. It was also good to see the paper systematically exploring how to choose a loss function for training the GANs -- a non-trivial issue for GAN training in general, and not always addressed in GAN papers. However, there are some major concerns about the paper: 1. The overall motivation of _why_ one would want to map pre-learning to post-learning neural activity was not clear. Although the introduction and discussion briefly discuss interpretability for neural learning, it was not clear how this analysis contributes to that. 2. Using GradCAM maps to visualise "regions of interest" for the discriminator was interesting, but this analysis does not seem to lead to any deeper understanding about the neural activity. While the discriminators learn that the activity around the reward regions are critical to distinguishing pre- and post-learning activity (Figure 5), it is not clear why the generators don't do this. Moreover, it is possible that this effect would vanish, or other regions of interest might appear if the networks were conditioned on information about the stimulus or trials. It is also not clear how this analysis is generally applicable, or what insights can be gained if it were applied to neural data from a different task. 3. Reordering the neurons in the data using an autoencoder reconstruction loss appears to be a critical preprocessing step in the training pipeline -- however, the choice of an autoencoder over other approaches to reordering are not clearly motivated. Although this does lead to better reconstruction of the neural data from the GANs, it appears to make the subsequent step with the CycleGAN redundant: if you can accurately reconstruct neural data from the autoencoder, then why train an additional set of adversarial networks to do the same thing again? 4. It would be nice to have an estimate of the compute resources and time required to train all the networks in the pipeline (autoencoder, CycleGAN) and perform the post-hoc analysis with GradCAM, etc. In the light of doubts about motivation and benefits of the method, it would also be relevant to know how computationally expensive it is to implement. 5. 
There was an overall lack of clarity, and particularly in the methods and results section: - all equations are inline and not numbered and therefore references to terms in the equation are hard to keep in mind while trying to understand section 2.2 and 2.3; - extensive details about architecture and training frequently detract from understanding what steps the training pipeline consist of (perhaps these should be moved to the appendix) - there was no explanation of how to interpret the GradCAM maps in Figures 4 and 5, or what the colours / numbers mean, and the explanation of their overlap with reward regions appeared to be handwaving. It would be nice to see the pre- and post-learning neural activity for precisely those neurons that the discriminator assigns attention to in Figure 4. - the description of the results was confusing, and many of the plots that the conclusions here rely on are in the supplementary section, making it hard for a reader to follow any reasoning based on these plots. Minor comments: 1. Heatmaps in Figures 2 and 4 are hard to interpret without colourbars 2. Neural activity in Figure 1 is barely visible due to the colour scheme 3. The explanation for how plots in Figure 5 is generated appears only in the caption, with no elucidation in the text. 4. There does not appear to be information of recording time per trial in the main paper The motivation for the methods presented in the paper are not clear, the conclusions are not very convincing and consequently, it is hard to judge the contributions of the paper. A lack of clarity in the methods and results section exacerbates this.
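For reference, and in the spirit of the notational suggestion made above, the standard CycleGAN objective that I assume the paper adapts can be written with X the pre-learning activity, Y the post-learning activity, generators G: X -> Y and F: Y -> X, and discriminators D_X, D_Y as $$\\mathcal{L}(G, F, D_X, D_Y) = \\mathcal{L}_{GAN}(G, D_Y, X, Y) + \\mathcal{L}_{GAN}(F, D_X, Y, X) + \\lambda \\big( \\mathrm{MAE}(X, F(G(X))) + \\mathrm{MAE}(Y, G(F(Y))) \\big).$$ Whether the paper uses exactly these terms (e.g., additional identity, attention-gating, or reconstruction losses) is an assumption on my part, and is precisely the kind of detail that should be spelled out in the methods section.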
This paper received 2 marginally below and 1 marginally above ratings. We discussed the paper with the reviewers and there was broad consensus that 1) the paper lacked clarity; 2) multiple modeling choices were debatable (e.g., ordering or embedding of neurons and convolution over neurons!!) and not sufficiently justified (and these choices will critically impact the conclusions drawn from the analysis); 3) we were not convinced by the relevance of the synthetic data to reflect a meaningful biological process; 4) we did not see any meaningful knowledge gained for biology from this whole analysis. My recommendation is thus to reject this paper.
The paper theoretically and empirically analyzes the link between connectivity patterns of deep networks and their convergence. Based on this analysis, the authors propose a couple of training-free metrics (effective width and effective depth) and use them to do training-free NAS. The paper presents a thorough link between gradient descent convergence and structural properties of deep networks by proving a link between number of unique paths, number of incoming paths at certain nodes in the network and the least eigenvalue of a Neural Network Gaussian Process (NNGP) variance. Essentially, the authors look at how the NNGP variance changes as the data propagates through the model and relate it to how many unique paths it goes through and how many incoming edges land at the last layer (in the DAG topology used to formulate this problem). These principles are then baked into their “effective depth” and “effective width” metrics that do not require any training or forward/backward passes to compute. Some empirical results are shown for this convergence. The remaining paper looks at training-free NAS by cutting down the search space. Results are shown on CIFAR-10/100, ImageNet-16-120, and ImageNet. The paper has following strengths: 1. This is a principled approach to study an important problem, i.e., understanding how deep network architecture topologies and connectivity patterns relate to training convergence. Knowing this is obviously very valuable because (a) we can design better models directly without a long, time-consuming search, (b) in future, better understanding of how architectures impact accuracy can enable us to create even more novel models. Indeed, it is a hard problem, so coming up with an intuitive method for this problem is very hard. 2. The proof seems very intuitive and the resulting metrics (effective depth and effective width) are very simple to compute and do not require any training or even forward/backward passes. One can just look at the architecture connectivity and tell with reasonable confidence if it is a good or bad architecture. 3. Empirical results for training convergence and NAS-bench architectures, etc., present some interesting insights (although, they also bring up some questions, see below). 4. Authors have shown experiments all the way to ImageNet which is nice. Despite the strengths, there are some weaknesses and questions. Addressing these would likely make the paper much stronger: 1. The empirical validation of the theory (section 5.1) is done using only three architecture topologies. This is not that expensive (particularly for MNIST and CIFAR-10). Can the authors look at many different deep nets with different connectivity patterns and somehow present more data on convergence? 2. Experiments section could in general be stronger. For instance, the ImageNet result on TE-NAS is not good enough. We see some accuracy improvement, but we also see model size increase compared to vanilla TE-NAS. Moreover, since the training-free NAS itself is very cheap, further improvements in search time do not make a significant difference. In Figure 5, each point represents a subset of models that achieved a similar accuracy. Were there any interesting patterns in terms of their number of parameters/MACs? Example: did the white/black circles contain models of a certain size in terms of parameter counts? Were the models deeper but narrower within a circle? Or shallower but wider? 3. 
The very first paper that pioneered the creation of such connectivity-based metrics was reference [R1], cited by the authors. This paper needs to be discussed in much more detail in Section 2.2/2.3 as there are many interesting synergies. For instance, [R1] also showed the link between connectivity patterns and training convergence. [R1] showed very concrete training convergence curves and showed that their proposed metric correlated very well with convergence. The metric proposed by [R1] is also training-free. Indeed, the present study is much more fine-grained, more theoretical, and more generally applicable than [R1]. 4. Other weaknesses include the lack of characterization of differences in kernel sizes, etc., but this is not critical and could be good future work. 5. The proof of equation (14) in Appendix B has a typo? K_{ii}^{l} should have a square on \\sigma (right below "Proof of Lemma 3.2")? Honestly, this paper could be much stronger if there were a bit more focus on empirical results justifying the theory (e.g., if Section 5.1 were much stronger). [R1] Bhardwaj, Kartikeya, Guihong Li, and Radu Marculescu. "How does topology influence gradient propagation and model performance of deep networks with DenseNet-type skip connections?" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021. The authors could comment on the role of kernel sizes, etc., and on the fact that connectivity alone does not take this into account in the present study. <doc-sep>The authors propose a formal analysis framework to estimate the upper bound on training loss convergence for DAGs with a variety of network topologies. Based on that, they propose a plug-and-play method to speed up previously reported NAS methods by applying a filtering step. The results demonstrate that the proposed method searches for better networks, and does so faster. + To my best knowledge, this is perhaps the first theoretical study directly focusing on fine-grained NN topological connectivity rather than on a general NN function, despite some prior works exploring parts of this, e.g., [50] for NN width/depth and [69] for skip connections. It is an important yet understudied direction, and this work appears to lay a good foundation. + I have gone over the details of the derivation of the convergence analysis of DNNs with respect to connectivity patterns, and it seems sound. For the bound estimation, the authors choose a simplified theoretical model of DAGs + MSE loss, using the NNGP kernel. The key step is to estimate the flow of the NNGP variance and mean through unidirectional information paths in the specific topology. The theoretical results are examined by a series of simulations in Sec 5.1 and Sec 5.2. + The notions of 'effective depth' and 'effective width' are clearly defined, and the authors also give clear guidelines on how to use them in NAS. The method is shown to accelerate two of the latest NAS methods and to outperform more of them across multiple benchmarks. The experiments validate the effectiveness of the proposed method. In general, I think this is a cool and well-polished paper. One question: the authors demonstrate their down-selection technique on two training-free NAS approaches (NAS-WOT and TE-NAS), which are already very fast and therefore show only an insignificant reduction in search time. I wonder why the authors did not try the proposed approach on more costly and accurate search methods to see whether the accuracy-vs-search-efficiency gain is still favorable. Current discussion is sufficient.
<doc-sep>This paper studies how a wide network's NNGP kernel can characterize the optimization dynamics of a particular DNN topology, by propagating the NNGP kernel spectrum and showing how the topology affects the bound on the convergence rate. Based on this observation, the authors introduce the two notions of "effective depth" and "effective width", which can be plugged into existing NAS methods to filter out "unpromising" connectivity patterns for a speedup. Strength: This is an important new piece of work towards connecting deep learning theory and NAS. Prior works already adopted theoretical properties of general DNNs, but never established their correlation with the concrete NN architecture topology, beyond some empirical observations. This paper is the first to theoretically justify the optimization implications of general DNN topologies, which is likely to become a milestone for future work in this frontier. Throughout the paper, the theory and application aspects are tightly coupled, and the story is coherent. The claims are theoretically sound (a non-surprising, yet nice adaptation of NNGP proofs). In the experiments, multiple benchmarks are reported, with error bars. Section 5.3 is very helpful in understanding how the effective width/depth principles are used to improve NAS in a plug-in fashion. I especially like how the authors can choose d and m in a principled, justified way, not ad hoc. Overall the writing is very good, clear and easy to follow. It's a mature paper. Weakness: NNGP is a rough characterization of optimization dynamics, hence its precision in comparing architectures is limited. More importantly, as mentioned around line 280, "Although our d and m are only inspired from the optimization perspective, not the complexity or generalization, our method mainly filters out bad architectures at a coarse level, but does not promote elites in a fine-grained way." I am not sure whether this optimization-only selection bias would lead us to miss architectures that are excellent in complexity/generalization (and hence offset their perhaps mediocre optimization behavior). The authors apply their principles to accelerating TE-NAS and NAS-WOT. These are two of the earliest training-free NAS approaches. Could the authors also try more recent ones, e.g., Zen-NAS or zero-cost proxy NAS, and see whether the improvements hold? Moreover, what about non-training-free NAS methods: can they be accelerated by such pre-filtering too? In Tables 1 and 2, the FLOPs of the searched architectures could be included too. No particular negative impact. Limitations were discussed by the authors.
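For concreteness, the connectivity statistics referenced in these reviews (the number of unique input-to-output paths and the number of incoming edges at the output node) are cheap to compute directly on the architecture DAG. The sketch below is my own illustration of that point; it does not reproduce the paper's exact definitions of effective depth and effective width.

```python
from collections import defaultdict

def path_statistics(edges, source, sink):
    """Count source->sink paths and the sink's in-degree in a DAG (sketch)."""
    succ, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    # Topological order via Kahn's algorithm
    order = []
    frontier = [n for n in nodes if indeg[n] == 0]
    remaining = {n: indeg[n] for n in nodes}
    while frontier:
        u = frontier.pop()
        order.append(u)
        for v in succ[u]:
            remaining[v] -= 1
            if remaining[v] == 0:
                frontier.append(v)
    # Dynamic program: number of distinct paths from the source to each node
    paths = defaultdict(int)
    paths[source] = 1
    for u in order:
        for v in succ[u]:
            paths[v] += paths[u]
    return paths[sink], indeg[sink]

# Toy topology: input -> two parallel branches -> output, plus a skip connection
edges = [("in", "a"), ("in", "b"), ("a", "out"), ("b", "out"), ("in", "out")]
print(path_statistics(edges, "in", "out"))  # (3, 3): three unique paths, in-degree 3
```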
This paper studies the relationship between connectivity of a deep network and its convergence, both theoretically and empirically. The paper also studies simpler metrics such as effective depth and width to guide the architecture search. Overall this is an impressive theoretical paper supported by empirical evidences. All the three reviewers find the paper a valuable contribution to an important theoretical problem in deep learning. After reading the rebuttals, Reviewer rAbP recommended to accept this paper in its current form. Reviewer D7qw felt that all the concerns had been well addressed, and increased the score by one. Reviewer 6D9f agreed with the authors' response.
In this paper, the authors propose a transformer-based encoder-decoder framework for label-free text style transfer. The task described, under the unsupervised setup, is important and instructive for the text style transfer domain. The model architecture is well demonstrated and the writing is easy to follow. The experimental results show satisfying performance even compared with state-of-the-art supervised methods. However, I have some concerns that constitute weaknesses of the paper: 1. About the assumption: The authors claim the method is label-free. However, the "unsupervised" model rests on the assumption that two adjacent sentences should have the same style. With this assumption, the training of the model is actually weakly supervised, because in each step the paired sentences are provided under the premise that they share the same style. The assumption effectively exploits context-level supervision instead of sentence-level labels. This idea was also previously used in [1]. 2. About the framework: The model adds the extracted style vector to all the hidden states of the encoder. How can the authors guarantee that the encoder will not itself extract the style information of the input? Also, is it possible that the style vectors still contain content information from the context? 3. About the style vector: The model changes the style of a sentence by adding the direction from the source style vector to the target style vector. This approach may work under the assumption that the style vector space is linear with respect to the semantics, but there is no regularizer or training loss that guarantees this linearity of the style extractor. Why didn't the authors directly replace the sentence's style vector with the target style vector? 4. About the dataset: The model is only evaluated on one dataset. The results would be more solid if the authors conducted experiments on other commonly used style transfer datasets such as Yelp and Personality-Captions [2]. Besides, the split Amazon review dataset only has two sentiment classes, "positive" and "negative". It would be more persuasive if the model were tested on other datasets with multiple sentiments, to verify the effectiveness of the proposed re-styling strategy. 5. About the evaluation: The authors only report content and style preservation (Acc and self-BLEU). The generation quality should also be reported, e.g., via the BLEU score of the generated sentences. 6. In Figure 4, there is no clear pattern separating positive and negative sentence embeddings. The differences in embedding space are mainly caused by different topics, which in my understanding are the content of the sentences. This means the style vectors cannot eliminate the content information and also fail to separate sentences with different sentiments. Reference: [1] Zhang et al. Improving the Dialogue Generation Consistency via Self-Supervised Learning, 2019 [2] Shuster et al. Engaging Image Captioning via Personality, 2018<doc-sep>This paper tackles the problem of extracting and modeling text (writing) style without using labeled data. Traditionally, modeling text style requires either paired sentences (supervised) or two pools of unpaired sentences (so-called unsupervised). This paper exploits 1) language model pretraining and 2) free supervision signals in text to model text style without labeled data. For the 1st point, the authors (correctly) hypothesize that a large pre-trained language model (e.g.
T5) already "knows" about style information, and that one can isolate this style information using the right fine-tuning signal. For the 2nd point, the authors assume that text style (e.g., sentiment) is slow-moving and consistent across adjacent sentences (I guess a similar signal is exploited by Next Sentence Prediction in BERT and by CBOW in word2vec?). This is then used as the "free" supervision signal for fine-tuning their model. In the experiments section, the authors test their model on style transfer tasks. The experiments (Figs. 2 & 3) seem to suggest that at a given high content preservation score (> 50), the proposed model is not as accurate as other supervised models. But at low content preservation, the model can steadily improve accuracy by modifying more words (Fig 2). In Figure 2, the TextSETTR accuracy has an almost (inverse) linear response with respect to the content preservation score. But in Figure 3, the plot for TextSETTR stops at "30-50%". What would happen if the modification percentage were higher? Would TextSETTR get closer to 100% accuracy? Another small issue with Figure 3: I believe the task is binary (pos vs neg). It might be more useful to plot the accuracy from 50%-100% instead of from 0%-100%, since 50% is the practical lower bound on performance. <doc-sep>########################################################################## Reasons for score: This paper proposes a novel approach to the label-free style transfer task where an input is corrupted via different strategies and fed into an auto-encoder which is additionally conditioned on its prior adjacent context sentence via a "style encoder" that adds its mean-pooled hidden state to the former before decoding. Both encoders are initialized from and leverage the strength of the pre-trained T5 model. Additionally, the amount of addition/deletion of tokens is tunable at both training and inference time. The overall idea is quite compelling, but the paper's argument could be improved greatly with revisions to its existing experimental setup and more evaluation overall to better and more thoroughly back its claims. ########################################################################## Pros: 1) The authors propose a novel approach to the label-free style transfer task that is based on evaluating how training under different combinations of 3 noising strategies (Noise, Back Translation, and Noisy Back Translation) on input texts can be used in conjunction with an auto-encoder and a style encoder over the prior sentence context, to then do inference given an input text and a small number of exemplars of the source and target styles. The idea is laid out fairly clearly both for training and inference, though certain particulars there and in the experiments section were a little unclear and could have benefited from some formal notation (see next section). 2) The quantitative results on the Amazon dataset for their best model in both the full-data and few-shot regimes are quite impressive compared with the other label-free style transfer paper they compare against (Xu 2020). 3) The few qualitative examples shown are impressive (particularly the American <-> British ones). 4) The tuning hyperparameter is a useful addition (though it'd be interesting to see how dataset-dependent it is). ########################################################################## Cons: 1) Overall the writing was a little unclear in certain spots and could have benefited greatly from some equations explicitly stating the setup.
For instance, I was unclear whether the context representation is added (which the text suggests) or concatenated to the noisy encoding before being decoded (the latter is suggested by Figure 1, especially since the 4 float values for the tuning rate ranges are said to be prepended). Similarly, the sampling strategy used (as opposed to greedy decoding) is unclear. 2) Doing quantitative evaluation on only one dataset (Amazon) and then only showing examples of how the model does qualitatively on another dataset (English Common Crawl "C4"), without doing any human eval, is a little disappointing. The idea is novel enough that even just some more automated evals would suffice for me. For instance, why weren't automated metrics given for the English Common Crawl dataset? Those results, along with information on training set size and the average token length of each example for C4, should be given. Also, the authors compare against Lample 19 for the pos->pos and neg->neg setup on the Amazon data; why not show the results for the SYelp data as well? Does the 20-40% add/delete tuning work better there as well, or is it dataset-dependent? 3) There are two issues with the use of the Amazon dataset. First, it doesn't really provide an apples-to-apples comparison against the prior papers, as they train/test on the same data from Li 18, which has ~270K training examples, whereas the work here generates 23.6M training examples. It seems you should either see how those papers do with that much data or limit your dataset to at least a comparable size to be fair. Second, the Amazon test set is only of size 500, so assessing results on that alone seems insufficient. 4) The paper hypothesizes that style is a "slow moving" feature consistent over large spans of text, hence the use of only the prior adjacent sentence as context. The paper shows that using just an adjacent sentence gives promising results, but doesn't show that it's necessarily better than just using exemplars or using a leading paragraph to derive the style from. I don't think this necessarily needs to be addressed here, but for future work it would be nice to see such a comparison. Additionally, how would using 1,000 exemplars as opposed to 100 at inference time affect performance? A graph showing how accuracy and content preservation are affected by that would be interesting for a better understanding. Similarly, showing how the NBT strategy does alone (as opposed to N + NBT) would be interesting. 5) I didn't find the multi-aspect UMAP embedding visualization particularly convincing regarding how well the embeddings separate the "sentiment" aspect, as there is substantial overlap within each category (particularly software). I don't know if this is particularly necessary for the argument, in my opinion (especially compared with evals on other datasets), but if so, then it'd be interesting to have quantitative numbers for those separations, and a comparison with just taking the T5 embeddings and doing the same UMAP. 6) The "replace" noise strategy feels pretty arbitrary. Is there any motivation behind using that as opposed to using an LM or another strategy to replace tokens? 7) A citation for using Self-BLEU as opposed to Multi-BLEU in the Evaluation Procedure section would be helpful.
Additionally, a citation of Ke Wang, Hang Hua, Xiaojun Wan, "Controllable Unsupervised Text Attribute Transfer via Editing Entangled Latent Representation" (NeurIPS 19), particularly for its "tunable" aspect, could be an addition to the Related Work section. 8) This is nitpicky and probably for future work, but the use of exemplars doesn't necessarily limit the user to a pre-defined set of styles (like the unsupervised case does); however, it would be interesting to see what would happen given out-of-domain exemplars for either the source or target classes at inference time. ########################################################################## Questions during rebuttal period: Please address and clarify the cons above. ######################################################################### Possible prior citation <doc-sep>This paper proposes a method for text style transfer that does not need label information for the style of interest. The authors extend the T5 model to build an architecture that extracts a style vector from arbitrary text and uses this vector to condition the decoder to perform style transfer. However, the current presentation of the paper is hard to follow, which raises the following concerns: 1. The method needs pairs of chronologically adjacent sentences, which are not always available. Hence, the authors randomly select sentence pairs, but then the two sentences may not share the same style. How do the authors account for this? 2. For inference, they need sentence exemplars for both styles. This contradicts their previous claim. They have not compared with 3. How is the noise introduction helpful for generating style-corrupted sentences? No heuristic is used, and from a single sentence there can be multiple corrupted variants; are all of them used during training? If so, the style being learned is possibly not the intended one, as different corrupted sentences might need different styles to reconstruct the original sentence. 4. The model section is extremely cryptic. What is back translation, etc.? It should at least be explained in one line. 5. Due to the unreadability of the model description, I cannot provide a judgement on the results section.
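To make concrete what I had to reverse-engineer from the text, here is the exemplar-based inference procedure as I currently understand it. This is a rough sketch: the function names, the mean-pooling over exemplars, and the purely additive conditioning are my assumptions, not confirmed details of the paper.

```python
import numpy as np

def style_vector(encoder, sentences):
    """Mean style vector over a small set of exemplar sentences (sketch)."""
    return np.mean([encoder(s) for s in sentences], axis=0)

def transfer(encoder, decoder, x, src_exemplars, tgt_exemplars):
    """Shift the input's style by the exemplar-derived direction (sketch).

    The reviews describe the transfer as adding the direction from the source
    style vector to the target style vector; how exactly the decoder is
    conditioned (addition vs. concatenation, tuning ranges, sampling) is one
    of the unclear points raised above.
    """
    direction = style_vector(encoder, tgt_exemplars) - style_vector(encoder, src_exemplars)
    conditioned = encoder(x) + direction  # assumed: added to the encoder representation
    return decoder(conditioned)

# Toy stand-ins just to make the sketch executable (not the T5 encoder/decoder)
toy_encoder = lambda s: np.array([len(s), s.count("!")], dtype=float)
toy_decoder = lambda v: f"<decoded from {v}>"
print(transfer(toy_encoder, toy_decoder, "great product!",
               ["bad product", "terrible!"], ["love it!", "amazing quality"]))
```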
This paper proposes a new method for label-free text style transfer. The method employs the pre-trained language model T5 and makes an assumption that two adjacent sentences in a document have the same style. Experimental results show satisfying results compared with supervised methods. Pros. • The paper is generally clearly written. • The proposed method appears to be new. • Experiments have been conducted. Cons • The fundamental assumption of the method is not convincing enough. (Issue 1 of R3, Issue 4 of R4, Issue 1 of R2) • The proposed model is also not convincing enough. (Issues 2 and 3 of R3, Issue 3 of R2) • There are problems with the experiments. For example, it would be better to use more datasets in the experiments. (Issue 4 of R3, Issue 2 of R4) Discussions have been made among the reviewers. The reviewers appreciate the efforts made by the authors in the rebuttal, including the additional experiments. However, they are not fully convinced and still feel that the submission is not strong enough as an ICLR paper.
1. The proposed solution makes sense and is a reliable extension. Considering that the baseline picks the maximum of the posterior $p(D|x)$, it is a nice idea to extend the algorithm to include more randomness and increase exploration in order to improve the robustness of the results. 2. The paper is mostly well written. Notably, the introduction (Section 1) is extensive, easy to read, and presents the state of the art and the motivation for the work excellently. 3. The results corroborate the improvement of the algorithm, displaying enough significance to warrant using the proposed solution rather than the original baseline. 1. The main issue relates to Section 3.2. This is the main novelty of the paper and I feel that it is badly presented. Judging from the text, there is not sufficient information to allow a user to reproduce the algorithm. When using the tracking algorithm, is an image subvolume with the seed at its center used? The output of the original algorithm is a vector that maximises $p(D|x)$. This is just a direction given a subvolume and a previous centre point; how is the next voxel selected? Is the next subvolume taken from the image along the direction of that vector? What do the authors mean when they state "the median location of all living agents is computed after each step"? Is the location of all living agents the majority-voted voxel along the direction of the previous vector? 2. Unless I misunderstood, the "stochastic tracking strategy" is just the centerline tracking algorithm of Wolterink 2019 initiated at different seeds in a sphere with radius _r_, with some stopping criterion based on the surviving agents. How is this different from just running the algorithm several times with different initialisations and computing $f(c_1, \\dots, c_n)$, where $f(\\cdot)$ is a function that returns the desired object? In my view, the proposal is too incremental and lacks novelty. <doc-sep>- The design of multiple stochastic agents reaching a consensus makes the method less sensitive to local errors made by the CNN orientation classifier, which is a major advantage over the previous non-stochastic approach. - The writing of the paper is very clear (the clinical problem, the challenges, the method, the experiments and results, etc.) and the structure is well organized, making reading the paper a pleasant experience. - The major weakness of the proposed method is that it contains many hyperparameters that may require careful tuning: for example, in the stochastic tracker, the number of agents, the radius around the seed point, the threshold values for the stopping criteria, and also the distance threshold used for computing the evaluation metrics. How did the authors tune these parameters? How sensitive are the tracking results to them? Have the authors done any analysis of these parameters? - The proposed method was evaluated on a rather small dataset. Considering the variability across subjects in this type of data, would the method still work with the same hyperparameter setting? If not, how easy would it be to tune those parameters? <doc-sep>The paper is well written, both in terms of structure and in terms of language. The method is analysed in an adequate and easy-to-understand manner. The method looks both interesting and powerful. There is an interesting and in-depth discussion of the limitations and abilities of the proposed method. The method that the authors propose is, by their own admission, inspired by Wolterink et al.
To what extent is this paper novel compared to the proposed solution of Wolterink et al.? There is no comparison against other methods that perform centreline extraction, which leaves the experimental section of the paper significantly lacking. <doc-sep>The paper is well-written and nicely illustrated, follows the standard structure, adequately summarizes relevant prior work and also discusses its own contributions well. I found the method section lacking, in particular the introduction of the deep neural centerline tracking which this paper is centrally based on, but that was probably for lack of space. Maybe it would help to explicitly point the reader to the prior work which describes this in much more detail; the sentence "inspired by a method …" does not sound like one would find all necessary algorithmic detail in the cited paper. The method is definitely well-suited for this task, and likely superior to previous works on similar images. I am not sure how (un)common 4D MRI is for the small intestines – the imaging itself may be quite a challenge here. In particular, the motivation for directly tracking the centerline instead of starting with a segmentation mask is sound. Fig. 4 was particularly helpful in understanding how the stochastic tracking works in practice. One weakness is that the dataset is relatively small; the authors have only 14 MRI datasets from healthy volunteers. Therefore, the evaluation also does not perform a proper training / validation / testing split, but a leave-one-out cross validation is used. Given this small dataset, this is certainly a good idea, although I wonder if the one split used for algorithmic development should have been excluded from the evaluation. (The authors state that they re-trained this fold with a different random seed.) I also think that the surface DSC is a strange choice, given the fact that the reference annotations do not even have radius information. Since the whole evaluation is based on (surface-based) precision and recall measures, a centerline-based variant would seem more adequate and even simpler. In the end, this is not a serious weakness, though – I believe the results would not change much.
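To help picture the loop that the questions above are about, here is a rough sketch of how I understand the multi-agent consensus tracking to work. The interface to the CNN orientation classifier, the agent-death rule, and the exact stopping threshold are my assumptions, not details confirmed by the paper.

```python
import numpy as np

def track_centerline(predict_direction, seed, n_agents=32, radius=2.0,
                     step_size=1.0, min_alive_frac=0.5, max_steps=500, rng=None):
    """Multi-agent consensus centerline tracking (sketch).

    `predict_direction(pos)` stands in for the CNN evaluated on a subvolume
    around `pos`; it should return a unit direction, or None when the agent
    should terminate ("die").
    """
    rng = rng or np.random.default_rng()
    # Scatter agents uniformly in a sphere of radius `radius` around the seed
    offsets = rng.normal(size=(n_agents, 3))
    offsets *= (radius * rng.uniform(size=(n_agents, 1)) ** (1 / 3)
                ) / np.linalg.norm(offsets, axis=1, keepdims=True)
    agents = np.asarray(seed, dtype=float) + offsets
    alive = np.ones(n_agents, dtype=bool)
    centerline = [np.asarray(seed, dtype=float)]
    for _ in range(max_steps):
        for i in np.flatnonzero(alive):
            d = predict_direction(agents[i])
            if d is None:                      # e.g. agent left the lumen / low confidence
                alive[i] = False
            else:
                agents[i] += step_size * np.asarray(d)
        if alive.mean() < min_alive_frac:      # stop when too few agents survive
            break
        centerline.append(np.median(agents[alive], axis=0))  # consensus point per step
    return np.array(centerline)
```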
The paper presents a method to extract intestine centerline in 3D cine-MRI. The methodological contribution lies in adding stochasticity to an existing centerline tracking method by establishing a consensus among the multiple stochastic agents. R1, R2, and R3 rated the methodological contribution as small or incremental. Although strongly building on an existing prior work, there is some novelty in the methodological aspects linked to the stochasticity, as recognized by R3 after the rebuttal, and in the methodology, as mentioned by R1. R2 and R1 raised some issues regarding the clarity of the methodology and requested further analysis of the results. Most of these questions were addressed satisfactorily in the rebuttal and revised version. R2 and R4 both raised some concerns regarding the hyperparameter tuning and setting, although the rebuttal claims to keep experimental optimization of the hyperparameters to a minimum relies instead on physiological prior knowledge, authors might want to look at a more in-depth analysis of their sensitivity and influence in the future. The same goes for considering other baseline methods, a larger database, and different evaluation metrics. After the rebuttal, there was however a consensus among the reviewers on the value of publishing this work in MIDL in its current state.
This paper introduces EgoTaskQA, a new benchmark with questions and answers for diagnostic analyses of spatial, temporal, and causal relationships between entities in goal-oriented task videos. The dataset extends the egocentric multi-agent LEMMA dataset with four different types of QA annotations: descriptive, predictive, explanatory, and counterfactual. The paper also provides experiments with state-of-the-art models on the benchmark, along with human performance, to diagnose reasoning on goal-oriented task videos. * Builds upon an existing dataset while considering what is missing in existing works. * The data annotation process is systematic and the authors provide the reasoning and procedures behind the annotation in the paper. * Provides extensive evaluation of state-of-the-art models along with naive baselines and human performance. * Provides detailed information including annotation procedures, statistics, model setup, and a datasheet in the supplementary materials. * Has a website providing data exploration along with code and model checkpoints. * The tables are hard to follow without bolding. For example, the highest numbers in Table 2 and the performance increase of HCRN in Table 3 would be easier to read if bolded. * Terms like "Most Likely" in Table 2 are vague and confusing. The authors should consider explaining the term further. * There are four columns but only two category columns in Table 3. The authors should consider adding an appropriate name for each column. <doc-sep>The authors augment the LEMMA dataset into two datasets: a normal and an indirect dataset. Both datasets contain four question types (descriptive, predictive, explanatory, and counterfactual) across different question scopes (world, intents and goals, and multi-agent) from an egocentric view. The main difference between the indirect dataset and the normal dataset is the use of words like "something" instead of object names, to limit textual exploitation. They compare results between these two datasets, and the indirect results are worse than the normal results, shedding light on a flaw of general SOTA video-text alignment models: they exploit textual relationships rather than using sufficiently capable spatio-temporal reasoning modules. Strengths: good contribution in augmenting the datasets; novel QA types for a video QA dataset (as described in Table 1); good results and discussion; the limitations described are relatively minor; good figures in general (aside from the grammatical issues described in the weaknesses); the normal/indirect dataset idea is a great one and shows good results. Weaknesses: poor grammar in the benchmark vocabulary/spatial relations; the grammar in certain figures is distracting (Figs. 1 and 2); certain figures can be very difficult to understand due to their complexity; the model architectures/reasoning modules are not explored at all; there is a lack of forward guidance for future work; and the explanation of the broader impact is weak. <doc-sep>This paper created the EgoTaskQA benchmark for a direct evaluation of question answering on real-world egocentric videos. The designed questions aim at video understanding of human tasks from different perspectives: action dependencies and effects, intents and goals, and agents' beliefs about others. This benchmark will help evaluate agents' capabilities in a comprehensive way with descriptive, predictive, explanatory, and counterfactual questions and thus help develop more intelligent agents.
- This paper introduces a new benchmark, EgoTaskQA, that contains 40K balanced question-answer pairs. These questions target understanding the video from multiple perspectives to evaluate the agents' intelligence. - The questions are very broad, ranging over descriptive, predictive, counterfactual, and explanatory types, to evaluate the agent's spatial, temporal, and causal understanding of the tasks. - The generated questions in the proposed benchmark take care of the diversity and balance of each kind. - The normal and indirect splits of the dataset will help determine whether the model is exploiting correlations between the questions in the training and evaluation sets, i.e., using language shortcuts in relations among questions, rather than understanding the task. - Evaluating model performance on question scopes, types, targeted semantics, and overall answer categories shows a model's overall capacity for understanding. - The ablations on object information and language-only input show that objects are very important visual cues in QA tasks. - The data scope is indoor goal-oriented tasks, so this might limit relevance for a broader community. <doc-sep>This dataset uses videos from the LEMMA dataset (egocentric videos of human-human / human-object interactions) for studying video QA. The authors extend LEMMA with annotations of objects, agents, and their relationships from Amazon Mechanical Turk. Then, the authors build causal dependency relationships between agents and objects in the videos. The questions used for QA are then automatically generated with 4 types of questions (descriptive, predictive, counterfactual, explanatory). The dataset is evaluated on a set of 6 existing video models, showing a gap between model and human performance. - Based on Table 1 and the authors' review of related work, there is a richer set of questions in this dataset compared to baselines (although I have some concerns about the questions themselves, see the first point in weaknesses). The proposed question types (descriptive, predictive, counterfactual, explanatory) and the generated causal dependency relationships are interesting for understanding the performance of Video QA models. - The authors benchmark a set of state-of-the-art video models on their dataset, with both a normal split and a split for studying indirect references. These benchmarking efforts help the community understand the strengths & weaknesses of existing models. - The full dataset is not available at this stage, even to the reviewers (please correct me if I'm wrong). Also, based on samples I'm seeing from the data, the automatically generated questions in this work seem a lot less clear than in prior work such as AGQA [A]. From the authors' website, some samples: "Q: What does the person want watermelon to be for doing the action cut something using something in the video?" "Q: What will the status of the last object that has status change change to if the actor pour from something into something in the future?". Could the authors clarify why the generated questions here are less clear? - Due to the above, I have concerns about the ability of this dataset to evaluate QA of the different question types proposed by the authors. Furthermore, comparing Table 2 here with Table 2 in AGQA, many of the question categories are fairly similar. Since this dataset (40K questions) is also a lot smaller than AGQA (3.6M), the significance of this dataset is not clear to me - would appreciate clarifications from the authors.
- There was just 1 trial run for all the benchmarking. While I understand that running video models is expensive, this does raise some concerns about the variability of results across runs and reproducibility. The ML reproducibility framework was not used here. [A] AGQA: A Benchmark for Compositional Spatio-Temporal Reasoning <doc-sep>This work introduces a new video question answering benchmark that consists of egocentric videos with fine-grained annotations of object states, object-object, human-object and multi-agent relations, and causal action relations. The dataset also contains four types of question-answer pairs, including counterfactual and explanatory questions, in addition to questions that aim to capture intents, goals, and object states and changes. Finally, the work compares 6 video question answering models on two data splits. - The work significantly expands an existing egocentric dataset with 40K questions covering 4 different types. This dataset will be very useful to the embodied AI and VQA/VQC research communities. - The paper comes with a comprehensive related work section, with a summary table that contrasts the contributions of this work w.r.t. the relevant literature. - Evaluation compares 6 VQA models. Several ablation studies are also present to demonstrate the usefulness of object information and language supervision (although masking with a common term vs. using the noun terms does not necessarily imply better action understanding, and further analysis is needed). - Unfortunately, reviewers are not able to access the data without exposing their identities. The website mentions filling in a form/license data agreement for downloading the data with a note: *During review process, we refer to the website for data examples and temporarily forbids full data download.* It is unclear how the dataset will be distributed afterward, i.e., i) on which platform (the datasheet in the supplementary material mentions *dataset could be accessed on our website*, but would that be restricted access?) and ii) whether all code will be released and scripts for ease of reproducibility of the reported experiments will become available. - It is unclear how the correctness of answers generated by functional programs is verified. Similarly, how questions are machine-generated remains fairly unexplained. To my understanding, neither is the code for the QA construction open-sourced, nor does the paper contain sufficient details on QA quality. Is the evaluation of the quality of the generated answers limited to the one described in lines 239-241 (randomly sampling 50 questions from each category)? It seems that some categories have very low accuracy. - The observed performance increase of text-only models on object state change questions is worthy of further analysis. Perhaps the associations captured are not action-aware or context-aware, but rather simplistic linguistic co-occurrence patterns, as the experiments in Section 4.3 suggest. However, the performance drop between the normal and indirect splits for text-only models is marginal. It would be nice to open-source the models, both for reproducibility and to allow future research to explore what exactly language-guided models are able to learn.
The reviewers are positive regarding the high level of contribution of this work to the NeurIPS 2022 Datasets and Benchmarks Track. The authors properly addressed all of the reviewers' comments and concerns during the rebuttal period.
This paper presents a model-based contact servoing framework to control the contact between a compliant tool and the robot's environment. Contact is parameterized as a binary contact flag, a line of contact, and an end-effector wrench. Dynamics are learned in a latent space with an encoder-decoder framework that is used to predict the contact parameters at every step using a point cloud observation and the input wrench measured at the robot's wrist. Strengths: - The data collection procedure is self-supervised, without needing human intervention or labelling. - In-depth analysis of different ablations of the model to study and evaluate the effectiveness of different components of the system. Weaknesses: - Generalization: Experiments are restricted to a single spatula. Furthermore, the surface in contact with the spatula also remains the same. It would be interesting to see generalization to new spatulas, or even just training/testing on more tools, as well as analyzing variations in other parameters such as the contact surface. <doc-sep>The paper presents a robot control framework for controlling contact forces at the tip of a pre-grasped tool. Remarkably, the proposed approach considers a situation where the compliance of the grasped tool can be used to change the geometry of the contact. The authors present a method to learn a dynamics model which allows predicting the effects of the robot's actions on the contact geometry and on the robot end-effector location. This dynamics model is used in an MPC framework to control contact forces in a variety of scenarios, including controlling the contact forces at the tip of a flexible spatula to scrape a target object in the presence of obstacles. The paper presents a novel solution to a novel problem. Controlling tool contact forces is a very interesting problem which (to my knowledge) has been addressed only in the rigid case. The submitted paper considers the non-rigid case, which is a significant novelty. The major limitation of the paper is the need for a complicated data-gathering phase which requires precision equipment (Photoneo) and human supervision. <doc-sep>The authors propose a learning approach for modeling tool-environment interaction which learns the dynamics from real-world data. The key of the proposed method is embedding the robot's sensor data into a latent space and decoding a contact feature representation which consists of a binary contact value, the line of contact, and an end-effector wrench. The authors verified the scraping of a clay-like object using a spatula attached to the tip of an actual robot as a task involving contact between the environment and a compliant tool. Experimental results show that the proposed method is superior in all evaluation items (Contact force error, Binary Accuracy, and Contact Line Error). Furthermore, by adding the data augmentation and wrench offset action proposed by the authors, the robot is able to scrape objects from the table while avoiding contact with obstacles. The overall writing of the paper is easy to understand, and the attached video also helped the reader understand. Most methods using supervised learning to learn robot policies from real-world data do not consider the adequacy of the generated trajectory or contact because they give the model predictions directly as motor commands. In contrast, the authors use MPPI to account for losses on trajectory and contact predictions, and they clearly describe their specific method.
Furthermore, the authors also demonstrate the effectiveness of the proposed method by conducting multiple quantitative evaluations. One concern is that there are few comparisons with other methods. Since the authors evaluated the accuracy of the task only based on differences in the structure of their proposed model (with or without vision, data augmentation, etc.), it would be better to include comparisons with other studies if the authors want to claim the effectiveness of the proposed method. Many previous studies of tasks involving contact exist. [1] showed that by using imitation learning to learn the contact force with the floor surface, the robot can properly mop the floor even if the grasping position and length of the mop are different. The reviewer has some major comments. The reviewer agrees with the issue raised on line 21. However, as the solution shown by the proposed method is limited, it would be better to show the consideration and results of the tool-environment interaction when the grasping position of the spatula is changed. Line 49 states that a variety of trajectories are realized, but basically the robot is just moving the spatula straight. Therefore, the diversity of trajectories is not shown. In order to demonstrate the potential of the proposed method, it would be helpful to have results for varying positions of objects and obstacles. The reviewer understood that by collecting training data with a random action policy, contact information between the tool and the environment could be collected. As shown in Fig. 2, the reviewer also understood that the robot predicts its behavior at every step based on the observed data. On the other hand, obstacles and objects are not included in the training data. How do they recognize the objects and scrape them with the spatula? It is unclear how the approach trajectory to the object is learned from the random trajectories. Explaining the data flow during learning and during execution would help the reader understand. Finally, how are the desired contact trajectories in Figure 4 calculated? Are they arbitrarily determined by a human? [1] Sakaino, Sho. "Bilateral control-based imitation learning for velocity-controlled robot." 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE). IEEE, 2021. <doc-sep>The paper proposes a method to learn the dynamics of these contact features from real-world data with unknown tool geometry, and proposes a controller that uses the learned dynamics model for visuo-tactile contact servoing, showing that it is effective at performing scraping tasks with a spatula. Strengths: 1. Proposes a contact configuration with a binary contact mode; 2. Presents a framework for modeling compliant tool-environment contact interactions by learning contact feature dynamics; 3. Proposes a learned model architecture to capture the dynamics of contact features, trained in a supervised fashion using real-world self-supervised data; 4. Designs and demonstrates a controller using the contact feature dynamics to realize diverse goal trajectories. Weaknesses: 1. The authors believe the line contact model can be straightforwardly extended to patch contacts by using a richer contact descriptor, but do not prove it. 2. The model is intended for compliant tool-environment contact interactions in general, but is only tested on a spatula.
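For readers unfamiliar with the control loop the reviews above discuss, the sketch below illustrates, in simplified form, how an MPPI-style controller can use a learned contact-feature dynamics model (binary contact, contact line, wrench) to select actions. The function and variable names (`dynamics`, `decode`, the cost weights) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mppi_contact_servo(latent, dynamics, decode, desired, horizon=10,
                       n_samples=64, action_dim=6, noise_std=0.05, temperature=1.0):
    """Score random action sequences by rolling out a learned latent dynamics
    model and comparing decoded contact features (flag, line, wrench) with a
    desired contact trajectory; return the MPPI-weighted first action."""
    actions = noise_std * np.random.randn(n_samples, horizon, action_dim)
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        z = latent
        for t in range(horizon):
            z = dynamics(z, actions[k, t])          # latent one-step prediction
            contact_prob, line, wrench = decode(z)  # decoded contact features
            costs[k] += (1.0 - contact_prob)                        # stay in contact
            costs[k] += np.linalg.norm(line - desired["line"][t])   # track contact line
            costs[k] += np.linalg.norm(wrench - desired["wrench"][t])  # track wrench
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()
    return np.tensordot(weights, actions[:, 0, :], axes=1)  # weighted first action
```

In a receding-horizon loop, only this first action would be executed before re-encoding the new observation and re-planning.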
This paper presents an architecture for modeling the interaction of a compliant tool and the environment. The contact dynamics are represented by a contact indicator, a contact line, and an end-effector wrench. The method is demonstrated on hardware using a compliant spatula to perform scraping in the presence of an obstacle. The authors successfully addressed many of the reviewers' concerns, most notably by adding more examples with multiple and unseen tools to demonstrate the generalization capability. The paper makes a valuable contribution to the conference by providing a novel solution to an interesting problem.
This paper tackles the trustworthiness of concept bottleneck models (CBMs) by improving the accuracy-interpretability tradeoff using a concept-based architecture (Concept Embedding Model, or CEM) which represents each concept as a supervised vector. Furthermore, the authors propose two metrics for evaluating concept representations. Strengths: 1) The paper tackles the important problem of trustworthiness and the accuracy-interpretability tradeoff. 2) The paper is well-organized and easy to follow. 3) The authors performed multiple experiments demonstrating that the proposed approach works reasonably well in different scenarios and improves the accuracy-interpretability tradeoff. 4) Scalable to real-world cases lacking complete concept supervision. Weaknesses: 1) Lack of necessary statistical analysis. Some improvements do not seem to be significant. It would be better to provide task and concept accuracy as well as statistical significance, e.g., error bars, in a table. 2) Does the RandInt regularizer increase the training time and cost? I think providing a comparison of the training cost (or model sizes) could also be helpful. 3) Have you tried applying the RandInt regularizer to Bool-CBM, Fuzzy-CBM, or other baselines? Based on the results provided in Figure 6, it seems that a lot of the improvement is due to adding the RandInt regularizer. For a fair comparison, I would like to see whether adding it to the existing methods can improve their performance as well. The authors discussed the limitations of the proposed method in Section 6. <doc-sep>The paper extends the concept bottleneck method by generating two embedding vectors for each concept: one representing the embedding when the concept is active, and the other representing it when the concept is inactive. These two embeddings are linearly combined through a scoring function (similar to a gating mechanism) that produces a probability score determining which embedding is to be used. The concept embedding vectors are then used in the downstream task for prediction. This extension increases the model's capacity to encode more information about concepts. Strengths: - Easy to read. - Simple extension. Weaknesses: - Constraining the architecture of all other baselines to be similar to the architecture of the proposed method seems unfair. - I still cannot see how this model can be used in scenarios where incomplete concepts exist. In particular, in Eq. 1, how can the model be trained without full supervision of concepts? - The information bottleneck metric in Section 4 is a bit unclear; a more detailed explanation would be preferred. No. <doc-sep>Concept bottleneck models implicitly learn to explain the downstream tasks in addition to learning how to perform them. However, these models forgo predictive performance on the downstream tasks. The authors propose Concept Embedding Models (CEMs), a novel family of concept bottleneck models that address this issue. Strengths: - The proposed techniques perform better than CBMs and their variants on downstream tasks. - The concept representations learned by CEMs are more aligned with the ground-truth concepts and successfully capture the semantics of the images. Weaknesses: - CEMs assume that the datasets contain concept annotations, which is often not the case in practice, and such annotations are often quite expensive to obtain. I would encourage the authors to list the limitations of the proposed approach. <doc-sep>This paper introduces a model named Concept Embedding Model (CEM) based on the concept bottleneck model (CBM) architecture.
Compared to CBM, CEM contains an embedding generator layer that considers two embedding representations (one for the active state and one for the inactive state) and then produces a single embedding representation for each concept. Results show that the model achieves high task accuracy and interpretability at the same time compared to CBM-family models. Strengths: This paper tries to solve a challenging research question: designing XAI models which are good at both task performance and interpretability. The authors conduct experiments on multiple datasets and evaluate different models using different metrics. Weaknesses: (1) The proposed model is not thoroughly studied. For instance, the relationship between c_hat+ and c_hat− is not analyzed, even though this is the novelty of the proposed model compared to CBM. For instance, for one concept, do these two embeddings represent opposite concepts? (2) There is some lack of clarity about the baseline models in the paper. For example, CEM uses m=16 to represent one concept in c_hat (the bottleneck). What is the dimension of c_hat for the CBMs? In Appendix A.5, it says “γ = k · (m − 1)” for Hybrid-CBM. Does it mean that the dimension of c_hat is k · (m − 1)? Why not k*m as in CEM? (3) There is a lack of justification for the proposed CAS. In Fig. 3, the baseline model “no concepts” has an even better score than Boolean-CBM and Fuzzy-CBM. (4) The qualitative results are not very convincing. Fig. 5c and Appendix Fig. 5 show the samples and their nearest neighbors for one concept. However, they do not reflect information about the concept, and a well-trained classifier should find visually similar samples based on Euclidean distance in the embedding space anyway. This paper does not emphasize the advantage of using two separate concept representations (c_hat+ and c_hat-), which is the novelty of CEM, and does not evaluate interpretability and user trust thoroughly.
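To make the embedding-generator description in these reviews concrete, here is a minimal PyTorch-style sketch of the mechanism they describe: two candidate embeddings per concept, mixed by a learned probability. Module names, dimensions, and activation choices are our own assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class ConceptEmbeddingGenerator(nn.Module):
    """Per-concept embedding generator: two candidate embeddings (active /
    inactive) are produced from a shared backbone feature h, a scoring
    function turns them into a probability p of the concept being active,
    and the final concept representation is p * c_plus + (1 - p) * c_minus."""

    def __init__(self, feat_dim: int, n_concepts: int, emb_dim: int = 16):
        super().__init__()
        self.plus = nn.ModuleList([nn.Linear(feat_dim, emb_dim) for _ in range(n_concepts)])
        self.minus = nn.ModuleList([nn.Linear(feat_dim, emb_dim) for _ in range(n_concepts)])
        self.score = nn.Linear(2 * emb_dim, 1)  # shared scoring ("gating") function

    def forward(self, h: torch.Tensor):
        embs, probs = [], []
        for plus, minus in zip(self.plus, self.minus):
            c_plus, c_minus = torch.relu(plus(h)), torch.relu(minus(h))
            p = torch.sigmoid(self.score(torch.cat([c_plus, c_minus], dim=-1)))
            embs.append(p * c_plus + (1 - p) * c_minus)  # mixed concept embedding
            probs.append(p)
        # concatenated bottleneck for the label predictor, plus concept probabilities
        return torch.cat(embs, dim=-1), torch.cat(probs, dim=-1)
```

The concatenated bottleneck would feed a downstream label predictor, while the per-concept probabilities are supervised against the concept annotations, which is what allows interventions at the concept level.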
This paper proposes Concept Embedding Models, which learn interpretable high-dimensional concept representations to improve the tradeoff between accuracy, interpretability, and interventions on concepts. The reviewers vote for accepting this paper. The authors are encouraged to further improve this work based on the reviewers' comments in the camera-ready version and to incorporate the new experiments and discussions from the author-reviewer discussion phase into the final revision, in particular the following: - Add statistical significance tests of experimental results - Compare training costs and model sizes - Better justify the proposed CAS mechanism - Investigate the robustness of learned concepts - Address the fairness concerns raised by reviewers regarding the comparison with baselines
This paper investigates how the SARAH stochastic recursive gradient algorithm can be applied to Trust Region Policy Optimization. The authors analyze the SARAH algorithm using its approximating ordinary and stochastic differential equations. The empirical performance of SARAPO is then compared with SVRPO and TRPO on several benchmark problems. Although the idea of applying SARAH to reduce the variance of gradient estimates in policy gradient algorithms is interesting and potentially quite significant (variance of gradient estimates is a major problem in policy gradient algorithms), I recommend rejecting this paper at the present time due to issues with clarity and quality, particularly of the experiments. Not enough of the possible values for experimental settings were tested to say anything conclusive about the performance of the algorithms being compared. For the values that were tested, no measures of the variability of performance or statistical significance of the results were given. This is important because the performance of the algorithms is similar on many of the environments, and it is important to know if the improved performance of SARAPO observed on some of the environments is statistically significant or simply due to the small sample size. The paper also needs improvements in clarity. Grammatical errors and sentence fragments make it challenging to understand at times. Section 2.3 seemed very brief, and did not include enough discussion of design decisions made in the algorithm. For example, the authors say ``"the Fisher Information Matrix can be approximated by Hessian matrix of the KL divergence when the current distribution exactly matches that of the base distribution" but then suggest using the Hessian of the KL of the old parameters and the new parameters which are not the same. What are the consequences of this approximation? Are there alternative approaches? The analysis in section 3 is interesting, but the technique has been applied to SGD before and the results only seem to confirm findings from the original SARAH paper. To improve the paper, I would suggest moving section 3 to an appendix and using the extra space to further explain details and conduct additional simpler experiments. Additional experiments on simpler environments and policy gradient algorithms (REINFORCE, REINFORCE with baseline) would allow the authors to try more possible values for experimental settings and do enough runs to obtain more conclusive results about performance. Then the authors can present their results applying SARAH to TRPO with some measure of statistical significance.<doc-sep>The paper extends Sarah to policy optimization with theoretical analysis and experimental study. 1) The theoretical analysis under certain assumption seems novel. But the significance is unknown compared to similar analysis. 2) The analysis demonstrates the advantage of Sarah over SVRG, as noted in Remark 1. It would be better to give explicit equations for SVRG in order for comparison. 3) Experimental results seem to show empirically that the SARAH is only comparable to SVRG. 4) Presentation needs to be improved. <doc-sep>This paper proposes a new policy gradient method for reinforcement learning. The method essentially combines SARAH and trust region method using Fisher information matrix. The effectiveness of the proposed method is verified in experiments. 
SARAH is a variance reduction method developed in stochastic optimization literature, which significantly accelerates convergence speed of stochastic gradient descent. Since the policy gradient often suffers from high variance during the training, a combination with variance reduction methods is quite reasonable. However, this work seems to be rather incremental compared to a previous method adopting another variance reduction method (SVRG) [Xu+2017, Papini+2018]. Moreover, the advantage of the proposed method over SVRPG (SVRG + policy gradient) is unclear both theoretically and experimentally. [Papini+2018] provided a convergence guarantee with its convergence rate, while this paper does not give such a result. It would be nice if the authors could clarify theoretical advantages over SVRPG. Minor comment: - The description of SVRG updates in page 2 is wrong. - The notation of H in Section 3.1 ("ODE analysis") is not defined at this time.
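For reference, since the reviews above repeatedly contrast SARAH with SVRG, these are the standard forms of the two gradient estimators, written for a generic finite-sum objective rather than the policy-gradient setting of the paper (notation ours):

```latex
\begin{align*}
  \text{SVRG:}  \quad & v_t = \nabla f_{i_t}(w_t) - \nabla f_{i_t}(\tilde{w}) + \nabla f(\tilde{w})
                        \quad \text{with snapshot } \tilde{w}, \\
  \text{SARAH:} \quad & v_0 = \nabla f(w_0), \qquad
                        v_t = \nabla f_{i_t}(w_t) - \nabla f_{i_t}(w_{t-1}) + v_{t-1}, \\
  \text{both:}  \quad & w_{t+1} = w_t - \eta\, v_t .
\end{align*}
```

The key difference is that SARAH updates its estimator recursively from the previous iterate instead of referring back to a fixed snapshot, which is the property the ODE/SDE analysis in the paper under review is exploiting.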
The use of SARAH for policy optimization in RL is novel, and some theoretical analysis is provided to demonstrate the convergence of this approach. However, concerns were raised about the clarity of the paper, the empirical results, and the placement of this theory relative to a previous variance reduction algorithm, SVRPG. The author response similarly did not explain what the theory offers beyond the convergence results already given by the SVRPG paper. By incorporating some of the reviewer comments, this paper could become a meaningful and useful contribution.
This paper proposes a meta-learning mechanism to address current generalization limitations in dynamics forecasting works, in which the majority of works learn deep learning models capable of capturing one dynamics at a time. To this end, they propose *DyAd*, a two-staged approach in which an encoder learns the time-invariant task-specific hidden features of a given observation sequence and influences a forecasting network that aims to learn the shared dynamics of the heterogeneous domain. A time-invariant loss function for the encoder is proposed to encourage time-task disentanglement between encoder and forecaster, leveraging weak-supervision from task-specific parameters. They leverage style-transfer adaptive instance normalization for forecaster adaptation and propose a novel encoder-influenced padding layer called *AdaPad* to address unknown boundary errors. Theoretical proofs are provided to quantify loss term trade-offs in the generalization error and show that there is a relationship between error and task relatedness for the source and target domains. Ablations are provided for each proposed model component and are evaluated on three physical dynamics tasks, including synthetic flows and real-world sea surface temperature and ocean current tasks. **Strengths:** - This work tackles a novel task for dynamic systems through the application of meta-learning to enable learning a set of dynamics functions with shared underlying mechanics under one model. It provides ample discussion on common failings of learning dynamical systems and proposes fixes to address them. It is placed well within relevant literature and indeed tackles an insofar uncommon setting. - The architecture choice of a task-feature encoder that adapts a forecasting network per task via inter-layer controlled padding is intuitive and the rigorous ablations to support the addition of each component strengthens the work. - Ample baselines are provided from classical video prediction models to applied meta-learning methods used as comparisons. - The provided code in the Supplementary Material is clean to run and has reproducible results regarding baselines and proposed model. - The writing and presentation is clear with little confusion on details concerning architecture or the training setting. The network components are explained well and effectively leverages visualizations in its presentation. **Minor Code Clean-up:** README.md: - A note regarding the base size of the dataset (>150GB) from 'data_generation.py' and how to generate smaller testing versions would help for more approachable evaluation of the provided code. - There are a variety of unused imports throughout all of the provided scripts which should be cleaned up to reduce unneeded dependencies. data_generation.py: - Has mkdir errors given no path checking for output folders on subsequent runs - Requires a local module import for phi.geom Sphere not already included - Variable "Resolution" is undefined and needs removing **Minor Writing Clean-up:** - Line 623: 'backpropogation' The authors properly address limitations in the claims of their proofs and what information should be gleaned from them. Additionally, limitations in the metrics used to support evaluation are discussed in the Appendix. Potential negative societal impact are not discussed in the work and is denoted as such in the Author's Checklist. <doc-sep>The paper proposes a method to predict dynamical systems when system parameters can differ between training and evaluation. 
First, a time-invariant network is trained to predict the dynamical "invariants", which could be the number of vortices in a flow or system parameters. Then a prediction network is trained to forecast the system dynamics, taking the dynamic invariants as an input. ### Strengths - Interesting and innovative idea. (I may not be aware of prior work using this approach.) - Good experimental results, although on a somewhat limited set of datasets. - Good and clear presentation. ### Weaknesses One potential weakness of the method is that one needs access to the invariants for training. <doc-sep>A physics-informed meta-learning architecture is introduced to model different kinds of dynamics. An encoder generates a time-invariant latent state $\\hat{z}$ representing physical properties of the observed dynamics. In combination with the dynamics field, a decoder takes this latent state to condition its prediction of the next dynamic steps. ### Originality Strengths: - The AdaPad layer looks intriguing, as it allows the boundary condition (BC) to depend on $x$. In contrast to traditional padding methods (zero/constant or mirror), AdaPad thus seems able to account for dynamic BCs, such as Neumann and Cauchy. However, it might help if it additionally receives $x$ as input. Have the authors experimented with this? - Fairly simple but highly efficient and well-designed architecture to model different kinds of dynamics with one and the same forecaster. ### Quality: Strengths: - Overall very detailed and well-motivated model architecture, data description, and evaluation. - Clear demonstration of successful encoding of physically relevant factors (Figure 6). - Exhaustive ablations showing the relevance of the different introduced network components. Weaknesses: - Apart from the ablations in Table 2 and in the appendix, it would be informative to see the effect of different choices of $m$. ### Clarity and significance: Strengths: - Superiority of the model shown on both synthetic and real-world datasets. Weaknesses: - I'm not fully clear about the conclusion of the proof and would appreciate an intuition about the result and what it actually means. - The AdaPad operator is not explicitly evaluated on changing boundary conditions. Limitations are not addressed by the authors. I'd be curious to learn about situations where this model might have difficulties.
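To illustrate the encoder-conditioned forecaster discussed in these reviews, here is a minimal sketch of AdaIN-style conditioning, where a time-invariant code z predicted by the encoder rescales the forecaster's intermediate features. Layer names and shapes are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive instance normalization: normalize the forecaster's feature map,
    then re-scale and re-shift it with statistics predicted from the task code z."""

    def __init__(self, z_dim: int, n_channels: int):
        super().__init__()
        self.to_scale = nn.Linear(z_dim, n_channels)
        self.to_shift = nn.Linear(z_dim, n_channels)

    def forward(self, feat: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # feat: (batch, channels, H, W); z: (batch, z_dim)
        mean = feat.mean(dim=(2, 3), keepdim=True)
        std = feat.std(dim=(2, 3), keepdim=True) + 1e-5
        normalized = (feat - mean) / std
        scale = self.to_scale(z).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(z).unsqueeze(-1).unsqueeze(-1)
        return scale * normalized + shift
```

In a DyAd-like setup, a modulation of this kind (together with the encoder-driven AdaPad boundary layer) would presumably be applied after each forecaster block, with z inferred once per observed input sequence.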
This work proposes a model-based meta-learning method to forecast physical dynamics. The proposed approach is able to generalize across heterogeneous domains as demonstrated in convincing sets of experiments. The reviewers found the work to be well motivated, clear and self-contained. Authors justified the proposed model architecture and the ablation studies conducted showed the importance of the network components. The authors also provided an adequate description of the data and the evaluation strategy, as well as theoretical guarantees on the generalization error in several settings.
This paper proposes a new dataset for the chemistry domain as a real-world QA task. The authors collected chemistry questions from a web page and used crowdsourcing to annotate the questions with labels and conditions required for solving them. As baseline models, the authors propose an end-to-end neural network model and a symbolic solver that uses pre-defined rules. They demonstrate that the neural model struggles to solve the task but the symbolic solver outperforms it with the pre-defined rules that cover only a part of predicates in the dataset, arguing that their dataset is challenging and can foster the research of real-world problems in the chemical domain. Pros: - The paper proposes a new dataset, ChemistryQA, that consists of 4,500 questions with labels of target variables and conditions provided in the questions. - The paper proposes a graph-search based solver with an extraction mechanism which is carefully implemented for this task. - Appendix has thorough lists of the predicates and units in the dataset and functions used in the baseline solver. Cons: - Overall, I think that the writing of the paper can be improved more. There are some typos and formatting issues, which reduce the paper strength. Besides in Sections 2.1, 2.2, and 3.2.1, the paragraphs refer to figures and tables in Appendix. This seems to violate the submission instructions. - My primary concern is on the quality of collected questions. In Section 2.2, the authors say that they performed some check-and-verify mechanism in the first batch, which should be described in detail. Some related questions: + Did the author compute the inter-annotator agreement on sampled questions? + What kind of rules did you define for the verification? + How many workers did work on the annotation in total? + Did the same pool of workers work on the annotation for the first batch and the subsequent batches? + Is there actual human performance on the collected questions? Seemingly, it is not guaranteed that posted questions on the web page are reasonably answerable. - The purpose of employing two baseline models is not explained well. In the introduction, the authors say that "to verify the dataset is consistent with the purpose of evaluating AI' comprehensive capability". However, their hypotheses for the experiments are not clearly stated. - It seems that the authors stopped implementing the predefined functions for the symbolic solver at the point where the solver outperforms the neural network model. The authors could have implemented more, but it is not clearly explained why they only implemented the predefined functions for the 35.5% predicate coverage. What would happen if there are a larger number of functions implemented? - Comparison with existing datasets can be elaborated more. For example, because the Option answer type should include different types of entities and the option is just a form, directly comparing it with value, formula, and equation does not make sense to me. - In the error analysis, 18% of cases are classified into Other Reason, which does not look like a small number to me. Can the authors break this down into more detailed categories? Typos: - Section 2.1, It this website -> In this website - Cite Devlin et al. (2019) for using BERT. - There is an ill-formatted cell in Table 4. - There is inconsistent use of decimal commas (4,500 vs 4418) - Add whitespace before citations (e.g., in Section 2.4). <doc-sep>Strengths: A new QA dataset for chemistry QA consisting of 4500 questions and covering 200 topics. 
Crowdsourced annotation of variables and conditions. Weaknesses: - Strong baseline results are missing. Intermediate steps are missing from the annotations; these would be really helpful for training an end-to-end model. The topic distribution is missing from the paper. The experimental results and analysis are not sufficient. Overall: The idea of curating and annotating a new dataset for chemistry QA is good. I feel a stronger baseline would have helped much in understanding and analysing the quality of the dataset and annotations. Also, the question complexity analysis/topic distribution is missing. Overall, the paper's writing could be improved a lot; in the current version it is difficult to follow. Questions: If the whole problem can be converted into a set of triples and conditions, then why not use graph-based QA techniques? It would be interesting to see how neural symbolic machines / neural module networks perform on this dataset. The topic distribution, question type distribution, etc. are missing. Any specific reasons for using 12 layers of transformer encoders and 6 layers of transformer decoders in the Extractor plus Graph Search Based Solver? Which graph search algorithm is used in Section 3.2.2 and Table 5? Typos: It this website, -> On this website <doc-sep>*Summary*: This paper proposes a new dataset based on textbook / classroom chemistry questions for complex knowledge retrieval and aggregation. The authors scrape several thousand questions from online repositories and add additional natural language annotations signifying the quantities to be solved for in each question, as well as the declarative knowledge. Two baselines, one end-to-end neural and another symbolic, both fail at this dataset. *Strengths*: The dataset targets the important question of how to build models that can retrieve knowledge while performing complex reasoning. *Weaknesses*: As-is, the dataset fails to target the knowledge retrieval component---models are either expected to magically know how to calculate the answer, or use hard-coded functions that complete a graph of values. The neural baseline also seems a bit non-standard, raising questions of how well modern systems can actually do on the task; furthermore, the end-to-end neural system is disadvantaged in that it likely has not seen much chemistry-related content during fine-tuning, whereas the symbolic baseline has access to a host of human-defined functions. Furthermore, dataset quality is a bit difficult to assess without more samples. *Recommendation*: 3. This benchmark is motivated by the lofty goals of encouraging the development of models that can combine knowledge retrieval, complex reasoning, and language understanding. However, it's unclear to this reviewer whether it will prove useful in making progress towards such goals---they're too conflated to be meaningfully evaluated within this context. To improve the benchmark and make it more amenable toward advancing those research goals (versus just being a difficult dataset that current models cannot handle), I'd recommend explicitly targeting and evaluating this knowledge retrieval component as well. For instance, given a specific knowledge base that's guaranteed to span the facts necessary to answer the questions, how well can a model (1) retrieve relevant information and (2) use such relevant information to answer questions?
Questions: “Chemical Calculation Problems cannot be solved by end-to-end neural networks since complex symbolic calculations are required”: this is a hypothesis---there are many tasks where “complex symbolic calculations are required”, but end-to-end networks excel. What extent of knowledge is required to solve this task? For instance, many old semantic parsing datasets came with databases, and it was guaranteed that within the database, an answer would occur. What would a corresponding knowledge graph for this case look like, and how complex would it be? “Unlike similar tasks’ annotation, we cannot collect all the atomic operations needed before starting annotation, since the set of chemical operators is not closed.” The set of mathematical operators is also not closed (e.g., in math word problems). Why is this approach better than collecting all the operations represented in the dataset (even if it doesn’t cover all of the operations that one could conceivably see)? The annotation interface / process looks quite regular---you aren’t expecting too much variation from the templates given. Given that you can help crowdworkers with these templates, why not just use these templates as the baseline for a formal meaning representation that would encompass the knowledge needed for the task? Can you give more details about the annotation process, beyond the short paragraph near the end of section 2.2? (“We employed crowdsourcing for this annotation work...around 336 hours)”. I’d be surprised if any crowdworker could label this sort of data well. What quality control filters did you put in place? Can we see more (random) samples of the dataset, so we can better assess its quality? End-To-End Solver: where did you get this model architecture from, such that “This method represents a class of powerful neural networks, which achieved state-of-the-art performance on many QA tasks.”? I’ve never seen BERT used in a seq2seq setting like this (instead, people tend to use models trained on input/output pairs, like BART or T5). I’d like to see how this compares to using BART or T5, since it’s not clear that the BERT initialization would be good for generation. Graph-Search based Solver: the need to implement specific functions (78, in this case) is significant, and undermines the point of this dataset, in my opinion. There’s no inherent value in learning to solve chemical equations well---the hope is that, in the process of doing so, we’ll get modeling insights into what methods work well and can be generally applied to other knowledge-intensive tasks. This graph-search based solver seems narrowly scoped to ChemistryQA and difficult to adapt to other tasks, and it’s not entirely clear why we should value its results. Token-level accuracy: Is it guaranteed that the output of the graph-search based solver will be the same length as the gold output? How? Else, how is token-level accuracy computed?<doc-sep>Paper Summary: * This paper presents a question answering dataset called ChemistryQA. It is different from existing datasets in that ChemistryQA requires open knowledge and complex solving processes. It provides triplet like extraction annotation which isolates language understanding and domain knowledge. Experimental results show that a neural encoder-decoder model and an extractor-plus-solver do not work well. Strengths: * The dataset contains real-world QA that requires the ability to perform complex chemistry calculation and reasoning. 
It is difficult for crowdsourcing workers to generate such complex questions. * The authors propose a novel annotation method in which target variables and conditions are labeled in a triple-like format. Weaknesses: * The dataset seems too small for models to acquire the ability to perform complex calculation and reasoning. The training, validation, and testing datasets consist of 3,433, 485, and 500 questions, respectively. * The paper does not show statistics of the dataset such as the average length of questions and answers and the number of unique answers. * The paper does not show performance broken down by question type. Although the end-to-end solver achieves an answer accuracy of 0.164, I think it is important to show more detail on what it can and cannot do. * The authors use a pre-trained BERT as the encoder of the end-to-end solver and train the decoder from scratch. I think pre-trained encoder-decoder models such as T5 and BART are better baselines for the end-to-end solver than the model used in this paper. Review Summary: * The paper is well motivated. ChemistryQA can be a useful dataset to evaluate the ability to perform chemistry calculation and reasoning, although the dataset seems too small for acquiring that ability. I think it could benefit a lot from a more comprehensive analysis of the baselines' evaluation results.
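As a rough illustration of the "graph of values" solver these reviews refer to, a forward-chaining search over a library of hand-coded chemistry functions might look like the sketch below. The predicate names and functions are hypothetical, not the paper's actual implementation.

```python
def graph_search_solve(known, target, functions):
    """Repeatedly apply any function whose input predicates are already known,
    adding its output value to the graph, until the target predicate is derived
    or no new values can be produced.

    known:     dict mapping predicate name -> value extracted from the question
    target:    predicate name asked for by the question
    functions: list of (input_predicates, output_predicate, callable)
    """
    known = dict(known)
    progress = True
    while target not in known and progress:
        progress = False
        for inputs, output, fn in functions:
            if output not in known and all(p in known for p in inputs):
                known[output] = fn(*[known[p] for p in inputs])
                progress = True
    return known.get(target)


# Hypothetical usage: derive moles from mass and molar mass, then concentration.
functions = [
    (("mass", "molar_mass"), "moles", lambda m, mm: m / mm),
    (("moles", "volume"), "molarity", lambda n, v: n / v),
]
print(graph_search_solve({"mass": 58.44, "molar_mass": 58.44, "volume": 0.5},
                         "molarity", functions))  # -> 2.0
```

This also makes the reviews' concern concrete: the solver's coverage is bounded by how many such functions are hand-implemented, which is why the reported 35.5% predicate coverage matters.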
The authors propose a new dataset, ChemistryQA, which has complex questions requiring scientific and mathematical reasoning. They show that existing SOTA models do not perform well on this dataset, thereby establishing its complexity. The reviewers raised several concerns, as summarised below: 1) the writing is not very clear; 2) the quality of the dataset is hard to judge, as some crucial information about the dataset creation process is missing; 3) the size of the dataset is small; and 4) some stronger QA baselines need to be included. Unfortunately, the authors did not provide a rebuttal. Hence, in its current form this paper cannot be accepted.
The paper proposes a method to learn neural radiance fields that represent the underlying scene free of reflective components, i.e., explicitly representing the transmitted regions of the scene. Prior work on representing transmitted radiance fields relies on reflection removal from the input image sequence; however, this is a challenging problem and typically results in photometric inconsistencies. The proposed method uses a novel formulation leveraging the observation that reflective components in the radiance field are sparser than the transmitted components. A patch-based rendering scheme is used to handle the local characteristics of reflective/transmissive components. Strengths: The paper is well written and the exposition is clear. The paper provides a thorough introduction and a motivation for the solution before properly explaining the proposed solution. As such, I find the paper to be a useful contribution to the community and beneficial for the reader. The use of the transmission encoder with pyramid-scale features is interesting, and the choice of Wg and Wl is properly motivated. The recurring edge constraints are the core strength of the paper, and the description provided in Section 4.2 is succinct. The qualitative and quantitative results in the paper and supplemental material clearly demonstrate that the transmitted radiance field is captured free from noise due to reflection. Weaknesses: The authors rightly point out that the weighting coefficients are dependent on several factors. The viewing direction (w.r.t. lights in the scene) and the camera position are correlated, and more discussion is warranted on whether an MLP that encodes the weighting coefficients is sufficient in general. Yes, the authors discuss the limitations of the work. <doc-sep>This paper aims to solve the novel-view synthesis problem with reflection removal, that is, novel-view synthesis of a transmitted object from images corrupted by reflections. A naive baseline that applies reflection removal techniques to each input image before NeRF does not solve the problem, as the resultant images would not be multi-view consistent; this is because most reflection-removal techniques cannot take advantage of multiple viewpoints. This paper solves this problem by introducing 1) transmission feature integration and 2) recurring edge constraints. First, transmission feature integration is based on the idea from pixel-NeRF that features from other viewpoints can assist the training, and the paper uses a "transmission feature" instead of the vanilla pixel feature in pixel-NeRF. Second, recurring edge constraints are based on the assumption that a reflected component is sparse in its presence in the aligned images. The paper also collects a new dataset of real multi-view images corrupted by reflections, and the proposed method shows promising results. ### Strengths - Promising results. The proposed method shows promising results on real multi-view images corrupted by reflections. The comparison with other methods such as NeRF, NeRF-W, and RR + NeRF also shows that the proposed method performs better both qualitatively and quantitatively, especially when the number of input images is limited. - New dataset of multi-view images with and without reflections. The paper shows the newly collected multi-view images, which can facilitate further research on multi-view reconstruction and reflection removal.
### Weaknesses While the paper proposes an interesting method with promising results, there are some weaknesses that can be improved: - The presentation of the manuscript can be improved. There are some ambiguous definitions or explanations: - [Line 123] What is transmission and reflection entanglement? If it means transmission and reflection have an inherent ambiguity, then the proposed method cannot disambiguate either. “Due to the absorption, reflection, and refractive effect ~” should be further clarified. - [Motion inconsistency] The terminology “motion inconsistency” (used frequently all around the paper including the abstract) used for recurring edge constraints is somewhat misleading. The key idea used for recurring edge constraints is that the reflected component may not exist in some viewpoints and thereby have a sparse presence. The reason for this phenomenon is the size of the reflector is limited, which causes the reflected object to be outside the reflector and disappear in some viewpoints. It has nothing to do with motion and thus the term “motion inconsistency” is not the appropriate term to describe the method. Maybe the reflected object is at a different depth from the transmitted object and moves differently in the image (e.g., larger disparity when it is further), but it is not the information that the proposed method directly uses. The description in the main paragraph (line 187-) is already clear, so just choosing a better terminology would improve the clarity of the proposed method. - [Line 210] What is \\Psi? The notation seems to be not defined. - Some important details about the transmission feature are missing. What network is used for feature W? From Line 155, I assume the network is based on ERRNet but it is difficult to see which part of the ERRNet is used as there are many components in the ERRNet. Line 162 is not enough for understanding the exact structure. Also, Line 232 explains the pretraining of the transmission encoder briefly and it is somewhat confusing if the method is different from the original ERRNet. The network structure and the training detail needs to be added to the supplemental material. - Missing baseline. A baseline (that might be interesting) is missing, that is RR + pixel-NeRF (without transmission feature). One of the main contributions of this paper is using the transmission feature, which is the combination of 1) reflection removal and 2) pixel-NeRF (assist the training of NeRF). If these two parts are divided into the reflection removal part and the pixel-NeRF part, it can be another baseline of RR + pixel-NeRF, which will be a more fair and interesting baseline. - REC has a limited performance (at least quantitatively). The second main contribution of this paper is using recurring edge constraints (REC), but the effect of REC seems to be marginal quantitatively as shown in the ablation study (Table 2). The PSNR without REC is 22.48, which is almost the same as that of the complete model (22.75). It would be interesting to see how REC works in more challenging data. The questions in the above section (Questions) include some limitations that are not handled in the paper: non-planar reflector and large reflector. The proposed method may not work for those cases of reflectors. <doc-sep>This paper proposes a novel neural radiance field rendering method that is dealing with specular reflection on the object’s surface. The proposed method aims at recovering only the transmission radiance behind the reflection. 
To that end, this paper proposes to prepare two dedicated networks, i.e. T-MLP and R-MLP, to learn the transmission features and reflection features. This is achieved by applying a single image reflection removal method to the training data to separate the background and the reflection. The learned transmission and reflection color radiance are then combined in a convex combination. In addition, in order to guide the learning of background high-frequency details, this method also applies recurring edge constraints which utilize the observation that background edges appear consistently in multiple different views. Strengths 1. This paper is generally well-written with clear motivation in the introduction section. It clearly defines the current problem and challenge left by existing NeRF-based methods, which is the reconstruction of scenes behind the transparent surfaces with specular reflection. 2. The comprehensive experiments show that the proposed method consistently outperforms the state-of-the-art methods by a considerable margin, in both qualitative and quantitative evaluations. 3. This paper proposes a new NeRF purpose dataset, which is particularly focusing on the scenes behind the specular reflection. The proposed dataset may impose a strong impact on future research in this area. Weakness 1. I found the performance comparison with respect to baseline method MVSNeRF is a bit unfair because the selected baseline methods are not designed to deal with reflection, and hence it tends to predict the reflected scene as is. Therefore, the quantitative PSNR results are much worse than the proposed method as expected. Especially, in Figure 3, MVSNeRF almost reconstructs the exact appearance of the target view. 2. For a NeRF method, it is also important to know the performance of the proposed method applied to normal (non-reflective) scenes. Otherwise, the usage of the proposed method is just limited to reflective scenes. In the submitted paper and supplementary material, all the examples and benchmark data are performed on the scenes with reflection. The authors are suggested to provide more comparison (quantitative) and real normal scene examples in the rebuttal period. 3. What is the processing speed and the network complexity of the proposed method compared to baseline methods? In order to prove the effectiveness of the proposed method, it is crucial to verify that the performance gain is not coming from the extra number of parameters in the network as well as the pre-processed edge map and reflection purged features. 4. From the ablation study, the recurring edge constraints (REC) only bring in very little improvement, but it is considered as one of the two contributions in the method section. It seems that the proposed method is not very effective. 5. It is true that the proposed method outperforms other baselines on the reflective NeRF dataset by a large number. However, the method itself is quite straightforward with limited novelty. It is critical to understand the effectiveness of the proposed method by providing the performance comparison on normal datasets, and hence prove the validity of the proposed method. The limitation of the proposed method is to apply it to any normal scenes or NeRF datasets. If it cannot perform well on non-reflective scenes, the generalizability of the method will be the biggest limitation. <doc-sep>This paper proposes a novel view synthesis network specially designed for see-through scenarios. 
This paper introduces a transmission encoder, which separately estimates the transmission amount against the specular highlight's reflection. In addition, this paper introduces a recurring edge constraint to account for the frequency of edges. [Strengths] + The application and approach of the transmissive scenario sound interesting to me. The specular reflection on glass in the see-through scenario has been rarely discussed in the neural rendering field yet. I found that this new research problem is interesting. Existing solutions such as vanilla NeRF seem to fail when there is a specular reflection in input images, while the proposed method works properly. [Weaknesses] - Even though the motivation of the proposed method sounds interesting, I'm not fully sure if this paper is completely developed and evaluated to solve the technical challenges. Specular reflection works very differently from transmission. For instance, when the camera motion occurs, the specular reflection and transmitted image move in opposite directions about the depth position of glass surfaces. The proposed model doesn't seem to account for the physical phenomenon. Instead, it just tries to separate the transmission and reflection along the given view vector, which is not physically plausible. This observation should be valid from a specific view angle. If the method accumulates multiple observations in a voxel grid, the accurate separation cannot be achievable by increasing the number of observations. I would like to hear more in the rebuttal. - The evaluation of this paper is one of the weakest points. Except for the main results shown in the teaser, most results do not include strong specular reflection. According to the proposed formulation of the recurring edge constraint, the proposed method may work properly when there are strong contrast edges in the transmitted image. The main result of the picture frame is the case. In other cases, the results do not include any strong specular reflection. I think the results look very cherry-picking with a very small number of examples. I would like to see more results to validate the performance of the proposed method. Limitations are clearly mentioned in the main paper.
This paper proposes a novel neural radiance field rendering method that deals with specular reflection on the object’s surface. The authors present a novel method to address the limitation of existing NeRF-based methods for scenes behind transparent surfaces with specular reflection. The review results are two A(7) and two BA(5). After carefully checking the rebuttals and discussions, I recommend that the paper be accepted to NeurIPS.
The paper empirically studies the reason for the phenomenon that deep neural networks can memorize the data labels, even when the labels are randomly generated. New geometric measures based on replica mean-field theory are applied in the analysis. The findings of the paper are interesting. It shows heterogeneity across layers and training stages of the neural net: i) Memorization occurs in deeper layers; rewinding the final layer to the early weights mitigates memorization. ii) When memorization happens, the early layers still learn representations that can generalize. iii) During training, early layers' activations stabilize first, while deeper layers' weights stabilize first. iv) Near initialization, the gradient is dominated by unpermuted examples. I have the following questions/comments: - It is better to further explain the intuition of the Manifold Geometry Metrics. The current Figure 1(B) is not very clear. - In Manifold Capacity, what do P and N exactly mean? Is this P the number of classes as used elsewhere? - The paper explains that by training on permuted examples, the network can learn generalizable representations at the initial training stage because the gradient ignores permuted examples. But why do the early layers and later layers show different generalization properties in the later training stage? In general, this paper presents well-organized experiments. One shortcoming is that the paper does not provide a methodology to solve the generalization problem or further theoretical analysis of the observations. But the empirical discoveries are novel and can be beneficial to the deep learning community. ########### Updates: Thanks for the authors' response. The modified version improves clarity. I think this paper provides nice observations and initial analysis to the community and can be beneficial to future work, so I recommend this paper to be accepted.<doc-sep>The authors apply MFTMA to DNNs trained on CIFAR with label noise to analyze their behavior with respect to generalization and memorization. Based on experimental results, they claim that it is not the lower layers but the higher layers that are involved in memorization. This claim is convincing. Another claim that this is not caused by a vanishing gradient effect is plausible, too. I'm sure these results give some insights into understanding generalization and memorization by DNNs. Questions. Why do the authors consider only convolutional layers, not fully-connected layers, for the analyses? In the experiment of rewinding individual layers, the three FC layers are left untouched. Why? Is MFTMA the only method that can examine/verify the above finding? Comments. At the first reading, I didn't understand what "restored examples" means, and it took me a while to understand it. The caption for Fig. A.7 has an error; CIFAR100 should be Tiny ImageNet. <doc-sep>### Summary: This paper investigates memorization in deep neural networks (DNNs). The authors leverage the mean-field theoretic geometric analysis method (MFTMA) to analyze when and where memorization occurs in a DNN. Through empirical analysis, they show that i) generalizing features are learned initially and memorization happens later in training, mostly in the top layers; ii) we can mitigate memorization by rewinding the top layers' parameters to earlier values. They also show that their MFTMA metrics can highlight the phenomenon of double descent. Finally, they demonstrate that gradient descent initially ignores noisy examples and focuses on correctly labeled examples. ### Reasons for score: I lean toward acceptance.
This paper makes interesting observations regarding memorization in deep networks, and it performs a good empirical study which provides enough evidence for the different claims, although MFTMA could be better explained in the main paper. ### Pros: - As stated above, the paper makes interesting observations regarding memorization in deep networks. - It performs a thorough empirical study. ### Cons: - I found it hard to understand MFTMA without referring to Appendix A. It would be nice to expand the explanation of MFTMA in the main paper. In addition, it would be good to further explain Fig. 1B, which contains a lot of information. - Do the observations scale to larger datasets such as ImageNet? - Experiments are run for only one seed. <doc-sep>This paper analyses memorization in DNNs, from the lens of memorization = fitting random labels, and finds that it seems to happen in later layers. These results are obtained using the MFTMA framework, a manifold analysis tool, testing geometric properties of individual layers. The analysis also attempts to explain why such a phenomenon exists, and makes a few interesting observations. This paper does not propose any new algorithm, but instead settles some important questions by infirming or affirming past speculation on layer behaviour found in the literature. I find three particularly interesting results in this paper: - later layers seem to be responsible for memorization, while early layers seem to converge last but consistently learn "generalizing" features (although this may not be true for other architectures) - increasing the dimensionality of the network to induce double descent _decreases_ the manifold dimensionality of the last layer. This is consistent with overparameterization making everything smoother/flatter and more easily disentangleable in the last layer. - for examples with the wrong class, gradients initially vanish (due to destructive interference), which seems to be a driving force for the initial good generalization performance. Downsides of the paper: - The setting explored here is somewhat artificial, (1) the requirement on a high enough epsilon (random label proportion) may not represent real use of DNNs (I write this having seen Fig A.8; this is also a common criticism of double-descent results) (2) the models trained here don't seem to exceed 40% testing accuracy, again not necessarily representing real use of DNNs (this is a bit surprising considering even models from back in 2013 had above 60% accuracy on CIFAR100). - Although the results of the paper do not hinge entirely on it, the reliance on MFTMA limits the interpretation somewhat: while an interesting tool, it's not clear to me that it allows us to make strong statements about the geometry of neural networks. In particular for the early layers, MFTMA may not be able to capture the geometry of features which might still be somewhat entangled yet possess a lot of richness. - I have some issues with the presentation of the paper - This paper does not really introduce a novel lens on generalization or significantly new ideas (although I'd argue it formalizes existing ideas and properly tests them empirically). On the value of the contribution: - I think having empirical evidence of the studied phenomena is valuable, more so than previous speculation on them.
- The empirical results presented here do open the door for new questions to be answered and may help focus the ongoing investigation of memorization and generalization in DNNs Additional comments: - Something seems wrong with Figure 2B-middle two columns. Aren't permuted and restored examples the same inputs X but with the corresponding Y changed? If this is the case, then their UMAP should be the same, the only difference between the second column and the third column should be the coloring of the points. I presume that the figure shows a different minibatch of Xs for these two columns; I would highly recommend not doing so and using the exact same inputs. It would be consistent with the text, and the presentation, e.g. Fig 1A. - All Figures: the label fonts should be bigger. From the ICLR formatting guidelines: "use 10 point type [for text]", and "all artwork must be neat, clean, and legible." Having to zoom in and out to be able to read figures properly hurts accessibility and legibility, which detracts from the quality of the paper. Packing text, results, and figures in an 8-page document can be hard, but synthesizing information, including visual information contained in figures, is an essential skill in conveying knowledge. -- Here are a few suggestions for this particular paper: Figure 1A seems unnecessary, the text conveys these 3 concepts clearly; Figure 1B is important and should take the entire width of the page, with legible fonts; Figure 2A's subplots all share the same X and Y axis, making their naming redundant and taking up space; Figure 2B's column labels are also repeated needlessly, taking up vertical space; Figure 3's X axis doesn't need individual layer name labels, and could be replaced with a single "Layer depth" label -- 3A and 3B also share this axis, leading to wasted vertical space (space that could be used to make fonts larger); idem for Figure 4A, individual layers do not need to be named, but rather the concept of layer depth can be conveyed with a properly labelled colorbar gradient -- 4CDE could be less wide and leave more horizontal space to make fonts larger. - In Figure 5A, it's not immediately clear that the X axis are individual layers, the log(nabla) label should be on the colorbar rather than on top of the figure. I'd also suggest flipping the X and Y axis, as the X axis is typically used for time; this would allow there to have the three subplots side by side with a shared labelled colorbar on the right (matplotlib seems to be used here, see matplotlib.pyplot.subplots's sharex/sharey arguments for examples).
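The layer-rewinding experiments discussed in these reviews can be summarized by a small sketch: replace one layer's parameters with their values from an early-epoch checkpoint and re-evaluate, leaving every other layer at its final trained values. The helper below is hypothetical (PyTorch-style state dicts and parameter-name prefixes are assumed) and only illustrates the protocol, not the authors' code.

```python
import copy

def rewind_layer(model, layer_prefix, early_state_dict):
    """Overwrite parameters whose names start with `layer_prefix` with their
    values from an earlier checkpoint; all other layers keep final weights."""
    state = copy.deepcopy(model.state_dict())
    for name, value in early_state_dict.items():
        if name.startswith(layer_prefix):
            state[name] = value
    model.load_state_dict(state)
    return model
```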
The paper offers novel insights about memorization, the process by which deep neural networks are able to learn examples with incorrect labels. The core insight is that late layers are responsible for memorization. The paper presents a thorough examination of this claim from different angles. The experiments involving rewinding late layers are especially innovative. The reviewers found the insights valuable and voted unanimously for accepting the paper. The sentiment is well summarized by R2: "The findings of the paper are interesting. It shows the heterogeneity in layers and training stage of the neural net". I would like to bring to your attention the Coherent Gradients paper (see also R1 comment). This and other related papers already discuss the effect of label permutation on the gradient norm. Please make sure you discuss this related work. As a minor comment, please improve the resolution of all figures in the paper. In summary, it is my pleasure to recommend the acceptance of the paper. Thank you for submitting your work to ICLR, and please make sure you address all remarks of the reviewers in the camera-ready version.
This paper addresses the problem of learning generalizable context in RL. In particular, it suggests learning disentangled context representation of each confounding in the environment using the proposed model, DOMINO, which optimizes decomposed MI objectives. It adopts the contrastive learning method when learning the disentangled context representation, regarding trajectories sampled from the setting of the same confounding as positive pair and of different confounding as negative pair. The authors also provide a theoretical basis for how optimizing their decomposed MI objective can make $I_{NCE}$ a tighter lower bound by alleviating the underestimation of MI. By learning policy conditioning on the learned context vector, DOMINO can achieve higher generalization performance compared to both model-based and model-free baselines. Strengths: The paper is well written and clear to understand. Using contrastive loss when learning disentangled representation of each confounding is novel and intuitive. And it is intriguing to get an idea of sampling negative pairs from different episodes. The experiments are comprehensive and the results are impressive. Weaknesses: However, the proof of Lemma 1 and Theorem 1 lacks mathematical rigor. Also, there is some missing specific information about notations in the proof, thereby undermining the clarity and soundness of the paper (e.g., $w_y$ and $E$). Visualization of the learned context embeddings does not show how effectively each confounding is encoded. Yes, the authors adequately addressed the limitations and potential negative social impact of their work. <doc-sep>This paper studies a contextual reinforcement learning (RL) setting where the environment dynamics are parameterized by independent factors, which the authors refer to as “confounders.” In each episode, the underlying factors can vary. They present a method for contextual meta-reinforcement learning (RL) called DOMINO, which learns to encode the RL agent’s current trajectory into a set of independent context vectors. These independent context vectors can then be used as inputs to the transition model in model-based RL (MBRL) and as an input to the policy in model-free RL, thereby providing the agent with an inferred context for the underlying environment factors in any given episode. Importantly, their method assumes the underlying environment factors are similarly independent. The main contributions of the paper are the method, DOMINO, for learning independent context vectors from the trajectory and their analysis and experimental results demonstrating the favorable properties of this method (including improved empirical performance against baselines learning entangled context vectors), when the underlying independence assumptions are valid. Strengths - The paper provides a simple method for improving context-aware meta RL in an environment with multiple independent factors of variation that impact the transition dynamics. The method itself is clearly described. This seems to be the first method to directly exploit an explicit assumption of independence among the underlying environment factors of variation. - The method performs well against sensible baselines. Importantly the method performs well against an ablation that does not learn disentangled context vectors. Weaknesses - The reported results in the Table 1 and 2 have high overlap between the authors’ DOMINO and MINO methods and the baselines. 
The signficance of these results could be made clearer by reporting the results of a Welch t-test between the proposed method and the baselines. - Similarly, the performance comparison plot in Figure 1b should have error bars. It should also state what method of averaging was used for the plotted values - The paper can benefit from a full pass to improve the clarity of the writing. There are numerous missing details about basic figures, such as what measure of uncertainty is represented by the error bars for each plot and table. There are also several ambiguous phrasings and sentences with confusing wording. For example - A key aspect of this paper is the analysis of InfoNCE as a “loose bound” of the mutual information. However, the authors never define whether this bound is an upper or lower bound. While this detail can be inferred from context, I think it is important to make this point clearer to the reader. Relatedly, the definition of “MI underestimation” in L45 is unclear. - Given that the independence assumption is core to this work, it is unclear how significant this setting will be in practice and for future work. - Moreover, it seems important for the experiments to assess how valid such an independence assumption is in practice, and crucially, what is the price in performance one might expect to pay for making this assumption. An experiment assessing the performance of DOMINO and MINO on a more complex environment whose underlying factors of variation are not mutually independent would improve this paper by providing a more complete picture of the effectiveness of this method. - There seems to be an underlying assumption that the N independent context vectors aim to encode information about the underlying factors of variation in the environment. However, this connection is actually never explicitly made in the writing, making the jump from discussing MI in terms of environment factors to context vectors (4.1 to 4.2) unclear. - It seems that DOMINO requires setting the number of context vectors N equal to the number of environment factors of variation. In general, we may not know this value exactly. Adding a sensitivity analysis to how dependent the performance is on setting N to this exact value would provide important information on how applicable this method is in practice. Minor comments: - L22: “mythologies” should be “morphologies”. - L47-48: “First the context encoder embeds the past state-action pairs into disentangled context vectors” is an inaccurate description, as it must first be optimized to do so (as next described in L48-49). - This paper could consider citing related work in unsupervised environment design [1,2,3,4] and more generally, RL work in procedurally-generated environments [5,6]. These works are deeply related as they effectively perform meta-RL over a space of environment variations with an implicitly learned context. Ignoring this line of work seems like a significant oversight. References [1] Dennis et al, 2020. Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design. [2] Jiang et al, 2021. Prioritized Level Replay. [3] Jiang et al, 2021. Replay-Guided Adversarial Environment Design. [4] Parker-Holder et al, 2022. Evolving Curricula with Regret-Based Environment Design. [5] Raileanu et al, 2021. Decoupling Value and Policy for Generalization in Reinforcement Learning. [6] Cobbe et al, 2019. Leveraging Procedural Generation to Benchmark Reinforcement Learning. 
The core assumption of this work also acts as its primary limitation: The environment factors of variation are assumed to be independent, and their number known a priori. The authors should make an effort to emphasize this limitation and to what extent they believe such an assumption of independence may be applicable in practice. <doc-sep>This paper tackles the problem of generalization in MDPs where the dynamics changes are assumed to be caused by multiple independent factors, denoted as context. The proposed framework (DOMINO) learns a context encoder that maps trajectories to a latent context via decomposed mutual information using noise-contrastive estimation (InfoNCE). The authors combine DOMINO with model-free and model-based RL algorithms, and perform experiments in classic environments, as well as in the Mujoco benchmark, in settings where multiple confounders change simultaneously. Additionally, qualitative visualizations of the latent context vectors are presented using t-SNE. Strengths: - The idea of capturing the different confounders that may affect the dynamics of the MDP into different latent contexts is novel and interesting. - The experimental results show that the proposed method can, in general, achieve better performance than the state-of-the-art. Weaknesses: - The paper needs improvement regarding the clarity of the mathematical definitions, such as the objective functions. - It is not clear whether the improvements are because of the decomposed mutual information framework, or because of other algorithmic improvements (see below). The paper could benefit from a discussion regarding the assumption of independent confounders. For instance, how difficult it would be to adapt the algorithm to the case where we have co-related confounders? <doc-sep>This paper proposes a decomposed mutual information method to learn disentangled context information, which can generalize reinforcement learning algorithms into unseen environments. The experimental experiments demonstrate that the proposed method can achieve better performance than the previous methods. Strengths: 1. The writing of this paper is pretty well, and the idea of it is easy to follow. 2. The figures in this paper are very clear and very well. 3. The extensive experiments show the effectiveness of the proposed method. Weakness: 1. Based on the title, I assume that this study focuses on the meta-reinforcement learning problem. The conventional meta-reinforcement learning methods include an adaptation process, but this paper makes no mention of this process. Additionally, the paper states that it intends to train a general context-encoder to solve the adaptation problem, indicating that the paper's context is the dynamics generalization in reinforcement learning (this paper also mentions it in line 84), which is in contrast to the title of the paper, which refers to meta-reinforcement learning. 2. The second problem of this paper is the novelty. The paper aims to maximize the mutual information between contexts extracted from historical information and the historical trajectories. However, this paper does not make clear the relationship with [1,2,3] which also attempt to maximize the MI between context vector and historical trajectories. Furthermore, this work does not compare the performance with [3] and even does not acknowledge it, despite the fact that [3] focuses on a similar problem to this paper. 
As a result of the missing contribution and experimental comparisons with [1,2,3], I believe this paper's uniqueness is somewhat limited. 3. The number of learned context vectors $c$ is set as the number of environments in the study, which is the primary hyperparameter of the suggested technique. However, in a real-world setting, the number of environments is not available, making it unfair to compare it to the baseline TMCL, which doesn't rely on such prior information. This increases my concerns about the technical soundness of this paper. In conclusion, while the writing and experimental results are excellent, this paper suffers from the aforementioned clarity and novelty issues. If the authors address my concerns in their response, I will consider raising my score. ------------------------------------------------- After Rebuttal ------------------------------------------------ I think that the additional experimental results and discussion in the revision resolve my concerns about the clarity problem of the submission, so I increase my score from 4 to 6 accordingly. Minors: I believe that RIA considers context information and constructs confounder sets with multiple confounders, so I believe that RIA should be discussed in the introduction's confounder discussion (Line 42). [1] Haotian Fu, Hongyao Tang, Jianye Hao, Chen Chen, Xidong Feng, Dong Li, and Wulong Liu. Towards effective context for meta-reinforcement learning: an approach based on contrastive learning. [2] Li, L., Huang, Y., Chen, M., Luo, S., Luo, D., & Huang, J. (2021). Provably Improved Context-Based Offline Meta-RL with Attention and Contrastive Learning. arXiv preprint arXiv:2102.10774. [3] Guo J, Gong M, Tao D. A Relational Intervention Approach for Unsupervised Dynamics Generalization in Model-Based Reinforcement Learning[C]//International Conference on Learning Representations. 2022. Please refer to the "Weakness" listed above.
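For reference, the decomposed contrastive objective debated in these reviews can be sketched as one InfoNCE term per context vector, with positives coming from trajectories collected under the same confounder setting and other batch elements serving as negatives. The batch layout, the encoder outputs, and the plain sum over factors are assumptions for illustration; the paper's exact decomposed objective may differ.

```python
import torch
import torch.nn.functional as F

def decomposed_info_nce(z_anchor, z_positive, temperature=0.1):
    """z_anchor, z_positive: (batch, n_factors, dim) context vectors encoded
    from two trajectories sharing the same confounder setting (assumed layout).
    Row i's positive is row i of the other view; other rows act as negatives."""
    batch, n_factors, _ = z_anchor.shape
    z_a = F.normalize(z_anchor, dim=-1)
    z_p = F.normalize(z_positive, dim=-1)
    labels = torch.arange(batch)
    loss = 0.0
    for i in range(n_factors):  # one InfoNCE term per disentangled context vector
        logits = z_a[:, i] @ z_p[:, i].T / temperature
        loss = loss + F.cross_entropy(logits, labels)
    return loss / n_factors
```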
This paper proposes DOMINO, an optimization framework, for contextual meta reinforcement learning. The reviewers generally agree that the paper is well written, the idea is novel and interesting, the evaluation is comprehensive and the results are impressive. Reviewers also raised a few concerns in the initial reviews, such as the proof of Lemma 1 and Theorem 1, and the mathematical definitions. Throughout the discussion phase, most of these concerns were sufficiently addressed, and the review scores were increased accordingly. Overall, the quality of the revised paper has improved significantly during the rebuttal. Thus, I recommend accepting this paper. Please incorporate the remaining reviewers' suggestions in the future version of this paper.
The paper proposes the use of playbacks for UDA. The authors use the trained model and an offline 3D object tracker to generate high-quality pseudo-labels for the target domain. After that, the original model is fine-tuned on the generated pseudo-labels to improve performance on the target domain. The paper can be understood in general and the writing is easy to follow. The results of the paper are practical. This is reasonable because the approach can generate more accurate 3D boxes for the target domain, especially for long-distance objects. The authors have done experiments on 5 datasets to show the generalization capability of the method and compare the proposed method with two decent baselines. Since video of the same scene is available over time, I am also curious about the effect of using the point clouds of the previous/next frames to enhance the point cloud data of the current frame, which may further improve the quality and generalization ability of the pseudo-labels. The major novelty of the paper is the combination of offline-tracking and self-training techniques, which is practical for real-world engineering problems. However, in general, I think the novelty is still limited for the ICLR community. In my view, the only difference between the proposed method and ST is the introduction of video information (an assumption) and the offline tracker to make the pseudo-labels more accurate.<doc-sep> This paper proposed an unsupervised domain adaptation method for 3D lidar-based object detection. The idea is simple and straightforward: using a cross-domain detector + offline tracking to provide pseudo-labels, inspired by similar UDA efforts for 2D detection. Experiments are conducted over multiple self-driving perception datasets, and the results validate the effectiveness of the proposed method. Pros: * The idea is simple and straightforward. The approach is technically sound. * The presentation is clear. The introduction is easy to follow and enjoyable to read; Related work is thorough and properly reflects the current state of the field. Technical details are clearly described so that reproducing should not be very difficult. * The experiment showcases solid performance improvement over baselines (self-training and statistical normalization) * The paper also conducted very detailed and convincing ablation studies. * Consistent improvements have been seen in several datasets and two different detectors. Cons: * I have some concerns regarding claimed contributions/novelty. * Offline tracking is not adequately benchmarked to justify the choice of extrapolation. * The online tracker is not on par with the current state of the art. * There is not enough pseudo GT quality analysis against manually labeled ground-truth. The usage of "video" to produce confident pseudo labels for unsupervised domain adaptation has been stressed in the introduction. However, as the related work describes, this has been explored before with a similar technical approach for 2D detection (offline tracking to produce labels); see Roy-Chowdhury et al. It's hard to say if extrapolation is a significant contribution unless adequately benchmarked, showing that the offline tracker improves using this trick. Such benchmarking could be done on the KITTI tracking benchmark to compare with/without the extrapolation procedure. The current ablation on UDA provides little information as the improvement is not significant. There is no comparison against other trackers. Based on the reported numbers, the online tracker adapted from Diaz-Ruiz et al.
2019 is subpar from the current state-of-the-art Kalman-based online tracker. It's hard to justify why this one is chosen. Why not Weng et al. 2020 or Chiu et al. 2020, as mentioned in the paper? In particular, the tracker in Weng et al. 2020 is open-sourced. Please provide an mAP evaluation of the pseudo GT quality over some sequences with GT labels. Although not required, it would be great to see whether the author plan to release the code. --------------------------------------------------------- Post-rebuttal comments --------------------------------------------------------------------------- I carefully read the rebuttal and other reviewer comments. The author addressed my concerns on pseudo-label quality assessment and comparison against SOTA trackers. From the experimental perspective, I am very convinced the paper did a great job now. Please incorporate these additional experiments into the paper making it more complete. That being said, similar to other reviewers, I am not very convinced about the author's reply on novelty/contribution. It's true it has not been applied in 3D, which is new. However, I am not convinced by the claims in rebuttal, such as "using physics-based dynamics models" (I think you are referring to kinematics-based instead of physics-based), "3D extrapolations" (which could induce potential problems due to the multi-modal future uncertainty), and "self-training" (which is not new). Thus, if the paper gets accepted, I strongly encourage the author to rewrite the introduction and properly reflect the core contributions. Overall I am still on the positive side. But I am fine with both decisions. <doc-sep>The topic of adapting 3d object detectors to new domains is important. The paper clearly motivates the problem, clearly presents the methods and shows detailed experiments. I really enjoyed reading the paper. My main concern is that the two components of the method (self-training with pseudo labels and generating more pseudo labels with an object tracker for object detection) have been developed and widely used in the computer vision domain for 2d object detectors. The main novelty of this paper lies in using the counterparts of the two components in 3d for the new 3d object detection task. The use of self-training is almost the same as all previous methods. There are a few interesting engineering parts in using 3d object tracker to expand the pseudo labels such as label extrapolation and interpolation. Another question I have is that when the object detector gets stronger, do we need a stronger object tracking algorithm in order to provide additional useful information. If the tracking algorithm is too weak, relative to object detection methods, the augmented pseudo labels will be too noisy to provide any help. Discussions or experiments on this point would be very helpful in understanding the application domain of the proposed method. Although the novelty of the method is rather small, the authors have made good efforts in supporting the work with extensive experiments. The authors have evaluated their method on five datasets (all the 2 out of 5 combinations). The results are good across all the scenarios. The paper is clearly written and the method is well motivated. I am not sure whether a paper with extensive experiments and relatively small technical contribution should be considered as a good paper for ICLR. After reading other reviews and the rebuttal, I opt for acceptance. 
<doc-sep>Pros: - The proposed method is simple yet effective and has wide uses in real-world applications - Solid experiments across 5 benchmarks. - This method does not rely on the source domain data and learned trackers. Cons: - The object detector will detect objects accurately only when they are close to the self-driving car. This claim does not hold when there is a large domain gap (e.g., different LiDARs or significantly different scenarios). The proposed model will fail to handle this situation. - For static cars, why not use ego-motion to model the temporal relationships? It should perform better than the EKF. - The generation of the pseudo-labels depends heavily on the confidence scores obtained from the object detector. Confidence scores > 0.8. How is the threshold of 0.80 chosen? Would other thresholds be more effective? - Why do the authors only report results for 50-80m in Tab. 3? Accurate detection in 0-50m is more important, although the relative improvement may be less. - The method is somewhat similar to the existing tracker-based UDA methods, thus the novelty is limited. However, the application to 3D detection and the extensive experiments are great and may benefit further research significantly.
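As a concrete reference for the pseudo-label generation these reviews discuss (confident detections linked by an offline tracker, with missing frames filled by interpolation/extrapolation), here is a minimal sketch of the densification step for a single track. The box parameterization, the 0.8 confidence threshold mentioned above, and plain linear interpolation are simplifying assumptions; the paper's tracker-based procedure is more involved.

```python
import numpy as np

def densify_track(frame_ids, boxes, scores, conf_thresh=0.8):
    """Keep confident detections of one track (frame_ids assumed sorted) and
    fill the missing frames by per-dimension linear interpolation.
    `boxes` is (k, 7) with rows [x, y, z, l, w, h, yaw] (assumed layout)."""
    frame_ids = np.asarray(frame_ids)
    boxes = np.asarray(boxes, dtype=float)
    keep = np.asarray(scores) > conf_thresh
    frame_ids, boxes = frame_ids[keep], boxes[keep]
    dense_ids = np.arange(frame_ids.min(), frame_ids.max() + 1)
    dense_boxes = np.stack(
        [np.interp(dense_ids, frame_ids, boxes[:, d]) for d in range(boxes.shape[1])],
        axis=1)
    return dense_ids, dense_boxes
```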
This paper proposed an unsupervised domain adaptation method for 3D lidar-based object detection. Four reviewers provided detailed reviews: 3 rated “Marginally above acceptance threshold”, and 1 rated “Ok but not good enough - rejection”. The reviewers appreciated the simple yet effective idea, the well-motivated method, the comprehensiveness of the experiments, and the well-written paper. However, major concerns were also raised regarding the core technical contributions of the proposed approach. The ACs looked at the paper, the reviews, the rebuttal, and the discussion. Given the concerns about the core technical contributions, the high competitiveness of the ICLR field, and the lack of enthusiastic endorsements from reviewers, the ACs believe this work is not ready to be accepted to ICLR yet and hence a rejection decision is recommended.
This paper studied optimization of minimax games and proposed a recursive optimization algorithm called Level k gradient play (Lv.k GP). As an update based on predictive updates, Lv.k GP does not require second-order information, which makes it computationally more efficient than existing algorithms. Theorem 4.1 showed that as k increases, the level k reasoning of parameter vectors approaches a limit point, and Lv.$\\infty$ GP is equivalent to an ideal algorithm, the Semi-Proximal Point Method (SPPM). Table 1 then summarized different algorithms and viewed them as approximations of SPPM, and the approximation accuracy of Lv.k GP improves as k increases. Theorem 5.1 showed local convergence of SPPM toward stationary points, and Theorems 5.2 and 5.3 showed that for a specific bilinear game and quadratic game, SPPM converges in terms of the squared norm of parameter distances. Experiments on training GANs using 8-Gaussians, CIFAR-10 and STL-10 are conducted to show that Lv.k GP and Lv.k Adam can be used to help existing GAN optimizers (e.g., Adam) provide noticeable gains in performance and stability. Strengths: 1. Minimax game optimization is an important problem which has lots of applications such as GANs, while being difficult to solve and optimize. The research is well motivated. 2. The proposed Lv.k GP algorithm seems novel to me, and the authors provided its approximation to SPPM and showed the convergence properties of SPPM. 3. The experimental results support the proposed methods. Weakness: 1. The proposed algorithms are Lv.k GPs, while the convergence properties are studied for the SPPM method, which creates a gap. 2. It is not clear whether the claim that Lv.k GP is faster than (first-order approximations of) second-order methods is true. The authors discussed the potential negative impact of using GANs to generate images, which is reasonable to me. <doc-sep>This work introduces a novel algorithm, closely related to existing "lookahead approaches", for solving minmax games. In "level k gradient play" each agent first arrives at a prediction of the opponent's move through k steps of recursive reasoning *the counter move to the counter move to ... gradient descent*. They then make a step of gradient descent in the direction of the gradient computed using their own present position and their opponent's position according to the recursive reasoning. The authors show convergence results for the (theoretical) infinite recursion version with $k = \\infty$. Finally, the authors show numerical experiments on GANs, demonstrating improved IS and FID scores. ### Strengths: - The algorithm seems natural and well-motivated to me. - The algorithm appears to provide substantial improvements in GAN training ### Weaknesses: - The emphasis on SPPM is confusing and makes it hard to understand how the proposed algorithm relates to other methods in the literature. - Overall, there is a lot of hand-wavey language in the paper that hints at the advantages of the proposed method or results of the paper, that are never made concrete. Overall, my present view of the paper is that the results are suitable to be published in NeurIPS. However, the paper needs to undergo a major reorganization/rewrite to clearly convey its key points. I am not sure such a revision is within the scope of a single conference review cycle which is why I tend towards rejection and encouraging the authors to resubmit to a future cycle. The authors discuss the additional computational cost of Lv.k in the very end of section 6.
However, I was surprised that they did not provide further investigation of this concern by comparing, for instance, and adam model with k times as many epochs to an Lv.k Adam. <doc-sep>This paper proposes a Level K Gradient Play algorithm to stabilize the learning dynamics in minimax games (GANs). By combining the proposed Lv. K algorithm with Adam optimizer, this paper could achieve similar results with SOTA GAN model with 30 times fewer parameters. **Strengths**: This paper proposes the Level K Gradient Play Algorithm for stabilizing the training of GANs. The proposed method has a theoretical guarantee, with the assumption that the gradient of the loss function is Lipschitz continuous. Moreover, the paper proves and analyses the convergence properties of the Lv. K algorithm. **Weaknesses**: 1. **On the definition of SPPM** (line 157). The author claims that "SPPM players arrive at a consensus by knowing precisely what their opponents’ future strategies will be" in line 159. However, the stationary point $\\omega^*_{t} = [\\theta_t^*, \\phi_t^*]$ obtained with the reasoning step in Line 139 should not equal to the future strategies $\\omega_{t+1} = [\\theta_{t+1}, \\phi_{t+1}]$. In another word, the term $\\phi_{t+1}$ used to updating $\\theta_{t+1}$ does not equal to the term $\\phi_{t+1}$ updated by $\\theta_{t+1}$. The author should consider changing the notation here (Line 156). Or it may result in a misunderstanding that the Level.$\\infty$ GP algorithm could use the opponents' future strategy to update its current gradient. 2. **Efficiency of the Level K algorithm.** As a level K algorithm would have to compute the gradient for **K** times for $\\theta$ and $\\phi$, this algorithm would take more time for a single step than a regular algorithm. I would recommend the author compare the method with baseline models regarding time efficiency (similar to Appendix. Figure~6 but with X axis as time). 3. **Difference with Other GAN optimizers**. The proposed methods could be seen as "given current generator, we use K step updates to find a better discriminator and use that discriminator to update the generator, and vice versa." However, some theory shows that, if we use a good (e.g., optimal) discriminator in the beginning, then we could obtain no gradient for the generator (Wasserstein-GAN). The $\\omega^*$ in this paper could be seen as the optimal $\\phi$ and $\\theta$ with the other kept fixed. Then what is the theoretical foundation between this paper and W-GAN that makes both methods work? 4. **Limited Experiment**. This paper only conduct experiment on a small "8-Gaussians" experiment and CIFAR-10. As the author claims an improvement against BigGAN, which is good at high-resolution images, would the algorithm in this paper also be applicable to BigGAN or other big models? 5. The proposed method is to stabilize the training of GAN. However, the author also claims that this algorithm uses 30 times fewer parameters. What's the correlation between stabilizing gradients and small models? Are there any theoretical results on this issue? As stated in the **Weaknesses** section. <doc-sep>This work propose Level k Gradient Play, a new dynamical system for non-convex non-concave min-max optimization. The key feature of the dynamics is that each player tries to anticipate what the opponent will do in the following round and adapt to it instead of the opponent's current iterate. 
Under mild assumptions, the Level $\\infty$ dynamics are well defined and enjoy local convergence for quadratic games and global convergence for bi-linear ones. In terms of practical algorithms, Level k algorithms are shown to converge to a Level $\\infty$ solution as $k \\to \\infty$ for sufficiently small learning rates based on a contraction property. This means that they can heuristically be used as replacements for Level $\\infty$. Level k Adam variants are shown to have good empirical performance. Regarding strengths, the presentation of the algorithm intuition and the key technical results is very clear. The proposed algorithm and analysis are to the best of my knowledge both novel when compared to other approaches in the non-convex non-concave optimization literature. While the theoretical guarantees are not particularly strong (many approaches can solve bilinear problems or have local convergence guarantees), the empirical results are promising. The only weakness I detect is that this work is similar to [1], which is not referenced. Just like in this work, the agents try to predict the strategies of the opponents in the next turn and adapt to them. This is once again computationally feasible for small learning rates via a contraction argument. Once again the dynamics globally converge for bi-linear games and higher learning rates lead to faster convergence but may be computationally intractable just like this work. While the overall technique may be similar to [1], the individual arguments are sufficiently different and the analysis of Section 4.1, Theorem 5.1 and 5.3 and the experimental analysis are unique to this work. Overall I propose to accept this work (Accept, 7). I have read the response of the authors which addressed my concern. I have thus increased my score to (Strong Accept, 8). [1] Optimal No-Regret Learning in General Games: Bounded Regret with Unbounded Step-Sizes via Clairvoyant MWU, Georgios Piliouras, Ryann Sim, Stratis Skoulakis, arXiv:2111.14737, 2021 N/A
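To fix ideas about the recursion these reviews describe, below is a rough sketch of a level-k update on a toy bilinear game, as read from the summaries above; the exact recursion, sign conventions, and shared learning rate are assumptions rather than the paper's precise formulation.

```python
# Toy bilinear game f(x, y) = x * y: x minimizes, y maximizes.
grad_x = lambda x, y: y
grad_y = lambda x, y: x

def level_k_step(x, y, eta=0.1, k=3):
    """One Lv.k-style update: each player responds to a level-(k-1) prediction
    of the opponent's next iterate instead of the opponent's current iterate."""
    x_pred, y_pred = x, y  # level-1 predictions start from the current iterates
    for _ in range(k - 1):
        x_pred, y_pred = (x - eta * grad_x(x, y_pred),
                          y + eta * grad_y(x_pred, y))
    return x - eta * grad_x(x, y_pred), y + eta * grad_y(x_pred, y)

x, y = 1.0, 1.0
for _ in range(200):
    x, y = level_k_step(x, y)
print(x, y)  # spirals in toward the equilibrium (0, 0) for small eta
```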
This paper proposes a novel recursive reasoning algorithm for minimax games, in which players try to anticipate their opponent's next round move instead of reacting to the current round. Importantly, this is achieved without requiring expensive second order information. Reviewers found the paper clearly written and well motivated, addressing an important problem. The work appears novel, and there is good experimental evidence that the algorithm delivers on its promises.
Summary: The augmentation of NLP samples is an important task with no clear "applicable to all" mechanism. This is in sharp contrast to computer vision where techniques like rotation, modification of hue, saturation as well as umpteen other techniques exist. This work tries to address the issue by proposing a technique that carefully amalgamates multiple previously known approaches to generate diverse label preserving examples. The experimental results on RoBERTa highlight the applicability and importance of this data augmentation approach on the downstream task of text classification (GLUE). Strengths: 1. Empirical results. Performance better than previous approaches (although minor). 2. Paper Clarity 3. Each formulation is backed by a strong intuitive understanding. 4. Contrastive training (negative sampling) is one of the crucial contributions of this work. It seems to be making every previously known augmentation approach better. Please feel free to highlight other major contributions. Weaknesses (Minor): 1. Ad-hoc regularization parameter selection is necessary for getting performance gains. This makes it difficult to conclusively prove that this is an "applicable to all" data augmentation scheme. 2. It would have been better to see the performance gains on more difficult text-classification tasks (non-GLUE), or underperforming models (non-BERT based). Since the gains are not much. It becomes difficult to fathom if the gains are actually due to good objective function or a case of chance for choosing better examples. Comments/Questions: 1. What is the augmentation size being used in the setup? I suspect the size plays an important role in such setups and this hasn't been discussed much in the paper. Also, please show the performance trends based on different augmentation sizes. 2. How do you measure the diversity (as mentioned in the paper title) in the generated samples? 3. Rather than using the ad-hoc approach for selecting which augmentation "stacking" scheme is helpful, it would have been better to compare/use an approach highlighted in "Learning to Compose Domain-Specific Transformations for Data Augmentation" [NeuRIPS 2017]. Correction: 1. Related Work: Contrastive learning - Under an unsupervised setting, ontrastive -> contrastive Overall: This work highlights the importance of incorporating contrastive training for data augmentation. Please let me know if I have misunderstood something(s)<doc-sep>Paper proposes a contrastive learning-based approach to combine different data augmentation techniques for NLP tasks. While the widely used consistency loss focuses on a single example, the proposed contrastive objective allows capturing the relationships among all data samples which helps in producing diverse and informative examples. For experiments, the paper explores 5 data augmentation approaches with Roberta-large as the classification model. Empirical results on the standard GLUE benchmark leads to an impressive 2.2% average improvement. Authors also found that back-translation and adversarial training combination leads to better performance than other DA combinations. Strengths: 1. The proposed framework can be applied with any text data augmentation methods. It's a solid work that will help the NLP community in developing better DA techniques. For example, [Kumar et al. 2020] shows that any pre-trained model can be used for data augmentation. I believe seq2seq model like T5, BART based augmentation combined with CoDA, will further push the state of the art for text DA. 2. 
The paper provides clear motivations and describes its methods and experiments in detail. The authors study DA in both low-resource and rich-resource settings. Ablation studies have been conducted to investigate gains from different components. 3. The authors plan to release their code, which is good for reproducibility. Weakness: My understanding is that all numbers reported in the paper are from a single experiment. As a reader, I would like to see some variance reported with the results. Apart from this, I don't see any major issues with the paper. Questions: 1. Since one of your goals is to improve the diversity of the augmented data, have you tried replacing more words in the c-bert model? By nature, c-bert is bound to replace at most 15% of the tokens while maintaining the sentence length. Methods such as back-translation or seq2seq models do not have such restrictions. Also, have you considered using a pre-trained seq2seq model for DA as in [Kumar et al. 2020]? 2. In Fig. 5, back-translation and adversarial training have similar performance. This result is intriguing. Do you have some further insights into it? Typos: - Sec2.2. "the first term correspond" -> corresponds - Sec 4, Contrastive Learning para, "ontrastive learning" -> "Contrastive learning" References (additional DA citations): 1. Kumar, V., Choudhary, A., & Cho, E. (2020). Data Augmentation using Pre-trained Transformer Models. ArXiv, abs/2003.02245.<doc-sep>The paper proposes a novel data augmentation framework, which explores different combinations of isolated label-preserving transformations to improve the diversity of augmented samples. The authors find that stacking distinct label-preserving transformations produces particularly informative samples. The paper also introduces a contrastive learning objective to capture the global relationship among the data points in representation space. In my opinion, the exploration of different combinations of isolated label-preserving transformations is the major contribution of this paper, which may inspire future work on data augmentation. However, the contrastive regularization objective is a bit incremental, and I cannot see a big difference compared with MoCo or SupCon. Strength: + The idea of stacking distinct label-preserving transformations is intuitive. + The integration of the consistency training objective and the contrastive regularization objective is interesting. Weakness: - Lack of novelty: the contrastive regularization objective is a bit incremental, and this objective is very similar to MoCo or SupCon. - The model first has to obtain the augmented samples, which is computationally expensive for large-scale datasets and may hinder the practical application of the model. Moreover, the overall improvements are relatively small compared with R3F, and there is a lack of variance analysis. Questions: What is the computational complexity of CoDA? Why use the MMD distance in Section 3.1? Is stacking distinct label-preserving transformations the default setting for CoDA in your GLUE experiments? What if other strategies (mix, random) work better on datasets like QNLI, RTE, MRPC, and so on? Why not report results on those datasets? What is the major difference between your contrastive regularization and MoCo or SupCon? As the improvements are relatively small, could you please provide a test of statistical significance? What if you stack cut first and then back? Does the order affect the performance?
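For concreteness, the consistency component the reviews refer to (pulling the model's predictions on an example and on its augmentation together) can be written as below; the symmetric KL form and the function signature are illustrative assumptions, not necessarily the exact divergence used in CoDA.

```python
import torch.nn.functional as F

def consistency_loss(logits_orig, logits_aug):
    """Symmetric KL between predictions on an example and on its augmentation."""
    log_p = F.log_softmax(logits_orig, dim=-1)
    log_q = F.log_softmax(logits_aug, dim=-1)
    kl_pq = F.kl_div(log_q, log_p, log_target=True, reduction="batchmean")
    kl_qp = F.kl_div(log_p, log_q, log_target=True, reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)
```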
This paper concerns data augmentation techniques for NLP. In particular, the authors introduce a general augmentation framework they call CoDA and demonstrate its utility on a few benchmark NLP tasks, reporting promising empirical results. The authors addressed some key concerns (e.g., regarding hyperparameters, reporting of variances) during the discussion period. The consensus, then, is that this work provides a useful and relatively general method for augmentation in NLP and the ICLR audience is likely to find this useful.
The paper presents a zero-shot action recognition framework by learning a universal mapping from video to semantic space. The unseen action embeddings are re-positioned by leveraging the distribution of the unlabelled test set. The universal mappings from unseen actions to test videos are first defined and the target embeddings are treated as weighted Frechet means. The unseen action embeddings are re-positioned as a semantic regularization. The results on UCF101 and HMDB-51, UCF Sports and J-HMDB validate the proposed method. Strength: 1. Zero-shot learning for action recognition is a good direction, although its applicability in real-world scenarios is limited. 2. Figure 1 very clearly illustrates the concept and the high-level idea of the whole paper. 3. The performance is much stronger compared with other baselines. Weakness: 1. The technical details are a little hard to follow since I am not very familiar with zero-shot learning. In my opinion, the proposed method is a general approach across video and image domains. Why not also evaluate the method on image zero-shot learning? 2. Is there any visualization to intuitively show the results learned by the transductive universal transport? The paper gives a new method for zero-shot action recognition by learning the transduction from unseen actions to test videos in hyperspherical space. The performance is good when compared with other state-of-the-art methods. However, the details of the technique are a little hard for me to follow. In my opinion, the paper is well structured and the writing is very good. I currently give an accept rating, given my unfamiliarity with this domain. I will follow the work in the subsequent reviewing stages. <doc-sep>The paper targets transductive zero-shot action recognition. To alleviate the bias of models toward seen categories, the authors propose to re-position unseen action embeddings through transduction. There are three steps in the proposed method: first, finding an optimal mapping from unseen actions to the mapped videos in the shared hyperspherical space. Second, defining target embeddings as weighted Frechet means with the weights given by the transport couplings. Third, re-positioning unseen action embeddings along the geodesic between the original and target embeddings. The zero-shot classification performance of the proposed method is tested on the UCF-101 and HMDB datasets. The zero-shot spatio-temporal localization performance is tested on the UCF Sport and J-HMDB datasets. While the paper demonstrates state-of-the-art results, one important concern is about fairer comparisons, especially for the zero-shot spatio-temporal localization experiments (Table 3). - The setting of the proposed method is different from some compared papers. For example, the authors focus on transductive ZSL, while Mettes et al. (2021), Kim et al. (2021), and Brattoli et al. (2020) focus on inductive ZSL. - The proposed method uses both action and object information, while Brattoli et al. (2020) use action information only, and Mettes et al. (2021) use object information only. Without fairer comparisons, it is hard to assess the effectiveness of the proposed method. Another concern is that the importance of some critical components is not adequately evaluated. - This is also related to the comments mentioned above. The proposed method uses both video features and object information. Is this critical to obtaining good performance?
The importance of video features and object information is not properly evaluated. One way to show this is to evaluate the performance of the proposed method using only one type of modality. Partial information is given in Figure 3 and the Fusion paragraph on page 7. Based on Figure 3 and the discussion, it seems that the proposed method does not outperform Brattoli et al. (2020) and Mettes et al. (2021) under the same experimental settings as the compared methods? Typo: in 3.2 implementation details: (2+1)D -> R(2+1)D The paper can be further strengthened by demonstrating fairer comparisons and adequately evaluating the importance of the critical components. <doc-sep>This work tries to address the problem of zero-shot action recognition. In particular, the paper aims at preventing the case in which many unseen action categories in the target domain are simply never selected during inference. Using the distribution of the unlabelled test set, the embeddings of unseen actions in the target domain are reweighted and repositioned along the geodesic such that they are better aligned with embeddings of training actions in the source domain. Empirically, the proposed method has been evaluated on benchmark datasets for the tasks of zero-shot action classification and spatio-temporal action localization. Strengths - Works on the problem of reducing the biases between seen categories in the source domain and unseen categories in the target domain in the semantic space. - Evaluates the approach on benchmark datasets for two tasks: zero-shot action classification and spatio-temporal action localization. Weaknesses - Novelty seems incremental. The proposed transductive universal transport algorithm for embedding repositioning seems like a simple weighting method guided by the distribution of the unlabelled test set. The paper merely uses existing approaches to solve the transductive optimal transport problem but does NOT bring any new insights. - Generalization seems a concern. The proposed approach heavily depends on the distribution of the unlabelled test set. It seems sensitive to the distribution and the number of clusters. As shown in Figure 2, using the target embeddings seems on par with repositioned embeddings. Also, the proposed approach seems to only work in cases with a small number of clusters. This paper proposes a sensible solution to reduce the bias between the source domain and the target domain for the task of action recognition. But the novelty seems incremental. Also, some experiments seem a bit unconvincing and the approach seems not to scale to general settings. <doc-sep>This work introduces transductive universal transport for zero-shot action recognition, where no training examples for unseen classes are available. To address the biases of prior approaches towards seen classes during inference, this paper re-positions unseen action embeddings through transduction by using the distribution of the unlabelled test set. Experimental results on several action recognition datasets demonstrate the effectiveness of the proposed method. Strengths 1. The use of transductive universal transport for zero-shot action recognition is new. 2. The experimental results show the effectiveness of this new method and also better performance than the prior state of the art. Weaknesses 1. The approach needs access to the entire testing video set to obtain distribution information of testing videos. This is an unrealistic setting.
When used in practice, a machine learning model should expect to see one testing example at a time. 2. Many symbols are not clearly defined, making the math descriptions in this paper hard to read. For example, in Eq. (2), it is unclear what w_s, w_u, sum_|Lu| sum_|Ls| are, what the weights for labels, i.e., w_lu and w_ls, mean, and why u_s and u_u are sets of labels. 3. Does the use of transductive universal transport bring any computational overhead to the zero-shot learning model? It would be interesting to see a comparison of inference time and complexity with the baseline and some state-of-the-art methods. The idea of using transductive universal transport for zero-shot action recognition is new, and the performance is good. But the core setting, that the entire testing set is available during training to obtain the distribution information, is unrealistic. The writing, especially the math part, needs improvement.
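For concreteness, here is a minimal sketch of the transport-then-reposition step described in the reviews above: an entropic (Sinkhorn) coupling from unseen action embeddings to test-video embeddings on the unit sphere, followed by moving each action embedding along the geodesic towards a coupling-weighted target. All function names, the Sinkhorn solver, and the slerp-based re-positioning are illustrative assumptions, not the authors' implementation.
```
# Illustrative sketch (assumptions, not the authors' code): Sinkhorn coupling from
# unit-norm action embeddings to unit-norm video embeddings, then spherical
# interpolation (slerp) towards a coupling-weighted target for each action.
import numpy as np

def sinkhorn(cost, a, b, eps=0.05, n_iters=200):
    # entropic optimal transport between histograms a (actions) and b (videos)
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # coupling matrix of shape (A, V)

def reposition(actions, videos, t=0.5, eps=0.05):
    # actions: (A, d) unit vectors; videos: (V, d) unit vectors; t in [0, 1]
    cost = 1.0 - actions @ videos.T                       # cosine cost on the sphere
    a = np.full(len(actions), 1.0 / len(actions))
    b = np.full(len(videos), 1.0 / len(videos))
    P = sinkhorn(cost, a, b, eps)
    weights = P / P.sum(axis=1, keepdims=True)            # per-action coupling weights
    targets = weights @ videos                            # weighted Euclidean mean ...
    targets /= np.linalg.norm(targets, axis=1, keepdims=True)  # ... projected to the sphere
    dots = np.clip(np.sum(actions * targets, axis=1), -1.0, 1.0)
    omega = np.clip(np.arccos(dots), 1e-6, None)[:, None]  # geodesic angle
    out = (np.sin((1 - t) * omega) * actions + np.sin(t * omega) * targets) / np.sin(omega)
    return out / np.linalg.norm(out, axis=1, keepdims=True)
```
In this sketch the normalized coupling-weighted mean stands in for the weighted Fréchet mean on the hypersphere, and t controls how far along the geodesic each embedding is moved.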
This paper was reviewed by four experts in the field and received mixed scores (1 borderline accept, 3 borderline reject). The reviewers raised concerns about the lack of novelty, unconvincing experiments, and the presentation of the paper. The AC feels that this work has great potential, but it needs more work to better clarify the contribution and to include additional ablation studies. The authors are encouraged to consider the reviewers' comments when revising the paper for submission elsewhere.
This work starts by questioning the apparent robustness of quantized networks and demonstrates that such robustness is largely a failure of the attack algorithm to pick up the gradient signal. The authors address this by tuning a scalar multiplier applied to the network logits, which doesn't modify the model's decision boundary. Through analyzing the Jacobian, two approaches are proposed to determine the scalar $\\beta$ without tuning it by performing the attack. This approach is quite effective on quantized networks and even provides significant improvement on floating-point networks when combined with existing attacks like FGSM and PGD. The proposed modification might seem trivial at first, but it constitutes an important factor the community hasn't taken notice of, to the best of my knowledge. A few questions: 1) I don't see any mention of tuning the attack step size; if we set the new $\\eta$ to $\\eta/\\beta$, we can keep the Jacobian intact and isolate the effect of temperature scaling for XENT. 2) How about sweeping $\\beta$ and plotting against adversarial accuracy? This is surely expensive, but it would paint a clearer picture of the optimality of $\\beta$ found using the proposed approaches. This can be done in tandem with an attack step of $\\eta/\\beta$. I like the result overall, though the effect of $\\beta$ on the Jacobian and the softmax can and should be separated, if my understanding is correct. The proposed approaches for determining $\\beta$ largely depend on the Jacobian; therefore, there should be more investigation into whether scaling the Jacobian correctly is more important than getting good error signals from the softmax.<doc-sep>**Update**: Thanks to the authors for addressing my comments. As pointed out by the authors, temperature rescaling is mostly applicable to non-linear loss functions. For linear loss functions, temperature scaling only linearly rescales the gradients. The difference between the proposed PGD++ attack and PGD with the linear DLR loss is small (see the authors' response to AR4). The improvements are most significant for FGSM, but FGSM is not recommended for robustness evaluation. Given the limited technical novelty and the small improvements for linear loss functions, my score remains unchanged. ###### Summary The paper studies the robustness of binary neural networks (BNNs), which at first glance have higher robustness than full-precision neural networks. The authors highlight the problem of poor signal propagation in BNNs, which makes gradient-based attacks difficult. To address this issue, the authors propose 1) a single-scalar rescaling of the Jacobian to improve signal propagation, and 2) a parameter-free Hessian-norm scaling technique. In the experiments, the authors demonstrate that the modified attacks reduce the accuracy of BNNs to zero and outperform existing gradient-based attacks against floating-point networks. ###### Reasons for score I vote for a weak acceptance of this paper. The paper shows that BNNs are not robust and introduces an interesting gradient rescaling technique, which can also be used to attack full-precision networks. The rescaling technique is well explained, easy to apply to any existing attack, and has low computational overhead. However, as I will discuss below, I see some problems with comparing the proposed attack against a well-tuned PGD attack. ###### Pros: 1) The paper studies the robustness of BNNs. The robustness of BNNs is not well studied, and understanding it is an important research direction.
2) The authors highlight the issue of signal propagation in BNNs. To address this issue, they devise a novel low-computational-complexity technique. 3) Experimental results for BNNs and full-precision models demonstrate that the modified attack is effective. ###### Concerns and questions: - In the experiments, the authors use a single parameter for the step size for both PGD and FGSM attacks. On the other hand, the proposed method computes an optimal rescaling to achieve good signal propagation. Even though the proposed technique has a low computational budget, I believe the authors should do a grid search for the optimal step size for the PGD and FGSM attacks for a fair comparison. - In the experiments, the PGD L2 attack was unable to reduce the naturally trained models' accuracy to 0. This seems strange and unlikely, as gradient shattering should not happen for naturally trained models. Can the authors explain these results? Is it possible that there might be an implementation issue? ###### Comments and suggestions: - The proposed technique amplifies the error signal in a nonlinear way for nonlinear losses such as cross-entropy. However, for other losses, such as the multiclass hinge loss or the CW loss, the proposed method will simply linearly rescale the error signal. Attacking with the CW loss might be useful as it avoids the issue of saturated softmax gradients. Will this technique be useful for attacking with the CW loss? - The authors claim that this method improves the efficiency of white-box attacks against full-precision models. Is it possible for the authors to get results for mnist_challenge and cifar10_challenge to see if the method outperforms an optimally tuned PGD attack? <doc-sep>This paper studies the robustness of quantized neural networks against adversarial attacks. The authors use slight modifications of existing methods to successfully increase the attack success rate. In general, I think the idea is interesting. But I have some concerns that need to be addressed: 1. I am not fully convinced by the arguments made at the beginning of Section 4. The authors claim that poor signal propagation, such as gradient vanishing or gradient exploding, should be a problem for adversarial attacks. However, I do not think the reasoning provided here is specific to binarized neural networks. Equations (3) and (4) also work in regular full-precision networks. I do not think there are any problems which are only present in BNNs, so the arguments here are not strong enough. If poor signal propagation is a problem for attacks, why don't we see that in full-precision networks? More discussion on this is welcome. 2. The ResNet and DenseNet REF models in Table 3 seem to be surprisingly robust under PGD L2 attacks (column 3). This adversarial accuracy seems to be comparable to models with adversarial training. I think the authors need to provide some explanation of this. 3. (Minor) Please refrain from only using color to distinguish curves/bars in figures, as it may not be friendly to readers with color blindness. 4. (Minor) The authors may need to re-organize some sections to make the paper easier to follow, for example, by placing the "related works" section before the "experiments" section.<doc-sep>The paper identifies the gradient vanishing issue in the robustness of binary quantized networks. Therefore, it proposes to use a temperature scaling approach in attack generation.
It proposes two methods for choosing the temperature scale: (1) using the singular values of the input-output Jacobian and (2) maximizing the norm of the Hessian of the loss. ------------Updates after rebuttal------------- Thanks to the authors for answering my questions. However, I don't think my comments are well addressed. Even though the paper [d] may not provide publicly available code, the authors could either use results from the paper [d] or implement the proposed attack on the models used by [d] to see the difference. Strengths: + The proposed method works well on adversarially trained models and floating-point models. + A practical approach based on a simple modification of existing gradient-based attacks. Weaknesses: - Binary quantization is not a well-accepted method, since it can in general introduce >5% accuracy loss. There are many more valuable quantization schemes to investigate, such as low-bit-width fixed point, power of 2, and additive power of 2 (Y. Li, X. Dong, and W. Wang, "Additive powers-of-two quantization: An efficient non-uniform discretization for neural networks," in International Conference on Learning Representations, 2020). - The novelty is limited, since it brings the temperature scaling approach, an existing method, to the problem of attacking binary quantized models. - The paper is not written in a way that is easy to understand. Comments and questions: 1. I would like to see comparisons with other attacks that are specifically designed for quantized models. 2. In the third paragraph of the Introduction, the paper tries to justify two techniques, but they are still not well motivated. 3. The fourth paragraph of the Introduction mentions both full-precision networks and floating-point networks. What's the difference between these two? 4. The Table 1 results are surprising. Is the same observation made in other reference works? 5. The method is to replace the softmax with a monotonic function (softmax with a single scalar) during attack generation. Then, for testing the attack success rate, I think the neural network should still use the original softmax (without the scalar). Won't the attack success rate then be degraded?<doc-sep>**Update**: Since most of my issues have been addressed, I have changed my rating from 4 to 6. Summary: This paper studies the robustness of quantized networks against gradient-based adversarial attacks (for L2 and Linf norms), showing how quantized models suffer from gradient vanishing, giving a false sense of security via gradient masking. To circumvent this issue, the authors propose temperature scaling approaches that can overcome this masking, achieving near-perfect success in crafting adversarial inputs for these models. ########################################################################## Reasons for score: The paper's ultimate goal is to get better gradient-based attack performance on quantized (binarized, in this case) networks. However, key steps that should have been tried first for benchmarking, such as adaptive PGD attacks, have not been performed. Moreover, it is not clear what benefit the proposed method has in this scenario compared to gradient-free attacks like Boundary++. The paper's contributions, although including some nice analyses of temperature-scaling-based solutions, are too weak to be accepted in their current form. ########################################################################## Pros: - Improvement in attack success rates for full-precision networks, even for FGSM, seems like an exciting result.
Further analyses and methods on top of this could be used to further increase the strength of these first-order gradient attacks. - The detailed Jacobian- and Hessian-based analyses of temperature scaling, and of what different solutions correspond to in terms of robustness, are quite insightful and interesting. ########################################################################## Cons: - Gradient masking is a relatively well-known phenomenon in adversarial machine learning. In cases where normal first-order gradient attacks fail, techniques like adaptive PGD attacks, gradient-free attacks, or even black-box transfer attacks are some straightforward methods to overcome gradient masking. Thus, it is not clear why the authors did not try non-gradient attacks before jumping to a complicated algorithm. At the very least, those attacks (like Boundary++) should be part of the benchmarks for comparison. - For starters, please refer to 'Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks': they have a [publicly available implementation](https://github.com/fra31/auto-attack) as well. - All of this is crucial, especially since the paper claims (Section 4) that "we would like to point out that the focus of this paper is to improve gradient-based attacks on already trained BNNs". - In general, investigating if a model exhibits gradient masking is not a contribution: standard checks like comparing transfer rates, multi-step to single-step performance, attack rates for increasing attack budgets, etc., are often used to check for gradient masking. - Figure 1: For which norm are these numbers reported? Without knowing the norm, it is hard to say if Figure b is a sign of gradient masking or not. - Section 2.2 "..these attacks have been further strengthened by a random initial step". This is partially true: the real benefit comes from having multiple random restarts. Having just one random initialization by itself is not that useful. Please re-run evaluation experiments with random restarts (20 is a good number). - Section 3: What does "adversarial accuracy" refer to? Is it accuracy on perturbed inputs f(x') = y, or the success rate of the adversary when trying to change predictions, i.e., f(x) ~= f(x')? Please clarify. - Section 3.1 "... clearly indicate gradient masking issues..": please elaborate; not every reader will be familiar with the set of checks used for gradient masking. - Issues with the cross-entropy-based loss and how it promotes certain magnitudes of logit values are not new. The authors might want to have a look at Section 4.1 of [Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks](https://arxiv.org/pdf/2003.01690.pdf) to see if there are similarities/differences with the proposed temperature-based variant, and how the proposed method improves over the Difference-of-Logits-Ratio-based loss. This work seems to be a key and relevant part of related work and should be included in comparisons/benchmarking. - "implementation of our algorithm will be released upon publication": please anonymize and attach the code in the response. - The benefit of using the proposed FGSM++/PGD++ attacks on full-precision models trained with adversarial robustness seems to be negligible (Table 4), and should not be overstated in the results. Also, since these attacks all have random seeds, please perform the experiments multiple times for statistical significance and report summary statistics.
########################################################################## Minor Edits: - Section 2.2 "..perturbations to the images...": the definition here is for adversarial examples in general, and should thus be "perturbations to data". - Section 2.2 "Gradient-based attacks can be... written as Projected Gradient Descent (PGD)": this is true only for first-order gradient-based attacks, not all gradient-based attacks (e.g., JSMA). Please correct. - Section 4.1 "...since most of the modern networks consist of ReLU nonlinearities": this can be (and often is) circumvented using Fake-ReLU. Example implementation [here](https://github.com/MadryLab/robustness/blob/89bdf8088a8f4bd4a8b86925a2801069ec281fee/robustness/tools/custom_modules.py#L5). - Section 5 "...and they hypothesize that linear networks would be robust to adversarial attacks.": this is not their conclusion, and seems to be out of context. - Section 6 should preferably be either towards the end or at the beginning? It is not clear why it is in the middle of other sections. Please address and clarify the cons above.
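To make the β-scaling discussed in these reviews concrete, here is a minimal sketch of an FGSM step with a temperature-scaled cross-entropy loss. It is a toy illustration under my own assumptions (β is passed in directly rather than chosen by the Jacobian- or Hessian-based rules in the paper, and the toy model is arbitrary); it is not the authors' released implementation.
```
# Illustrative sketch (assumptions, not the authors' code): an FGSM step where the
# logits are multiplied by a scalar temperature beta before the cross-entropy loss,
# which rescales the softmax gradients without changing the decision boundary.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_with_temperature(model, x, y, eps, beta=1.0):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(beta * model(x_adv), y)   # temperature-scaled logits
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()

# toy usage with an arbitrary small model
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_with_temperature(model, x, y, eps=0.1, beta=10.0)
```
Note that multiplying the logits by β changes the softmax term in the cross-entropy gradient nonlinearly, so even the single-step FGSM direction can change; for a linear loss such as the CW margin loss, the same scaling would only rescale the gradient, which matches the rebuttal discussion above.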
The paper studies the robustness of binary neural networks (BNNs), showing how quantized models suffer from gradient vanishing. To solve this issue, the authors propose temperature scaling approaches that can overcome this gradient masking, achieving near-perfect success in crafting adversarial inputs for these models. The problem is interesting and important. However, the major concerns, raised by two reviewers, are that the technical novelty is limited and that the improvements for linear loss functions are small. The most closely related work is not compared in the experiments.
The authors tackle the issue of out-of-distribution detection for deep learning classifiers by proposing two regularization terms that can be added to a loss function to fine-tune a pretrained network, in order to ensure a calibrated probability output P(X) that can be used to detect OOD samples. Existing approaches rely on either using a function of the logits P(Y|X) or directly estimating P(X) based on high-level features of the deep learning classifier. However, the logits of a deep network can be overconfident (mis-calibrated) [1] and are often not linearly correlated with P(X) for OOD samples [2]. Furthermore, density estimation on a biased low-dimensional projection of X might not produce an unbiased estimate of P(X). Instead, the authors propose to compute P(X) based on the softmax logits (P(Y|X)) after first (re)-calibrating the joint distribution P(X,Y) (g(x,y) in Eq. 3).
Towards this goal, they derive a density-consistency regularization (DCR) term based on the asymptotic testing of the consistency of P(Y) (based on logits) and its empirical density function, in a batch-wise fashion. This regularization term effectively (although asymptotically) re-calibrates v(Y) and thus, according to the authors' derivations, re-calibrates the joint distribution h(X,Y). Since a discriminative model already optimizes for the accuracy of P(Y|X), calibrating P(X,Y) = P(Y|X) P(X) should ensure a calibrated P(X) which can be used as a reliable OOD score. As a second contribution, this paper introduces a contrastive distribution regularization (CDR) term that incentivizes a high likelihood ratio between augmented samples (assumed to be distribution-deviating) and in-distribution samples. [1] Matthias Hein, Maksym Andriushchenko, and Julian Bitterwolf. Why relu networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 41–50, 2019. [2] Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. In Advances in Neural Information Processing Systems, pages 21464–21475, 2020. Strengths: * The proposed approach is simple and the derivation is straightforward. * The idea of converting the asymptotic consistency test into a regularization term for fine-tuning seems novel to me. The idea of calibrating P(Y) to ensure the calibration of P(X) also seems new. * The authors present an extensive empirical study in which their approach outperforms most state-of-the-art benchmarks. Weaknesses: * The writing could be clearer. For example, the authors claim to propose a new approach for estimating P(X) when they are actually re-calibrating the logits-based estimation of P(X). "Calibration" was not once mentioned in the paper even though it is equivalent to the assumption of Eq. 3 (that the learned model is faithful to the joint distribution). * The authors emphasize the novelty of the density-consistency regularization term. However, from the empirical study as well as previous literature, it is highly likely that the contrastive-loss term is the main driver of the improved performance. Ablation studies do not disentangle the contribution of this term from that of the density-consistency one (Table 2 is lacking a row for enabling CDR alone). * It is difficult to assess the significance of these contributions due to the lack of a detailed (rather than aggregated) ablation study and the use of smaller networks than commonly deployed for the ImageNet results (which I believe are more interesting than those of CIFAR-10/100). [3] Minderer, Matthias, Josip Djolonga, Rob Romijnders, Frances Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. "Revisiting the calibration of modern neural networks." Advances in Neural Information Processing Systems 34 (2021): 15682-15694. 1. The performance on ImageNet is not as impressive as that on CIFAR-10/100, which brings up the question of whether this approach scales with the number of classes or the size of the dataset. Furthermore, larger networks that are becoming more commonplace for ImageNet-derived tasks might be better calibrated [3]. It would be interesting to verify that this approach can be as effective on these modern architectures. 2.
While the authors claim robustness to hyperparameter choices, the FPR metric does seem to change significantly (on the average benchmark) with different choices of "r" and batch size. It would be interesting to see whether this occurs only on a subset of the benchmarks or is a common theme. <doc-sep>A method is presented to fine-tune pretrained networks by introducing regularization terms such that the marginal density $p(y)$ estimated from logits matches the empirical density $n_y/N$. The claim is that this method allows the log-sum of logit-exponentials, $\\log(\\sum_y \\exp(h_y(x)))$, to be used as a score for OOD detection. Effectiveness on various datasets is examined with good results. The exposition is heavy on mathematical proofs at the cost of conceptual clarity and readability. A lot of emphasis is placed on various proofs, but the intuition and concept do not come across as clearly. For instance, in a paper on OOD detection, what is used as the OOD score should be clearly emphasized. However, here the fact that $\\log(\\sum_y \\exp(h_y(x)))$ is used as the eventual OOD score is mentioned only fleetingly, and is easy to miss on the first reading. Some recommendations for improvement: 1) I feel that the training algorithm should be included in the main manuscript instead of the appendix, since it clarifies how the various regularization terms are calculated and gives a clearer picture of the overall flow. Some of the details of the lemmas and proofs can be moved to the Appendix.
However, a common shortcoming of current OOD detection evaluation protocols is that they use different datasets as OOD, so that some methods can exploit low-level superficial shortcuts to achieve good results (e.g., a simple low/high-resolution image classifier might do particularly well on some OOD datasets). But actually, we hope the model uses semantic differences to distinguish ID/OOD. I hope the proposed method achieves its better results not through such superficial shortcuts. The easiest way to prove this is to show the (ID: CIFAR-10, OOD: CIFAR-100) and (ID: CIFAR-100, OOD: CIFAR-10) cases. - The paper points out the semantically overlapping objects between ImageNet and iNaturalist. In some other works, such as `ViM: Out-Of-Distribution with Virtual-logit Matching`, `https://github.com/hendrycks/natural-adv-examples`, and `Scaling Out-of-Distribution Detection for Real-World Settings`, some cleaner OOD datasets for ImageNet are provided. The authors are encouraged to run experiments on these cleaner OOD data. Alternatively, to support the claim that the proposed method does not do well on Texture / iNaturalist due to label noise, the authors should show that appropriately sampled semantically overlapping objects receive high confidence. - It would be great if the authors provided more visualization or deeper analysis of what DCR and CDR impact (maybe on the feature space) to help readers better understand the method. The authors do not include a discussion of the limitations of the method. We hope the authors can provide more analysis of its weaknesses in the discussion. <doc-sep>The paper proposes to improve OOD detection with the energy score [1] by incorporating a density-based regularizer during training that promotes a better estimate of P(Y). In particular, the regularizer is derived based on the rejection criteria of a hypothesis test to ensure the consistency of the Monte Carlo mean of P(y) (estimated per batch) and the empirical mean in the training set (n_y/N). [1] Liu et al., Energy-based out-of-distribution detection, NIPS 2020. Strengths - The motivation that existing OOD detection scores suffer from poor density estimation is clear, and the task is important. - The proposed method, based on the law of large numbers and the central limit theorem, is simple. Weaknesses - The rationale behind the proposed method is unclear to me. Currently the work promotes the Monte Carlo estimate of P(Y=y) to be closer to n_y/N in the training set. However, during test time, this assumption may not hold: OOD detection is concerned with label shift, and it is unclear why one would want to overfit the density of the label distribution in the training set. - While both Lemma 1 and Lemma 2 hold when N goes to infinity, the batch size used in practice is 256, where the Monte Carlo estimate can have large variance and be unreliable. Therefore, there exist significant gaps between the proposed theoretical insights and the practical implementation. - While y is in a low-dimensional space, obtaining P(Y) requires integrating over x, which is high dimensional, and therefore the effect of dimensionality on the accuracy of the estimate is non-negligible. The paper included discussions on limitations.
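For concreteness, here is a minimal sketch of the two quantities debated in these reviews: the log-sum-exp (energy-style) OOD score and a batch-wise penalty pushing the Monte Carlo estimate of P(y) towards the empirical class frequencies n_y/N. This is my own simplification for illustration; in particular, the squared-error form of the penalty is an assumption and is not the paper's DCR/CDR construction.
```
# Illustrative sketch (my simplification, not the paper's DCR/CDR terms): the
# log-sum-exp of the logits as an OOD score, and a batch-wise consistency penalty
# between the Monte Carlo estimate of P(y) and the empirical class frequencies.
import torch
import torch.nn.functional as F

def ood_score(logits):
    # higher = more in-distribution under an energy-style score
    return torch.logsumexp(logits, dim=-1)

def label_marginal_penalty(logits, class_freqs):
    # class_freqs: empirical frequencies n_y / N over the training set
    p_y_batch = F.softmax(logits, dim=-1).mean(dim=0)   # Monte Carlo estimate of P(y)
    return ((p_y_batch - class_freqs) ** 2).sum()        # squared-error form is assumed

# toy usage
logits = torch.randn(256, 10)
freqs = torch.full((10,), 0.1)
loss_reg = label_marginal_penalty(logits, freqs)
scores = ood_score(logits)
```
The sketch makes the last reviewer's concern visible: the penalty only matches batch-averaged softmax probabilities to training-set frequencies, so its behavior under label shift at test time is exactly the open question raised above.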
This paper develops a method for improving out-of-distribution detection in deep learning based on a novel regularization term. There was significant variance in the review scores, with two reviewers championing the paper for acceptance and two giving borderline scores (7, 7, 5, 4), resulting in an aggregate score just above borderline accept. The reviewers arguing for acceptance found the method novel, the simplicity of the algorithm compelling, and the experiments extensive and convincing. One reviewer was concerned that baseline comparisons provided in the paper seem less strong than reported in other work. Two reviewers questioned the mathematical derivations and some of the underlying assumptions of the paper. That two reviewers are arguing for acceptance is a signal that the paper could be a useful contribution and interesting to the community. Since the experiments seem extensive and seem to demonstrate that the method consistently works well, and given that it is simple to implement, that seems to validate the underlying assumptions, and it could provide a useful baseline. Therefore the recommendation is to accept the paper. Please make sure to address the remaining reviewer concerns in the final manuscript.
########################################################################## Summary: The paper proves that the generalization curve of a linear regression problem can be designed. The paper discusses both the under-parameterized and over-parameterized case and shows that the generalization curve can be designed in either case. The paper presents only theoretical results. ########################################################################## Reason for score: My vote is for accepting the paper. The subject it addresses is of importance and I believe the results that are presented are of sufficient interest. ########################################################################## Pros: 1. The generalization error is an important aspect of ML algorithms. The paper addresses the case of linear regression, one of the simplest ML algorithms. However, showing that the generalization error can be controlled even for a model as simple as this is nonetheless important. 2. The paper is well written; the problem it addresses is clearly discussed, and the development of the proposed method is well detailed. ########################################################################## Cons: 1. I would have liked to have some numerical examples to illustrate the design of the generalization curve for a simple case. 2. In the setting in the paper you draw the new elements either from a normal distribution or from a mixture distribution when you increase the dimension. In a practical setting, where I already have the data, do such hypotheses still hold? ########################################################################## Miscellaneous: 1. Could you please elaborate on the statement that 'the true linear model is \\beta = 0 \\in R^{d}'? It is not clear to me what the purpose of the statement is; do you mean that the model parameters are all zero? 2. There are some typos present, for example 'The quantiry ...' in the paragraph after Lemma 3; these should all be caught by a spell checker. <doc-sep>Previous work has shown peaks in generalization error as the capacity of the model increases (called the double-descent phenomenon). The submitted paper proposes methods for generating data that would arbitrarily change the number and positions of peaks in a generalization-vs-capacity curve for linear regression, where the number of features controls the capacity, and shows that properties of the data can play an important role in this phenomenon. This paper tackles an important problem in a quite active area of research with clear presentation and coherent organization. The existence of such data serves as an impossibility result that shows that relating the double descent phenomenon to the properties of the model and interpolation, without further assumptions on the data, is futile. However, there is a critical discrepancy between the generalization curves studied in this paper and previous work that I describe below and, therefore, I'm leaning towards rejection. I will raise my score if the authors can show that the effects on the number and positions of the peaks hold in the original setting, as I believe this is an important paper otherwise. The generalization error in this paper is normalized by the square of the number of features, and this can have major effects on the shape of the generalization-vs-capacity curve. The number of features is what controls the capacity so, for example, if the regular (unnormalized) error is flat across different capacities, the normalized curve will be a decreasing sequence.
Neither the generalization error in a classical bias-variance curve nor the error that matters to a practitioner is normalized. I skimmed through the double descent paper by Belkin et al., and they also seem to be using the typical generalization error, which is not normalized. The motivation for the normalization in the paper is that the closed-form error, $||(A^\\top)^+x||^2$, sums over d dimensions and so the generalization error has to be normalized by d^2. This does not seem right. $(A^\\top)^+$ itself has factors that sum over d dimensions and are then inverted, so the effect of d will cancel out. Minor remarks: - \\beta and A are not clearly defined in the problem setup. -- Update: The issue with normalization is fixed in the new version and I am increasing my score.<doc-sep>Short summary: The paper claims that the double descent phenomenon arises from 'specific interactions between properties of typical data and the inductive biases of algorithms' and constructs a distribution that generates an arbitrary generalization curve for linear regression, specifically building a multiple-descent generalization curve both in the under- and over-parametrized regimes. The model that is used in the paper is a linear regression model over an increasing (in a revealing manner) set of coordinates or features. The authors construct the distribution that gives the peaks at custom coordinates by making the features independent standard normal when they want the test error to decrease and an (independent) mixture of Gaussians when they want it to increase. ++++++ Main points: While the math to my understanding is clean and the exposition is clear, my main concern is how the authors relate their findings to double descent. This worries me in two related ways. First, from the perspective of the complexity of the model. While adding a dimension to the linear regression adds a parameter, I'm skeptical about how this relates to the complexity of the model as we view complexity in machine learning, and in the research area of double descent in particular. I would be much more convinced if the authors could show a case where adding a feature in the random-features sense, where the features are functions of the whole vector (say, apply a random rotation and then do inverse transform sampling), or adding a neuron in a two-layer network, still allows performance to be decreased/increased arbitrarily (or close to it in some sense). Even doing the same as in https://arxiv.org/pdf/1903.07571.pdf, where they choose a random set of indices of increasing cardinality, would convince me much more. The second, related issue is the distribution of the features. I would not mind if the classifier used the features uniformly, but increasing/decreasing the hardness of the distribution at each coordinate feels very artificial in the following sense: Assume that the first coordinate is the label (or something close to it), but the next coordinates are pure noise. Then both our train and test error will increase when we increase the number of features. In my intuition, this is very far from what is studied and claimed in the double descent literature (for example, in the sense of Belkin's interpolation point or Nakkiran's model complexity, we expect the train error to decrease when model capacity increases). I do believe that the question of whether we can construct an arbitrary generalization curve is very important and that it should be studied and explored more deeply, but I'm not convinced by the set-up in this paper.
I would be willing to change my opinion if the authors address the above points in a satisfactory manner. Minor comments: 1) The related work in the body of the paper is lacking: (i) One notable paper that should be present is Advani & Saxe '17, https://arxiv.org/abs/1710.03667. (ii) While Neyshabur '15 observes the double descent without realizing it and Neal '18 studies the bias-variance tradeoff, Nakkiran '19 (https://arxiv.org/abs/1912.02292) is the first to demonstrate it in a convincing fashion and should be cited as such. 2) I would appreciate an explanation for why the loss is scaled by $1/d^2$; this feels rather arbitrary. <doc-sep>This paper studies the double/multiple descents of the prediction error curve for linear regression in both the under-parameterized and over-parameterized regimes. The strength: while many papers have studied the double descent for the linear regression estimate or the minimum $\\ell_2$ norm solution, this paper shows multiple descents when d = O($\\sqrt{n}$), a setting barely studied by others. Further, while multiple descents have been numerically discovered by other concurrent works, the authors have theoretically proved that such multiple descents exist. The weakness: The major weakness of the paper is the model settings. Specifically, 1) it is unclear why the prediction error is normalized by the number of features, and 2) the bias term is left out of the prediction error due to the true coefficients being zero, so only the variance term is considered. First, for the normalization, the authors claim that this normalization is necessary for comparison. Indeed, the entire set of results hinges on this normalization, i.e., without the normalization, the proof can NOT show the existence of the multiple descents in either the under-parameterized or the over-parameterized regime. The reasons I find this normalization odd are the following: i) The standard linear regression problem does not have such a normalization on the prediction error. It is unclear why we would want to divide a one-dimensional error by the feature size. ii) Other double descent works mainly deliver two messages: a) Given a fixed sample size, which model gives the best estimate of the response? The answer is that a larger model, i.e., adding more features, may help (e.g., Xu's PCR paper). b) Given a fixed feature size, what sample size gives the best estimate of the response? The answer is that using a smaller sample size may help (e.g., Hastie's double descent paper). For both cases, I do not see any reason to normalize the prediction error of the response by the feature size. If this normalization is for the purpose of a model selection penalty, it is unclear why we should encourage a larger model instead of penalizing it. A reasonable quantity for such normalization is the MSE of the coefficients, i.e., $\\|\\hat{\\beta}-\\beta^*\\|^2$. There are many applications where people are more interested in the coefficients than in the response. Maybe the authors should consider this quantity instead of the prediction error. For the second weakness of the model settings, the bias term has been left out of the prediction error because the true coefficients are assumed to be all zero. Because of this setting, all features are just pure noise, irrelevant to the response. Then, we can check that 0 is the best estimate when all features are just pure noise, and it seems that there is no motivation for us to learn anything from the random noise.
If the main purpose of this paper is to deliver the message that using only irrelevant features and adding more of them can help to improve the prediction error, this effect is already known from the double descent papers in the over-parameterized regime. Showing multiple descents does not add much value because it never beats the trivial estimate 0 in this setting. Because of these major weaknesses, I recommend rejection for this paper. But I will possibly change my evaluation if the authors can provide a very convincing explanation of the model settings and motivation. Besides these, another suggestion for the paper is that the proofs of the theorems and the statements of the lemmas take up a lot of space. I think they can be replaced by more detailed discussions of the model settings and of the messages or conclusions from the main theorems. For example, is there any intuition about what kind of multiple descents curve is more favorable? Also, despite the attractive title, I think it is still hard to design the generalization curve without taking the bias term into consideration. Room can be left for the analysis of the bias term. After response: Thanks for addressing the concern about normalization. It appears that other reviewers have a concern about such normalization as well. I suggest the authors remove the results with normalization entirely from the main paper and only keep them in the appendix for anyone interested in such normalization. On the other hand, without normalization, the results have changed for the under-parameterized regime (which makes more sense to me) and the proof looks quite different in the over-parameterized regime as well. I did not have time to check the proof, and I believe it is better to resubmit the paper as new because of the major changes. Finally, I still have concerns about the fact that only the variance is discussed. I suggest the authors state their results in a setting where both bias and variance exist and the features added to the model are related to the response. Otherwise, it sends the odd message that it is good to add pure noise as features. It feels like, although we can design multiple descents in the over-parameterized regime when the noise is large, it is very likely that the 0 estimate achieves the best prediction risk. So there is no point in going into over-parameterization and multiple descents at all. In summary, I have raised the score to 5. I believe it could be 6 or 7 if all issues are addressed, but I am afraid that the paper looks basically new after these changes, and thus I am not sure whether it should still be considered for this conference.
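As a follow-up to the request for numerical examples in the first review above, here is a toy sketch of the kind of unnormalized generalization curve being debated: the test risk of the minimum $\\ell_2$-norm least-squares fit as the number of Gaussian features sweeps past the sample size, with the true model set to zero so that only the variance term is present. This is a generic double-descent illustration under my own simple data model, not the paper's construction of designed multiple descents.
```
# Toy sketch (my own Gaussian data model, not the paper's construction):
# unnormalized test risk of the minimum-norm least-squares solution as the
# number of features d sweeps through the sample size n, showing the usual
# peak near d = n (double descent).
import numpy as np

rng = np.random.default_rng(0)
n, d_max, noise = 50, 150, 0.5
beta_true = np.zeros(d_max)          # as in the paper's setting, the true model is zero
X_full = rng.standard_normal((n, d_max))
y = X_full @ beta_true + noise * rng.standard_normal(n)

X_test = rng.standard_normal((2000, d_max))
y_test = X_test @ beta_true

for d in range(5, d_max + 1, 5):
    beta_hat = np.linalg.pinv(X_full[:, :d]) @ y      # minimum-norm least squares
    risk = np.mean((X_test[:, :d] @ beta_hat - y_test) ** 2)
    print(f"d = {d:3d}   test risk = {risk:.3f}")
```
The printed risks peak near d = n and descend again in the over-parameterized regime, i.e., the usual single peak; the paper's claim is that the data distribution can instead be chosen so that this curve has peaks at prescribed locations.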
While there was some interest in the analysis, the consensus view was that the original treatment was not sufficiently well-motivated, and the revision was too dissimilar from the original submission for it to be evaluated for publication in this year's ICLR.
Summary: The paper proposes a multi-stage directed exploration algorithm, XTX. It first imitates previous high-score trajectories and then switches to an exploration policy with novelty bonuses. Conceptually, XTX is a method that extends Go-Explore, which only acts randomly after reaching the frontier of familiar states. The paper argues that with novelty bonuses, the agent will be encouraged to explore more promising actions. This can be especially helpful when the action space is large, as in text-based games. Empirically, XTX shows strong performance on a large set of text-based games. Pros: The paper is generally well-written and easy to follow. The novelty of XTX is clearly elaborated. The method surpasses existing methods by a large margin on text-based games. The ablation studies show that the individual components introduced by XTX can bring improvements. Cons: One weakness of the paper is that the experiments do not clarify why the novel part of XTX (i.e., exploration with a novelty bonus on the frontier) is helpful over random actions. The paper hypothesizes that novelty bonuses can encourage the agent to select promising actions in large action spaces. However, the ablation study (Figure 2) casts doubt on this hypothesis. XTX brings significant improvements over Go-Explore on Zork1 but not on other games. The difference doesn't seem to be correlated with the size of the action spaces. Questions: I don't fully get why the method is motivated by the problem of large action spaces. How can an agent receive a novelty bonus if it did not enter that novel state by trying random actions? Do the authors assume the generalization of the neural network plays a key role here? Other Suggestions: The authors might want to try other hard-exploration tasks. For example, MiniGrid or maze environments can be tested, if not Atari games like Montezuma's Revenge. Since these are environments where existing exploration methods are developed, we can have a better understanding of how exactly XTX compares to other exploration algorithms, rather than only with existing text-based game agents without directed exploration. Reason for the Score: The write-up and experiments in this paper are of good quality. The method itself is novel and the empirical findings in this paper might be particularly interesting for the audience of text-based RL. I have minor concerns about the authors' claims about why this method works better than existing exploration algorithms, and I'm happy to increase the score if they are addressed. <doc-sep>This paper presents a new exploration algorithm, eXploit-Then-eXplore (XTX), for text-based games which require extensive exploration. The authors propose an algorithm that explicitly disentangles exploitation and exploration strategies within each episode. XTX first learns an exploitation policy that imitates the promising trajectories from past experiences, then uses an exploration policy to discover novel state-action regions. Finally, the authors demonstrate strong results in the Jericho environment. This paper is well motivated and most parts are well written, but the main method section is difficult to follow. The results demonstrate empirical gains in the Jericho environment. However, the baselines consist only of simple algorithms without an exploration strategy. The detailed comments and questions are as follows: 1. In the experiments, the performance is compared with DRRN and MPRC-DQN, which lack an exploration strategy. XTX seems to be an exploration method very similar to Go-Explore.
Moreover, in the paper, Go-Explore and PC-PG are mentioned as the most closely related approaches, but they are excluded from the baseline algorithms. It would be better to report their results as well. 2. (Page 5, section 3.1.2, sampling trajectories) It is hard to follow the explanation. Can it be understood as a kind of weighted behavior cloning? Moreover, I understand the motivation of biased sampling towards high scores, but I don't understand the motivation for the length. I think that a shorter trajectory length is not necessarily better. Can you give an intuitive explanation? 3. In the paper, policy $\\pi_\\text{il}$ is modeled as GPT-2, and policy $\\pi_\\text{inv-dy}$ is modeled as DRRN. Is there any reason why each policy is modeled differently? In particular, since the policy $\\pi_\\text{il}$ is renormalized over the valid action set, is there any reason or advantage to learning the policy with GPT-2? 4. In the experiments, the results demonstrate XTX underperforms DRRN on ENCHANTER. Is there any intuitive explanation for this result? It would be better if a discussion of which characteristics of ENCHANTER make XTX underperform were added. This paper is well motivated, well written overall, and demonstrates state-of-the-art performance in the Jericho environment. However, there are relevant but missing baseline algorithms (Go-Explore, PC-PG) for the main table of experiments. I think the results of these algorithms should also be included in the main table, and I think this can further support the main arguments of the paper. <doc-sep>This paper introduces an agent with a built-in exploration strategy that is aimed at text adventure games, or more generally, environments with large action spaces and sparse rewards. The exploration strategy is constructed from two independent policies: one trained with self-imitation learning on successful trajectories, and one trained on an inverse dynamics intrinsic reward. The agent plays episodes by starting with the exploitation policy for a number of steps that depends on the experience collected up to that point, and then switching to the exploration policy. The paper is well-written, describes the contributions clearly, and places itself in the context of the existing literature on exploration. It includes results on a number of text exploration games from a recent benchmark, where it shows by and large a significant improvement relative to the baselines included. The main contribution is an exploration strategy with an in-episode switch from an exploitation policy to one aimed at exploration. This approach to combining exploration and exploitation is different from much of the existing literature, where typically a single policy that merges two reward signals is used throughout the episode, and often throughout training. Since the switching policy in this paper is the element that looks most hardcoded, and therefore potentially brittle, it would be valuable to investigate a bit more whether a more flexible solution is also possible here. While different, Agent57 (whose predecessor NGU is cited) might offer inspiration here: it also uses multiple policies, and manages the switching with a learned (bandit) mechanism. A significant difference is that there the switching only happens between episodes, but a similar switching mechanism might be considered here within episodes nonetheless. The in-episode switch is there to ensure that exploration happens at the edge of the known region of state space, where it is needed and meaningful.
That is a very sensible thing for the agent to do, but there are other exploration strategies that effectively also do that, such as random network distillation (Burda et al., 2018) and inverse dynamics (Pathak et al., 2017), which the authors use to train their exploration policy. While the exploration region is less explicitly located at the edge of the known state space region in those algorithms than in this paper, the prediction errors that they rely on for intrinsic reward generation are more likely to occur at that edge. One question I have for the authors here is whether the inverse dynamics reward signal itself can be used to indicate when to switch from explore to exploit. In that case, the two-policy solution can be simplified again to a single policy that merges the two behaviours. I did not see this ablation in the paper, but I believe it would be a good thing to include. Conversely, it would be valuable to see the performance of the strategy proposed here on other exploration benchmarks, such as the hard exploration games from the Atari suite (Bellemare et al., 2016). While I appreciate that text adventure games are in some ways different from their video counterparts, since they have a different observation space (language, not pixels) and action set (again language, not moves), they are still both RL environments, and general agents should be able to play both. Furthermore, a game like Montezuma's Revenge has a bottleneck aspect similar to the one that many text-based games have, as well as the need for exploration on the frontier of the known region of state space. All in all it seems that the proposed strategy here could work on a wider range of environments than addressed in the paper. If that is not the case, it is still a valuable contribution, but if it is, it would be good to know. A last comment: the agent proposed in the paper has another unusual feature in that its exploitation policy is trained only by self-imitation. While it is important to find the edge of the explored region of state space, and the self-imitation training regime can help with this, the XTX strategy can also be implemented with an exploitation policy that is trained in a more traditional way, with one of the many RL approaches available. Can the authors comment on why they chose the self-imitation approach instead? The paper is well written, presents a marked improvement over the baselines provided (I'm not sufficiently familiar with the text adventure game literature to be certain those represent state of the art, but I will assume they do unless corrected), and provides an interesting approach to the exploration problem through the two-policy architecture. I recommend acceptance, but I also feel the paper could be strengthened by addressing the questions raised in the main review section. <doc-sep>In this paper, the authors propose eXploit-Then-eXplore (XTX), a training strategy for agents solving human-generated text-based games. XTX consists of two training phases: 1. In the exploitation phase, the agent samples high-quality experiences (in terms of score and trajectory length) from its replay buffer. Using the sampled trajectories, an action generation module is trained. At a certain game step $t$, the action generation module takes the observation $o_t$, as well as the two most recent past actions $a_{t-1}$ and $a_{t-2}$, as input, and generates the new action $a_t$ in a word-by-word auto-regressive manner. This process is referred to as self-imitation by the authors. 2.
In the exploration phase, in addition to the Q-learning loss as used in DRRN, the authors use two auxiliary losses to encourage the model to capture useful representations. First, the inverse dynamics loss $L_{inv}$ optimizes a module that predicts an action $a_t$ given two consecutive observations $o_t$ and $o_{t+1}$, where $o_{t+1}$ results from taking $a_t$ given $o_t$. The second loss $L_{dec}$ is a regularizer that optimizes a module that reconstructs an action $a_t$ from its encoding $f_a(a_t)$. During training, the two phases take control in an (almost) alternating manner; however, there is a coefficient $\\lambda$ that controls the interpolation between the phases. The authors show that it is beneficial not to have the exploitation phase take control exclusively. On a subset of games from the Jericho suite, the authors show their agent outperforms prior work. **Strengths** 1. The disentanglement of exploration and exploitation makes sense. The phase-alternating pipeline is nicely designed. 2. The paper is clearly written; it is relatively easy to understand what the model looks like (although the intuition behind each component isn't too clear). 3. The set of ablation experiments in Section 4.2 is well designed. **Questions and concerns** 1. What's the reason for choosing this subset of 12 games? While the list seems to cover a wide range of game difficulties, why not use the entire Jericho suite? 2. The authors cite the INV-DY agent (Yao et al., 2021) in their Section 3.1.3, and actually, if I understand correctly, the entire Section 3.1.3 is describing Yao et al.'s model, without any new contribution. Why do the authors not compare their agent with INV-DY in the result tables? 3. In Section 3.1, PHASE 1, the authors describe two criteria that switch the agent to the exploration phase. Can the authors elaborate on the second criterion: what does it mean if the number of steps in an episode equals the length of the longest of the $k$ sampled trajectories? An agent that moves back and forth between two locations may produce a very long episode, but this behavior is not necessarily desirable. 4. In Section 3.1.2, Sampling trajectories, the authors describe how they sample trajectories. However, to my understanding, the loss shown in Eqn. 5 is a game-step-wise loss. Do the authors also sample game steps from the sampled trajectories (if so, how?), or do they compute this loss on all game steps within the sampled trajectories? 5. In the paragraph under Eqn. 2, the authors mention that "Note that the action distribution over actions $a$ induced by $\\pi_{inv-dy}$ is conditioned only on the current observation $o$". However, according to Eqn. 6, it is also conditioned on $o'$, which is the next observation, i.e., $o_{t+1}$. **References** 1. Keep CALM and explore: Language models for action generation in text-based games. Shunyu Yao, Rohan Rao, Matthew Hausknecht, and Karthik Narasimhan. EMNLP 2020. 2. Reading and acting while blindfolded: The need for semantics in text game agents. Shunyu Yao, Karthik Narasimhan, and Matthew Hausknecht. NAACL 2021. ---------------------------- Nov 29, 2021: We had a good discussion among reviewers; let me give the authors an update. **1. Increased my score to 6.** This is because the authors have somewhat addressed my comments and I'm relatively satisfied. There are a few concerns remaining, as listed below: a) On modelling novelty: the novel components are only a) the sampling strategy in the exploitation phase and b) the two-phase pipeline.
It was a bit weird to (almost) "copy and paste" a subsection from a prior work into the main body of this submission, which may confuse readers by giving a false message about the contribution. However, if other co-reviewers are fine with it, I'm fine too. b) After a few paper updates, the main results (in Table 1) are only marginally higher than prior work. The authors can add more discussion addressing this in their camera-ready. **2. We recommend the authors remove the Dragon row from the result tables** (or rerun when the Jericho team fixes the bug): As Reviewer PsKh found out, the proposed agent's scores exceed the max score on that game. I happen to know some core Jericho contributors, and we tested the [Dragon game](https://ifdb.org/viewgame?id=sjiyffz8n5patu8l). Usually, when reaching the goal, this will pop up:
```
Dragon's Treasure Store
The Dragon's secret hoard is open before you. By the flickering light of your little candle, you can make out a heaps of treasure stacked untidily around the floor. You can see piles of gold and heaps of jewels, many rising higher than the top of your head. The Dragon has told you it has no use for the treasure and it is now yours. You are rich beyond your wildest dreams!

*** You have won ***

In that game you scored 25 out of a possible 25, in 101 turns.

Would you like to RESTART, RESTORE a saved game, UNDO your last move, give the FULL score for that game or QUIT
```
In that game, the scoring function works like this:
```
1 for buying the box
1 for finding the screwdriver
2 for finding the candle
1 for finding the matches
1 for for opening the castle door
1 for building the hand-glider in the right place
2 for getting the sword/booklet
1 for escaping from the tower using the hang-glider
2 for killing the Troll to get the horn
5 for talking to the Troll to get the horn
2 for killing the dragon
5 for charming the dragon instead of killing him
5 for finding the treasure
= 25 points maximum total (e.g., multiple ways to get the horn)
(minus 2 points for each RESCUE or 'dead' recovery)
```
Given the -2 points for each RESCUE action, an agent can get negative total points. Because the original game did some *short* to *unsigned char* conversion, this caused an underflow (-128 vs. 128). This may be because the author of the Dragon game (in 2003) didn't expect machines to play his game, because most humans will give up playing before reaching this underflow point :) So the weird numbers are not the authors' problem. As I mentioned, they can either remove that row or rerun whenever Jericho fixes it. While I like this paper in general, my main concern is its novelty and contribution. As mentioned in my questions and concerns (Q2) above, the entire Section 3.1.3 is describing prior work (Yao et al., 2021), and the "Learning from trajectories" part of Section 3.1.2 is describing another prior work (Yao et al., 2020). Actually, neither (Yao et al., 2021) nor (Yao et al., 2020) is compared in the result tables. As a consequence, to my understanding, the contribution of this paper is the two-phase pipeline and the sampling strategies in Section 3.1.2. I am not sure if this paper contains enough contributions to publish at ICLR. Please correct me if I understood this wrong.
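To record my understanding of the core pipeline in concrete form, here is a minimal sketch of the in-episode exploit-then-explore rollout: act with the self-imitation (exploitation) policy up to a switch point derived from previously collected trajectory lengths, then hand control to the exploration policy. The switch-point rule, the toy environment, and all names are my own illustrative assumptions, not the authors' code.
```
# Illustrative sketch (my assumptions, not the authors' code) of the two-phase
# rollout: exploit with the self-imitation policy up to a sampled switch point,
# then explore with the intrinsically motivated policy.
import random

class ToyEnv:
    """Trivial stand-in environment so the sketch runs; not a text game."""
    def reset(self):
        self.t = 0
        return "start"
    def step(self, action):
        self.t += 1
        return f"state-{self.t}", float(action == "good"), self.t >= 10

def rollout(env, exploit_policy, explore_policy, past_lengths, max_steps=100):
    # switch point sampled from the lengths of previously promising trajectories
    switch_step = random.choice(past_lengths) if past_lengths else 0
    obs, trajectory, score = env.reset(), [], 0.0
    for t in range(max_steps):
        policy = exploit_policy if t < switch_step else explore_policy
        action = policy(obs)
        obs, reward, done = env.step(action)
        trajectory.append((obs, action, reward))
        score += reward
        if done:
            break
    return trajectory, score

exploit = lambda obs: "good"                          # stands in for the self-imitation policy
explore = lambda obs: random.choice(["good", "bad"])  # stands in for the exploration policy
traj, score = rollout(ToyEnv(), exploit, explore, past_lengths=[4, 6, 5])
```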
I thank the authors for their submission and active participation in the discussions. All reviewers are unanimously leaning towards acceptance of this paper. Reviewers in particular liked that the paper is well-written and easy to follow [186e,TAdH,Exgo], well motivated [TAdH], interesting [PsKh], novel [186e] and provides gains over baselines [186e,TAdH,PsKh] with interesting ablations [186e,Exgo]. I thus recommend accepting the paper and I encourage the authors to further improve their paper based on the reviewer feedback.
The paper presents a novel adaptive second order method: ''OASIS'', for large scale optimization problems (convex and nonconvex). The search direction is obtained by preconditioning the gradient information with a matrix obtained by approximating the Hessian diagonal matrix (via Hutchinson's method with a momentum term). The learning rate is updated adaptively by approximating the Lipschitz smoothness parameter. On the theoretical front, convergence analysis is provided for the adaptive learning rate case, for the convex and strongly convex setups. Similar analysis is also provided for the fixed learning rate (LR) case, for the strongly convex and nonconvex settings. Finally, extensive empirical results are provided which show that the proposed method achieves comparable, and sometimes better, results than other existing methods. ------ Pros ------ The paper has the following strengths. 1) Very well written paper providing a clear motivation for the problem considered. 2) The theoretical results involving the convergence analysis of the method are rigorous, and cover both convex and nonconvex settings. 3) The empirical evaluation is extensive and provides a good indication of how the method performs in practice. ------ Cons ------ The paper has the following weaknesses. 1) There is currently not much discussion on the interpretation of the bounds appearing in the convergence analysis. For instance, how do these results compare with those for existing second-order methods (e.g., AdaHessian)? 2) It would have been helpful to have provided some kind of proof sketch of the theoretical results, or at least an overview of the key steps, which I believe might be common to more than one theorem. At the moment, no such explanation is provided for any of the theoretical results. ------ Further remarks ------ 1) The setup in Fig. 1 is really not clear to me. What is meant by number of samples (x axis of left plot)? Is there an underlying optimization problem considered here such as a quadratic function with matrix A? A more detailed explanation of the experimental setup considered for the figure would be very helpful. 2) In Fig. 2, the parameter $\\lambda$ is not clearly defined; I believe this occurs much later, in the experiments section. 3) In eq. (6), is the matrix $D_k$ formed by just generating a Rademacher $z$ and forming $D_k = z \\circ \\nabla^2 F(w_k) z$? Because in the AdaHessian paper, they also consider a spatial averaging step for better estimation of the Hessian diagonal. Also, should $z$ be $z_k$ as in eq. (8) later? 4) As mentioned on pg 4 in the discussion on literature for adaptive LR, the present paper draws upon ideas from the literature on first order methods for adaptive LR. So couldn't one do the same analysis for AdaHessian for adaptive LR? 5) It wasn't clear to me why AdaHessian (eq. 6 and 7) doesn't approximate the diagonal well, while OASIS (eq. 8 and 9) does a better job. Because we see a (temporal) average in eq. 7 as well, which means AdaHessian should also smooth out the Hessian noise over iterations. Is there any conceptual reason behind this? (A small numerical sketch of this distinction appears further below.) 6) In Section 3.2, shouldn't the distribution from which the sets $\\mathcal{I}_k, \\mathcal{J}_k$ are sampled be specified (e.g., uniformly at random)? Or is it the case that the conditions on the distribution are subsumed by assumptions 4.14-4.16? Also, in assumption 4.16: the sentence ''where the samples $\\mathcal{I}$ are drawn independently'' should be removed since there is a single random variable $\\mathcal{I}$.
7) Since $z_k$ is random, one would imagine that this randomness is accounted for in the convergence analysis, which doesn't seem to be the case. For instance, theorems 4.6 and 4.9 seem to be worst-case bounds. Moreover, the bound in theorem 4.9 depends on $\\hat{D}_k$, which is a random variable. This point needs further clarification. 8) In theorem 4.17, it's better to write $\\eta_k = \\eta$ for consistency of notation. 9) Both theorems 4.17 and 4.18 show convergence to neighborhoods of stationary points, and not to the stationary points themselves. There is a discussion after theorem 4.18 regarding this aspect, but it seems a bit strange that this (e.g., a decaying learning rate) is not accounted for in the analysis to begin with. The paper is written in a very clean manner, and is easy to follow. Sufficient background is provided in the introduction, which gives the reader good context to understand the problem setting and contributions. The preconditioning step is a modification of an existing method, AdaHessian, and the adaptive LR part builds on techniques used for deriving adaptive LR rules for first order methods (Mishchenko and Malitsky 2020). So the novelty aspect is a bit limited in that respect. The theoretical results are outlined rigorously, although it is not clear what the novelty of the theoretical results is compared to those for other second order methods. The empirical evaluation is quite extensive and satisfactory in my view. I am giving it a 6 at the moment since I have other comments (in ''Further remarks'') which I hope can be addressed during the rebuttal phase. ------------- Post rebuttal --------- As mentioned in the comments, I am satisfied with the authors' response to my concerns and I am happy to increase my score to 8. <doc-sep>The paper designs and analyses an algorithm for minimizing a separable function. It provides deterministic as well as stochastic versions, which either fully compute the gradient or sample it. The algorithm estimates the diagonal elements of the Hessian via Hessian-vector products. The algorithm makes use of this information for finding a better search direction and the step length, eliminating the need for a line search. It provides convergence guarantees and a number of experiments on classical ML problems as well as on deep nets. Strengths: - The paper considers a fundamental problem in ML, i.e., minimizing a separable function. - The algorithm and its convergence are proven for many cases, i.e., deterministic, stochastic, convex, non-convex. - Empirical evidence is given that the algorithm outperforms comparable approaches like AdaHessian, etc. - No need to tune a learning rate since the step lengths are determined by the curvature of the function. This can really be a huge advantage in the stochastic setting, i.e., in deep learning. Weakness: - The empirical evidence/experiments are rather limited. Deterministic case: Only two experiments are provided (logistic regression, non-linear least-squares) and only two data sets. Furthermore, a comparison to other minimization methods would be very beneficial in this case, and not only to AdaHessian and AdGD. (Yes, it is stated in the paper that the comparison is made only to diagonal preconditioners, but in general there are many more methods to solve this case, e.g., quasi-Newton methods or trust region Newton-CG methods which are also used for computing the optimum in the provided code.
These methods make use of the same information as the presented method and hence a comparison to these methods would also be useful for a better global picture.) Stochastic case: Again, only a very limited number of experiments is provided here. Not having to tune the learning rate is an enormous plus here, and it would be nice to verify the algorithm's robustness on a number of different problems/nets. The experiments suggest that OASIS would be a viable replacement for SGD, Adam, etc. But for such a bold statement, more experiments are needed. I like the paper, the algorithm, and its versatility. Especially the fact that one does not need to tune a learning rate can be very beneficial. I did not fully read the convergence proofs, though they seem sound. According to theory and experiments, one should always use this algorithm. It would be nice to justify this claim by a more comprehensive study, e.g., more problems, datasets, and other algorithms in the deterministic case and more nets and data sets in the stochastic setting. Only then can one tell whether it is superior to state-of-the-art approaches. If such experiments had been provided in the paper, I would have given a higher score. <doc-sep>This work proposes OASIS, a second-order method which approximates the diagonal of the Hessian matrix and uses the information to rescale the gradient vector. The main difference between OASIS and the existing method AdaHessian (Yao et al., 2020) is in the way they approximate the diagonal of the Hessian: AdaHessian uses a formula similar to Adam, while OASIS uses an exponential average. Moreover, OASIS also incorporates the adaptive stepsize in (Mishchenko & Malitsky, 2020). The authors establish convergence guarantees of OASIS under various settings including convex, strongly convex and nonconvex cases, using various learning rate schedulers such as the adaptive learning rate, fixed learning rate and line search. Empirical results on various machine learning tasks are provided to evaluate OASIS. The paper is nicely written and easy to follow. The topic of how to effectively leverage the Hessian-vector oracle in large scale machine learning tasks is definitely important and interesting. For the main ideas, the authors show in Figure 1 that OASIS approximates the diagonal of the Hessian much more accurately than the Hessian momentum in AdaHessian, which is the main point made in the paper (I have the feeling that the Hessian momentum is not solely for approximation; just as the first-order momentum vector may not be an accurate approximation of the gradient vector but is effective for acceleration). Another point is that OASIS incorporates the adaptive stepsize in (Mishchenko & Malitsky, 2020), which allows it to adapt to the local Lipschitz constant (wrt a weighted Euclidean norm) and thus reduces the tuning effort. However, it seems to me that these ideas are a bit straightforward and not particularly novel. From my perspective, AdaHessian is a ''diagonal-Hessian-variant'' of Adam and OASIS is the corresponding variant of RMSProp. It seems that the adaptive learning rate can also be incorporated into AdaHessian by choosing a different weighted norm. For the theory part, I appreciate the thorough analysis of OASIS under various settings. However, I was hoping for more insightful discussion on these results, such as how the theorems would suggest a better parameter choice. Currently, they are only convergence guarantees, which could be far from the practical performance.
The theorems in Section 4.1 generalize the results in (Mishchenko & Malitsky, 2020) in the deterministic setting while there seems to be no theoretical advantage of such generalization (BTW, is there any bound on the scale of $Q_k$ in Theorem 4.6? It seems that it can be of the order $O(k)$ which kills the convergence). For the empirical results, the authors considered various machine learning tasks and the deviation is also plotted in the figures, which are appreciated. However, the improvement in most of the results seems marginal to me, and thus may not be appealing to practitioners especially since the Hessian-vector oracle is around twice as expensive as the gradient oracle (for neural nets). Moreover, OASIS still requires a learning rate scheduler as shown in the CIFAR results, which makes the statement "Our methodology does not require the tedious task of learning rate tuning" not well-supported. Minor comments: - Equation (7) is not centered. - Typo in the citation "Adaptive gradient descent without descent. In 37th International Conference on Machine Learning (ICLM 2020), 2020" - I think Lemma A.1 is covered by Theorem 2.1.5 in Nesterov's updated book "Nesterov, Y. (2018). Lectures on convex optimization (Vol. 137). Berlin, Germany: Springer International Publishing". I appreciate the authors' efforts on the comprehensive analysis and empirical evaluations of the proposed OASIS. The paper is also very well written. However, both the theoretical and practical results seem incremental to me. The construction in OASIS also seems a bit straightforward. Moreover, OASIS still requires parameter tuning in some of the experiments, and thus is not "fully adaptive".
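To make the contrast between the two diagonal estimates discussed in the reviews above concrete, here is a minimal numpy sketch of a Hutchinson-style estimate of the Hessian diagonal, once smoothed with an OASIS-style exponential moving average and once accumulated AdaHessian-style as a root of averaged squares. The toy quadratic, hyperparameters, and variable names are my own, and the update rules are only my reading of eqs. (6)-(9), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = rng.standard_normal((d, d))
H = A @ A.T + np.eye(d)   # fixed Hessian of a toy quadratic f(w) = 0.5 * w^T H w

def hvp(v):
    # Hessian-vector product; for a real model this would come from autodiff.
    return H @ v

beta2 = 0.99
D_ema = np.zeros(d)   # exponential moving average of Hutchinson estimates
V_sq = np.zeros(d)    # moving average of squared estimates (Adam-like second moment)
for k in range(2000):
    z = rng.choice([-1.0, 1.0], size=d)   # Rademacher probe vector
    est = z * hvp(z)                      # unbiased estimate of diag(H)
    D_ema = beta2 * D_ema + (1 - beta2) * est
    V_sq = beta2 * V_sq + (1 - beta2) * est**2

true_diag = np.diag(H)
print(np.abs(D_ema - true_diag).max())          # small: the EMA averages out the probe noise
print(np.abs(np.sqrt(V_sq) - true_diag).max())  # larger: the root of squares is biased up by the probe variance
```

On this toy problem the plain moving average tracks the true diagonal more closely than the root-of-squares accumulator, which is at least consistent with the reading of Figure 1 discussed above.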
The paper presents a novel approximate second order optimization method for convex and nonconvex optimization problems. The search direction is obtained by preconditioning the gradient information with a diagonal approximation of the Hessian via Hutchinson's method and exponential averaging. The learning rate is updated using an estimate of the smoothness parameter. The merit of the paper has to be evaluated from the theoretical and empirical point of view. From the internal discussion, the reviewers agreed that the new algorithm is a mix of known methods, mainly present in AdaHessian, with a small tweak on the exponential average. Moreover, the theoretical guarantees do not seem to capture the empirical performance of the algorithm, nor do they provide any hint on how to set the algorithm's hyperparameters. For example, in Theorem 4.6 the optimal setting of $\\beta_2$ is 1. That said, the most important theoretical contribution seems to lie in the fact that AdaHessian did not have any formal guarantee. Hence, this paper is the first one to show a formal guarantee for this type of algorithm. From the empirical point of view, the empirical evidence is very limited by today's standards for empirical machine learning papers. The reviewers and I do not actually believe that the proposed algorithm dominates the state-of-the-art optimization algorithms used in machine learning. However, in the internal discussion we agreed that the algorithm still has potential and it should be added to the pool of optimization algorithms people can try. Overall, considering the paper in a holistic way, there seems to be enough novelty and results to be accepted at this conference. That said, I would urge the authors to take into account the reviewers' comments (and I also add some personal ones here). In particular, a frank discussion of the current theoretical analysis and empirical evaluation is needed. Some specific comments: - AdaGrad was proposed by two different groups at COLT 2010, so both papers should be cited. So, please add a citation to: McMahan and Streeter. Adaptive bound optimization for online convex optimization. COLT 2010. - Remark 4.7, second item: Neither Reddi et al. (2019) nor Duchi et al. (2011) *assume* bounded iterates; that must be proved, not assumed. Instead, they explicitly project onto a domain that they assume to be bounded. - The convergence of the gradient to zero does not imply convergence to a critical point. To prove convergence to a critical point you should prove that the iterates converge, which in general is false even for lower-bounded functions. Indeed, consider $f(x)=\\log(1+\\exp(-x))$: the iterates would actually diverge while the gradient still goes to zero.
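The closing counterexample is easy to verify numerically; a quick Python sketch (my own, with an arbitrary step size) shows the gradient of $f(x)=\\log(1+\\exp(-x))$ vanishing while the iterates of plain gradient descent drift off to infinity, since the function has no critical point:

```python
import math

x, eta = 0.0, 1.0
for k in range(1, 1_000_001):
    grad = -1.0 / (1.0 + math.exp(x))   # f'(x) for f(x) = log(1 + exp(-x))
    x -= eta * grad                      # plain gradient descent step
    if k % 250_000 == 0:
        print(f"step {k}: x = {x:.2f}, |grad| = {abs(grad):.2e}")

# x keeps growing (roughly like log(k)) while |grad| -> 0, so a vanishing
# gradient alone does not imply convergence of the iterates to a critical point.
```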
The authors propose to utilize the part-level feature representation for the cross-domain generalization problem in point cloud applications. Given part-level features grouped from point-wise representations, the authors first align them to a learned feature dictionary via cross-attention, and then the aligned features are aggregated with a part-weighted max-pooling strategy. In addition, contrastive learning is conducted at both the shape level and the part level. Empirical results on standard DG benchmark datasets are presented for validation. Strengths: 1. The method is well motivated. The authors find that part-level features present smaller distribution divergence than shape-level features in cross-domain tasks. Therefore, they propose to adopt part-level features in DG tasks. 2. Some interesting components are proposed and well justified. The proposed part-template features implicitly achieve domain alignment by aligning both domains to the learned feature dictionary. The proposed part feature aggregation module outperforms the popular max pooling module. 3. The proposed method achieves state-of-the-art performance on DG benchmarks. Weaknesses: 1. I am wondering about the relationship between the proposed part-feature-based DG method and general point cloud models (e.g., PointNet++) that utilize part/local features. For example, in PointNet++, part-level features are extracted and aggregated hierarchically, which is quite similar to the strategy adopted in this paper. Could you clarify it? 2. Based on the first question, could the proposed module be adopted in general point cloud models? For example, could we replace the last max pooling layer of PointNet++ with the part feature aggregation module proposed in this paper? 3. As for the proposed techniques, the contrastive loss is widely adopted as a learning regularization, and utilizing part-level features is also a common practice. In my opinion, the main contribution is the implicit domain alignment with the learned feature dictionary and the part feature aggregation module. So I suggest that the authors include more related work on the application of dictionary learning in cross-domain problems, such as [1]. [1] Li, Shuai, et al. "Category dictionary guided unsupervised domain adaptation for object detection." Proceedings of the AAAI conference on artificial intelligence. Vol. 35. No. 3. 2021. Not found. <doc-sep>The authors present a new method for generalizing point cloud classification from synthetic to real data. The authors argue that local geometric features are more generalizable than the whole shape. They focus on part-based feature representation, and design a part-based domain generalization network for the point cloud classification task. The key idea is to build a common feature space using a part template, and then align the part-level features of the source and the target domain to this template. The authors demonstrate that the proposed method achieves state-of-the-art performance on the 3DDG benchmark. - I like the idea of aligning local geometric features to solve domain generalization on point clouds. This idea is novel and significant. The technical approach to implement this idea is sound, and the experimental results demonstrate good performance. - I also like the idea of verifying the hypothesis that local geometric features are more generalizable than global features in Fig. 2. However, I would like to point out a few issues here.
(1) It is true that in general reducing the part size leads to better generalization. But where is the limit? At the very least each part can be reduced to a point, but I do not believe that point-based features are the most generalizable. It could be more interesting to identify at how many points per part we would reach the limit of generalization here. (2) 512-part-level and 256-part-level mean 512 and 256 points per part, respectively, I guess. This sounds confusing as I can also think of it as 512 parts and 256 parts. It is better to revise this wording, like 512-points-per-part and 256-points-per-part. - I also value the clarity of the writing, which is very nice and easy to read. - Despite its great value, the paper suffers from the following issues. (1) In terms of technical approach, the contrastive learning part is less well connected to the part-based features for domain generalization. For example, if the authors wish to use contrastive learning, at least a shape-level contrastive loss should be used for the baseline methods as well. Or the comparisons should be separated with a table with no contrastive learning utilized. In Table 1, as I understand, the baselines are without contrastive loss but the PDG is with contrastive loss. Please correct me if I misunderstood. (2) My second concern is that the experiments conducted are somewhat simplistic. I expect deeper analysis and more experimental settings to be done. Please see my comments in the question section. I think the related work section in this paper is quite short and needs some revisions. First, while not exactly the same, I found that in the literature there are some 3D tasks that link different domains together, such as scan-to-CAD object retrieval. I think this is worth some further discussion of the connections of these specific tasks with the domain generalization problem presented in this paper. [A] SHREC'17: RGB-D to CAD retrieval with ObjectNN dataset, 2017, 2018. <doc-sep>The presented method detects “global” features (PointNet or DGCNN) locally on sampled points. Then it learns relations between those local representations as part-level aggregation. The performance is further improved by contrastive learning. The authors evaluate the approach on several cross-domain datasets, where the method is learned on one domain and tested on another. The target (test) domain is inaccessible during training. Domain adaptation is an important and long-standing problem in 3D point cloud processing. Pros: - Clear motivation; I like the motivation via feature distances between domains (Fig. 2) - While the idea of learning part-based models from local features is old and highly researched, the presented method on 3D point clouds with neural networks focused on domain adaptation seems novel. - The problem of domain adaptation in 3D point cloud processing when the target domain is unavailable during training is a very important and unsolved problem. - Most of the questions I had during reading were answered later on. Cons: - The approach is motivated by many previous works that focus on domain adaptation for 3D point clouds that are not cited, though the approach is novel in using these ideas in neural networks. - The authors did not evaluate on some dataset pairs (training-testing) that would allow a much broader comparison. That raises the question of why. I believe it would also be good to report why the presented numbers for other methods differ from the original papers. It looks like there is no potential for negative societal impact of the work.
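To give a concrete picture of the part-level alignment and aggregation that the reviews above discuss, here is a rough numpy sketch of cross-attending grouped part features to a learned template dictionary and then pooling them with part weights. The shapes, the scoring head, and the exact weighting scheme are my guesses about the general mechanism, not the paper's actual design or code:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

P, K, d = 16, 64, 128                     # parts per shape, dictionary size, feature dim
parts = rng.standard_normal((P, d))       # grouped part-level features of one shape
dictionary = rng.standard_normal((K, d))  # learned, domain-shared part templates

# Cross-attention: each part queries the template dictionary, so shapes from
# any domain are re-expressed in the same template space.
attn = softmax(parts @ dictionary.T / np.sqrt(d), axis=-1)   # (P, K)
aligned = attn @ dictionary                                  # (P, d)

# Part-weighted pooling instead of a plain max-pool over parts.
part_scores = rng.standard_normal(P)      # stand-in for a learned scoring head
weights = softmax(part_scores)            # (P,)
shape_feature = (weights[:, None] * aligned).max(axis=0)     # (d,)
print(shape_feature.shape)
```

In a real model the dictionary and the scoring head would of course be learned jointly with the backbone rather than drawn at random.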
The paper works on domain generalization for 3D point cloud classification, and proposes a part-based domain generalization network for the purpose, whose key idea is to build a common feature space of part templates and align the part-level features within it. Three reviewers appreciate the contributions, including the clear motivation, the implicit domain alignment by part-template features, and the proposed part feature aggregation module. They also suggest improving the paper with clearer definitions of parts, better organization of contrastive learning in the paper, a more complete citation of closely related works, etc. After discussions between the authors and reviewers, consensus is reached on accepting the paper. Congratulations!
This paper proposes a new model architecture for 3D problems which leverages powerful backbones pretrained on 2D tasks. The idea is straightforward. The input point cloud is projected into 2D pixels using an encoder model, then the 2D pixels are colored by a coloring module, the colored images are fed into a pretrained ViT backbone, and then predictions are made by the task-specific heads. The overall approach provides an elegant solution to leverage the representations of 2D models. The experimental results demonstrate superior performance on public benchmarks including the ModelNet40 and ShapeNetPart datasets. 1. A novel model architecture to utilize pretrained 2D models. To my knowledge, the idea of using projection into 2D and a coloring module is new. 2. The idea is simple yet effective. The pretrained 2D models are easy to get and the results are promising. The proposed approach has some limitations on the tasks it can be applied to. For example, it may not work if we want to conduct a 3D segmentation task. Related to the question above, the comparison in Table 4 is not fair due to the lack of model complexity analysis. <doc-sep>In this paper, the authors introduce point-to-pixel prompting (P2P), a learning framework for leveraging pre-trained image transformers for 3D tasks. The method is mainly motivated by the data scarcity issue in 3D domains. P2P learns a geometry-preserving transformation from point cloud to 2D grid, and then a projection to prepare the 2D grid data to be processed by a pre-trained image transformer expecting image tokens. The main benefit of P2P, to my understanding, is the ability to achieve comparable accuracy to other 3D models with far fewer parameters that need to be trained with 3D data. This is validated on two tasks: 3D object classification and 3D part segmentation. Strengths ----------------- - the question of whether knowledge can be transferred from large pre-trained image models for use with 3D domains is interesting - the point-to-pixel prompt pipeline, which is nicely visualized in Figure 2, appears to be novel and is simple and elegant - this work is a nice demonstration of ideas from NLP transferring successfully to other domains (in this case, to 3D point cloud processing) - the paper is well-written and easy to read Weaknesses ----------------- - $\\text{\\textbf{Unclear motivation, problem, and significance}}$: The current set of claims in the introduction is that A) there is a data starvation problem in the 3D domain (L34-35) and B) pre-training point cloud transformers suffers from an imbalance between the number of trainable parameters and limited training data, leading to insufficient optimization and overfitting (L40-41). However, the data starvation problem seems to only exist for specific object-centric datasets such as ShapeNet. By contrast, consider the large Scannet and Waymo datasets. Moreover, recent advances in 3D rendering (e.g., NeRF) suggest that highly lifelike synthetic 3D data may soon become available. Therefore, scarcity of large datasets does not appear to be a fundamental concern. Moreover, point B) seems plainly false since recent methods like Point-BERT work just as well as P2P on, e.g., ModelNet40. - As a result, it is unclear what the actual problem is that is being addressed here and *why* this prompting method is needed at all. The main benefit of P2P seems to be in the use of fewer model parameters, but it's unclear why this is important. - $\\text{\\textbf{Multiple unsubstantiated claims}}$.
These can be addressed with careful editing. - (L54) “The end-to-end optimization pipeline and the strategy of freezing the pre-trained image model promote the *bidirectional* knowledge flow between points and pixels”. To my understanding, the flow is *unidirectional*; pre-trained image features are being used to learn a better representation for points. - (L271) “Firstly, our P2P outperforms traditional 3D pretraining methods” (on ModelNet40). P2P’s largest model achieves the same performance as Point-BERT. - Similarly, claims of “superiority” of P2P (L64, L370) are clearly not supported by the accuracy results in the experiments. References ----------------- - Dai, Angela, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. "Scannet: Richly-annotated 3d reconstructions of indoor scenes." In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5828-5839. 2017. - Sun, Pei, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo et al. "Scalability in perception for autonomous driving: Waymo open dataset." In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2446-2454. 2020. No. Limitations of the P2P framework should be discussed in the main text (e.g., in the conclusions section). <doc-sep>The paper proposes point-to-pixel prompting to leverage 2D pre-trained models to help 3D point cloud recognition tasks. The main modules include a geometry-preserved projection and a geometry-aware coloring, which fill the gap between 3D point clouds and 2D images. The experiments on ModelNet40 and ScanObjectNN show that P2P achieves comparable performance on classification tasks with only a few trainable parameters. Strengths: 1. The paper is the first to propose a prompt-tuning method to adopt 2D pre-trained parameters in 3D, which is an interesting and novel exploration. 2. With P2P prompting, the model can achieve competitive results on the shape classification task with far fewer trainable parameters. Weaknesses: 1. Although the method leverages extra 2D image knowledge, it does not show clear performance or speed advantages over previous 3D networks on either classification or part segmentation. The parameters that need to be trained are fewer, but the whole model is larger. The 2D prior knowledge is not fully exploited in this method. 2. The design of simply adding the point features in the same pixel seems trivial, and even with the explanations in Lines 190-197, I don't really think it preserves geometry. Also, no further experiments are conducted to analyze these design choices. 3. More results on scene-level point cloud understanding with datasets like ScanNet or S3DIS are expected to illustrate the effectiveness of the prompt-tuning pipeline. The limitation is discussed in Sec 4.3. <doc-sep>In this paper, the authors propose to leverage pretrained image models for point cloud downstream tasks. Specifically, they introduce Point-to-Pixel Prompting to transform a point cloud into a corresponding image via geometry-preserved projection and geometry-aware coloring. Strengths 1) The paper is well written with clear motivation and good organization. 2) Leveraging 2D pretraining for 3D tasks is an interesting topic. 3) Point-to-Pixel Prompting is novel. Weakness 1) My main concern is the experimental results. Apparently, the proposed design does not improve the performance. 2) What are the computation cost and model size for the prompting procedure? See Strengths And Weaknesses
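As a concrete picture of the projection-plus-coloring step questioned above (the "simply adding the point features in the same pixel" design), here is a small numpy sketch of scattering per-point features onto a 2D grid. The orthographic view, grid size, and the averaging of colliding points are my assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C, H, W = 2048, 3, 32, 32

points = rng.uniform(-1.0, 1.0, size=(N, 3))   # normalized point cloud
feats = rng.standard_normal((N, C))            # per-point features to be "colored"

# Orthographic projection onto the xy-plane, quantized to an HxW pixel grid.
u = ((points[:, 0] + 1.0) / 2.0 * (W - 1)).round().astype(int)
v = ((points[:, 1] + 1.0) / 2.0 * (H - 1)).round().astype(int)

image = np.zeros((H, W, C))
counts = np.zeros((H, W, 1))
np.add.at(image, (v, u), feats)      # points landing in the same pixel are summed
np.add.at(counts, (v, u), 1.0)
image = image / np.clip(counts, 1.0, None)   # average instead of a raw sum

print(image.shape)  # (32, 32, 3): a grid that a frozen 2D backbone could tokenize
```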
The paper presents a method of prompt tuning to transfer 2D pre-trained weights to tackle 3D understanding problems. All reviewers are positive about the novelty of the method. Given the large 2D pretrained models involved, Reviewer xwSJ still expects higher performance, which is also a reasonable comment. Other 3D understanding tasks, such as segmentation and detection of outdoor scenes, are strongly encouraged, as they reflect the true needs of industry.
This paper proposes a novel approach to biological sequence optimization that models the task as a linear bandit problem and uses Thompson Sampling to guide a Directed Evolution algorithm towards optimizing the linear fitness function. The approach ("TS-DE") is a variation of DE that is demonstrated in simulation to outperform classical (random/blind) DE, and that achieves a Bayesian regret that is optimal in population size and number of generations. The DE and TS-DE algorithms are structurally the same, using crossover and point mutations to evolve a population of sequence variants over generations. The difference is that TS-DE tracks a posterior of the fitness function between generations, and in each generation draws a fitness function estimate from this posterior, and (roughly speaking) limits the recombined variants and mutation positions to those which would see improvement under that fitness function estimate. The result is more efficient convergence to optimal or near-optimal sequences. It is not surprising on its own that a method which takes advantage of side information in crossover and mutation steps should outperform a completely random mutation approach. It is disappointing that the authors did not compare this approach to at least one baseline that also uses this side-information, such as training and scoring a linear model on proposed sequences to use as a filter between rounds. Nonetheless, a key strength of the paper is its mathematical rigor, and the "main result" of the paper, which proves that TS-DE achieves a Bayesian Regret of $\\tilde{O}(d^2\\sqrt{MT})$. The proof summary in the body of the paper was helpful for intuition, and great care was taken in the appendix to support each step of the proof. The main limitation/weakness of both this main result and the algorithm itself arises from the two necessary assumptions: (3.2), which requires the true fitness function to be a linear function with weights drawn independently from Gaussian distributions, and (3.5), which restricts measured values to having homoskedastic, iid, Gaussian, additive noise. Neither of these assumptions is likely to hold, or even approximately hold, in practice, and there is no discussion of the sensitivity of the method and results to violations of these assumptions. A secondary limitation/weakness is in the applicability of this method and results to biological experiments, for two main reasons: the linear model, and the biological feasibility of intervening to filter between generations. While the authors claim in Appendix 2 that the linear model can be generalized somewhat beyond binary motifs, it is not explained how this is done, and the application seems to still be restricted to linear models. The biological feasibility / utility here may be a bigger issue, and is discussed in the Weaknesses section below. ## Strengths 1. The proposed Algorithm 1, to the reviewer's knowledge, is novel, though straightforward, and sufficient explanation has been provided for others to implement these ideas easily in simulation. 2. The paper's structure and organization were thoughtful and easy to follow, with key contributions highlighted and more technical steps relegated to the appendix. 3. The mathematical rigor is high, with great care taken in the appendix to explain both the intuition and logic behind all mathematical claims. The approach to combining DE and Bandit Theory overall was creative and carefully developed. 4.
The main result, that TS-DE achieves optimal Bayesian regret at least with respect to population size and number of generations, is strong. 5. Simulations show TS-DE outperforms DE handily - though this result is not surprising, it was worthwhile to confirm this through a demonstration. ## Weaknesses 1. Several major assumptions would appear not to hold in practice. Primarily, the assumption that the fitness function is linear in a known, relatively small basis of binary motifs. That aside, there is also the issue of accurately estimating the "known" variance sigma, and the assumption that the noise is uniform (while, in practice, heteroskedasticity is common). The paper does not address the sensitivity of the performance of their approach or its bounds under minor violations of these assumptions, nor does it offer support on real data. 2. An initial concern was that the linear motif assumption in the paper was not easily generalizable to realistic protein engineering settings. In Appendix B the authors explain that, in real world applications, these assumptions were loosened, so that individual base pairs were considered for mutation, and features were scalar-valued (not binary-valued). While more details on these extensions would have been appreciated, it is understandable that they may be saved for the other publication mentioned and/or remain proprietary. 3. One way to alleviate these concerns is to actually show the performance of the algorithm on some simple but more realistic tasks, e.g. this package allows the authors to test their algorithms against some competing methods (https://github.com/samsinai/FLEXS). 4. It is unclear how the mutation rate mu is chosen. Presumably the choice of mu has some impact on the convergence of the algorithm if not the bounds, yet this parameter does not appear in the analysis or experiments. 5. Novelty of the approach - while this exact formulation is novel, there are other recent examples in the literature that combine Thompson Sampling with bandit problems, such as https://arxiv.org/pdf/2205.10113.pdf. The method in this paper seems to be very similar to the paper under review, though it frames itself as a multi-armed bandit approach, and therefore compares itself against multi-armed bandit algorithms, including UCB methods. Moreover, while the Thompson Sampling element of the algorithm is key to proving the main result (the bound on regret), in effect the algorithm (Alg 1) itself comes out as a fairly standard directed evolution approach. 6. The paper claims that the resulting regret bound is optimal in M and T, and this is remarked upon, but not proven or cited. 7. A key argument for the linear bandit model is made in the Remark at the end of Section 2. In short, it says that this problem is NOT a multi-armed bandit problem, primarily because we are not free to choose any action, but are limited by biology to mutation and recombination. This is the reason why this method is not compared against multi-arm bandit methods in simulation, and why its regret is not compared to the regret of multi-arm bandits. At first this seems entirely reasonable. However, Algorithm 1 introduces crossover and mutation steps that involve sequence filtering based on calculating model scores, and intervening at this step. This adds a hugely significant cost to DE in a biological setting.
In classic/random DE, the crossover and mutation steps are random, not because of a lack of side-knowledge that could in theory be used to direct these steps, but because the biological steps of mutation and crossover naturally occur randomly. While it is trivial "in silico" to intervene and filter out undesirable mutants according to side information, doing so in practice would require sequencing after evolutionary rounds to see what was produced, and then somehow separating out desirable and undesirable variants in the population before continuing. Alternatively, the whole mutation process could be done in silico, and then the populations $S_t$ could be constructed by hand from the in silico list of variants, but doing this completely negates the advantage of DE as an inexpensive process AND negates the Remark at the end of Section 2 explaining why the model is limited to linear bandits. If we are going to allow this level of filtering and/or individual construction of variants, we shouldn't have to limit ourselves to variants that could potentially arise from DE mutation and crossover steps. Given the description in Appendix 2 of applying this model in practice, and the use of CRISPR technology in its implementation, it seems likely that the sequences were synthesized from an in silico selection step that could have easily been expanded to include sequences unobtainable through crossover or limited point mutations, again raising the question of whether the assumption that "this is not a multi-armed bandit problem" is *really* a limitation of the biology or just a desirable limitation for the sake of the mathematical results. ## Limitations - The main limitations of this paper, namely its reliance on the strong and unlikely assumptions that the fitness landscape is linear and the noise is iid Gaussian, and the questionable applicability to biological experiments identified in Weakness (7), were not discussed in the paper. Neither of these negates the contributions of the paper, but they should be presented, and if possible, some discussion of the sensitivity of these results to violations of these assumptions should be added. I also think benchmarking against other sequence design work on more conventional challenges would strengthen the paper significantly. There is still a bit of a gap between the theoretical results and applicability that I'm not convinced this work would help close. - It would be helpful if the authors contrasted their work with the algorithms discussed by Sinai and Kelsic (2020) on model-guided sequence design, in particular the discussion in Section 5 (both with better DE modeling and by contrasting with algorithms like CbAS, Brookes et al. 2019). Minor: Some grammatical issues: 1. In the second paragraph "DE, one of the top molecular technology breakthrough in the past century, demonstrate human's ability to engineer proteins at will": breakthrough -> breakthroughs, demonstrate -> demonstrates, human's -> humanity's. 2. The following paragraph was especially confusing, since the topic sentence claimed that DE "remains expensive and time-consuming" but the subsequent sentences all defend the opposite claim that DE is "generally easy" and has been "exponentially improved." 3. There's also a small issue in Figure 1.1 in the recombination example, where a child with a dark gray motif 4 arises from parents without that motif. 4. The comment after Assumption 3.5, that "Our goal is to maximize the Bayesian regret" presumably was a mistake and the authors meant "to minimize" instead.
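For readers less familiar with the setup debated above, here is a rough Python sketch of one TS-DE-style generation (Gaussian posterior update over a linear fitness, a Thompson sample, then selection, crossover, and point mutation). It is a simplified paraphrase of Algorithm 1 under Assumptions 3.2/3.5 with invented parameter values, not the authors' code; in particular, the selection step here is cruder than the mutation/recombination guidance the paper actually proposes:

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, sigma, mu = 20, 32, 0.1, 0.05        # motifs, population size, noise std, mutation rate
theta_true = rng.normal(size=d)            # hidden linear fitness weights (Assumption 3.2)

pop = rng.integers(0, 2, size=(M, d)).astype(float)   # binary motif indicators
mean, cov = np.zeros(d), np.eye(d)                    # Gaussian prior / running posterior over theta

for t in range(10):
    y = pop @ theta_true + sigma * rng.normal(size=M)      # noisy fitness measurements (Assumption 3.5)

    prec_prev = np.linalg.inv(cov)
    prec = prec_prev + pop.T @ pop / sigma**2              # conjugate Gaussian posterior update
    cov = np.linalg.inv(prec)
    cov = (cov + cov.T) / 2                                # keep numerically symmetric
    mean = cov @ (prec_prev @ mean + pop.T @ y / sigma**2)

    theta_ts = rng.multivariate_normal(mean, cov)          # Thompson sample of the fitness weights

    parents = pop[np.argsort(pop @ theta_ts)[M // 2:]]     # keep the better half under the sample
    pa = parents[rng.integers(len(parents), size=M)]
    pb = parents[rng.integers(len(parents), size=M)]
    cross = rng.integers(0, 2, size=(M, d)).astype(bool)
    child = np.where(cross, pa, pb)                        # uniform crossover
    flips = rng.random((M, d)) < mu
    pop = np.where(flips, 1.0 - child, child)              # point mutations

print("mean true fitness after 10 generations:", float((pop @ theta_true).mean()))
```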
<doc-sep>This paper focuses on a specific application, **local** directed evolution. It starts with a set of candidate sequences and proposes an algorithm that performs two operations on the batch of sequences: recombination and common point mutation. The response function is modelled as a linear function of d protein motifs, where each motif can be on or off. The authors analyze the Bayesian regret of their procedure, supported by a comparison to a classical evolutionary strategy. Strengths ---------- - I like the attempt to formalize the experimental pipeline the authors are using and to design an algorithm for it - It's nice that this found a real-world application and improved over classical evolutionary approaches Weakness --------- - The authors do not do justice to the field of directed evolution. There are evolutionary-based methods which operate on a current batch of sequences and amend it as described here - more aligned with local (or evolutionary) approaches. However, there are other approaches which can synthesise specific mutants and operate on the combinatorial space likewise with high-throughput methods. Perhaps an important citation from this line of work is the paper which for the first time uses BO with Gaussian processes and has a regret guarantee, which appeared in PNAS 2013, by Romero, Krause and **Arnold**. This is a ripe field that falls under the directed evolution keyword as well. - The focus on regret minimization is a big minus. I see no reason to focus on regret minimization. A more reasonable benchmark is to report the best variant so far. Thompson sampling often samples very greedy steps in order to achieve low regret, but in practice we do not want this at all; we want to explore given the uncertainty of $\\theta$. We want to be as diverse as possible to find meaningful new information and candidates. - Two sub-methods are introduced: the operators are presented in one order, explained in the reverse order, and appear in the algorithm reversed yet again. - I have significant doubts about the proof of Theorem 5.2 - namely the sudden emergence of a modulus of concentration, which I have no clue about. - The term H2 should be bounded by $d\\sqrt{TM}$, not with the other exponent as the authors propose. - Also, one has to explicitly state that this is for a linear objective supported on the unit hypercube. This proof does not work generally. A reader might be tempted to generalize to linear bandits in general, but this is not true and it is not sufficiently clearly spelled out. However, due to the simplicity of the objective (hypercube + linear) I have no doubts that the algorithm would eventually converge, since there is a random component due to $\\mu$. (One just needs to learn the d components sufficiently fast.) - The function in this case is an additive function of motifs, which is an extremely simplistic assumption for many practical applications where complex multi-mutation epistasis occurs and is of interest to be modelled by data-driven methods. - There isn't any meaningful baseline algorithm, for example an algorithm that would try to estimate the effect of all pointwise mutations, i.e., $S \\to S_1$ (first mutation), $S \\to S_2$ (second mutation), etc. This would probably converge in $O(d\\sqrt{TM})$ steps as well, and I believe even faster. - The paper tries to give a very general introduction to directed evolution; however, in 4 projects involving DE, the goal was never to change motifs of the protein and work in this greedy fashion.
Instead, enzymatic properties were to be improved using a few selected mutations in the vicinity of the active site. I think the formulation the authors have is fine and is probably motivated by their experimental setup, but this is **a form** of directed evolution, not **the directed evolution**. I want to stress that this is an important point. If the goal of this paper is to introduce this problem to the broader ML community, we had better do this carefully without overgeneralizing. I am fine with the setup, but one has to clearly say that this is a specific setup of DE. For example, in my opinion, this paper does not address the major challenges in DE, e.g., epistasis, combinatorial problems, etc. - Lacking theoretical understanding. I am not sure the proof is correct, since there is a magic quantity appearing suddenly. This is my justification for a lower score. I think the theorem might be correct, but the way there is not clearly stated. Also, I see no reason why scaling in $d^2$ is necessary for such a simple objective (linear on a hypercube). - The paper lacks an algorithmically meaningful baseline, which could be executed synthetically. <doc-sep>This paper studies a cross-domain problem, biological sequence optimization cast as a bandit problem, and tries to provide a theoretical understanding of directed evolution under bandit theory with Thompson sampling. # strengths 1. The problem is interesting. 2. The paper gives a theoretical understanding of the bandit problem with directed evolution. # weaknesses 1. The linear setting is always a problem in real-world applications. I am not sure whether the linear bandit assumption is suitable for real applications. 2. I did not go through the proof, but it seems that it may directly follow the standard proof sketch of bandit theory. It would be much better if the authors provided a brief description of the difficulties in adapting the typical TS bandit proof technique. empty <doc-sep>The paper introduces a variant of a linear bandits model for directed evolution (DE). DE is concerned with iteratively optimizing a population of individuals by selecting a subset of promising individuals for mutation and recombination in each step. The utility of individuals is modeled with a linear function parameterized by a parameter theta. Theta is refined via a new variant of Thompson sampling. The difference to classic Thompson sampling is that one cannot directly sample individuals due to the stochasticity of mutation and recombination. The method has sublinear regret bounds, which were confirmed in a simulation experiment. It was successfully applied to the optimization of CRISPR sequences. The paper is very well written. Even with no background on biotechnologies, one can read it in one go. The idea of including mutation and recombination in linear bandits is new, as far as I know. The paper only mentions protein design optimization and gene editing, so I believe that its impact is mostly limited to biotechnology. I like that the paper not only claims that the method has real-world applications, but has already been able to show that the method has been successful in real-world applications. The results in the simulation experiment look promising too; however, the baseline (the basic DE approach) seems to be rather simple. The theoretical claims are well supported. The authors do not address the potential negative societal impact of their work. Their main application -- gene editing -- is an exemplary case for an ethically controversial topic.
I see that the entire debate on this issue cannot be addressed in a single ML paper, but one could have at least pointed out that it is an issue and referred to more detailed discussions of it, in particular because such cases are specifically mentioned in the submission guidelines.
The initial round of reviews for the submitted manuscript was mostly positive in tone, but this enthusiasm was tempered by a number of deep technical issues -- and some more philosophical issues regarding the presentation and framing of the results -- raised by the reviewers. Fortunately, the author rebuttal and author--reviewer discussion phases went a long way toward clearing up some initial confusion and clarifying the contributions of the authors, which swayed the prevailing opinion of the reviewers toward acceptance. I want to commend the authors for their enlightening contributions to that discussion, which assuaged most of the reviewers' initial complaints. However, I would also like to stress that it is critical that the fruits of this discussion (especially with reviewers X52n and fbLu) be incorporated into a revised version of this manuscript. The reviewers are unanimous in this opinion.
The paper proposes ConvMAE, a hybrid convolution-transformer architecture that is friendly to MAE-like pre-training. MAE was originally proposed with ViT, and due to omitted mask tokens in the backbone encoder, MAE is not trivially extensible to convolutional networks. The work extends MAE by resorting to the hybrid design of first using convolutions, and then using transformers. The masking is done block-wise (at the resolution of the transformer), and masked convolutions are used to avoid potential cheating. Extensive experiments are done on ImageNet classification, object detection, semantic segmentation, and video classification. Various ablation analyses are also provided. (+) Self-supervised learning, especially masked auto-encoding for images, is an emerging topic in computer vision. A breakthrough in this direction can bear huge significance. The work aims at fixing the limitation of MAE by introducing a hybrid architecture of convolutions and transformers, which is definitely important and relevant to the NeurIPS audience. (+) The paper is well written, and is clear enough for readers to follow through, with good illustrations. (+) The experiments are extensive and conclusive. The downstream transfers include image classification, object detection, semantic segmentation, and even video understanding is involved (which by itself could be an independent investigation). The ablations and the conclusions also cover most of the things I can think of -- a solid paper, clearly with a lot of hard work behind the scenes. (-) I think the "Conv" part of "ConvMAE" is an over-emphasis. The architecture only has 4 conv layers in the bottom of the network, while it has 11 transformer blocks for the base model (ViT-B has 12 blocks in total). So my current understanding is that ConvMAE has a similar architecture to that in: Xiao, Tete, et al. "Early convolutions help transformers see better." Advances in Neural Information Processing Systems 34 (2021): 30392-30400. This means the majority of the architecture is still transformers, and in this regard, the difference/significance over the original MAE is not that salient. This is the biggest concern about the paper -- it risks overselling with the term "Conv" in it. (-) One minor concern is about the scalability of ConvMAE. The paper is highly focused on the model size of the base model. It is unclear whether the benefit of ConvMAE still holds when the model size further scales up, as shown for the pure ViT-based MAE. (-) Some minor typos need to be fixed with proof-reading: e.g., FOV should be defined on page 2, and the mask ratio should be 75% instead of 25% for MAE, if I recall correctly. I do not see a potential negative societal impact concern. The paper also points this out at the end, which is adequate to me. <doc-sep>This paper proposes a self-supervised framework using a hybrid convolution-transformer architecture, to obtain multi-scale, hierarchical representations. Masked convolution is introduced to prevent information leakage in convolution blocks, and a block-wise masking strategy is applied to improve computational efficiency. The resulting model achieves competitive performance in image classification and dense prediction tasks such as object detection. strengths: 1. This work effectively extends the self-supervised MAE framework to the hierarchical, convolution-transformer hybrid architecture. 2. The resulting model outperforms existing self-supervised models in classification and dense prediction tasks.
adequate <doc-sep>This paper proposes a new self-supervised learning framework by integrating hybrid convolution-transformer architectures and masked convolution into masked auto-encoders. The proposed method can achieve computational efficiency and a low pretraining-finetuning gap at the same time. Extensive experiments on several computer vision tasks demonstrate the effectiveness of the proposed method. __Strengths__ - The paper is well written and easy to follow. Sufficient technical details are provided. - The proposed method is well motivated and simple. Several key components are proposed to address the heavy computational cost and the pretraining-finetuning discrepancy. - The proposed method is flexible and can be applied in both image classification and object detection. __Weaknesses__ - It seems hybrid convolution-transformer architectures have been explored in previous works but show very similar performance to MAE (Lines 45-47). Why can the proposed method make them work for MAE? The differences from previous work and the contribution of the paper remain vague. - Some parts of the method are not clearly illustrated. For example, in “Block-wise Masking with Masked Convolutions”, the authors state that “Uniformly masking stage-1 input tokens would cause all tokens of stage-3 to have partially visible information and requires keeping all stage-3 tokens”. Why can the proposed method address this issue? What is the key idea of the proposed method? - The required training epochs vary across different methods. I wonder whether the proposed method can still outperform others under the same training epochs. __Post Rebuttal__ I thank the authors for their response. Most of my concerns have been addressed. I increased my rating and recommend acceptance for this paper. Yes. <doc-sep>This paper addresses the difficulty of applying MAE training with convolutional layers. The proposed ConvMAE adopts masked convolutions in the early stage of convolutional layers by applying convolution on the masked feature maps. In this way, information leakage is prevented. With the proposed ConvMAE training, the ViT with early convolutional layers can benefit from MAE training and achieves better transfer learning results compared to the standard ViT. It achieves superior performance on ImageNet & MS-COCO. 1. Novel training strategy to enable MAE training for models with convolutional layers. 2. Strong performance on various transfer learning tasks. The comparison with MAE training with standard ViT backbones may be unfair, due to introducing extra computation cost with the convolutional layers. It would be helpful to further break down the improvements. <doc-sep>The paper starts with the hypothesis that a multiscale hybrid convolution-transformer can learn better representations using masked inputs than vanilla ViTs. The original masking scheme proposed in the MAE paper can be computationally prohibitive when directly applied to hybrid models. This paper presents a multiscale block-wise masking strategy with masked convolutions to efficiently train a hybrid transformer-convolutional model for representation learning. The paper shares a broad range of empirical results on classification, detection, segmentation, and video understanding tasks to show the effectiveness of the proposed technique. Originality: The novelty of the paper lies in its proposed multi-scale hybrid convolution-transformer encoder, which can generate hierarchical representations and possibly exceed the performance of vanilla ViTs.
The idea of hybrid models already exists in multiple pieces of literature (CoAtNet, Early Convolutions etc.). Masked convolutions were introduced in the PixelRNN paper (https://arxiv.org/pdf/1601.06759.pdf). The strength of this paper is in its novel combination of existing ideas to produce a very simple hybrid framework that effectively combines the strength of convolutions and transformers. I also like the idea of performing masking at the late stage and then progressively upsampling the mask to larger resolutions to avoid the requirement of keeping all tokens in stage 3. The proposed setup naturally generates hierarchical representations and fits nicely with Feature Pyramid Networks. It is a nice way to generate a feature pyramid with local context via convolutions and global context using transformers. Quality: The paper primarily describes experiments using ViT-B scale networks. It covers a broad set of vision tasks but it does not cover scale. It would be nice to see whether the proposed scheme continues to outperform existing masking techniques for larger models. There is also limited runtime comparison with existing techniques. The paper shares very informative results of ablation experiments comparing random masking, regular convolutions, multi-scale decoders etc. Clarity: The paper is very well written, with a nice flow, and explains the concepts with ease. Nit: line 56 pretraing -> pretraining Significance: The paper proposes a simple and effective hybrid convolution-transformer encoder, which naturally generates hierarchical representations from an image and outperforms a number of existing techniques. Yes, authors adequately addressed the limitations.
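Since several of the reviews above turn on how masked convolution avoids information leakage, here is a small numpy sketch of the general idea: the block-wise mask is defined on the coarse token grid, upsampled so whole blocks are hidden at the fine resolution, and the convolution only ever sees (and writes to) visible positions. The kernel size, stage resolutions, circular padding, and re-masking step are my assumptions about the mechanism, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

coarse = 14                      # stage-3 token grid (e.g. 14x14)
scale = 4                        # stage-1 feature map is 4x finer (56x56)
keep = rng.random((coarse, coarse)) > 0.75     # block-wise mask, roughly 75% masked

# Upsample the coarse mask so whole blocks of the fine map are masked together.
mask = np.kron(keep, np.ones((scale, scale)))  # (56, 56), 1 = visible

x = rng.standard_normal((coarse * scale, coarse * scale))   # one feature channel

def masked_conv3x3(x, mask, w):
    """3x3 convolution (circular padding for brevity) that only sees visible positions."""
    xm = x * mask                               # hide masked content from the kernel
    out = np.zeros_like(x)
    for dy in range(-1, 2):
        for dx in range(-1, 2):
            out += w[dy + 1, dx + 1] * np.roll(np.roll(xm, dy, axis=0), dx, axis=1)
    return out * mask                           # re-mask so nothing leaks back in

w = rng.standard_normal((3, 3))
y = masked_conv3x3(x, mask, w)
print(y.shape, float(np.abs(y[mask == 0]).sum()))   # masked positions stay exactly zero
```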
The reviewers were initially positive about this submission. After the authors' rebuttal, one reviewer pointed out that the name `ConvMAE' is not an appropriate description of the current work. The authors responded by committing to an alternative name, which was acknowledged by the reviewer. Overall, all the reviewers remain positive about this work and the AC stands with the reviewers. The authors shall take the reviewers' suggestions to further polish the work in the camera-ready submission.
Summary ---------- This paper presents an approach to uncertainty modeling in recurrent neural networks through a discrete hidden state. The training of this discrete model is done using a reparameterizable approximation (in particular, using the Gumbel-Softmax trick). The authors show the utility of this method on a variety of problems, including showing effective out-of-distribution detection and improved calibration in classification tasks. Comments ---------- This paper presents a relatively simple idea that builds relatively directly from previous work, and uses the now common Gumbel-Softmax trick to enable differentiability. The main strength of this paper is the thorough experimental evaluation, on a wide variety of problems. The main weakness of this paper is the very unclear presentation of the method. In section 2.1, the authors do not define all quantities, the mathematics of the method is interspersed with discussions of the approaches of others, and the writing is unclear. The authors must clarify the presentation of their method, and have this presentation be distinct from the discussion of previous work. Overall, the experimental results seem compelling and interesting. The authors should clarify their discussion of the partially observed RL task. In the partially observed task, is the agent only provided lagged measurements of the state? The presentation is quite confusing and the authors should state what this task is as clearly as possible. Post-Rebuttal ---------- I thank the authors for their response. Both of the sections are now more clear, although the authors should make an effort to polish the narrative of the paper and the clarity of exposition throughout. The discussion of epistemic versus aleatoric uncertainty in the appendix is also interesting. I have increased my score from 6 to 7. <doc-sep>The paper proposes a novel approach for uncertainty estimation with RNNs. More precisely, the task is to both fit a model on the data and to learn the uncertainty of the fitted model at the same time. The proposed approach fits a random model, with its randomness adjusted to the level of uncertainty. The probability of the potential outputs on a given input is then estimated by sampling the model (i.e., re-evaluating it multiple times on the same input). This, in turn, can also be used to estimate the uncertainty of the model. One important detail that the paper does not discuss but would be important to understand is how $S_t$ is trained/updated. (Actually, the same question goes for $\\tau$.) In fact, referring to $S_t$ as states is quite confusing; from the formulas it seems that they are used as weights. The authors should discuss these questions in detail. Apart from these issues, the paper is relatively well written and the considered problem is important to various applications. The proposed model also makes sense on the high level (although the missing details make it hard to claim the same in general). Finally, empirical evaluations show the effectiveness of the method, and also that its performance is comparable - and in many cases superior - to a vanilla LSTM, a Bayesian RNN, an RNN with variational dropout, and a deep ensemble of LSTM-based models. REMARKS Section 2.2. Setting $\\varphi$ to be a dot-product does not seem right: as its two arguments are $\\theta_t \\in R^d$ and $S_t \\in S^{d \\times k}$, the dimensions do not match. A simple matrix-vector product does work though.
In fact, Section 2 could be somewhat polished; it is not always easy to understand what is part of the proposed method, and what is explained in relation to other models only. Additionally, it would be helpful to have a brief recap at the end of the section about how the uncertainty estimation is done for the model. In (1), $t_i$ does not seem to be defined. Actually, should it not be $\\{t,i\\}$? Additionally, $\\alpha_i$ two lines below (2) should be $\\alpha_{t,i}$, presumably.<doc-sep>Summary: This paper proposes a method to quantify uncertainty for RNNs. Different from the traditional Bayesian RNN, the proposed method is more efficient. At each time step, based on the current hidden state and memory, it generates a probability distribution over the state transition paths by applying the Gumbel softmax function to the transition probabilities. The next state is computed as the weighted average of the sampled states, and its uncertainty can be quantified by the sample variance. The hyper-parameter $\\tau$ of the Gumbel function is learnt from data to better capture the inherent uncertainty in the data. To demonstrate their method, they perform several experiments. First, they show that their model can capture the stochastics in language better than other methods. Second, they demonstrate their method performs better in classification on benchmark datasets than baseline methods such as the ensemble and BBB methods in terms of both prediction accuracy and efficiency. Third, they evaluated their method for out-of-distribution detection and their experiments again show their method performs better than the baseline methods on benchmark datasets. Finally, they show that when applied to reinforcement learning, their method is better than existing methods in sample complexity. Strengths: The proposed method for uncertainty quantification is efficient, compared with other methods such as Bayesian RNNs. The performance of their method has been evaluated for different tasks on benchmark datasets and shows competitive performance versus the baseline methods. Weaknesses: First, the technical novelty is minor; it is largely based on the existing work on the Gumbel softmax function. More importantly, it is unclear why the Gumbel softmax function, even with the learnt $\\tau$ parameter, can capture the data uncertainty, and better theoretical justification is needed. Second, it is unclear how to compute the aleatoric and epistemic uncertainties separately from their method, as the latter is needed for OOD detection. Third, it is unclear how to quantify the accuracy of the estimated uncertainty and how the improved uncertainty quantification can translate into improved performance in classification/regression. Fifth, the experimental comparisons are only done against baseline methods for each task. The authors should also compare their method to SOTA methods for each task. Finally, they need to do an ablation study on their method to figure out what contributes to their method's improved performance for certain tasks. <doc-sep>This work proposes a novel method to estimate uncertainties in recurrent neural networks. The proposed model explicitly computes a probability distribution over a set of discrete hidden states given the current hidden state in an RNN. Leveraging the Gumbel softmax trick, the proposed method performs MC gradient estimation. A temperature parameter is also learned to control the concentration of the state transition distribution.
To estimate the uncertainty of a given input, the proposed model is run multiple times to draw samples for estimating the mean and variance. Experiments are conducted on a variety of sequential prediction problems, including a reinforcement learning task, demonstrating the effectiveness of the proposed uncertainty estimation method. Pros: Estimating uncertainty of predictions is important for data-driven machine learning models, especially for detecting out-of-distribution data; The proposed method directly quantifies and calibrates uncertainty, and therefore does not use many more parameters (compared to BNNs) and requires less parameter tuning; The paper selects a good range of task domains and strong baseline methods, demonstrating comparable performance. Cons: While the proposed method demonstrates good performance on both modeling stochastic processes and estimating out-of-distribution data, it is unclear whether the method itself can separate epistemic uncertainty from aleatoric uncertainty if both exist; meanwhile, most of the selected baseline methods focus exclusively on estimating the epistemic uncertainty; if possible, it would be desirable to see a comparison of the proposed method with baseline methods that are designed to exclusively model aleatoric uncertainties for RNNs; It is mentioned that a large number of states improves performance in the experiments for predicting OOD data; a plot of the relationship between performance and the number of states used would be useful to understand how sensitive the performance is to the number of states used; If possible, the authors should also discuss the proposed work’s relationship with the sampling-free method of Hwang et al. [1] and how the choice of using a discrete state distribution would outperform a parametric distribution. [1] Hwang, S. J., Mehta, R. R., Kim, H. J., Johnson, S. C., & Singh, V. (2020, August). Sampling-free uncertainty estimation in gated recurrent units with applications to normative modeling in neuroimaging. In Uncertainty in Artificial Intelligence (pp. 809-819). PMLR. ------------------------------------ Update: the major concerns above have been addressed in the appendix of the updated manuscript. I'm moving my initial rating of 6 to 7.
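For reference, a minimal self-contained sketch of the mechanism the reviews describe: a Gumbel-Softmax transition over a set of candidate states with a learnable temperature, followed by sampling-based uncertainty estimation. The variable names and the simple state-mixing step are illustrative assumptions, not the paper's exact model:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

d, k = 8, 5                                    # hidden size, number of discrete states
S = torch.randn(k, d)                          # candidate next states (learnable in the paper)
to_logits = torch.nn.Linear(d, k)              # maps current hidden state to transition logits
log_tau = torch.nn.Parameter(torch.zeros(()))  # temperature is learned: tau = exp(log_tau)

def step(h):
    """One stochastic transition: sample soft one-hot weights, mix candidate states."""
    logits = to_logits(h)
    tau = log_tau.exp()
    alpha = F.gumbel_softmax(logits, tau=tau, hard=False)  # reparameterized sample
    return alpha @ S                                        # weighted average of states

# Uncertainty of the next state: run the stochastic step several times.
h = torch.randn(d)
samples = torch.stack([step(h) for _ in range(100)])
mean, var = samples.mean(0), samples.var(0)
print("predictive variance per dimension:", var)
```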
This paper proposes a method to quantify the uncertainty for RNN, which is an important problem in various applications. It provides results in a variety of domains demonstrating that the proposed method outperforms baselines. However, these experiments would benefit greatly from a comparison with SOTA methods for the specific tasks in addition to the considered baselines (e.g. covariance propagation, prior network, and orthonormal certificates). The paper could also be improved by adding a theoretical justification to explain how the Gumbel softmax function is able to capture the underlying data and model uncertainty.
- **Summary**: This paper investigates the impact of different calibration strategies (pre-combination, post-combination, and its dynamic variant) on the performance of a deep ensemble. It presents both theoretical and empirical evidence showing that well-calibrated ensemble members do not guarantee calibration in the final ensemble. - **Strength**: * A coherent theoretical account for the issue of calibrating deep ensembles. Accompanied by empirical evidence from CIFAR datasets. * Although not stated explicitly, a new calibration approach (dynamic calibration) is introduced, which empirically leads to better performance. - **Weakness** * Novelty may be limited: one central contribution of this paper is to provide a mathematical derivation to confirm the observation made in Rahaman and Thiery (2020) and Wen et al. (2020). Although I appreciate the authors' work on providing a mathematical explanation for recent empirical findings, I'm not sure if the submission in its current form is contributing significant novel theoretical insight beyond the fact that the ensemble prediction is less confident, since the max of the mean probabilities is no greater than the mean of the max probabilities. On the other hand, the empirical investigation is conducted on a single vision task (CIFAR-10/-100). This paper can be made stronger by investigating synthetic situations where the ground truth is known, or extending the experiments to other data modalities (like Guo et al. (2017)). * Organization: Given the place of the new approach (dynamic temperature scaling) in the experiments, it might be worthwhile to devote a paragraph to introducing the procedure in more detail. - **Recommendation**: Based on the reasons stated in the weaknesses, I recommend rejection since neither the theoretical nor the empirical contribution of this paper seems substantive enough for ICLR.<doc-sep>The paper makes an analysis of calibration in ensembles of deep learning models. Through some theoretical developments, the paper supports that a given ensemble cannot be more confident than the average individual members for regions where the ensemble is well calibrated. Empirical results, on CIFAR-100 and three different deep models, report a comparison of ensemble calibration, where calibration is done over all members in order to achieve a calibrated ensemble decision, versus individual calibration of members with no feedback from the ensemble decisions. Results show that individual member calibration does not lead to calibrated ensembles, and as such calibrating directly on the ensemble output is required to obtain a proper evaluation of its uncertainty. Different ensemble calibration approaches are also compared. Pros: - Overall well-written paper. - Straightforward proposal, simple yet meaningful on several aspects for better understanding of the link between calibration and ensembles. - Relies on theory to support some claims, which strengthens the proposal. Cons: - The proposal is somewhat trivial, although I do not have knowledge that it has been investigated in detail elsewhere. Before reading the paper, I expected the results (i.e. calibration of individual members will not lead to calibrated ensemble decisions; calibration at the ensemble level is required); the paper somewhat confirms this in a more explicit manner. - Evaluation on only one dataset (CIFAR-100) in the main paper, with another dataset for the appendix (CIFAR-10). - Results on CIFAR-10 in the appendix are not very compelling.
- It is hard to make sense of the results in Table 1 and similar. Differences are small and difficult to interpret. - The explanations and organization of the paper are hard to follow in some specific parts. Although the paper makes a well-founded analysis of a hot topic of the last few years (i.e., ensembles are a way to evaluate uncertainty on decisions), I found it to have some relatively trivial developments. And the conclusion is intuitive and expected. However, it is the first time I see this point well articulated, and the authors have made a good effort to develop theoretically backed explanations to support this. <doc-sep>Update after the author response: I've read the other reviews, and agree with R2 and R3. I think the paper is useful (emphasizes you need to calibrate the final ensemble, not enough to calibrate members), and has some nice conceptual contributions (explaining that if ensemble accuracy > average member accuracy (which is usually the case), and the ensemble is calibrated even in just a global/weak sense, then the members must be uncalibrated). This could spur more research into conceptually analyzing ensembles, and seems interesting. But I understand the other reviewers' concerns that it's not clear what practical impact this will have, so I'm keeping my score at a 6 (instead of raising to a 7). ######################################################################### Summary: This paper tackles the problem of calibrating an ensemble. They show experimentally that calibrating all members of an ensemble is often not enough to calibrate the combined ensemble, so instead we need to calibrate the final predictions of the ensemble. Additionally, they show that using a different temperature parameter for different regions of outputs can improve calibration. They explain why if the ensemble members are top-label calibrated (even in a very weak sense they call “global” calibration), and the ensemble is calibrated, then the ensemble is less accurate than the average member of the ensemble. ######################################################################### Reasons for score: They make interesting observations about calibration of ensembles that could guide practitioners. For example, that it’s not enough to calibrate the members of the ensemble. They also raise an intriguing connection between calibration of ensemble members and ensemble accuracy; one would not expect a priori that if both are calibrated the ensemble would do worse than the average member. I could see this result being interesting to people who study ensembles as well. There are some weaknesses in writing and execution, but overall this paper is probably worth publishing if edited. ######################################################################### Pros: - I think it’s a nice observation that calibrating the members of an ensemble may not yield a calibrated ensemble. It’s easy to come up with toy examples where this is the case, but it’s interesting that this seems to be the case in practice. - They make an intriguing observation that if the ensemble members are in fact calibrated and the ensemble is calibrated, then the ensemble accuracy is at most the average member accuracy ######################################################################### Cons: - I believe the writing can be substantially simplified. The core ideas are simple and nice, but it takes a lot of effort to get to them, and I believe the authors should put more work into making this understandable.
- Some of the results seem unrealistic and can be omitted. For example, in the start of section 4.1, the first couple of results require that the ensemble member regions and ensemble regions are the same. This seems rather unrealistic. The assumptions in prop 1 seem too strong to me. I’d remove the mentions of regions and I’d instead mention the other results (prop 2, 3, 4) in the main paper, Section 4.1. You could just move the propositions, and give some intuition for why the results are true. Removing regions should also considerably simplify the notation and setup. - I’m not quite sure what you mean in the intro when you say “Eq. (1) doesn’t explicitly reflect the relation between … and the underlying data distribution p(x, y)”. The definition in Equation (1) uses p(x, y). I’m not sure why all the definitions in 3.1 and 3.3 are defined in a way different from the standard ways in the calibration literature, e.g. in Kull et al 2019 or Kumar et al 2019. - Temperature scaling is performed on logits, not on the actual probabilities. From equations 24, 25, and 26 it looks like you might be doing temperature scaling on the probability space (in equations 24, 25 the first argument to f is the probability, not the logit), which looks a bit odd. - Prop 4 should also hold when K = 2 (2 ensemble members), I believe. Happy to provide an example. - Some symbols are undefined. For example, $\\delta(y^{(i)}, \\omega_j)$: I don’t believe $\\delta$ is defined. I think it should be 1 if they are equal and 0 otherwise? ######################################################################### Questions and things to improve: - Please answer the cons above. - Ensembles are particularly useful because they tend to be more calibrated out of domain (Lakshminarayanan et al 2017). It could be useful to see which of these methods (calibrating the members, or the entire ensemble) is better calibrated when we have domain shift (e.g. training data = CIFAR-10, test data = CIFAR-10C, Hendrycks et al 2019). - Having confidence intervals for the calibration errors would be nice (and also using more modern, debiased estimators to estimate the calibration error), e.g. in Kumar et al 2019. ######################################################################### All cites mentioned are already in the paper, except: Benchmarking Neural Network Robustness to Common Corruptions and Perturbations. Dan Hendrycks, Thomas Dietterich. ICLR 2019. <doc-sep>- In general, my opinion is aligned with AnonReviewer1: the theory and the empirical contribution do not feel sufficient. - I also agree with AnonReviewer3 and AnonReviewer4 but feel less excited about the pros and more worried about the cons. At this point, I'm not against the acceptance of the paper, although I'm still staying on the rejection side. I'm increasing my score because we are at least talking about a borderline. -------------------------- Summary: The paper studies calibration of ensembles of DNNs and its relation to the calibration of individual members of ensembles. The work demonstrates that i) members of an ensemble should not be calibrated, but the final ensemble may require calibration (especially if members of an ensemble are calibrated), ii) provides theoretical results to support the statement, and iii) proposes an adaptive calibration scheme (dynamic temperature scaling) that uses different temperatures based on the confidence of a model. Concerns: 1) The main question of the paper "Should ensemble members be calibrated?"
feels trivial, because the community is aware of the simple example that provides an answer. Deep Ensembles [Lakshminarayanan2017] have miscalibrated members (conventional DNNs), but the predictions of the ensemble are, in most cases, calibrated. Thus the answer is "No". 2) The paper is mostly clearly written, but section 4.1 "Theoretical Analysis" is *extremely* hard to follow. Even though I re-read it many times, I'm still not sure if I understood it correctly. The most confusing part is the conclusion "In practice, there is no constraint that the ensemble prediction should be calibrated, thus ensemble prediction calibration is required even for top-label calibrated members.". It seems that none of the listed results were used to produce this statement. 3) Calibration of the ensemble has been proposed in [Ashukha2020, 5 Discussion & Conclusion]. ("The resulting ensemble predictions ..., requiring calibration functions to be optimized for the ensemble prediction, rather than ensemble members.") 4) The two main contributions (4.1 Theoretical Analysis, 4.2 Temperature Annealing for Ensemble Calibration) feel unrelated; they are basically two independent topics packed into one paper. 5) The empirical comparison exploits the calibration score (e.g., ECE). ECE is a biased estimate of true calibration with a different bias for each model, so it is not a valid metric to compare different models (see Vaicenavicius2019). The fact is even mentioned in the current paper ("It should be noted that for finite number of samples ... ") but is still ignored in the empirical study. What I suggest is to use the squared kernel calibration error (SKCE) proposed in [Widmann2019] along with the de facto standard, but biased, ECE. The SKCE is an unbiased estimate of calibration. There might be some pitfalls of this metric that I'm not aware of, but the paper looks solid and convincing. Also, please pay attention to Figure 83 in the arXiv version. Yes, ECE is the standard in the field, but it is the wrong standard that prevents us from meaningful scientific progress, so we should stop using it. 6) The results provided in Table 1 seem to be close values (0.6119 vs 0.6129, etc.), so at least standard deviations need to be reported. Also, there is no mention of several runs per result in the text. The paper touches on nice topics but, overall, it feels like "ok, but not enough". The theory is interesting but it does not give us a lot of insight (maybe this is very subjective). The dynamic temperature scaling is not proven to outperform the baselines. The contributions feel disconnected. The writing quality needs to be improved. Comments: 1) As far as I can tell, the citation "The weights assigned to the probabilities are either optimized using AUC as in (Ashukha et al., 2020) ..." is incorrect, as there is no mention of optimizing weights using AUC in the paper. 2) typo: It should be noted that for A finite number of sampleS [Lakshminarayanan2017] Lakshminarayanan B, Pritzel A, Blundell C. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in neural information processing systems 2017 (pp. 6402-6413). [Ashukha2020] Ashukha A, Lyzhov A, Molchanov D, Vetrov D. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning. ICLR, 2020. [Vaicenavicius2019] Juozas Vaicenavicius, David Widmann, Carl Andersson, Fredrik Lindsten, Jacob Roll, and Thomas B Schon. Evaluating model calibration in classification. AISTATS, 2019. [Widmann2019] Widmann D, Lindsten F, Zachariah D.
Calibration tests in multi-class classification: A unifying framework. In Advances in Neural Information Processing Systems 2019 (pp. 12257-12267). https://arxiv.org/pdf/1910.11385.pdf
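To ground two recurring points in the reviews above (temperature scaling acts on logits rather than probabilities, and the commonly reported ECE is a binned, biased estimate), here is a minimal sketch of post-hoc temperature scaling applied to ensemble-averaged logits, followed by a standard binned ECE computation; the toy data and the recipe are illustrative, not the submission's exact procedure:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(val_logits, val_labels):
    # Temperature scaling: a single scalar T > 0 fit on held-out data, applied to LOGITS.
    res = minimize_scalar(lambda T: nll(val_logits, val_labels, T),
                          bounds=(0.05, 20.0), method="bounded")
    return res.x

def ece(probs, labels, n_bins=15):
    # Standard binned ECE estimator (biased; the bias depends on binning and sample size).
    conf = probs.max(axis=1)
    acc = (probs.argmax(axis=1) == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            err += in_bin.mean() * abs(acc[in_bin].mean() - conf[in_bin].mean())
    return err

# Toy ensemble: average member logits, then calibrate the *ensemble* output.
rng = np.random.default_rng(0)
n, c, members = 2000, 10, 5
labels = rng.integers(0, c, size=n)
member_logits = rng.normal(size=(members, n, c)) + 3.0 * np.eye(c)[labels]
ens_logits = member_logits.mean(axis=0)

T = fit_temperature(ens_logits[:1000], labels[:1000])
probs = softmax(ens_logits[1000:] / T)
print("fitted T:", round(T, 3), " ECE:", round(ece(probs, labels[1000:]), 4))
```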
This paper studies ensemble calibration and the relationship between the calibration of individual ensemble member models and the calibration of the resulting ensemble prediction. The main theoretical result is that individual ensemble members should not be individually calibrated in order to have a well-calibrated ensemble prediction. While other recent work has found this to be the case empirically, this paper substantiates the empirical results with theoretical results. Pros: * Theoretical study of ensemble calibration with meaningful insights Cons: * Contributions limited to a theoretical study of a known observation and dynamic temperature scaling. * Dynamic temperature scaling is not shown to outperform baseline methods. * Limited experimental validation: CIFAR-10/CIFAR-100. The authors engaged in an extensive discussion with reviewers and made changes to their paper, including adding standard deviation results over multiple runs and the SKCE calibration measure. Overall this is solid work and could be accepted to the conference; however, reviewers agree that parts of the work are lacking, in particular: 1. limited experimental evaluation (one type of task, one/two datasets only), and 2. given the known literature, the benefit of the derived theoretical results to practitioners is not clear. The discussions have been unable to resolve this disagreement.
The paper presents an extension to the commonly used Gaussian Process based HPO by incorporating the learning curve dynamics to decide the next HP configuration to be tried out. For this, the authors propose the use of a kernel that encodes the previous HP iterates using a neural network. The method is shown to reach lower regret values for the same computational budget compared to the baselines considered. ### Post rebuttal Given the detailed rebuttals by the authors and the updated baselines, I'm confident about increasing my rating for this paper. It might aid their arguments if the authors were to move some sections of the appendix to the main paper. --------------------------------------------------------------------------------------------------------------------------------------- ## Pros: The paper is quite clear (see the subsequent comments), and easy to follow. The method proposed is simple, and is an intuitive extension to the methods in the literature. The paper is also well placed in the context of previous methods. The experiments section is quite strong in the experiments and baselines covered. ## Cons: 1. The authors start with the motivation that the rank correlation of performance at various budgets is poor. However, this is seemingly contradicted in Fig 7 left, where the exclusion of the learning curve in HPO doesn’t worsen the performance much over all the datasets. The authors show experiments where the inclusion of the LC leads to better results for some datasets. The authors should either provide references for the poor correlation, or show statistics of how intermediate performance is a poor predictor of final rank. 2. On training the deep convolutional kernel: The neural net used is an MLP with 128 and 256 hidden units. This is quite a large network. How do the authors reliably train this network with only a few $(x_i, j, y_{i,j-1})$ tuples? In the initial phases, how reliable are the network's predictions to terminate a run? How are the hyperparameters of this training chosen? The authors should give details, and report ablations on network size, architecture, and training parameters (lr, batch size, etc.). In the absence of these, it is hard to judge the merits of the proposed $\\phi$, as I do not understand how these specifics were arrived at. Also, the authors should report how the additional time of training the deep kernel changes the wall-clock time measurements. 3. Evaluations: The use of Epochs and Steps to describe the x-axis in various plots is a little confusing. Are these the same? Also, it would be ideal if the authors included the true performance (say test accuracy on CIFAR/ImageNet exps), and the true wall-clock times somewhere in the paper, in addition to the regret plots presented. Also, the proposed method’s rank fluctuates quite a bit in the initial steps in Fig 3 and 5; can the authors comment on this? 4. Minor: “Reversing the training update steps” in the Intro para 1 makes it sound like undoing the update steps. The authors might consider rewording Para 3 of Motivation to aid readability, as it took me a few reads to grasp the point. The authors say “gradient descent and Adam” on Page 5 last para, and “gradient ascent and Adam” in A.4. 5. Additional comments: These are general comments the authors might consider discussing. Do the authors find that the trained deep conv kernel is transferable across tasks? This might have interesting implications, if it can be.
The authors write at the end of Section 6 that “the additional use of an explicit learning curve representation might not lead to a strong improvement in every scenario”. While experimental evidence has been provided, can the authors describe what factors determine if incorporating LC dynamics leads to better HPO? The presented work is interesting, barring a few points commented on above. The motivation for this work needs further clarification from the authors. The experimental evidence of the efficacy of the method is strong. However, the paper in its current form misses some important details and ablations. If the authors can address these points adequately in their rebuttal, I’d be quite happy to raise my score. <doc-sep>A gray-box hyperparameter optimization framework has been developed based on a multi-fidelity acquisition function and a surrogate model that incorporates learning curve dynamics. The proposed method was built on top of deep kernel learning [Wilson et al., 2016] and multi-fidelity Bayesian optimization [Kandasamy et al., 2017]. Experimental results on three different settings were provided in optimizing hyperparameters for an MLP, an RNN, and a CNN, respectively. Pros: - Incorporating learning curve dynamics into the surrogate model is well motivated and supported by the ablation study on the NAS-Bench 201 dataset. - Extensive experimental results have been provided in terms of tabular datasets, NLP tasks, and NAS. Cons: - Predicting learning curves is not new for HPO, as it has been well explored by previous works like [1]. While the proposed method tries to involve the budget information for modeling curve dynamics, the technical novelty of this work is still somewhat limited since it seems like a direct combination of [Wilson et al., 2016] and [Kandasamy et al., 2017]. - The multi-fidelity acquisition function is not well supported by the ablation study. What is the comparison result between DyHPO and DyHPO w/o MF? - Some necessary baselines are missing in the current experiment, such as [1] and [Wilson et al., 2016]. [1] Learning Curve Prediction with Bayesian Neural Networks, ICLR’17. Overall, the paper is easy to follow and well-motivated. While some ablated models and baselines are missing, the experimental results are comprehensive and seem to be solid. The main concern of this work is the lack of technical novelty compared with existing works. <doc-sep>This paper presents a new Bayesian Optimization algorithm that integrates a deep kernel over both the hyperparameters x and the fidelity budget j (typically the number of epochs). It also presents a slightly modified version of the expected improvement acquisition function to account for the fidelity budget. The paper shows on several benchmarks that the proposed algorithm, called DyHPO, is highly competitive, performing better than BOHB and DEHB on most of the benchmarks and similarly otherwise. Strengths: - Multi-fidelity is very important to obtain practical HPO algorithms for deep learning. - The proposed deep kernel accounts for the correlation between learning curves to avoid naively stopping trials too early like HB does. - The modification proposed to account for the learning curve is fairly simple. - The experiments are quite convincing, with DyHPO outperforming other HPO algorithms almost systematically. The baselines are good, with Hyperband, BOHB and DEHB being serious multi-fidelity contenders. Weaknesses: - It is very unclear to me how we can guarantee that the algorithm will often resample x to make it continue.
If most dimensions of the search space are real, it seems to me x will most likely be different from previous ones, leading the algorithm to never continue trials. The empirical results clearly show that this is not the case; however, the explanations do not make it clear why it would not be the case. - Only figure 8 reports training time in seconds (I assume including HPO time to suggest new trials as well). With the frequent queries to the algorithm (every epoch), I assume there must be a significant amount of overhead from starting and stopping trials very often. Also, the algorithm itself must be fairly slow compared to Hyperband, since the deep kernel requires training. It would be best to report more results on the running time of DyHPO. - There are no clear experiments with learning curves that are best later on to synthetically show that DyHPO performs well in this case. This would be extremely valuable to support strongly that the reason why DyHPO works so well is that it indeed lets the best trials train even though they do not perform well at the beginning. The algorithm presented in this paper is an appreciable improvement to multi-fidelity variants of Bayesian Optimization, especially because it accounts for the correlation between the learning curves and avoids relying too strongly on low fidelities to make hard decisions on trials to stop or continue training. The experiments are convincing; they are broad and compare against good baselines. The paper lacks important details in my opinion with respect to the optimization of the expected improvement and how it ensures a good fraction of the trials continue training. It also lacks an analysis of the execution time of the algorithm and more explicit experiments showing it can avoid stopping good trials that progress slowly in the first epochs. I consider the work good enough for publication, but it could benefit from some clarifications and additional analysis. <doc-sep>This paper proposes a gray-box optimization method for hyperparameter optimization of deep neural network models. In order to deal with the different budgets available for training NNs in the framework of multi-fidelity optimization, the proposed method uses multi-task Gaussian process modeling that simultaneously measures the similarity between not only the inputs x but also the outputs y trained with different budgets. In particular, the multi-task Gaussian process model is constructed using a deep kernel with a feature extractor instead of an existing kernel function. The performance of the proposed method is evaluated by experiments on three types of neural nets: MLP, RNN, and CNN. MLP and RNN are treated as usual hyperparameter optimization in 7 and 8 dimensions, respectively, while CNN is treated as a rewrite of NAS as hyperparameter optimization. major concerns 1. There are several parts of this paper that are not clearly related to similar studies. e.g. - multi-fidelity BO with deep models - Li+ "Multi-Fidelity Bayesian Optimization via Deep Neural Networks" NeurIPS 2020 - EI for multi-fidelity BO - Picheny+ "Quantile-Based Optimization of Noisy Computer Experiments with Tunable Precision" Technometrics, 55(1):2-13 - Lam+ "Multifidelity Optimization using Statistical Surrogate Modeling for Non-Hierarchical Information Sources" 56th AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference. 2015. 2.
There is a lack of explanation about the architecture of the deep kernel (why this architecture, what makes this architecture effective for multi-fidelity optimization, etc.). It looks to be just a presentation of a kernel architecture that happens to work well. 3. It is not appropriate to plot only the average regret in the experimental results, so the variance should be plotted as well (if it is difficult to plot, it can be reported separately). 4. Although the effectiveness of the multitask kernel is evaluated in an ablation study, modeling using multi-task kernels in multi-fidelity optimization has already become popular and has been evaluated in various studies (e.g., https://arxiv.org/abs/1406.3896, https://arxiv.org/abs/1605.07079, https://arxiv.org/abs/1903.04703). Rather, what we should consider is what parts of the deep kernel structure are effective and why. I cannot support the acceptance of this paper due to insufficient evaluation of the novelty and the effectiveness of the proposed method. <doc-sep>This paper is concerned with multi-fidelity HPO (the authors call this "grey-box", which is a non-standard term). They propose a surrogate model for learning curve data (e.g., metric values at each epoch) based on a deep GP kernel. Different from most previous work on synchronous multi-fidelity HPO, they decide for each running trial when it should be continued. Experiments are presented, where the method is compared to a range of synchronous HPO baselines. The experiments are fairly small-scale and do not use parallel evaluations. A potential strength of this paper is the proposal of a novel surrogate model for learning-curve data which, despite involving a neural network, seems to be operational on just the data observed during a single HPO experiment. There is a lot of prior work proposing learning-curve surrogates (see below); some are cited here, but most are either quite simple (multi-task GP) or require "warmstarting" on data from previous HPO experiments. Having said that, I could not find any mention of this point, and I am really curious about the authors explaining how their "deep GP" model can be trained just on the very limited data observed during a single HPO experiment. For an expensive tuning problem, you probably have 20-40 configurations, most of which do not run for many epochs. And even if some configurations run for many epochs, learning curve data is exceedingly noisy. The details really matter here. With a standard BO surrogate, I just need to refit the GP hyperparameters now and then, which can easily be done even for little data. With DyHPO, you presumably need to update a deep kernel, i.e. re-train a neural network. The paper does not say how this is done in a fully automated fashion. Is training started from scratch, or from the most recent weights? The first is expensive, while the latter is prone to get stuck at the previous solution and ignore the new data. How long do you re-train? Do you re-train after getting each new observation? While using complex (deep neural) surrogate models in BO is an obvious idea, many previous attempts have failed, because complex NN models are just not easy/fast to update as part of sequential decision making, and in any case cannot be fit robustly to very small datasets. This fact has been clearly spelled out, for example in [7] for the closely related problem of bandit optimization.
I'd personally be really surprised if this work was different and solved these difficult issues, but I would be willing to give the benefit of the doubt if a lot more information were provided here on how the authors pulled it off. As it stands, the authors do not even mention there could be issues here. The most obvious weakness of this paper is that very relevant prior work is ignored, namely on asynchronous multi-fidelity. Most prominently, ASHA [1] is well known and implemented in Ray Tune [3] and AutoGluon [4]. The baselines compared against in this paper (Hyperband, BOHB) are all synchronous (and quite dated by now), meaning that many trials need to run to a certain level until another decision is taken. If you force methods to be synchronous, this puts them at a disadvantage. They need to delay decisions until some rung is completely filled, which delays decisions and slows them down. This is explained in the ASHA paper [1]. It is well known that for large-scale multi-fidelity HPO, asynchronous scheduling works much better than synchronous; see for example the comparisons in [2]. Algorithms like ASHA are behind commercial automated tuning services [5]. It is quite astonishing that part of the research community is still considering synchronous methods like Hyperband or BOHB the state of the art. For example, the paper claims that it is a new idea that DyHPO "never discards a configuration". That is precisely what ASHA [1] does as well (known as pause-and-resume scheduling), and what Freeze-Thaw BO suggested long ago. I'd be surprised if a well-configured ASHA method (available in Ray Tune [3]) would not be competitive with or beat the approach suggested here, despite not requiring a complex surrogate. When doing such comparisons, it is important to also take decision time into account, because updating a surrogate model can be very expensive. In DyHPO, this likely means re-running MLP fitting, which is probably really expensive. I also find the motivation as to why DyHPO works better than previous methods unconvincing. The authors claim that rank correlations between early evaluations (at few epochs) and late ones are poor. In my experience, this is just not the case; these correlations are in the majority pretty good, which is exactly why multi-fidelity methods work very well for DNN tuning. Sure, there are examples such as regularization, but the question is whether that matters. The authors should provide numerical evidence for such a claim. Now, even if these correlations are poor, it is not clear to me why DyHPO could do anything about that. Learning curve prediction is just hard, because by far most of the data is from early evaluations, but you are interested in late performance, so you need to extrapolate. Why would some vanilla deep kernel be good at that? The only way to really know about certain anti-correlations that can be exploited is to either fit models to data from past HPO experiments (which DyHPO does not do), or to build the knowledge into the model (which they don't do either). Just because an NN is involved does not mean it will do magic for you. The reason why DyHPO works better than competitors here is that it is asynchronous, while the others are synchronous and so at a disadvantage. Also, the reason why model-based HPO is better than random-search-based methods (like Hyperband) is mostly because the latter cannot exploit: they need to draw new configs always at random. The experiments are pretty underwhelming.
Apart from the most relevant baselines being missing (DyHPO is asynchronous, all competitors are synchronous), the curves are also not very meaningful, because the x-axis is the number of epochs instead of wall-clock time. DyHPO needs to update a complex surrogate model, including retraining a neural network, and the costs for doing that have to be taken into account. All experiments are also sequential; no parallel evaluations are used (this could easily be done by using Ray Tune [3]). Again, this falls far short of the current state of the art in automatic tuning of large neural models (e.g., methods like ASHA or PBT). Finally, there is quite a lot of work on using complex surrogates to model learning curves in the context of HPO, for example [5]. It is not clear why this was not compared against, as code is available. The work of Wistuba and Grabocka is cited, which proposed deep kernel surrogates before (so this paper, against their claim, is not the first to do this in the context of multi-fidelity), and in fact Perrone et al. (2018), cited here, did this even earlier, just not in the context of learning curve data. The paper of Wistuba is quite careful in explaining why a complex surrogate cannot be trained robustly on the data from a single experiment, and proposes an algorithm to warmstart from past data. It is dismissed here (as a competitor) for doing so, but as I said above, I am not sure how DyHPO solves the apparent issue that complex NNs cannot be trained on the small amount of data observed in HPO. [1] ASHA: https://arxiv.org/abs/1810.05934 [2] MOBSTER: https://arxiv.org/abs/2003.10865 [3] Ray Tune: https://docs.ray.io/en/latest/tune/index.html [4] AutoGluon: https://auto.gluon.ai/stable/index.html [5] https://www.determined.ai/ [6] https://openreview.net/forum?id=S11KBYclx [7] https://arxiv.org/abs/1802.09127 The paper may have some merits in suggesting a "deep kernel" surrogate model which, although quite related to previous work, is stated to work even if just fitted on the small amount of data from a single experiment, in an online fashion. However, this has been tried several times before with little success, and details explaining why the current approach should work are missing. The proposed method uses asynchronous scheduling (much like Freeze-Thaw), but is compared against synchronous scheduling baselines, which have a major disadvantage. Comparisons to SotA methods like ASHA (or PBT) are missing; these are not cited. There is also quite a range of prior work on learning curve modeling for HPO, which is not compared against. Open source code for doing a better comparison is publicly available (for example, Ray Tune). Experiments are smallish-scale, mostly on tabulated benchmarks, and again are not close to what is possible today with parallel computation. Compared to missing alternatives like ASHA, the proposed method is fairly complex and quite likely rather non-robust to handle. For example, it requires retraining a neural network model each time a bit of new data is obtained, which is very difficult to do.
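To make the object under discussion concrete, here is a minimal sketch of a deep kernel over (hyperparameter configuration, budget, recent learning-curve values): an MLP feature extractor feeding a standard RBF kernel. This is a simplified illustration of the general idea, not DyHPO's exact architecture or training procedure; the layer sizes and the curve-padding scheme are assumptions:

```python
import torch
import torch.nn as nn

class DeepKernel(nn.Module):
    """RBF kernel computed on learned embeddings of (config, budget, curve history)."""

    def __init__(self, config_dim, curve_len=10, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(config_dim + 1 + curve_len, 64), nn.ReLU(),
            nn.Linear(64, emb_dim),
        )
        self.log_lengthscale = nn.Parameter(torch.zeros(()))

    def embed(self, config, budget, curve):
        # budget is normalized to [0, 1]; curve holds recent validation scores,
        # zero-padded when fewer than `curve_len` epochs have been observed.
        return self.net(torch.cat([config, budget, curve], dim=-1))

    def forward(self, x1, x2):
        z1, z2 = self.embed(*x1), self.embed(*x2)
        d2 = torch.cdist(z1, z2).pow(2)
        return torch.exp(-0.5 * d2 / self.log_lengthscale.exp().pow(2))

# Toy usage: kernel matrix over 6 observed (config, budget, curve) triples.
n, config_dim, curve_len = 6, 4, 10
configs = torch.rand(n, config_dim)
budgets = torch.linspace(0.1, 1.0, n).unsqueeze(-1)
curves = torch.zeros(n, curve_len)
kernel = DeepKernel(config_dim, curve_len)
K = kernel((configs, budgets, curves), (configs, budgets, curves))
print(K.shape)  # torch.Size([6, 6])
# In a full GP surrogate, K (plus observation noise) would enter the marginal likelihood,
# and the kernel/MLP parameters would be refit as new per-epoch observations arrive.
```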
This paper presents a new method for performing Bayesian optimization for hyperparameter tuning that uses learning curve trajectories to reason about how long to train a model for (thus "grey box" optimization) and whether to continue training a model. The reviewers seem to find the paper clear, well-motivated and the presented methodology sensible. However, the reviews were quite mixed and leaning towards reject with 3, 6, 5, 3, 6. A challenge for the authors is that there is already significant related literature on the subject of multi-fidelity optimization and even specific formulations for hyperparameter optimization that reason about learning curves. A common criticism raised by the reviewers is that while there are extensive experiments, they don't seem to be the right choice of experiments to help understand the advantages of this method (e.g. epochs instead of wall-clock on the x-axis, choice of baselines, demonstration that early results are used to forecast later success, etc.). Unfortunately, because there is significant related literature, the bar is raised somewhat in terms of empirical evidence (although theoretical evidence of the performance of this method would also help). It seems clear that some of the reviewers are not convinced by the experiments that were presented. Thus the recommendation is to reject the paper but encourage the authors to submit to a future venue. It looks like the authors have gone a long way to address these concerns in their author responses. Incorporating these new results and the reviewer feedback would go a long way to improving the paper for a future submission.
Summary: The paper proposes two benchmarks for continual language modeling: one evaluating character-level multilingual drift between languages which share similar characters, and a second evaluating word-level drift between English corpora of different domains. The setup is online in the sense of evaluation: they evaluate on the new sentences and then train over them (unlike image datasets), and catastrophic forgetting is hence characterised as having higher error than in the past when there is a switch between the domains/languages. Hence, the loss functions measuring forgetting quantify the height and length of the rise in error. They compare mixture-of-experts baselines gated by different gating methods on this setup. Primary Concerns: 1. There are a few sentences and terms that are hard to understand, and to me they seem imprecise. Examples would be: (1.1) Intro: “human children still manage to acquire multiple languages without being explicitly asked to keep them separated” -- not sure if I buy this, as it is known that if children are exposed to situations where there are many languages, they get confused; sometimes many kids find it hard to learn any of them, and it becomes important to give them a guiding signal. Do you have any reference to support this hypothesis? (1.2) Section 3, second para: “preventing data leakage”: what do you mean by data leakage? (1.3) Section 3, third para: hard to follow, notation isn’t clear. And it seems there is a typo in $S_i = \\sum_j T_i$. (1.4) Section 3, fourth para: “for a model to be resilient to forgetting, it must adapt quickly”: this statement is not correct because if a model adapts quickly to a new distribution, the parameter change would lead to forgetting, and that’s primarily the reason why there are regularization-based approaches for continual learning enforcing models to be in the vicinity of old parameters. Too much adaptivity does not ensure less forgetting. (1.5) Section 3, loss after switch: what do you mean by a switch? How do you know when a switch happens (the task label is not given)? In practice the loss curve is not smooth. How do you identify the switch? Fig 1 (a) is too smooth and does not represent the real loss curve. 2. Regarding experiments, is it not possible to design much simpler methods which work for this problem? If it's known there is expected to be a character/word-sequence distribution shift, I believe it's likely such shifts can be detected easily with traditional n-gram models and style-distinguishing attributes typically used for author identification [1,2]. Why isn't it possible to use a baseline which consists of one expert per domain/language, where the character sequence decides which expert to use, instead of these weaker gating-based methods? Also, English/Czech/German/French seem very distinguishable and share little in common in terms of character sequences [3], hence I am doubtful of the finding that combining these models will improve any single language's performance. 3. Why is it not possible to apply traditional continual learning methods like experience replay to this setting? You simply store intelligently selected past sentences in memory (when, say, the error shoots up) and replay using them. There are many other continual learning approaches that potentially could be applied here. Any particular reason for not using them? [1] Koppel et. al., Computational Methods in Authorship Attribution [2] Sapkota et. al., Not All Character N-grams Are Created Equal: A Study in Authorship Attribution [3] Gerz et.
al., On the Relation between Linguistic Typology and (Limitations of) Multilingual Language Modeling (edited) <doc-sep>Strengths: This paper proposes a new evaluation framework and provides two evaluation datasets. Weakness: - the paper needs a major rewrite to improve fluency and to better state motivation and contribution - the empirical validation is weak. Reasons for accept: The advantages of this paper are: 1) this paper proposes a new evaluation benchmark and dataset to promote related research on online continual learning; 2) the proposed plastic gate allows different distributions to be assigned to different experts, which the experimental results show has some effect. Reasons for reject: The shortcomings of this paper are: 1. This paper is not novel enough and has not contributed enough to continual learning related research; 2. The core motivation of this paper is not clear enough. The abstract mentions that "it is hard to demarcate task boundaries in actual tasks", and then says that a new benchmark, new metrics, and a gating technique are proposed. Stacked statements like this can hardly capture the main problem to be solved. 3. The advantages of the new metrics are not clear, because from the experimental results PPL and PPL@sw have a strong correlation. Therefore, please explain their advantages in detail (including the advantages of this evaluation framework compared with the evaluation frameworks of the related literature, and verify them). 4. The baseline uses an LSTM and does not use a CNN, Transformer, etc., which shows that its generalization is limited. 5. Can you provide the experimental results when λ takes other values, and for other combinations of the number of modules? 6. Because what you are proposing is a continual language modeling evaluation framework, is it possible to evaluate some of the latest online continual learning systems? For example: 1) Lifelong Machine Learning with Deep Streaming Linear Discriminant Analysis 2) Learning a Unified Classifier Incrementally via Rebalancing, or other Task-Free Continual Learning related work. This would be a good way to measure the versatility of your evaluation framework. <doc-sep>############################################################################################# Summary: This paper introduces a dataset and benchmark for language modeling in the online continual learning framework. The key characteristics of this benchmark are: the data are temporally correlated, there are no task identifiers presented to the model (task-free setting), and the evaluation is performed in an online fashion. The benchmark consists of a Multilingual character-level dataset and a Multidomain word-level dataset. The authors introduce several metrics and evaluate using several simple baselines of mixtures-of-experts and products-of-experts. ############################################################################################# Pros: 1. The paper is clear and well-written. 2. The authors provide sufficient details on data collection and modeling. 3. The relevant work section is extensive. 4. The design choices in constructing the dataset are well thought out and make sense given the objective of the paper. In particular, the dataset along with the proposed evaluation metrics captures the three stated objectives of the benchmark. 5. The authors are upfront about materials left out of the main text.
It’s nice when potential questions are anticipated and answered, for example, “why weren’t continual learning SOTA models evaluated?” and “why weren’t transformers considered as baselines?” The authors answer these questions candidly. ############################################################################################# Cons 1. The dataset seems incremental over existing work 2. The introduced evaluation metrics are described intuitively, but are not analyzed empirically or theoretically 3. The necessity/value of the introduced dataset is not adequately justified in relation to existing challenges in the continual learning setting. A component of this is showing where existing models fail (and why this dataset will help improve them). ############################################################################################# Recommendation and explanation I recommend rejection for the previously outlined reasons. ############################################################################################# I also have some questions that I hope the author can help address: 1. What is the key innovation over existing work such as d’Autume et al. who also study language models in the continual learning, task-free setting? 2. What failure of current models does this benchmark address? Note that the answer to this question should also be empirically demonstrated. ############################################################################################# Additional feedback 1. This benchmark could very well be a valuable contribution that fills a hole in the existing body of work, but the paper in its current form does not adequately establish this. The rebuttal should better address how this benchmark fits into existing work by comparing it to existing datasets and more relevant baselines. 2. The paper as a whole is well written, but I question some of the choices in syntax: terms such as “demarcation” and “desideratum” are spirited but may be better replaced by plainer alternatives.<doc-sep>This paper’s main contributions are (i) to propose two new benchmarks for online continual learning in the context of language modelling and (ii) evaluate the performance of a number of composition-of-experts-based models on the new datasets using a number of metrics. The multilingual benchmark, derived from an existing multilingual news corpus, consists of sequences of characters where the language is periodically switched, and the MultiDomain benchmark consists of sequences of English words where the corpus is periodically switched. The comparative performances of the various baselines on the two datasets, as well as an analysis of the mixture weights in one of the models during training, are used to provide insights into the qualitative differences between the datasets. Overall, I am inclined to recommend acceptance for this paper on the margin because it makes a good contribution towards evaluating continual learning models in more real world settings, more specifically in the context of online learning. The datasets proposed are well-suited for purpose for reasons outlined below, and the evaluation using various composition-of-experts models is fairly conducted and followed up with an informative analysis. 
The key downside of the paper is that no standard continual learning baselines are trained on the proposed datasets; I would be inclined to increase my score if results were shown for 1 or 2 algorithms specifically designed for continual learning with neural networks (as discussed in more detail below). Positives: • There is a need to start evaluating continual learning in closer-to-real-life settings; in providing datasets that facilitate evaluation of continual learning models in an online setting without task boundaries, this paper makes a positive contribution in this direction.
 • The datasets are simply composed, but seem well suited for evaluating online continual learning because (i) language data is sequential, (ii) by imposing a truncated exponential (and thus memoryless) distribution on the length of subsequences, it is hard for models to cheat in predicting the next task switch, preserving task-agnosticity, and (iii) in both datasets, the subtasks share latent similarities, creating the possibility for forward/backward transfer between them. (A sketch of the segment-length construction in (ii) is given after this list.)
 • The analysis of the experiments provides interesting insights into the datasets and differences between the baselines. E.g. Figure 1d effectively shows how the weights of one of the Product of Experts models switch after a task change, indicating a degree of specialisation of the modules, and 1e uses the correlations of the mixture weights used for different subtasks to highlight the latent similarity between pairs of subtasks.
 • The paper is clearly written and easy to follow.
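To make point (ii) above concrete, here is a minimal sketch of how such a stream could be generated; the corpus placeholders, mean segment length, and length cap are my own illustrative choices, not the paper's actual construction.

```python
import random

def sample_segment_length(mean_len=2000, max_len=10000, rng=random):
    """Draw a segment length from an exponential truncated at max_len.

    Because the exponential is memoryless, how long the current segment has
    lasted gives the model no information about when the next switch occurs.
    """
    while True:
        length = int(rng.expovariate(1.0 / mean_len))
        if 0 < length <= max_len:
            return length

def build_stream(corpora, total_tokens, rng=random):
    """Interleave several corpora into one temporally correlated stream."""
    stream, task_ids = [], []
    while len(stream) < total_tokens:
        task = rng.randrange(len(corpora))
        seg_len = sample_segment_length(rng=rng)
        start = rng.randrange(max(1, len(corpora[task]) - seg_len))
        stream.extend(corpora[task][start:start + seg_len])
        task_ids.extend([task] * seg_len)
    return stream[:total_tokens], task_ids[:total_tokens]

# Toy usage with three fake "languages" standing in for the real corpora.
corpora = [list("abcdefgh" * 5000), list("ijklmnop" * 5000), list("qrstuvwx" * 5000)]
stream, task_ids = build_stream(corpora, total_tokens=50_000)
```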
 Main Concern • Limited set of baselines. While a range of composition-of-experts baselines are used for evaluation, it would have been much better to also include other methods specifically designed for online continual learning, such as those cited in the paper [1, 2] or, though not strictly online, a replay-based method such as CLEAR, which works in the task-agnostic setting. It is claimed in the paper that including state-of-the-art online continual learning methods would have involved “non-trivial adaptations significantly departing from the original models, which would limit any possible conclusions we could draw” as they are designed for image-based datasets. I don’t fully understand the basis of this claim; perhaps the authors could elaborate - as far as I am aware, for example, [1] is not restricted for use on image-based datasets.
 • Since the subtasks do have discrete boundaries, even though these are not passed to the model during training, it would be possible to evaluate methods that use task boundaries for consolidation on the proposed datasets by either providing knowledge of the boundaries (although this breaks the task-agnosticity) or by using methods that can detect task boundaries - e.g. EWC uses the Forget-Me-Not Process [3].
 • Overall, not evaluating the datasets with any standard continual learning baselines is an important weakness.
 Other comments • The proposed method, plastic gates, which performs best amongst the baselines used when combined with product of experts models, seems simple and effective but I am inclined to question how novel it is, since it just amounts to multi-step online gradient descent on the mixture weights.
 • The metrics used for evaluating continual learning, loss after switch and recovery time after switch, which are one of the main selling points of the paper, are suitable for the datasets provided, but would not be applicable in a setting where either the task boundaries are not known or there are no hard task boundaries to be identified. (A sketch of how these two metrics can be computed is given after this list.)
 • Typo Section 2 Paragraph 2: “MNNIST” -> “MNIST”
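For concreteness, the two switch-based metrics mentioned above could be computed from a per-step loss trace as in the following sketch; the window sizes, tolerance, and toy trace are my own assumptions, not the paper's definitions.

```python
import numpy as np

def loss_after_switch(losses, switch_steps, window=100):
    """Mean per-step loss in a fixed window immediately after each task switch."""
    chunks = [losses[s:s + window] for s in switch_steps if s + window <= len(losses)]
    return float(np.mean(np.concatenate(chunks)))

def recovery_time(losses, switch_steps, baseline_window=100, tol=0.05):
    """Steps needed after each switch until the loss returns to within
    (1 + tol) of the pre-switch baseline; np.inf if it never recovers."""
    times = []
    for s in switch_steps:
        baseline = np.mean(losses[max(0, s - baseline_window):s])
        post = losses[s:]
        recovered = np.where(post <= (1 + tol) * baseline)[0]
        times.append(float(recovered[0]) if len(recovered) else np.inf)
    return times

# Toy trace: the loss spikes at each switch and decays back down.
rng = np.random.default_rng(0)
losses = 1.0 + 0.05 * rng.standard_normal(5000)
switches = [1000, 2500, 4000]
for s in switches:
    losses[s:s + 300] += np.linspace(1.0, 0.0, 300)
print(loss_after_switch(losses, switches), recovery_time(losses, switches))
```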
The initial reviews were mixed for this paper. On one hand, some of the reviewers highlighted that the proposed datasets could be useful to researchers. On the other, reviewers found a few important flaws with the current manuscript, including missing baselines, issues with the proposed tasks, and possibly inaccurate/imprecise statements. Our discussion after the authors' response focused on whether the positive aspects of the current paper outweighed some of its perceived weaknesses. In particular, while some of the initial criticisms from the reviewers were successfully addressed by the authors (including possible imprecisions and, to a certain extent, motivation), all the reviewers remained convinced that standard continual learning baselines could be adapted to this setting. They also conjectured that the absence of these baselines might not allow readers to appreciate the strength of the proposed datasets. In their response, the authors argued that adapting models would require research. The reviewers are under the impression that it would be useful to test baselines more or less "as-is" even if the authors do not think these baselines will be competitive. For example, in the discussion, a reviewer suggested that "an experience replay baseline could [...] have been implemented" where the replay buffer includes the hidden states of an LSTM. It might also be useful to study baselines that do not strictly obey the proposed setting, again to get a better understanding of the proposed tasks (including how difficult they are). Overall, having some of these baselines would be one way to better connect the proposed work to the current continual-learning literature.
The work presents a new Parallel Tempering scheme that adapts a variational reference distribution within a parametric family. The reference's parameters are tuned to minimize the forward KL divergence between the target distribution and the parametric family. They combine a fixed and an adaptive reference, which leads to better restart-rate performance than the baseline. Strengths -Interesting and witty idea combining a fixed and adaptive reference in the scheme. -Extensive theoretical analysis of the proposed scheme. The authors provide theoretical guarantees for the performance and convergence of the method. -Good presentation of the work. Weaknesses -A lot of toy experiments but no real-world datasets. It would be interesting to see the method applied to a bigger model and a bigger dataset (e.g., an image dataset such as MNIST or CIFAR-10). -Structure is a bit odd since there are no conclusions and no discussion of limitations, future directions, societal impact. But again this is a theoretical work so societal impact is not applicable in this case. I would still like to see limitations and future directions discussed in a separate paragraph, though. This is a theoretical work so negative societal impact is not discussed, and the limitations are briefly but not clearly discussed in the main text (Subsection 3.5). <doc-sep>The authors proposed an improved version of the parallel tempering algorithm to solve the non-scalability issue with respect to the data size. In particular, the authors show that in the large-data limit, the restart rate degrades arbitrarily to 0, which strongly affects the communication between the chains associated with the target distribution and the prior distribution. To tackle that issue, the authors proposed to adopt variational inference based on the exponential family. Theory and experiments show much better restart rates. Pros: I like the authors' insight on the weakness of parallel tempering with respect to the data size. Given a fixed schedule of parallel tempering, the communication efficiency does raise a concern in large-data limits. A major reason, I suspect, is that as the number of data points increases, the major mode becomes more dominant, which also inspires the authors to use a tunable prior based on variational inference. Cons: 1. I think the proposed method is not the right solution to tackle that issue. As is known, parallel tempering not only cares about communication efficiency (or restart rates) but also focuses on the exploration-exploitation trade-off. The current method seems to solve the issue of communication inefficiency, but the impact on exploration is not clear. If we don't know **how much exploration is sacrificed**, why not just adopt a prior that is close enough to the target distribution? In that way, we can maintain a large enough restart rate via the most vanilla method. 2. The combination with a fixed reference further increases my concerns about this method in exploration, which has to resort to a different prior for exploration. 3. Regarding the theories, I feel this paper is more suitable for journal review. * I am familiar with Syed's JRSS-B'21 paper but the proof details of this work are not carefully checked. NA <doc-sep>This paper proposes to learn the prior distribution adaptively for parallel tempering. In particular, the prior distribution is tuned to optimize a proxy objective (forward KL divergence to the posterior) with a simple gradient-free moment matching procedure. 
In theory, the variational reference is shown to outperform a fixed reference, but in practice it may get stuck in a single mode, which the authors resolve by mixing the adaptive and fixed reference distributions. Empirically, the proposed method achieves a big gain over existing methods on Bayesian inference tasks. Strengths: - The paper is very well written and easy to follow. - The introduced algorithm is intuitive and theoretically sound. In the large data limit, the moment-matched reference could achieve the best possible restart rate of 1/2. - The authors fixed the collapsed reference by adding the fixed reference back in practice, which seems to work well empirically. To be fair, I'm not familiar with the datasets the authors used in the paper, so I don't know how convincing the empirical results are. Weaknesses: - Lack of discussion of the assumptions in the theoretical analyses. For Propositions 3.1-3.3, the conclusions only hold under some assumptions mentioned in the Appendix. Adding some discussion or intuitive explanation of these settings would be helpful for readers to understand the implications of all these propositions. - All the experiments are done on traditional inference problems with relatively toy models. In this case, I would expect sampling to be "easy". For models like deep neural networks, the posterior could be very complicated and I don't think the combination of a fixed and an adaptive reference would be enough. The authors discussed the limitations in the paper and I don't see any negative societal impact of this work.
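To make the moment-matching step concrete, here is a minimal sketch for a full-covariance Gaussian reference; this is my own illustration (function names, the jitter term, and the toy data are assumptions), and it ignores the annealing schedule and the fixed/adaptive mixture used in the paper.

```python
import numpy as np

def fit_gaussian_reference(samples):
    """Moment-match a Gaussian reference to samples from the target chain.

    For an exponential family, minimizing the forward (inclusive) KL
    KL(target || reference) reduces to matching expected sufficient
    statistics, i.e. the empirical mean and covariance here.
    """
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(samples.shape[1])
    return mu, cov

def reference_logpdf(x, mu, cov):
    d = len(mu)
    diff = x - mu
    sign, logdet = np.linalg.slogdet(cov)
    quad = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + quad)

# Toy usage: pretend these are samples from the chain targeting the posterior.
rng = np.random.default_rng(1)
posterior_samples = rng.multivariate_normal([2.0, -1.0], [[1.0, 0.3], [0.3, 0.5]], size=5000)
mu, cov = fit_gaussian_reference(posterior_samples)
print(mu, reference_logpdf(np.zeros(2), mu, cov))
```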
The idea of this paper is to tune the reference distribution for parallel tempering to improve efficiency. The key idea is simple: assume the reference distribution is in the exponential family and fit it by matching sufficient statistics. Experimental results show that this typically helps in terms of metrics like effective sample size per iteration, though not necessarily in terms of effective samples per second. There are theoretical guarantees, each of which relies on a long list of assumptions that are deferred to the appendix. While I realize the limitations of space, I echo the reviewers that more discussion of the assumptions should be included in the paper, clarifying which should be considered minor and which major. Still, this paper proposes a novel approach that is plausibly useful in at least some settings, so I recommend acceptance. A minor point: The font sizes are poorly chosen, to the point of being unreadable if the paper is printed. I had to resort to zooming into individual figures on the computer for reference, which was quite tedious.
The paper presents a method for improving tail-label performance in the extreme multi-label learning setup, where the number of target labels can be extremely large. It is based on the finding that the distribution of the norms of the learnt weight vectors also follows a power-law, as does the distribution of the samples among labels. The main contribution of the paper is proposing methods for re-ranking which encourage precedence of tail labels, and a data augmentation mechanism. It achieves improvements when applied to sota methods on the relevant PSP metrics. Some of the concerns regarding the paper are: - The approach overall seems more like an ad-hoc post-processing step rather than a learning algorithm. It is possible that the impact of the RankNet proposed in section 3.2 can be achieved with a simpler way of re-ranking scores. In the code provided, it was not clear where RankNet as described in section 3.2 was implemented. - Theorem 1 seems incorrect. The probability model is not completely specified as it is not clear what exactly is meant by the test point being randomly sampled. Is it uniformly at random (as seems to be from the proof) or from the same distribution as the training distribution (the typical i.i.d. assumption in ML)? Also, it seems to compute the expectation of some event {y_j \\in \\beta^{k}}, which is strange as expectations can be computed only of random variables. Overall, the statement of the theorem seems quite vague and imprecise. There are also some notational issues: the W and w symbols in the theorem don't match the preceding text. - In terms of the experimental results, it is not clear what happens with vanilla p@k and nDCG@k. Even though it is mentioned on page 6, paragraph 2, that these metrics are computed, they are not given anywhere. Also, Table 4 does not seem to be of much consequence as the re-ranking method can potentially be applied to all the competing methods. - Other minor comments - the references are improperly given. In some places abbreviations are used for conference names, and in others full names are given. In many places, arxiv versions of the papers are mentioned, even though the corresponding papers are published in conferences/journals.<doc-sep>Summary: ======= In prediction problems with millions of labels, also known as Extreme Multi-label Learning (XML) problems (e.g., recommender systems), the model predictions are not as good for the tail (rarer) labels. This paper proposes two models for this problem. The first model is re-ranking-based, that is, it reranks the prediction scores of a standard XML model. The second model tries to augment the rarer labels to reduce the skew in the data. Results shown on several real-world datasets highlight the superior predictive ability of the proposed reranking model for tail labels compared to a host of competitive baselines. Comments: ========== The paper addresses an important problem, as extreme multi-label learning has several industrial applications. The proposed methods are novel, perhaps less so to someone who is an expert in XML. The experimental evaluation is highly impressive. Both the proposed methods outperform a host of highly competitive baselines on a variety of datasets by significant margins. However, I have a couple of concerns regarding the proposed methods: 1). The RANKNET method which re-ranks the XML model's predictions needs to be compared against a baseline which also performs re-ranking for an apples-to-apples comparison in Table 2. 
Sure, the improvements due to re-ranking (vs. no re-ranking) are impressive, but how would a simple re-ranking approach which is not population-aware perform? How is the lambda chosen? By CV? Since you can stack RankNet modules to make it deep, how many were used for the results in Table 2? How sensitive are the results to the number of modules? 2). The data augmentation for the tail labels seems arbitrary. Why only Input dropout and Input swap? Also, it is unclear how one should split the data between head and tail labels. More importantly, how are the model scores for head and tail labels integrated to make a final prediction?
Using the terminology of Section 2.1, one could simply improve the performance for tail labels by adjusting the threshold t for such labels only. Has such a simple solution been considered in the literature? In that way one could fit standard probabilistic classifiers during training, followed by reasoning on the predicted probabilities in a post-training procedure. Similar to the approach of the authors, one could take label frequencies into account during this post-training procedure, resulting in a threshold t that depends on label frequency. In the experiments it is not clear to me why only four XML datasets are used. Why were the other datasets in the XML repository not analyzed? Please provide a good motivation or analyze all datasets.
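To illustrate the simple alternative suggested above, a frequency-dependent per-label threshold could look roughly like the following sketch; the thresholding rule and constants are invented for illustration and are not the paper's method.

```python
import numpy as np

def frequency_dependent_thresholds(label_counts, base_threshold=0.5, strength=0.3):
    """Lower the decision threshold for rare (tail) labels.

    Labels whose training frequency is far below the median get a
    proportionally lower threshold, making it easier to predict them.
    """
    counts = np.asarray(label_counts, dtype=float)
    rel_freq = counts / np.median(counts)
    thresholds = base_threshold * rel_freq ** strength
    return np.clip(thresholds, 0.05, base_threshold)

def predict_topk(scores, thresholds, k=5):
    """Re-rank by margin over the per-label threshold, then take the top-k labels."""
    margins = scores - thresholds
    return np.argsort(-margins, axis=1)[:, :k]

# Toy usage: 8 labels with a power-law-like frequency profile.
label_counts = [5000, 2000, 800, 300, 100, 40, 15, 5]
scores = np.random.default_rng(2).uniform(size=(3, 8))
print(predict_topk(scores, frequency_dependent_thresholds(label_counts)))
```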
The paper presents some interesting insights, but all reviewers have agreed that it does not meet the bar for ICLR. The theoretical results require revision, as several issues have been pointed out in the reviews. The authors have tried to correct them during the rebuttal, but the reviewers remain unconvinced. Also, the novelty is limited, as re-ranking is a well-known concept and decoupling of head and tail labels is an approach often used in practice across many applications. The authors should also clarify the way the RankNet method is used and implemented, to address the issue raised by Reviewer 1. Finally, let me note that adjusting thresholds for labels has been considered in the XMLC literature, in the context of optimization of the macro F-measure (Extreme F-measure Maximization using Sparse Probability Estimates, ICML 2016).
The authors propose a new method for integrating graph-based models with boosting. This is done using the typical method involving residuals and weak-learners, but adding a step where information is propagated in the graph. The approach is also simple, as no GNNs or other auxiliary models are required. It is also shown how the meta-loss introduced by the authors provides convergence given some moderate assumptions. According to the experiments reported, the proposed model is better than the current state of the art in the considered domain. The notation used is easy to understand, as is the mathematical explanation in section 3, which is presented in a comprehensive but concise manner. "for EBBS we must fit the weak-learners to gradients from both the training and test nodes". This is the sentence that I am most concerned about, as the use of test data in the training phase may render the results obtained invalid. Although the authors give their own explanation of why the test nodes should also be used during training, i.e. for the propagation of information in the graph, if the test labels are used during training there is no longer any separation between train and test. I have this doubt because the labels are used to calculate function-space gradients. Is this correct? Algorithm 1 could be described in a little more detail. The analysis of the convergence of the method by theorem is very good. The remarks connected to the theorem are interesting, but could have been treated in more detail (they are in part in the supplements). If experiments are done with different random seeds (as stated), then the results in Table 1 should be reported with their corresponding standard deviation or confidence interval. Why are some results reported with different decimal places in the table? I am talking specifically about the CS column, but also for the Slap, DBLP and Phy columns one decimal place could be added. If the datasets were taken from Ivanov & Prokhorenkova (2021), why was Wiki not taken? The reason is presumably that, being a homogeneous dataset, as explained by Ivanov & Prokhorenkova (2021), "neural network approaches are sufficient to achieve the best results". It would still be interesting as a comparison. Also as a comparison with them, the House and VK datasets could also be used for classification. They also report the standard deviation of all results. Also, the results in the table match those of Ivanov & Prokhorenkova (2021), but I do not understand why their LightGBM results row has become the CatBoost row in this article for the Slap, DBLP and OGB-ArXiv datasets. Is this perhaps an error? Although the difference between OGB-ArXiv and the other datasets is properly explained, I think it still makes sense to put those results together with the others in Table 1. "Although tabular graph data for node classification is widely-available in industry, unfortunately there is currently little publicly-available, real-world data that can be used for benchmarking." This sentence is very vague and I am not fully convinced of its veracity. The mention of the method called CatBoost+ is interesting, but it is given too little space. Why is it not considered in "ours"? If the idea is picked up by some other work, let it be mentioned properly. "revealing that it may be more robust to non-ideal use cases". That's why it might be interesting to add homogeneous datasets and see if it applies there too. 
While in the main part it says "This suggests that in new application domains it may conceivably be easier to adapt EBBS models", in the supplements it says "It shows EBBS can be run with mostly shared hyperparameters across all datasets". I don't think there are enough experiments/results to say that, but I'd stick with "suggest" in the supplementary materials as well. Maybe add a sentence about the possibility of exploring this area more in future work. After the rebuttal I have strengthened my opinion on the quality of the paper. I believe it is a nice contribution to the field. The work is well structured, with a good theoretical basis to support the proposed methodology. The empirical results are very promising, although the small number of datasets combined with the lack of confidence intervals does not allow for meaningful conclusions to be drawn. The only major doubt concerns the use of test data in the training phase, which may have compromised the whole experiment. <doc-sep>Following the success of boosting methods for tabular data, this paper introduces a new boosting approach for data that is graph-based with tabular features. The proposed approach, efficient bilevel boosted smoothing (EBBS), has convergence guarantees as well as empirical successes compared to competing methods. **Summary** This paper investigates tabular, graph-based data for classification and regression tasks. The proposed approach is an end-to-end, bilevel combination of label propagation and boosting. The authors contribute not only an empirical analysis of the proposed approach on 8 datasets demonstrating its effectiveness, but also a theoretical analysis. **Merits** I believe that this is a strong paper that clearly outlines a proposed approach for boosting in this non-iid setting of graph data. The proposed approach has a convergence guarantee and is shown to be very effective empirically. Overall, this seems to be a strong result. The supplemental material seems to give statistical significance of the improvements shown. Compared to the best previous method BGNN, combining boosting and GNNs, EBBS achieves stronger empirical results. The algorithms and theoretical results are discussed as well. **Weaknesses** Here are a few concerns / suggestions: * It could perhaps be made stronger by including in the main body of the text some of the additional analysis from the supplemental material that investigates the trade-offs and ablations of the approaches. * I think that the paper could be made much stronger with a simple motivating (perhaps synthetic) example that illustrates where and when EBBS can be useful compared to competing methods. While convergence guarantees and motivations are described, a clear simple example (which might further be useful in using ablations to identify contributions of different parts of the solution) could strengthen the paper. **Minor Notes** * Why is "GraphData" one word in the title? * Figure 1 would be easier to read if the Y-axis were the same in both plots This paper provides both theoretical and empirical results for a boosting method for graph-structured data. The results appear to advance the state of the art and the submission seems to have valuable contributions. <doc-sep>This paper proposes a new way to integrate graph-based models with boosting based on a principled meta-loss, named EBBS. 
In experiments, the proposed method outperforms tabular baselines, GNN baselines, and some hybrid strategies like BGNN over some node classification/regression datasets. Strengths: This paper proposes EBBS, efficient bilevel boosted smoothing, a novel way to combine GNN and gradient boosting for learning tabular graph data. The addressed problem of integrating boosting into GNNs is very interesting to me. Also, learning over tabular graph data should reach a wide audience given its importance in industry. Empirical experiments show EBBS outperforms baseline methods on multiple node classification and node regression datasets. Weakness: In my opinion, the main weaknesses are in writing/presentation and reproducibility. First, I feel the writing of Section 3 can be improved to avoid readers' confusion. For example - The paper could be clearer about how Eq 2 is rooted in Zhou et al 2004. In fact, I didn't get it when I checked the referenced paper. - In (2), are both Z and \\theta learnable? - P is bound twice, once in P* (Eq 3) and once in P^{k} (Eq 6) - In Eq 7, which part is for the inner-level optimization and which for the outer-level optimization? - Is "Graph-Aware Propagation Layers" a terminology used in the literature? Second, it seems that the proposed method EBBS will be incorporating test nodes during training. Will this cause test information to leak into the training process? Is there any specific preprocessing to avoid leaking? Is EBBS easy to implement? Minor issues (typos, formats): - "on top model leaderboards" -> on the top of model leaderboards - the template seems a bit different from the normal one, especially the font and the colour of the citation text - "whereby the end-to-end training of a bilevel loss is such that values of the boosted base model f ebb and flow across the input graph producing a smoothed predictor f" -> not sure if there is a grammar issue - "use mi and to reference" -> delete "and" Overall, I like the problem the paper aims to address - how to better combine GNN with boosting methods for learning on tabular data. The paper proposes a novel way to address this problem, which is based on a principled meta-loss. Empirical results show the effectiveness of the method. I feel the paper can be improved more by iterating on the formulations in Sec 3. <doc-sep>In this paper, the authors present a new approach to combine boosted decision tree classifiers with a graph propagation model, which is important in handling tabular input data. The approach casts the graph propagation as an optimization problem, where the input node features are generated by boosted decision trees. The gradient can be taken in the functional space to learn the decision trees to minimize a unified loss. The final algorithm is shown to minimize the unified loss in a principled manner. The superior performance is demonstrated over the existing BGNN model. Strength - The approach nicely defines a single objective that the model (graph propagation + decision trees) optimizes. - Empirically, there is a nice improvement over the existing BGNN. Weakness: - The studied problem does not seem particularly novel to me, especially given BGNN. Given BGNN, the scope seems a bit narrow to me (although I acknowledge that the authors solve the problem in a potentially better way than the BGNN paper). Comments/questions: - I am curious to see the result of XGBoost + C&S (e.g., use XGBoost as the base predictor in C&S). - Does the framework support any propagation rules beyond (6)? 
I would be curious to see how general the method is. Overall, the approach seems sound and principled, although the scope is a bit narrow. Hence, I will give the weak accept. I would also like the authors to address my comments/questions.
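For readers unfamiliar with the general recipe being discussed, the sketch below shows one generic way to interleave gradient-boosting-style residual fitting with graph propagation; it is deliberately simplified (squared loss, shallow sklearn trees, a fixed propagation operator) and is not the authors' EBBS algorithm or its bilevel meta-loss.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def normalized_adjacency(A):
    deg = A.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def boosted_graph_smoothing(X, y, train_mask, A, rounds=50, lr=0.1, alpha=0.8, prop_steps=10):
    """Each round: propagate the current predictions over the graph, then fit a
    weak learner to the residuals on the training nodes."""
    S = normalized_adjacency(A)
    f = np.zeros(len(y))  # boosted base predictions for all nodes
    learners = []
    for _ in range(rounds):
        z = f.copy()
        for _ in range(prop_steps):          # graph-aware smoothing
            z = alpha * (S @ z) + (1 - alpha) * f
        residual = y[train_mask] - z[train_mask]
        tree = DecisionTreeRegressor(max_depth=3).fit(X[train_mask], residual)
        f = f + lr * tree.predict(X)          # weak learner evaluated on all nodes
        learners.append(tree)
    z = f.copy()
    for _ in range(prop_steps):               # final smoothed prediction
        z = alpha * (S @ z) + (1 - alpha) * f
    return z, learners

# Toy usage: a small random graph with 2-d node features.
rng = np.random.default_rng(0)
n = 60
A = (rng.uniform(size=(n, n)) < 0.1).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)
X = rng.standard_normal((n, 2))
y = X[:, 0] + (A @ X[:, 0]) / np.maximum(A.sum(1), 1)
train_mask = np.zeros(n, dtype=bool)
train_mask[:40] = True
preds, _ = boosted_graph_smoothing(X, y, train_mask, A)
```

Note that in this sketch only the features of test nodes, never their labels, enter training, which is the kind of train/test separation the first review is probing.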
The paper addresses a problem encountered in many real-world applications, i.e. the treatment of tabular data, composed of heterogeneous feature types, where samples are not i.i.d. In this case, learning is more effective if the typically successful approach for i.i.d. data (boosted decision trees + committee techniques) is combined with GNNs to take into account the dependencies between samples. The main contribution of the paper with respect to previous work in the field is the introduction of a principled approach to pursue such integration. One important role in the proposed approach is played by the definition of a specific bi-level loss (efficient bilevel boosted smoothing) that allows for convergence guarantees under mild assumptions. Both theoretical and experimental contributions are sound and convincing, justifying the claimed merits of the proposed approach. Another strong point is the fact that the proposed approach is general and amenable to supporting a broad family of propagation rules. One weakness with the original submission was presentation, mainly because some key information was confined to the supplementary material. The revised version addressed this problem and added some more empirical results that confirmed the superiority of the proposed approach. Finally, given that learning over tabular graph data is very important in industry, the proposed approach may be of interest to a wide audience.
The paper introduces SegViT, a semantic segmentation framework with plain ViTs as backbones. One of the core technical contributions is the proposed Attention-to-Mask (ATM) block, which generates masks from the intermediate attention maps between class embeddings and key maps. In addition, a shrunk structure is then proposed to save computational cost while maintaining the performance. Based on plain ViT networks only, SegViT obtains state-of-the-art results on three semantic segmentation datasets (ADE20K, PASCAL-Context and COCO-Stuff-10K). Pros 1. The paper is well motivated. Recently many works (e.g. [*1]) have realized that even plain ViTs can have rich representation capacity, which, however, requires special optimization (e.g. masked image modeling) or other architectural modifications for downstream tasks. I am pleased that the paper demonstrates that plain ViTs can obtain as good results as the hierarchical counterparts (e.g. [15, 47]) on segmentation tasks, which may encourage simpler and unified network design principles. 2. Strong results are reported in the paper. For example, on ADE20K val a model with a ViT-L backbone achieves 55.2 mIoU, which is very competitive even among more sophisticated networks, such as Swin-L and MViT. 3. The motivation of the ATM module sounds reasonable to some extent: intuitively a good attention mask should cover the foreground of the given object (or class). Therefore, it is possible to generate masks directly from the attention matrix. Cons 1. My major concern is that the technical novelty is relatively limited. The overall framework is very similar to MaskFormer [15] and Mask2Former [47]. Compared with [15], the major difference in the technical details is that [15] generates masks from the product of the mask embedding and the per-pixel embedding, while in this paper the mask is directly derived from the attention weights. However, I do not think it differs much. Although [15, 47] mainly evaluate on hierarchical backbones, theoretically they can also be equipped with plain networks. In addition, the proposed QU/QD layers are not novel (and also seem irrelevant to the main topic of the paper), since many previous works, e.g. PVT [17], also adopt similar blocks to reduce computational cost. In conclusion, I think the contributions claimed in the introduction do not seem significant. 2. According to Table 4 and Lines 234-239, in the proposed ATM block, the separated supervision of classification and mask prediction is the most important design principle. However, it is not originally proposed in this paper, as [15, 16] already introduce the paradigm. This further weakens the significance of the proposed method. [*1] Li et al. Exploring Plain Vision Transformer Backbones for Object Detection. Tech report. Limitations are mentioned in the conclusion, although I think more discussion and comparisons with [15] are required in the paper. <doc-sep>The authors deal with ViT-based semantic segmentation. In particular, they use a set of learnable tokens (each corresponding to a semantic class) which decode the outputs of the ViT-based backbone into per-class semantic masks. This is accomplished by multiple layers of cross attention between class tokens and ViT tokens. Rather than using a dot-product-like mechanism to produce similarity between a class token and spatial features, they directly supervise the cross attention maps using a sigmoidal output. Furthermore, they introduce a down/upsampling technique to mimic the general idea of an efficient multi-scale prediction head. 
Their results are quite good even when compared to some of the best recent models and their QD module provides some computation/performance tradeoffs. Strengths: 1. This is a well written paper and the approach is quite clean 2. The results presented are quite good as well, achieving at/near SOTA against competitive models Weaknesses 1. The idea is still related to the idea of dot product based segmentation (from some class embedding). I think a good deal of experiments might need to be performed to actually understand the technical contribution. 2. While the results are good, related work like Segmenter is not far off from the performance presented here and shares some significant similarities with this method. I believe so. <doc-sep>The paper proposed a plain-ViT based semantic segmentation framework, which uses an Attention-to-Mask decoder to aggregate image features and a Shrunk structure to save computational cost. Strengths: 1. The paper proposed SegViT framework and achieved a SOTA performance based on a plain ViT backbone. 2. The paper is well written and clear to understand. Weaknesses: 1. I doubt the novelty of the design of ATM module. Since the MaskFormer framework has been proposed for over half a year, the ATM module is similar to the MaskFormer transformer decoder module. The only difference is that the mask output of MaskFormer is generated by the multiplication of the final output query tokens and image features, while the mask output of ATM is from the multiplication of an intermediate variable K inside the transformer layer and the image features. The difference is not obvious. The SegViT framework is just like MaskFormer + ViT + multi-layer-feature. 2. Table 4 shows that ATM has a relatively low performance gain of about 0.5% to SETR. It shows that the performance of ATM is even worse than Segmenter (Since the result of Segmenter is 0.8% better than SETR in Talbe 1)? 3. Also, in table 4, it shows that by using L_mask loss the mIoU result increases about 2.6% than using CE loss only. However, Table 1 shows that the result of SegViT is about 2.3% better than Segmenter baseline (which only uses CE loss). If it shows that the performance gain is all from the new loss design but not from the framework architecture. The authors have addressed the limitations and potential negative societal impacts. <doc-sep>This paper present a semantic segmentation method based on the plain vision transformer (ViT). Specifically, it proposes the attention-to-mask (ATM) module to generate the pixel-level mask. In addition, to reduce the computational cost, it designs a query-based downsampling (QD) and upsampling module (QU) in the shrunk version. Experiments are conducted on three datasets and better results are obtained compared with previous methods. **Strength** 1. Exploring plain architecture for semantic segmentation is an interesting and promising direction. This paper make a forward step towards this direction. 2. The performance of SegVIT seems to be better than previous state-of-the-art methods. **Weakness** 1. About the Attention-to-Mask module (ATM), it is implemented in cross-attention manner. But in fact, there is little difference with the standard classifier (a fc layer to map features to probability) for per-pixel classification in the normal semantic segmentation framework. Each learned token could be viewed as a classifier layer to map the pixel-level features into a probability with a Sigmoid function. In this sense, the ATM is similar to a standard classification layer. 
 2. About the Shrunk structure, I am confused about the query-based downsampling operation (QD). In line 172, it says nearest-neighbor sampling is used to reduce the number of tokens. In this sense, it has nothing to do with query-based downsampling and is simply a standard downsampling operation. I am also confused about the implementation details of the query-based upsampling operation (QU). It says a standard transformer decoder structure is used to upsample features. Is there any special design in the transformer decoder to incorporate spatial information? More details are required on the decoder design. 3. About the two QU operations in version (c) of Figure 3, the downsampling ratio of the lower QU is 1/16, and it is natural to think its output would have a smaller downsampling ratio such as 1/8. However, from lines 181-183, its output size seems to be 1/16, which is confusing to me. 4. I think this paper should compare with the previous work PerceiverIO, which employs a similar downsampling-upsampling architecture for dense prediction with transformers. More discussion of the differences is required to better motivate the proposed method. The authors have addressed the limitations of the proposed method regarding its large memory consumption.
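To make the mechanism under discussion concrete, here is a minimal NumPy sketch of generating per-class masks directly from cross-attention weights between learnable class tokens and patch features; the shapes and the sigmoid placement follow my reading of the reviews, not the paper's actual code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_to_mask(class_tokens, patch_features, H, W):
    """class_tokens: (C, d) learnable class embeddings (queries).
    patch_features: (H*W, d) ViT token features (keys).
    Returns per-class masks of shape (C, H, W) and the attention logits."""
    d = class_tokens.shape[1]
    logits = class_tokens @ patch_features.T / np.sqrt(d)   # (C, H*W)
    masks = sigmoid(logits).reshape(-1, H, W)               # one mask per class
    return masks, logits

# Toy usage: 4 classes, an 8x8 patch grid, 16-dim features.
rng = np.random.default_rng(3)
masks, _ = attention_to_mask(rng.standard_normal((4, 16)),
                             rng.standard_normal((64, 16)), H=8, W=8)
print(masks.shape)  # (4, 8, 8)
```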
This submission has received comments from 4 official reviewers. The authors have made very detailed replies to the reviewer's comments. The authors and reviewers had quite rich discussions. After these discussions, 3 reviewers recommended weak acceptance, and 1 recommended rejection. For the novelty concerns, the authors clarify them during the rebuttal. The reviewers have also recommended comparing with recent semantic segmentation methods using ViTs. Missing comparisons should be included in the final version, including comparisons with [1] Ma, Xuezhe, et al. "Luna: Linear unified nested attention." NeurIPS 2021. [2] Ryoo, Michael, et al. "Tokenlearner: Adaptive space-time tokenization for videos." NeurIPS 2021. [3] Wu, Yu-Huan, et al. "P2T: Pyramid Pooling Transformer for Scene Understanding", IEEE TPAMI, 2022. Only reviewer Eyo8 recommends borderline rejection. The authors have made quite a detailed rebuttal but we have not heard from the reviewer after the rebuttal. Thus, the AC would like to recommend acceptance.
The authors introduce a new (pseudo) distance on attributed graphs, the Tree Mover's Distance (TMD). They first introduce a distance between trees (TD) which recursively compares their roots, through their respective attributes, and their subtrees, using optimal transport (OT) to get hard assignments between the induced subtrees and pursue the recursion, i.e., comparing trees of smaller respective depth. TMD then naturally comes from TD by modeling graphs as multisets of trees rooted in each node of each graph. TMD is shown to define a proper pseudo-metric for which the axiom of discernibility is closely related to the Weisfeiler-Lehman graph isomorphism test. After investigating the relevance of TMD for graph classification, the authors further study its relevance to quantifying the stability and generalization abilities of the well-known Graph Isomorphism Networks (GIN), which are SOTA graph neural networks. $\\textbf{Strengths}$: - Overall the paper is well-written and the design of TMD is elegant/original. The authors have managed to clearly address a wide variety of concepts, from kernels to GNNs. An obvious pedagogical effort has been made in several proofs of the results provided, which is appreciable. - TMD is interesting for its flexibility, suggested by its dependence on a depth-dependent weighting function and on the cost functions inherent to the OT used (on nodes). - TMD seems competitive as a kernel in graph classification and mostly shines in the study of GIN. $\\textbf{Neutral about}$: - Without a clear characterization of TMD balls in the space of attributed graphs (or at first, unattributed graphs), I find it difficult to envision how TMD can guide GNNs to further improvements, but given the difficulty of this task, the empirical evidence provided in Sections 5 and 6 supports the relevance of TMD for this purpose. $\\textbf{Weaknesses / points to clarify}$: - The authors exploit specific properties of the Kantorovich formulation of OT (especially its relation to Monge's formulation) which are elided in the paper and clearly not straightforward, so it would be good to mention them to improve the clarity of the document, e.g., the need for Definitions 2 and 3 (the caption of Figure 1 can also be improved for this purpose). - Theorem 7: From your proof I would say that the stated implication is actually an equivalence. Could you elaborate on this? - On the graph classification benchmark: I am not sure I understand your validation scheme from your explanations. From my understanding you did a cross-validation for TMD, while, e.g., the authors of FGW reported a 10-fold nested cross-validation in their paper (which better quantifies generalization abilities and is more natural for graph kernel methods, as the computational bottleneck lies in the computation of the kernel matrix). Therefore I suggest harmonizing the validation scheme across kernel methods instead of just reporting the performance from the respective papers. Moreover, could you complete the benchmark on graph classification with a benchmark in terms of runtimes? - On subsection 5.1: There is a difference between your formulation of message-passing and the one from GIN (see equation 4.1 of [48]); $\\epsilon$ is not handled in the same way. As you set $\\epsilon=1$ for your experiments, they are still valid, but the implications of this change for the theoretical results in your paper and the ones in GIN's paper are not clear to me, even if it seems minor. Could you elaborate on this? 
- There is no reference to Figures 4 and 5 in the main paper. **Modification:** I increased my initial grade from 5 (borderline accept) to 6 (weak accept) after a convincing rebuttal and discussion by the authors. A few limitations of their work have not really been addressed, as illustrated by my elaboration in the "weaknesses/points for clarification" paragraph. The authors have adequately addressed the potential negative societal impact of their work in the supplemental material. <doc-sep>A metric on the set of graphs is defined using concepts from optimal transport. It is shown both analytically and by experiment that graph neural networks define Lipschitz continuous functions in the metric. Strength: The proposal is clear and well motivated, and has evident applications. The evaluation is adequate for a first work. Weakness: Graph metrics are a much studied field and there is little comparison with previous work. Technical results with no immediate societal impact. <doc-sep>This paper introduces a graph pseudo-metric based on hierarchical Optimal Transport, to understand the generalization of machine learning models on graphs. They show that the proposed TMD captures properties relevant to graph classification and can be related to the generalization of GNNs under distribution shifts. Strengths: 1. The paper is well-written and easy to follow, although there exist some unclear descriptions. 2. This paper proposes a new OT distance for graphs. Using the computation trees of the graph to calculate the distance between two graphs is direct and reasonable. 3. The Lipschitz constant and stability analysis of GNNs seem to be useful and meaningful. Weaknesses: 1. "TMD can provably distinguish graphs that are identifiable by the (1-dimensional) Weisfeiler-Leman graph isomorphism test". The authors say it can be further strengthened by augmenting node attributes e.g. with positional encodings, but they have not provided the details. This matters, since expressive power is very important for graph representation learning. 2. The computational complexity of TMD is high. The authors only implement it on CPU (POT package). It is unclear whether the method can be accelerated by GPUs. 3. Some baselines on graph OT are missing, for example, [1] [2] below. [1] GOT: An Optimal Transport framework for Graph comparison [2] COPT: Coordinated Optimal Transport on Graph. Yes <doc-sep>The authors first propose TMD, a pseudometric for comparing graphs to each other. TMD compares graphs to each other by recursively solving a series of optimal transport problems which minimize distances between subtree patterns. The proposed pseudometric is evaluated in graph classification (distances fed to an indefinite kernel and then to an SVM). The results indicate that TMD performs on par with the best performing baselines. The authors also provide some theoretical results. First, they bound the Lipschitz constant of the GIN model with respect to TMD, and also analyze the stability of GIN under node deletions, edge deletions and perturbations of node features. Finally, they provide a result about the generalization error of GIN under distribution shifts. Strengths: - In my view, the originality of the paper is high. The proposed TMD distance is novel. Previous studies have applied optimal transport techniques on the labels produced by the WL algorithm, but in my understanding, the proposed recursive definition is different from previous work. - I really like the results about the stability of graph neural networks. 
 Most previous studies have focused on a different problem: whether a graph neural network can distinguish classes of non-isomorphic graphs or not. Not much work has been done on the distance between the graph representations. I am not sure, though, how useful the experiments of subsection 5.3 are. What's the purpose of showing that the correlation between TMD and a graph neural network is high? Furthermore, does this hold for all datasets? Weaknesses: - The computational complexity of the proposed distance function is very high since it needs to solve a series of optimal transport problems. This renders the method practically infeasible for datasets that contain large graphs (such as REDDIT-BINARY) and datasets that contain many samples (such as the OGB graph property prediction datasets). Of course, for the optimal transport problem, one could use some approximate algorithm, but still I don't think the proposed method can be applied to large datasets. - As one can see in Table 1, the proposed method provides only marginal improvements in accuracy over the baselines. Furthermore, TMD is not compared against several state-of-the-art graph neural networks. It is only compared against GIN and GCN which are at most as expressive as 1-WL. It would be nice if the authors could also report the running time of TMD and compare it against that of the Wasserstein WL function. - The message passing scheme of GIN given in subsection 5.1 is different from the one provided in the original paper, i.e., $z_v^{(l)} = \\phi^{(l)} ((1+\\epsilon^{(l)})z_v^{(l-1)} +\\sum_{u \\in \\mathcal{N}(v)} z_u^{(l-1)})$. Furthermore, both for the experiments and for the proof of Theorem 8, the authors set $\\epsilon=1$. GIN is known to be less expressive than 1-WL when $\\epsilon=1$. Furthermore, the message passing scheme of GCN given in Appendix B.1 is not correct. Thus, I wonder whether the Lipschitz constant of any message passing graph neural network can be bounded under TMD, or whether there are some conditions that need to be satisfied. - Even though TMD is sufficiently different from the Wasserstein WL pseudometric, I would suggest the authors provide more details about how the two methods differ from each other and also compare the complexities of the two approaches to each other. The authors discuss the limitations and potential negative societal impact of their work.
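For intuition about the kind of recursion being discussed, here is a heavily simplified sketch of a tree distance that compares root attributes and matches child subtrees with a hard-assignment (Hungarian) solver after padding with blank trees; it omits the depth-dependent weighting and general OT plans of the actual TD/TMD definition and is only meant to illustrate the recursive structure.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BLANK = None  # stands in for the "blank tree" used to pad unbalanced multisets

def tree_distance(t1, t2, w=1.0):
    """Recursive distance between rooted trees of the form (feature, [children]).

    Root features are compared with an L1 cost; the two multisets of child
    subtrees are padded with blank trees to equal size and matched by solving
    an assignment problem (a hard-assignment special case of OT)."""
    if t1 is BLANK and t2 is BLANK:
        return 0.0
    if t1 is BLANK or t2 is BLANK:
        feat, children = t2 if t1 is BLANK else t1
        return np.abs(feat).sum() + w * sum(tree_distance(c, BLANK, w) for c in children)
    (f1, c1), (f2, c2) = t1, t2
    root_cost = np.abs(np.asarray(f1) - np.asarray(f2)).sum()
    n = max(len(c1), len(c2))
    if n == 0:
        return root_cost
    a = list(c1) + [BLANK] * (n - len(c1))
    b = list(c2) + [BLANK] * (n - len(c2))
    cost = np.array([[tree_distance(x, y, w) for y in b] for x in a])
    rows, cols = linear_sum_assignment(cost)
    return root_cost + w * cost[rows, cols].sum()

# Toy usage: two depth-2 trees with scalar node features.
t_a = (np.array([1.0]), [(np.array([0.5]), []), (np.array([2.0]), [])])
t_b = (np.array([1.0]), [(np.array([0.7]), [])])
print(tree_distance(t_a, t_b))
```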
This paper proposes a new similarity measure between graphs, based on computing optimal transport between distributions of trees extracted from the graphs. The method benefits from the fast solvers for OT between trees, and the proposed metric has been shown to be useful for computing a Lipschitz constant related to the generalization of message-passing GNNs. The experiments were appreciated, but the reviewers noted a lack of comparison with existing graph distances and GNNs in the graph classification experiment. The authors gave a very good reply to the reviewers, which was much appreciated. For instance, the new experiments are very interesting and should be included in the paper or supplementary. The fact that the performance does not depend too much on the classifier (SVM vs. KNN) is also interesting. During the discussion the consensus was that the paper deserves to be published at NeurIPS, but the authors are requested to include the new results and discussions/clarifications in the paper and supplementary material.
This is an excellent analysis paper of a very interesting phenomenon in deep neural networks. Quality, Clarity, Originality: As far as I know, the paper explores a very relevant and original question -- studying how the learning process of different examples in the dataset varies. In particular, the authors study whether some examples are harder to learn than others (examples that are forgotten and relearned multiple times during learning). We can imagine that such examples are "support vectors" for neural networks, helping define the decision boundary. The paper is very clear and the experiments are of very high quality. I particularly appreciated the effort of the authors to use architectures that achieve close to SOTA on all datasets to ensure conclusions are valid in this setting. I also thought the multiple repetitions and analysing rank correlation over different random seeds was a good additional test. Significance This paper has some very interesting and significant takeaways. Some of the other experiments I thought were particularly insightful were on the effect on test error of removing examples, ranging from those that aren't forgotten to those that are forgotten more often. In summary, the "harder" examples are more crucial to define the right decision boundaries. I also liked the experiment with noisy labels, showing that this results in networks forgetting faster. My one suggestion would be to try this experiment with noisy *data* instead of noisy labels, as we are especially curious about the effect of the data (as opposed to a different labelling task.) I encourage the authors to follow up with a larger-scale version of their experiments. It's possible that for a harder task like Imagenet, a combination of "easy" and "hard" examples might be needed to enable learning and define good decision boundaries. I argue strongly for this paper to be accepted to ICLR, I think it will be of great interest to the community.<doc-sep>UPDATE 2 (Nov 19, 2018): The paper has improved very substantially since the initial submission, and the authors have addressed almost all of my comments. I have therefore increased my score to an 8 and recommend acceptance. ------------------------------------------------------------------------------------------------------------------------------ UPDATE (Nov 16, 2018) : In light of the author response, I have increased my score to a 6. ------------------------------------------------------------------------------------------------------------------------------ This paper aims to analyze the extent to which networks learn to correctly classify specific examples and then "forget" these examples over the course of training. The authors provide several examples of forgettable and unforgettable examples, demonstrating, among other things, that examples with noisy labels are more forgettable and that a reasonable fraction of unforgettable examples can be removed from the training set without harming performance. The paper is clearly written, and the work is novel -- to my knowledge, this is the first investigation of example forgetting over training. There is an interesting and likely important set of ideas here, and portions of the paper are quite strong -- in particular, the experiment demonstrating that examples with noisy labels are more forgettable is quite nice. However, there are several experimental oversights which make this paper difficult to recommend for publication in its current form. 
Major points: 1) The most critical issue is with the measurement of forgetting itself: the authors do not take into account the chance forgetting rate in any of their experiments. Simply due to chance, some examples will be correctly labeled at some point in training (especially in the datasets analyzed, which only contain 10 classes). This makes it difficult to distinguish whether a "forgotten" example was actually ever learned in the first place. In order to properly ground this metric, measurements of chance forgetting rates will be necessary (for example, what are the forgetting rates when random steps are taken at each update step?). 2) Were the networks trained on MNIST, permutedMNIST, and CIFAR-10 trained for the same number of epochs? Related to point 1, the forgetting rate should increase with the number of epochs used in training as the probability of each example being correctly classified should increase. If the CIFAR-10 models were trained for more epochs, this would explain the observation that more CIFAR-10 examples were "forgettable." 3) In the experiment presented in Figure 4b, it is difficult to tell whether the never forgotten set suffers less degradation in the third training regime because the examples were never forgotten or because the model had twice as much prior experience. Please include a control where the order is flipped (e.g., forgotten, never forgotten, forgotten in addition to the included never forgotten, forgotten, never forgotten order currently present). 4) The visual inspection of forgettable and unforgettable examples in Figure 2 is extremely anecdotal, and moreover, does not even appear to clearly support the claims made in the paper. Minor points: 1) In the discussion of previous studies which attempted to assess the importance of particular examples to classification decisions, a citation to [1] should be added. 2) The point regarding similarity across seeds is absolutely critical (especially wrt major comment 1), and should be included earlier in the paper and more prominently. 3) The histograms in Figure 1 are misleading in the cropped state. While I appreciate that the authors included the full histogram in the supplement, these full histograms should be included in the main figure as well, perhaps as an inset. 4) The inclusion of a space after the commas in numbers (e.g., 50, 245) is quite confusing, especially when multiple numbers are listed as in the first line on page 4. [1] Koh, Pang Wei and Percy Liang. "Understanding Black-box Predictions via Influence Functions." ICML (2017). <doc-sep>This paper studies the forgetting behavior of the training examples during SGD. Empirically it shows there are forgettable and unforgettable examples; unforgettable examples are like "support examples", and one can achieve similar performance by training only on these "support examples". The paper also shows this phenomenon is consistent across different network architectures. Pros: This paper is written with high quality and clearly presented. It is original in the sense that this is the first empirical study on the forgettability of examples during neural network training. Comments and Questions on the experiment details: 1. Is the dataset randomly shuffled after every epoch? One concern is that if the order is fixed, some of the examples will be unforgettable simply because the previous batches have similar examples, and training the model on the previous batches makes it good on some examples in the current batch. 2. 
It would be more interesting to also include datasets like CIFAR-100, which has more labels. The current datasets all have only 10 categories. 3. An additional figure could be provided which switches the order of training in Figure 4b, i.e., start with training on b.2. Cons: Lack of insight. Subjectively, I usually expect empirical analysis papers to either come up with unexpected observations or provide guidance for practice. In my opinion, the findings of this work are within expectation, and there is a gap between the findings and practice. Overall, this paper is worth publishing for the systematic experiments which empirically verify that there are support examples in neural networks.
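To make the forgetting statistic and the chance baseline discussed in the reviews above concrete, here is a minimal, hypothetical sketch. The function names, the checkpointing granularity, and the uniform-random baseline are illustrative assumptions, not the protocol used in the paper under review.

```python
import numpy as np

def count_forgetting_events(acc_history):
    """acc_history: int array of shape (num_checkpoints, num_examples) holding
    0/1 correctness of each training example at each checkpoint. A forgetting
    event is a transition from correct (1) to incorrect (0)."""
    acc_history = np.asarray(acc_history)
    transitions = (acc_history[:-1] == 1) & (acc_history[1:] == 0)
    return transitions.sum(axis=0)  # forgetting events per example

def chance_forgetting_baseline(num_checkpoints, num_examples, num_classes=10, seed=0):
    """Chance baseline: a classifier guessing uniformly at random is 'correct'
    on any example with probability 1/num_classes at every checkpoint."""
    rng = np.random.default_rng(seed)
    random_correct = rng.random((num_checkpoints, num_examples)) < 1.0 / num_classes
    return count_forgetting_events(random_correct.astype(int))
```

Comparing per-example counts against such a chance baseline is one way to check whether a "forgotten" example was ever meaningfully learned in the first place.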
This paper is an analysis of the phenomenon of example forgetting in deep neural net training. The empirical study is the first of its kind and features convincing experiments with architectures that achieve near state-of-the-art results. It shows that a portion of the training set can be seen as support examples. The reviewers noted weaknesses such as in the measurement of the forgetting itself and the training regimen. However, they agreed that their concerns were addressed by the rebuttal. They also noted that the paper is not forthcoming with insights, but found enough value in the systematic empirical study it provides.
This paper studies the effect of batch normalization via a physics-style mean-field theory. The theory yields a prediction of the maximal learning rate for fully-connected and convolutional networks, and experimentally the max learning rate agrees very well with the theoretical prediction. This is a well-written paper with a clean, novel result: when we fix the BatchNorm parameter \\gamma, a smaller \\gamma stabilizes the training better (allowing a greater range of learning rates). Though in practice the BatchNorm parameters are also trained, this result may suggest using a smaller initialization. A couple of things I was wondering: -- As a baseline, how would the max learning rate behave without BatchNorm? Would the theories again match the experimental result there? -- Is the presence of momentum important? If I set the momentum to be zero, it does not change the theory about the Fisher information and only affects the dependence of \\eta on the Fisher information. In this case would the theory still match the experiments?<doc-sep>Interesting application of MFT on the FIM to understand Batch Normalization This paper applies mean field analysis to networks with batch normalization layers. Analyzing the maximum eigenvalue of the Fisher Information Matrix, the authors provide theoretical evidence that batch normalization allows higher learning rates and faster convergence. The analysis reduces to providing a lower bound on the maximum eigenvalue of the FIM using a mean-field approximation. The authors provide a lower bound on the maximum eigenvalue in the case of fully-connected and convolutional networks with batch normalization layers. Lastly, the authors observe an empirical correlation between smaller \\gamma and lower test loss. Pro: - Clear result providing theoretical grounding for commonly observed effects. - Experiments are simple but illustrative. It is quite surprising how well the maximum learning rate prediction matches the actual training performance curve. Con: - While mean field analysis a priori works in the limit where network width goes to infinity for a fixed dataset size, the analysis of the Fisher information and batch normalization needs the asymptotic limit of dataset size. - Although some interesting results are provided, the content could be expanded further for a conference submission. The prediction of the maximum learning rate is interesting and is the one concrete result from the mean field analysis. - While the correlation between the batch norm \\gamma parameter and test loss is also interesting, the provided theory does not seem to provide good intuition about the phenomenon. Comments: - The theory provides the means to compute a lower bound on the maximum eigenvalue of the FIM using mean-field theory. In Figure 1, is \\bar \\lambda_{max} computed using the theory or empirically computed on the actual network? It would be nice to make this clear. - In Figure 2, the observed dark bands at \\eta_*/2 in the heatmap are interesting. While for most networks without Batch Norm performance is maximized at learning rates very close to the maximal value, for networks using batch norm the learning rate with maximal performance is often not the maximal one, and it would be interesting to provide a theoretical explanation for this. - I feel like section 3.2 should cite Xiao et al (2018). Although this paper is cited in the intro, the mean field analysis of convolutional layers was first worked out in this paper and should be credited. <doc-sep>In this paper, the effect of batch normalization on the maximum eigenvalue of the Fisher information is analyzed.
The technique is mostly developed by Karakida et al. (2018). The main result is an informal bound on the maximum eigenvalue, which is given without proof. That said, the numerical results correspond to the derived bound. The paper is basically well written, but the technical part has several notational problems. For example, there is no definition of the "\\otimes", "\\odot", and "Hess" operators. The use of mean-field theory is an interesting direction for analyzing batch normalization. However, this paper seems to fall short of reaching a rigorous conclusion. Indeed, all of the theoretical outcomes are written as "Claims" and no formal proof is given. Also, there is no clear explanation of why the authors give the results in a non-rigorous way, where the difficulty in a rigorous analysis lies, etc. Aside from the rigor issue, the paper heavily depends on the study of Karakida et al. (2018). The derivation of the bound (44) is built directly on Karakida's results such as Eqs. (7,8,20--22), which reduces the paper's originality. The paper also lacks practical value. Can we improve an algorithm or something else by using the bound (44) or other results?
This paper presents a mean field analysis of the effect of batch norm on optimization. Assuming the weights and biases are independent Gaussians (an assumption that's led to other interesting analysis), they propagate various statistics through the network, which lets them derive the maximum eigenvalue of the Fisher information matrix. This determines the maximum learning rate at which learning is stable. The finding is that batch norm allows larger learning rates. In terms of novelty, the paper builds on the analysis of Karakida et al. (2018). The derivations are mostly mechanical, though there's probably still sufficient novelty. Unfortunately, it's not clear what we learn at the end of the day. The maximum learning rate isn't very meaningful to analyze, since the learning rate is only meaningful relative to the scale of the weights and gradients, and the distance that needs to be moved to reach the optimum. The authors claim that a "higher learning rate leads to faster convergence", but this seems false, and at the very least would need more justification. It's well-known that batch norm rescales the norm of the gradients inversely to the norm of the weights; hence, if the weight norm is larger than 1, BN will reduce the gradient norm and hence increase the maximum learning rate. But this isn't a very interesting effect from an optimization perspective. I can't tell from the analysis whether there's a more meaningful sense in which BN speeds up convergence. The condition number might be more relevant from a convergence perspective. Overall, this paper is a promising start, but needs more work before it's ready for publication at ICLR.
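For context on two points debated in the batch-norm reviews above, the gradient rescaling under batch normalization and the link between the top curvature eigenvalue and the usable learning rate, the following are standard textbook facts stated as a hedged aside; they are not a reconstruction of the paper's own derivation. Scale invariance of a batch-normalized layer means $L(\alpha W) = L(W)$ for $\alpha > 0$; differentiating with respect to $W$ gives
\[
\alpha \, \nabla_W L(\alpha W) = \nabla_W L(W)
\quad\Longrightarrow\quad
\|\nabla_W L(\alpha W)\| = \tfrac{1}{\alpha}\,\|\nabla_W L(W)\|,
\]
so the gradient norm shrinks as the weight norm grows. For gradient descent on a local quadratic model with curvature matrix $H$ (Hessian or Fisher approximation), the iteration is stable only when
\[
\eta < \frac{2}{\lambda_{\max}(H)},
\]
which is the sense in which a smaller top eigenvalue permits a larger learning rate; whether a larger admissible learning rate actually translates into faster convergence is, as the last review notes, a separate question.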
Recommendation: 2: Serious ethical issues that need to be addressed in the final version Ethics Review: The authors introduce the TGEA 2.0 dataset, a Chinese dataset where the examples are generated by various pretrained language models. The dataset has been annotated such that the machine-authored texts can be assessed on various tasks within the broad categories of diagnosis tasks and pathology mitigation tasks. The main issue raised by reviewers is the risk of erasure and invisibility of linguistic variability in Chinese language training data. A recommendation was formulated in this regard. Ethics Documentation: To address the reviewers' concerns about the erasure of specificities in the Chinese language, the authors offered to contact the developers of the publicly available models they are using with the idea of asking for information on the training data used for these models. The authors also offered to include data cards for the models and datasets, especially with respect to the varieties of Chinese. The goal is to identify varieties of Chinese other than Mandarin. More generally, the authors propose to provide a clear description with respect to the variety of Chinese in the revised version of the paper. N/A <doc-sep>This paper presents a large-scale and curated dataset in Chinese along with two benchmarks for diagnosis (5 tasks) and pathology mitigation (2 tasks) to improve the quality of generated texts from language models. The authors designed a thorough annotation process including data collection, training annotators in the pre-annotation phase, and quality control via a feedback loop. The selected sentences for annotation cover 3 aspects: model, decoding strategy, and prompt. They also provide a detailed analysis of the distributions of erroneous sentences produced by a variety of models with different sizes ranging from 110M to 2.6B parameters. The experimental results on the proposed benchmarks show that the diagnosis tasks are challenging and that training a language model (here, GPT-2) on their dataset helps reduce errors in the generated texts. 1. The data collection phase is well implemented: The authors use different models, decoding strategies and prompt types (nominal, phrasal and sentential) with various domains (News, Wikipedia and Web Fictions) to diversify the types of erroneous sentences. 2. The annotation process and quality control are well designed to annotate the large-scale dataset while maintaining its quality. 3. The dataset has potential usages: Large language models in Chinese can benefit from training on this dataset to mitigate erroneous sentences. Also, discriminative models can be trained to automatically detect errors made by language models. Thus, this work can be valuable for future research. 1. I have some concerns regarding the quality control: * L188: Who trained the first 4 reviewers? I would like to evaluate the quality of this dataset carefully, since their annotations are used as ground truths to train other annotators. * L199: What is the average performance of the 7 well-trained reviewers? Are they trained by the annotations produced by the first 4 reviewers? The reason I ask is that they are the ones who guarantee the high-quality outcomes for this dataset. 2. Although an alpha-balanced loss was used, some diagnostic tasks such as Erroneous Text Detection and MiSEW Extraction suffer from heavy class imbalance that may affect model training and evaluation, so the results on these datasets are not quite convincing. 3.
No statistics are reported for the proposed tasks in the two sets of benchmarks. 4. The Word Prediction task in the pathology mitigation benchmark does not properly evaluate the ability of language models because there can be many correct predictions for the last token given each sentence. 5. No qualitative examples (i.e., model predictions) are given for each benchmark task in the main text or the appendix. 6. Some minor issues in presentation: * Numbers in Table 1 are so small that the table is hard to read * L240: The Error Correction task is missing. * L267: It should be MacBERT instead of MacBEERT. <doc-sep>The authors introduce the TGEA 2.0 dataset, a Chinese dataset where the examples are generated by various pretrained language models. The dataset has been annotated such that the machine-authored texts can be assessed on various tasks within the broad categories of diagnosis tasks and pathology mitigation tasks. The main strength of this dataset is its scale: it substantially extends TGEA 1.0 to now consist of 195,629 annotated sentences. Such a dataset will be particularly useful in devising methods to assess the quality of the text generated by pre-trained language models. Also, the authors have taken great care with a sophisticated quality control process in order to ensure that the annotations for the various benchmarking tasks can be trusted. Further explanation or clarification is required for the following points: - The authors claim that the examples generated are diverse due to the different decoding strategies and the use of 4 different pretrained language models. However, are 4 pretrained language models representative of the mistakes made by recent and future pretrained language models, which are far larger in size and may have different pathological weaknesses? - It would be great if the novelty in the tasks beyond TGEA 1.0 could be clearly spelled out beyond just the scale of the dataset. - How valid is it to compare the pathological weaknesses of models in Chinese to English examples as in SCARECROW? <doc-sep>This paper proposes TGEA 2.0, the largest dataset for diagnosing typed errors made by pretrained language models. It is an extended version of TGEA, with various large language models and downstream tasks. The paper mainly compares its contribution with TGEA and ScareCrow. Several experiments are performed using the dataset, and experimental results on various downstream tasks show that there is ample room for further exploration of the proposed dataset. - It nicely expands the previous TGEA in terms of scalability, annotation richness, etc. - Strict quality control on the construction process - The proposed MiSEW and pathology mitigation annotations add to the annotation richness. - The intention of MiSEW extraction is plausible, but it overlaps somewhat with erroneous span location as a downstream task. Thus the necessity of the task should be better justified in some manner, such as a qualitative analysis. - The proposed pathology mitigation task should also be explained further in terms of why it should be jointly considered in future work. <doc-sep>This paper contributes to understanding and reducing the text generation errors made by large pre-trained language models. The authors have created the largest Chinese-language dataset of machine-authored texts, and a substantial subset of the texts (>195k) was manually annotated at a fine-grained level and corrected for text quality issues (grammaticality and semantic coherence).
The authors use the annotated data both to benchmark the best performance of 4 major PLMs (with variable architecture and scale) against each top-level error category and to test the extent to which fine-tuning with the human-corrected texts reduces the prevalence of these errors in the PLM outputs. The size of the dataset and the multiple annotations (erroneous spans and the minimal set of error-related words and corrections for these words) provide an excellent basis for studying the nature of grammatical and semantic coherence errors in Chinese-language automated text generated by current SOTA models performing at their best. As such, this dataset should be of considerable interest to researchers seeking to understand the persistence of certain error types in machine-generated text, and also, to understand, from objective evidence (vs subjective human evaluations, cf. Clark 2021), what kinds of errors may be indicative of machine-generated outputs and therefore may support the detection of machine-generated text. The further interest of this paper is the attempt to use the error-plus-correction MiSEW pairs to improve the quality of the generated text by reducing errors of these kinds in the output. Another strength of the paper is the clear and comprehensive description of the research process, including the annotation process, which adheres to annotation best practices in many ways (including an indicator of annotator confidence, pre-training to support annotator convergence to a high level of inter-annotator agreement, iterative re-training during annotation, etc). It would have been good to see Cohen's Kappa statistics (or similar) cited for the inter-annotator agreement in section 3.3. Average accuracy of annotators is mentioned ("average performance ... increases from 58.9% to 79.7%") and also "inter-annotator disagreement", but no measure of the latter is explicitly provided. <doc-sep>The work builds on a previously released dataset named TGEA by releasing a larger and higher-quality version of it. It is a comprehensive collection of machine-authored texts in the Chinese language that have been annotated for errors based on a novel ontology of errors. This ontology is based on data mining for frequently occurring forms of errors followed by supervision by expert annotators. Furthermore, a systematic analysis of the annotated errors is performed to reveal patterns that are helpful in gauging the capability of various PLMs on different datasets. Lastly, the work goes on to validate whether the errors found can be fixed automatically with pre-existing large language models and finds them hard to fix by modern means. This paves the way for 1) a dataset to analyze the kinds of errors PLMs make, 2) developing automated methods that can automatically correct the errors made by PLMs, because the existing SoTA is not enough to rectify them, and 3) a benchmark of the performance of SoTA on both diagnostic as well as pathological errors for future works to compare against. * Improvement over previous work: the authors use stochastic decoding strategies and a repetition penalty, which reduces the frequency of redundancy-related errors and hence allows effort to be focused on harder errors. * The number of annotated samples is large enough to gain confidence and mitigate the risk of incorrect conclusions due to spurious correlations. * Benchmark results are shared on the presented dataset using SoTA models, which gives the research community solid baselines to compare their research and findings against.
Furthermore, by evaluating numerous PLMs on diverse datasets, the work also helps users of PLMs in deciding on an appropriate model for any given NLG task. * The reasoning behind choosing a three-point scale for annotator confidence instead of a standard Likert scale is not provided. * The choice of annotator training methodology does not mention any related work that inspired it. * Given that PLMs show emergent capabilities beyond a certain scale, it is unclear whether the error patterns found in small PLMs (<5B parameters) are also prevalent in large PLMs (>100B) like PanGu-α (200B) and Wu Dao 2.0 (1.75T). Grammar errors: 1. L168 "think it" 2. L190 "in three times"
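One of the TGEA reviews above asks for Cohen's Kappa (or similar) for inter-annotator agreement. Purely for reference, the standard two-annotator definition is given below; this is a generic formula, not a number computed for TGEA 2.0:
\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\qquad
p_e = \sum_{c} p_A(c)\, p_B(c),
\]
where $p_o$ is the observed agreement rate and $p_A(c)$, $p_B(c)$ are the fractions of items that annotators A and B assign to category $c$.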
The reviewers all liked the paper. The authors' response clarified most points raised by the reviewers. In view of that, the authors are strongly invited to take the feedback on board for the final version. The main ethical issue raised by reviewers is the risk of erasure and invisibility of linguistic variability in Chinese language training data. Data cards need to be added to the final version.
I have read the authors' responses to all reviews and ultimately elected to leave my score as it is (weak accept). I think the empirical results are strong, and while I am not as troubled by the motivation and framing of the work as reviewers 3 and 4, I think their more conceptual and methodological critiques have merit, dampening my enthusiasm for the submission. ----- This submission proposes a model-driven data augmentation strategy that aims to improve the calibration and reduce the over-confidence of a variety of Bayesian neural network architectures when dealing with out-of-distribution (OOD) samples. It involves adding a generator network that aims to generate plausible OOD samples during training, along with an objective term that tries to force the predictor network to make high entropy (low confidence) predictions for these samples. The paper does a fairly thorough empirical comparison with ten datasets (eight regression, two image classification) and half a dozen baselines, most of which can be combined with PAD. The results indicate that PAD usually improves both calibration and accuracy by at least a small amount. This is a solid paper: the proposed method seems sensible (if pretty complex) and appears to be modestly effective in the included experimental results. The introduction summarizes the paper's contributions as: 1. It proposes a model-driven data augmentation technique aimed at improving calibration and reducing over-confidence for OOD samples. 2. It adapts and extends the technique to regression problems, which the paper argues is unprecedented. 3. It demonstrates empirically that the proposed approach improves the OOD accuracy and calibration of four different strong Bayesian neural net models. I lack the broad familiarity with the data augmentation literature required to verify claim (2.). I suspect that if this simple claim is true, then it may be trivially so: it's hard to believe that _no one_ has applied data augmentation to regression tasks, so perhaps folks haven't bothered to publish it. The authors can always modify or remove this claim, if needed. The other two contributions seem supported, although the empirical improvements are for the most part small (and probably not statistically significant?). I lean weakly toward acceptance: I would not oppose its inclusion in the ICLR 2021 proceedings, but I wouldn't enthusiastically endorse it. I'll explain below. The paper's motivation as laid out in Sections 1 and 2 is sound: calibration and proper quantification of uncertainty are increasingly important in a wide range of applications where machine learning has real-world consequences for safety, fairness, etc. What is more, existing techniques based on neural networks (increasingly widespread) do seem to suffer significant flaws, especially exhibiting overconfidence when they should not. The paper offers a diagnosis in the form of a conjecture (Section 2.2): "failure to revert to the prior $p(\\theta)$ for regions of the input space with insufficient evidence to warrant low entropy predictions." Figure 1 effectively visualizes this phenomenon in a toy setting, but no further proof is offered. Further, the assumption that prior reversion is the correct thing to do isn't examined (though that's a basic tenet of Bayesian modeling, so we'll set that aside).
The proposed technique seems sensible, if complicated: add a generator network to produce OOD pseudo-samples during training and penalize the prediction network for making high confidence (low entropy) predictions on these pseudo-samples. We can consider this a form of model-driven (vs. heuristic) data augmentation. The generator loss, given in Equation (5), looks correct to my non-expert eye, and I suspect it's immediately comprehensible to readers familiar with GANs, VAEs, and Bayesian neural nets. The intuition for the OOD samples resonates with me: they should be close enough to real data to be plausible but far enough away that the predictor would be unjustified in assigning a high confidence or departing from the prior. The regularization term in Equation (7) is a bit more arcane at first glance, but it's intuitive: the conditional prediction distribution should be close to the prior for pseudo-data points far from the real training distribution. The derivations of the K-L term for regression and categorical classification are given in the appendix, but these aren't critical details for judging the significance of the paper (they're quite straightforward). The design of the experiments is sound: they simulate OOD settings by clustering each dataset and using distinct clusters for training and test splits, and measure both accuracy and calibration. I don't have an opinion about the choice of the Kuleshov metric for calibration. The chosen baselines look strong, but I am not up-to-date on the relevant literature so I would not be able to identify a non-obvious missing baseline. The experimental results are promising. A PAD variant is usually (but not always) the best for each task and metric (exceptions include GP for Naval/accuracy and R1BNN for Power/calibration). Perhaps more importantly, PAD does generally seem to improve both accuracy and calibration across the variants (DE, MC, SWAG, etc.) and datasets. So in other words, if a modeler chooses to use one of the compatible Bayesian neural networks, in most cases they should also use PAD. The work and manuscript have a few weaknesses that prevent me from more strongly recommending acceptance. For one, some of the exposition around training is unclear, in particular, how the objectives in Equations (5) and (8) are combined during training. I praised the results above, but I think the manuscript's interpretation of its results (Section 4.3) is still more generous than mine. PAD does consistently improve accuracy and calibration, but the margin is sometimes small, raising the question about whether the added complexity is worthwhile in all cases. The paper argues that PAD consistently improves the calibration curves in Figure 3, at least for poorly calibrated models, but that does not seem obvious to me. This might be because the curves are somewhat cluttered, but I see a number of exceptions where the PAD curves look potentially worse: Energy, Naval (SWAG), and Yacht. I think perhaps the real problem is not the results themselves, which overall are strong, but rather the manuscript's rather cursory discussion of the results and its failure to offer any insights or guidance about, e.g., when PAD should be expected to help (based on task or dataset) or which baselines it works best with. One last note: I don't want to over-index on a toy figure included for illustration purposes, but I don't find the results in Figure 1 convincing! Perhaps I am misunderstanding what behavior we desire (if so, please correct me).
I agree that the PAD distribution does a better job of capturing the uncertainty in the central low-data area, but at the left- and right-hand ends, the baselines actually look preferable in that the uncertainty is often appropriately wider and contains or at least follows the true function. It looks like PAD might be over-regularizing things in these cases. Here are some actionable suggestions for improvements: - Clarify how the objectives are combined and how training proceeds. Consider adding, e.g., an "Algorithm" summary figure. - Expand the results discussion beyond simply restating the results (which are displayed in the Tables). For the raw accuracy and calibration numbers, perhaps you could compute some summary statistics for the baseline vs. PAD differences, so readers could get a quick sense of whether PAD usually beats the baseline. Maybe also some counts for how often a PAD variant has the best performance. - Also in the discussion, try to distill out some illustrative patterns that could be turned into insights or practical guidance. - For the calibration curve plots (Figure 3), consider reducing the clutter by removing redundant curves. For most of the tasks, most baselines (and corresponding PAD variants) are quite similar, so perhaps you could show a representative subset for each task (and then put the complete figures in the appendix). Here's a laundry list of questions: - How does training proceed? Is it a typical alternating adversarial optimization, i.e., optimize generator, then discriminator, repeat? - What additional computational complexity does PAD introduce during training? - Under which conditions should PAD be expected to help most: type or distribution of data, task structure, baseline model, etc.?<doc-sep>This paper proposes a data augmentation scheme (named PAD) to improve the accuracy and calibration of NNs. The idea is to generate OOD data, close to the training data, where the model is overconfident, and force a higher entropy for their corresponding predictions. This topic is very relevant to the ICLR community, the paper is clear, and I was excited about the goal in the first place. However, the paper as it is has major drawbacks. 1. The biggest drawback is that the proposed approach is ad-hoc, a heuristic with no guarantees that it will work as desired. In fact, recent work has shown that data augmentation on top of ensembles can be harmful; the authors should discuss this in the paper (see [Wen et al., 2020: Combining Ensembles and Data Augmentation can Harm your Calibration]). For this paper to be accepted, the authors should explore the properties of the proposed approach with carefully controlled toy scenarios, and bring further insights on when the approach is expected (ideally, guaranteed) to work. 2. Using PAD on top of other probabilistic approaches destroys the probabilistic interpretation. 3. Experimental results are extensive, but not convincing: Figure 1 lacks the GP reference, and shows bad performance on the left extreme; the ablation study suggests that Equation (5) could be simplified; finally, results in Tables 2-4 suggest that the proposed approach hurts in high-dimensional scenarios (Energy and Kin8nm datasets); the reported numbers also strongly depend on model selection and tuning of PAD and other baselines, information which is currently missing. 4.
The authors do not compare to nor mention recent advances in calibrating DNNs, for example: * (Antoran et al., 2020) Depth Uncertainty in Neural Networks * (Liu et al., 2019) Simple and principled uncertainty estimation with deterministic deep learning via distance awareness More comments: * the proposed model does not seem to scale to high dimensions, as "filling the gaps" with the OOD data generator becomes infeasible (this is reflected in the tables, where both accuracy and calibration are systematically worse for the high-dim Kin8nm dataset). Up to how many dimensions would this approach be useful? * the OOD generation produces an "equally sized pseudo dataset". Yet, one might think that the amount of data needed to robustify uncertainty would depend on the manifold geometry. * the location of OOD samples is chosen as an interpolation of latent representations of the observed data. That means that many generated datapoints will NOT be out-of-sample. * From the ablation study (Tables 4 and 5), "without AB" gives similar results to Regular (always within the reported error bars of "regular"). That seems to indicate that terms A and B are not that relevant. Am I missing something? * Figure 1: the authors should include one column for the GP behavior, since the authors claim that the observed behavior is similar to that. Otherwise, it is unclear by eye what is best. In particular, PAD * Could the proposed approach suffer from the opposite issue, i.e., deliver too high uncertainty in the augmented OOD data? How do you avoid this issue? * How does the proposed approach compare to a DNN whose last layer is a GP or a Bayesian RBF network? (see http://www.gatsby.ucl.ac.uk/~balaji/udl2020/accepted-papers/UDL2020-paper-009.pdf) * The proposed method encourages a reversion to a *specific* prior (zero-mean functions) Minor: * the authors mention the limited expressiveness of GPs, but this only holds for simple kernels. If the kernel is complicated enough, then GPs are as expressive as we would like (see equivalences between DNNs and GPs in [Neal, 1996] and [Lee et al., 2017]). Please clarify this statement. * Figure 3 is hard to read; I suggest highlighting the PAD curves by changing the color scheme.<doc-sep>Overview: The authors propose a data augmentation scheme that generates samples out of distribution and helps with uncertainty estimates. Comparisons are to various Bayesian methods on UCI regression and MNIST/CIFAR for classification. PAD seems to give some improvements in out-of-distribution uncertainty quantification. The major concern is that the gains seem relatively small, and the objective is ad-hoc. It would be nice to see either more substantial, uniform gains (so that the authors can justify the procedure on the results alone) or more solid conceptual motivation of the method, especially from the Bayesian side. It seems like the motivation and intro are clear, and section 3 onwards becomes very ad-hoc and loses much of this. It would be nice to be convinced that there is a set of assumptions and conditions under which this is the right way to do uncertainty quantification. Positives: The evaluations are extensive, and it's commendable that they include both positive and negative results in their regression evaluations. Uncertainty estimation out of distribution is an important and timely problem. Negatives: Minor: I'm not sure why there is a claim that the problems with uncertainty estimation come from p(theta|D) and not p_theta(y|x).
The fact that non-Bayesian methods have similar issues with uncertainty quantification would suggest that the latter is certainly an issue. Figure 1 doesn't seem like a compelling argument for the narrative in the paper. MC dropout and deep ensembles both have decent behavior outside the support (x<0, x>1) but suffer in the gap between 0.25 and 0.75, which is arguably due to overaggressive interpolation. Term A in equation 5 is justified as "generating data where f is overconfident" but I don't see how this is true. It's just generating data where there's low prediction entropy... this includes areas where it is confident for the right reasons. Somewhat minor, but the sum of term A and term B seems a bit problematic, since A will be in terms of discrete entropy in classification, and term B is going to be differential entropy in general. Rescaling X also seems like it would arbitrarily shift the weights between AB and C? The weird thresholding on the C penalty for regression problems does not inspire confidence. Overall, equation 5 gives a sense of a fairly ad-hoc criterion. I'd like to be convinced that this is actually the right way of doing things, especially from a Bayesian perspective. Looking at equation (7), it seems like learning the distribution of tilde X is a lot of work to regularize the KL towards the marginal with a squared-exponential penalty away from the training data. Is it really not possible to post-process the model distribution to achieve the same thing? The experiments are extensive, but a bit mixed. The dataset construction for regression seems like it would naturally favor PAD-type methods, because the clustering occurs on the basis of feature distances, and PAD enforces uncertainty based on feature distances (via the squared-exp term in equation 7). In terms of results, I think overall PAD gives gains, but it's not uniform, and in cases like SWAG on Table 2 it seems to hurt more than it helps. The corruptions in MNIST / CIFAR must also be pretty aggressive, as the accuracy numbers are quite low for both. Does PAD do similarly well on milder or no distribution shift settings? I am slightly concerned that the evaluations here focus so much on the large distribution shift setting and that PAD is tuned to that case. Minor: The inline equation involving sines is missing a closing parenthesis. Notation: g_phi is a 'generative model' in section 2, but it seems to be the output of an autoencoder in section 3.1; q_phi seems to be the actual generative model?<doc-sep>I think the paper is interesting and well-written. I agree that mis-calibration can be caused by out-of-distribution data, even though it is still commonly observed without such a discrepancy. Addressing OOD data is an important direction, and I think the authors propose a reasonable approach to prevent models from overfitting on data points that are rarely observed during training. However, I believe there are some limitations of the method at the current stage, and the experiments did not fully convince me. Strength: 1. The paper addresses the important question of models being over-confident on out-of-distribution data. The method is practical for applications where uncertainty estimation is needed. 2. The adversarial generation of OOD data is interesting, and the rationale is well explained. 3. The authors included a good selection of datasets and experiments. The PAD-based methods are also compared to a good variety of baselines. Weakness: 1.
To determine whether a data point is out-of-distribution, both the adversarial model and the main model rely on the L2 distance and the length scale parameter \\ell. My concern here is about 1) the heterogeneity in different dimensions, and 2) \\ell seems to be a particularly important and difficult parameter to tune. I would like the authors to give more details on how it is chosen. 2. I believe the approach here is to revert to the prior when evaluated data points are far away from the observed ones. Thus the difference in accuracy depends on how good the starting prior is, and how much bias the baseline model learned from OOD data. Table 3 shows some of the trade-off, but I think it would also be good to show the difference when there is no OOD data, because this is not necessarily known in advance. 3. Looking at Figure 3, I don't think the PAD method shows a significant improvement on most of the datasets. Housing seems to be the only one here. In Figure 5, it looks like the OOD data are mostly in the convex hull of the observed data (at least in this low-dimensional embedding). It is unclear how to differentiate those from the region where models should be interpolating. Moreover, all the OOD data are artificially constructed. I think it would be more convincing to test the methods on some OOD data that arises naturally. One such source could be temporal data where the distribution shifts over time. Minor comments: 1. In section 5 "excessive computation for large models an datasets", an->and. ----------- I thank the authors for all of these responses. I'm still around neutral for this paper, but I will raise my score to marginally above acceptance.
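To make the mechanism debated across these reviews easier to picture, here is a minimal, hypothetical sketch of the general recipe of penalizing low predictive entropy on generated pseudo-OOD inputs. It is written from the reviews' description and common practice, not from the paper's code; the interpolation-plus-noise generator, the tabular input assumption, and the fixed weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pseudo_ood_entropy_penalty(classifier, x_real, weight=1.0, noise_scale=0.1):
    """Illustrative sketch only. Assumes tabular inputs of shape (batch, features).
    Builds pseudo-OOD points by interpolating shuffled real inputs and adding
    noise, then penalizes low predictive entropy (over-confidence) on them."""
    perm = torch.randperm(x_real.size(0))
    lam = torch.rand(x_real.size(0), 1, device=x_real.device)
    x_ood = lam * x_real + (1.0 - lam) * x_real[perm]
    x_ood = x_ood + noise_scale * torch.randn_like(x_ood)

    probs = F.softmax(classifier(x_ood), dim=-1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1)

    # Encourage high entropy on pseudo-OOD points (minimize the negative entropy).
    return -weight * entropy.mean()

# Hypothetical usage: total_loss = task_loss + pseudo_ood_entropy_penalty(model, x_batch)
```

In the actual PAD setup described above, a learned generator replaces the crude interpolation, and a distance-based weight (the squared-exponential factor the reviews mention for Equation (7)) can confine the penalty to regions far from the training data.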
This paper studies the problem of uncertainty estimation under distribution shift. The proposed approach (PAD) addresses the issue of under-estimated uncertainty by augmenting the training data with inputs for which the network has unjustifiably low uncertainty estimates, and asking the model to correct this under-estimation at those augmented datapoints. Results show promising improvements over a set of common benchmark tasks in uncertainty estimation, with comparisons to a number of existing approaches. All the reviewers agreed that the experiments are well conducted and the empirical results are very promising. However, they also had a shared concern about the justification of the approach. Reviewers are less willing to accept a paper merely on the strength of its empirical performance. I share the above concern with the reviewers, and I personally found the presentation of the approach a bit rushed and disconnected from the motivation. For example, the current presentation feels like the method is motivated by BNNs, but it is not clear to me how the proposed objective connects to the motivation. Also, no derivation of the objective is included in either the main text or the appendix. In revision, I would suggest a focus on improving the clarity and theoretical justification of the proposed objective function.
Summary: This paper addresses the complexity of the forward-pass inference in neural ODEs. The paper proposes to augment training of the neural ODE with an auxiliary neural network that dynamically selects the best numerical integrator for a given input sample. Furthermore, the paper also proposes a regularizer that uses the errors of the numerical integrator to reduce the number of function evaluations, without sacrificing accuracy. The paper is well written and addresses an impediment to utilizing neural ODEs in practice. I did find the paper lacking in detail, however. For example, it is not clear where the regularizer in Eq. (2) is derived from. The authors mention a connection to the Finlay reference in Sec. 2.3, but it is not clear what this is precisely. For the cost of each integrator in Eq. (4), how should M be chosen? What does it mean to say that a prediction is "correct"? What is the criterion being used for this purpose? It appears that the authors treat the training of the auxiliary network as a supervised learning procedure. Why is this appropriate for this task? Another way of looking at the problem is through a reinforcement learning lens, where the objective is to learn a policy mapping inputs to choices of integrators, minimizing long-term costs (either discounted or long-term average). Of course, there is perhaps no Markov structure to the data in this setting, but presumably the inputs in the set T could be viewed as i.i.d. samples? Could the authors comment on such alternate formulations? <doc-sep>### Summary This study proposes a method to accelerate the forward pass in Neural ODEs, known to be a significant time bottleneck. The study is technically sound and the empirical results convincing, but the clarity could be substantially improved. ### Quality The paper is technically sound and the claims are for the most part appropriately backed by empirical evaluation. There is just one minor point I would suggest the authors address: the authors write "One interesting point is that RK4 had not been used at all because it does not show any notable difference from the Euler method in this task. In that regard, choosing the Euler method is a sensible decision." This claim is not really illustrated anywhere in the manuscript, and it would be good if the authors showed this, even if in a supplement. ### Clarity The manuscript provides enough information for an expert reader to understand all the steps to reproduce the results. However, the text contains a substantial number of grammatical errors and imprecisions, which I would recommend the authors tackle. Here is a (non-exhaustive) list: -instead of "Much work has been actively studied to", "Much work has been actively devoted to"; -instead of "Neuarl ODEs and numerical methods", "Neural ODEs [...]"; -confusing formulation "It had been reported that approximating neural networks with differential equations can be done by many researchers"; -instead of "as shown in Fig. 2, consist of three parts", "as shown in Fig. 1 [...]"; -instead of "and the step size is decided by a function", "and the step size is determined by a function" or "and the step size is a function"; -instead of "Dupont et al. said that by", "Dupont et al. showed that by"; -instead of "which is not our main interest", "which is not our setting"; -instead of "Neural ODEs have one critical drawback that it requires", "Neural ODEs [...]
they require"; -instead of "step size is decided by an inverse function", "step size is an inverse function"; -instead of "because the average step size will decrease", shouldn't it be "because the average step size will increase"? -instead of "the auxiliary integrator selection network v is stabilized and we can deploy them", "the auxiliary integrator selection network [...] we can deploy it"; -confusing sentence "which is our main experimental stage". Maybe delete it for clarity? -instead of "in average", "on average"; -instead of "in the paper or in their github repository", "in the paper or in the respective github repository"; -instead of "It is note that", "It is worth noting that"; -instead of "the task-specific loss is to maximize [...] i.e. $L_{task}$", "the task-specific loss is to maximize [...] i.e. minimize $L_{task}$". ### Originality The novelty of the study is two fold: (1) it proposes a regulariser to speed-up the DOPRI ODE numerical solver; (2) it trains an auxiliary neural network to choose the most appropriate numerical solver for the Neural ODE between DOPRI, fourth-order Runge-Kutta RK4 and forward Euler. ### Significance of the work The results suggest that the developed approach is a solid step towards developing faster Neural ODEs.<doc-sep>The authors make two suggestions in the context of neural ODEs: 1. a regularization term based on the error estimate of an adaptive step size ODE solver (Dormand-Prince) 2. an auxiliary predictor to recommend an integrator to use for a specific sample based on minimizing the required number of function evaluations in the numerical integrator. Based on their suggestions the authors show that it is possible to obtain improved neural ODE accuracy results at less computational cost for three tasks: 1. MNIST image classification 2. PhysioNet mortality classification 3. Continuous normalizing flows The paper can be significantly improved in two major areas: 1. There is already important related work that the authors should take into account: The paper "Learning differential equations that are easy to solve" (https://arxiv.org/abs/2007.04504) suggests the regularization of the k-th order derivatives with respect to time. Based on the view of the Taylor method integrator the higher-order derivatives with respect to time are an error estimate of the current time step and also reflect on the cost of computing the solution up to a certain accuracy. The idea in the above paper very similar to the idea of the regularization of the error estimate of an adaptive step size solver such as Dormand-Prince. The authors say that in the Dormand-Prince method "the error is estimated by the difference between the fourth-order and the fifth-order Runge-Kutta methods". Runge-Kutta methods use multiple (of the previous) function evaluations in order to extrapolate the solution of the next step, the higher the order, the higher the term of the Taylor expansion that is estimated (assuming the integrated function is differentiable up to the necessary order). So the error estimation of the Dormand-Prince method is related (proportional) to a higher derivative with respect to time and regularizing it is thus very similar to the more general idea in the above paper. The authors could make their analysis more clear and relate it to the previous work. In general the work would benefit from a clearer exposition about adaptive step size solvers and the smoothness of the ODE at hand. 2. 
Principled reasoning and explanation of the auxiliary integrator recommendation system: The purpose of an adaptive step size solver is already to make large steps where the integrated function allows this. Given the effort it takes to properly tune the auxiliary network architecture in a task-specific way, it is not clear to me that there is a truly general-purpose advantage (to quote the authors: "the neural network architecture for v should be carefully designed for each application"). Furthermore, the objective function of the auxiliary network is based on a discrete quantity (number of function evaluations) that is not differentiable with respect to the input. As far as I can see, the paper does not directly explain how this objective can efficiently be trained (as gradients should not be available). I do not recommend accepting the paper since the described large changes are required for the paper to become a serious contribution. Further recommendations: - Give references for the claims made in the abstract already in the abstract, even if the references follow in the text later. Especially for big statements like "significantly reduce number of parameters". That statement could also be improved by making it more quantitatively specific (how much is the reduction?). - Define the term "procrastinated" in the context of neural ODEs. - The finding that "a non-trivial percentage of cases can be solved with simple integrators" seems to somewhat contradict the previous claim that "advanced integrators" have to be used. Also, for "simple" (in numerical analysis terms, less "stiff") cases an adaptive solver should already use much fewer steps and hence fewer function evaluations. - Introduction: two advantages... "Neural ODEs can interpret the neural network layer as a continuous variable and a hidden vector at an arbitrary layer can be calculated", why is this an advantage / what is this useful for? - Section 2.1: The statement "It had been reported that approximating neural networks with differential equations can be done by many researchs" can be read in two ways. Maybe find a different formulation. - Table 1: Why not also list the wall-clock time of inference, as that is what we are truly interested in? - Section 2.2: "DOPRI is one of the most powerful integrators" What do you mean by powerful? How is that measured? - Clearly explain the adaptive step size scheme of DOPRI, instead of just saying "inversely proportional to error": If I evaluate with step size h_1 and get error estimate e_1, do I then choose h_2 = 1 / e_1? How does that work exactly? (A standard controller formula is recalled after these reviews.) - Perhaps say something about the differentiability assumptions of the higher-order Runge-Kutta methods. - Perhaps differentiate between the explicit and implicit Euler method (instead of just saying Euler method); implicit integrators are not as unstable for stiff problems but can require many more function evaluations since they perform a nonlinear system solve at every time step. - In Equation (2) you could make more specific what range $i$ is summed over. - Section 3.2: "solving for h(0)", we solve with h(0) as initial data but we solve "for" h(t_final) - How is the alpha in the exponent of the auxiliary loss chosen and for what reason? <doc-sep>Summarizing the paper claims ------------------------------------------ The paper addresses the question of reducing (on average) the number of function evaluations (NFEs) for the forward-pass inference through the neural ODE. The proposed approach includes two main components.
The first one is a direct regularization of the solver's (DOPRI) estimated errors during training. The second one is an auxiliary neural network that is trained to predict which solver from the pre-defined set of solvers (DOPRI and fixed-step RK4, Euler) should be used during inference for a given input sample. The paper claims that these components and their combination reduce NFEs. Strong points ------------------- - The paper draws attention to the fact that a neural ODE architecture shouldn't stick to the sole use of the most powerful solver during inference; depending on the input data, less powerful solvers can be applied. - The proposed approach is evaluated on a variety of tasks (image classification, mortality prediction, continuous normalizing flows) Weak points ----------------- Some important details concerning the experimental setup are omitted, which makes it hard to correctly evaluate the benefits of the proposed approach and to reproduce the results. Please see below for a wider explanation. In particular, the following points need to be clarified to understand the fairness of the provided comparisons. 1. Were the models from the same table (Table 1/Table 5/Table 8) trained using the same random seeds? Neural ODE performance can depend significantly on architecture initialization, and hence using the same random seeds is required for a fair comparison. 2. Were the models from the same table (Table 1/Table 5/Table 8) computed only once? Or do the provided numbers correspond to the mean value across several experimental runs? If the mean is provided, what is the corresponding standard deviation? Without knowing the standard deviation, it's not clear if there is a significant improvement of one method over another. 3. How many steps of RK4 and Euler are done during the forward pass? What are the hyperparameters for DOPRI (e.g., tolerance)? In the paper, I didn't find an explanation of how the number of steps for the fixed-step size solvers was picked, and how the tolerance for DOPRI was set. The quality, as well as the NFEs and DISE predictions, can vary significantly depending on these parameters. Recommendation (accept or reject) ------------------------------------------------ At the current stage of the review, I tend to reject the paper. However, I find the topic of the paper important to the neural ODE community and will make the final score decision after the authors' clarification on the crucial experimental setups. Questions -------------- - It would be helpful for understanding to see the test performance of a neural ODE pre-trained with DOPRI when only RK4 or only Euler is used. - Will we observe the same behavior if we perform a comparison with adaptive methods of smaller order? - Does the DISE strategy of choosing an appropriate solver outperform a strategy that randomly samples a solver for each input during inference? What if sampling uses the same probabilities as obtained with DISE? What if uniform sampling is done? - What is the time overhead of training with the introduced regularization? It would be nice to see plots of NFEs-forward vs. epochs (or wall-clock time) and NFEs-backward vs. epochs (or wall-clock time) for different methods during training.
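One of the reviews above asks how DOPRI's adaptive step size is actually chosen ("do I then choose h_2 = 1/e_1?"). For reference, the standard textbook controller for an embedded 4(5) Runge-Kutta pair is the following; this is generic numerical-analysis background and not necessarily the exact rule in the submission:
\[
\mathrm{err} = \big\| \hat{y}^{(5)} - \hat{y}^{(4)} \big\|,
\qquad
h_{\mathrm{new}} = h \cdot \min\!\Big( f_{\max},\; \max\!\big( f_{\min},\; s\,(\mathrm{tol}/\mathrm{err})^{1/5} \big) \Big),
\]
with the step accepted when $\mathrm{err} \le \mathrm{tol}$, a safety factor $s \approx 0.9$, and clipping factors $f_{\min}, f_{\max}$. The exponent $1/5$ comes from the fourth-order error model, so the new step size scales like $(\mathrm{tol}/\mathrm{err})^{1/5}$ rather than $1/\mathrm{err}$.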
This paper proposes two methods to speed up the evaluation of neural ODEs: regularizing the ODE to be easier to integrate, and adaptively choosing which integrator to use. These two ideas are fundamentally sensible, but the execution in the current paper is lacking. In addition to writing and clarity issues, the main problem is not comparing to Finlay et al. The Kelly et al. paper could potentially be considered concurrent work. I also suggest broadening the scope of the DISE method to ODE / SDE / PDE solvers in general: in situations where many similar differential equations need to be solved, amortizing the solver selection will be worthwhile even if there are no neural nets in the differential equation. I also encourage the authors to do experiments that explore the tradeoffs of different approaches, rather than aiming just for bold lines in tables.
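As a purely illustrative aside on what "adaptively choosing which integrator to use" can look like in code, here is a hypothetical sketch; the class name, architecture, and solver list are assumptions for illustration, and this is not the submission's architecture or training procedure.

```python
import torch
import torch.nn as nn

SOLVERS = ["euler", "rk4", "dopri5"]  # ordered roughly by cost per step

class SolverSelector(nn.Module):
    """Tiny auxiliary classifier mapping an input sample to a solver choice."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, len(SOLVERS)),
        )

    def forward(self, x):
        return self.net(x)  # logits over solver choices

def pick_solver(selector, x):
    # At inference, take the arg-max solver for each batch element.
    with torch.no_grad():
        idx = selector(x.flatten(1)).argmax(dim=-1)
    return [SOLVERS[i] for i in idx.tolist()]
```

The interesting questions raised in the reviews, how to supervise such a selector when the number of function evaluations is discrete and non-differentiable, and whether it beats random or uniform solver sampling, are exactly the parts this sketch leaves out.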
The authors propose a method that takes a video with multiple foreground objects as input, along with the corresponding background frames/image, and rough segmentation masks for each object. It then outputs an alpha decomposition of the video, where one layer corresponds to the background, and the others to all visual effects from each object – so e.g. one layer includes a person plus their shadows and reflections. The method is trained for the task of missing-frame reconstruction, and relies on inductive biases (ease of learning) to ensure the correct shadow/etc. are paired with their respective objects, rather than any sophisticated constraints. Synthetic data (with plenty of shadows and reflections) is used for training. At test time, the method is further tuned to optimise the decomposition of a given video. It is demonstrated on both synthetic and real data; quantitative results on synthetic data are better than a recent baseline, and qualitative results on real data look reasonable. Strengths: - the proposed approach is novel - the proposed method achieves significantly better quantitative results on synthetic data than a recent baseline - the method is also demonstrated on real-world data, where it often achieves visually acceptable results - the paper is clear and easy to read throughout; it is well structured, and the figures are appropriate Weaknesses: - the assumption of a known background seems rather restrictive (and makes the problem considerably easier) - given the background is assumed known a priori, what is the actual practical purpose/use of the proposed method? The authors should identify this clearly in the introduction, and add an experimental evaluation that shows performance on this task - on real data, the method is not significantly better than the baseline (though it is faster if one disregards training time) - there is no attempt to measure quantitative performance on real-world data. While I appreciate that this would require some manual annotation, it would only need to be a handful of frames from the few videos, indicating which regions are indeed moving with which object - the technical contribution is small, compared with the baseline [12], and also considering similar-in-spirit works such as Video Centrifuge – particularly given the method uses fairly standard architectures and doesn't have a strong justification for its own successes (see below) - there is no convincing reason presented for the method to work – the 'correct' result is one of several equivalent local optima (assigning shadows to arbitrary layers), and the arguments about Schelling points do not resolve why the 'correct' result is found (merely why the layers should find some arbitrary valid joint decomposition). It seems that the correct optimum is found simply because it is 'easier' for the network to learn, but it'd be nice to see a proper analysis of this. At minimum, does the model training ever converge to 'incorrect' solutions – and for what fraction of training runs if so? - the method assumes (I think) data is in sRGB (normalised to 0–1) and that different object contributions can be combined by alpha blending. However, this is not true in general – reflections should be treated as strictly additive in linear color space, and shadows darken surfaces rather than alpha-overlaying.
This may limit applicability to scenes where the lighting and exposure are fairly well-behaved
- resolution is limited (only 128x128, even training/testing on TPUs), thus limiting applicability in practice

There is minimal discussion of limitations; the paper would benefit from adding an explicit subsection for this. There is adequate discussion of broader impacts. <doc-sep>The goal of this work is to decompose videos into different layers, for example, objects of interest and their shadows, reflections, and other visual effects. It is a challenging problem due to the complexity of the 3D geometry and lighting conditions in the real world, as well as the difficulty of obtaining ground truth. This paper proposes a self-supervised method to solve this problem. The authors borrow the idea of focal points from game theory and train networks to reach such a focal point. The experiments show the effectiveness of their design choices.

Strengths:
+ The idea of using focal points is interesting and reasonable.
+ The method achieves promising visualization results for video decomposition.
+ The paper is easy to follow and the method is described in detail.

Weaknesses:
- Object number: In this paper, the authors only show results for 2- or 4-object scenarios. I am not sure this method can handle scenarios with an arbitrary number of objects, which might limit the generalization ability of this method.
- It would be better to show how the network actually reaches the focal point.

Please refer to the weaknesses. <doc-sep>This paper presents a novel framework for video layer decomposition, where the authors borrow the game-theoretic concept of focal points to frame this problem as a coordination game and let the networks reach consensus on their predictions.

Strengths:
- The presented idea is both novel and interesting.
- The paper is well written and easy to read.
- Extensive experiments are conducted and improved results are shown.

Weaknesses:
- The results on the real dataset are a little worse than those on synthetic data. The presented method might fail on more complex scenes with heavy occlusion. <doc-sep>Given a short video with some moving objects and a rough mask for each object, this paper tackles the problem of generating a per-object color and alpha mask for each object, containing all the effects on the image caused by that object (including e.g. shadows and reflections). This paper achieves this via a network which plays a 'coordination game'; each 'copy' of the network is supplied with a different object's input mask, and attempts to reconstruct the mask and pixels corresponding to this object. The network is trained via a self-supervised reconstruction loss.

## Originality

The problem being tackled is not original, but the proposed solution is, to the best of my knowledge, novel and interesting.

## Quality

Really nice to see experiments with real data rather than relying fully on synthetic experiments. I would have liked to have seen more 'simple, heuristic-driven' baselines. For example: given the videos are from a fixed camera, I could imagine using simple median-differencing to find a background image for each video. Then this background image could be used to find per-frame, per-pixel differences from the background image; these must mostly be due to effects of foreground objects. Finally, each of these pixels can be associated to the foreground masks via e.g. nearest-neighbour assignment. I don't expect this simple heuristic to beat the proposed approach, but it might give better context to the numbers.
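To make that baseline suggestion concrete, here is a rough sketch of what I have in mind (entirely my own illustration; the array shapes, threshold, and nearest-neighbour rule are arbitrary assumptions, not anything from the paper):

```python
# Median-differencing heuristic baseline: recover a background plate from a static camera,
# then attribute each changed pixel to the nearest object mask.
import numpy as np

def heuristic_layers(video, masks, diff_thresh=0.05):
    """video: (T, H, W, 3) floats in [0, 1]; masks: (N, T, H, W) binary object masks."""
    T, H, W, _ = video.shape
    N = masks.shape[0]
    background = np.median(video, axis=0)                 # static-camera background plate
    layers = np.zeros((N, T, H, W, 3), video.dtype)
    for t in range(T):
        changed = np.abs(video[t] - background).max(-1) > diff_thresh
        ys, xs = np.nonzero(changed)                      # pixels affected by some object
        if len(ys) == 0:
            continue
        dists = np.full((N, len(ys)), np.inf)
        for n in range(N):                                # distance to each object's mask
            mys, mxs = np.nonzero(masks[n, t])
            if len(mys):
                dists[n] = np.min((ys[:, None] - mys[None]) ** 2 +
                                  (xs[:, None] - mxs[None]) ** 2, axis=1)
        owner = dists.argmin(0)                           # nearest-neighbour assignment
        for n in range(N):
            sel = owner == n
            layers[n, t, ys[sel], xs[sel]] = video[t, ys[sel], xs[sel]]
    return background, layers
    # (a real baseline would also need per-layer alpha; this only attributes colour changes)
```

Even reporting how badly such a heuristic fails on the reflection-heavy scenes would help contextualise the learned method's numbers.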
I would also have liked to see more ablations on the components of the algorithm. For example, what would happen if $W_t$ were set to ones everywhere? What about varying the hyperparameters in line 158: 2*sigmoid(5x)? (See the small sketch after this review for what those two constants appear to control.)

## Clarity

The overall writing of the paper is clear and easy to follow. I enjoyed section 3.5: an explanatory view of the model being used. I was very pleased to see the video results; I felt these made the overall system and the quality of the results clear. I would have liked a little more justification of the formulation of $W_t$ (line 155). Is the idea to give the network more emphasis on reconstructing areas *outside* the mask of the current object?

## Significance

The problem tackled is an interesting one, and the authors propose a thought-provoking solution. I hope that the paper shows other researchers that this solution (of multiple identical networks playing a coordination game) can be of use in these types of scenarios. Yes, I think they have.
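Regarding the 2*sigmoid(5x) question above, here is a quick numerical look at the function itself. What x actually is comes from line 158 of the paper; I treat it abstractly here, so this only shows the shape of the weighting, not the method:

```python
# Shape of the 2*sigmoid(5x) weighting: the 2 sets the maximum emphasis, the 5 how sharply
# the weight saturates around x = 0. (What x is remains the paper's choice, line 158.)
import numpy as np

def w(x, scale=2.0, sharpness=5.0):
    return scale / (1.0 + np.exp(-sharpness * np.asarray(x, dtype=float)))

print(w([-1.0, 0.0, 1.0]))   # ~[0.013, 1.0, 1.987]
```

These two constants would be the natural knobs for the ablation I am asking about.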
All reviewers found that the paper provides a novel, interesting solution, and is well written. They appreciated that the proposed method outperforms prior work on synthetic experiments and shows reasonable results on real data. The video results were particularly helpful in judging the quality of the decompositions. The majority of the reviewers were concerned about the convergence of the proposed coordination game to the correct solution. While the authors provided some empirical evidence, a more formal analysis could alleviate concerns much more easily and would provide a strong justification for the proposed method. The requests by reviewers for more ablations, simple heuristic baselines, and quantitative results on real data were simply ignored by the authors. This does not inspire confidence that any of these requests will be addressed in a final version.
The idea of creating an intermediate model of the eye for tracking is an interesting one. The weakest part of the paper seems to be the evaluation and the lack of real-world testing; with such validation the paper would be much more compelling. Some specific questions:
* You say you are 8.2% above SotA. It would help to convert this into a real-world number (e.g., X degrees tighter tracking).
* It seems like you are testing 9 possible gaze extrema, but this doesn't give a really good idea of how well tracking would work in a real-world application.
* Is your statement about "too much disk space and slow interface speeds" a factor of the hardware you are running on? Is it something Moore's law will address, or is there something fundamental to the problem?
* You are modeling occlusion as a random stipple noise pattern. This is not really accurate, is it? Occlusion is typically a group of pixels, and random noise is actually the best possible case for reconstruction. Did you test with larger drop-out regions?
* On your efficiency arguments: frame-to-frame tracking is typically a perturbation problem. How do you perform when you know the last frame? <doc-sep>The authors present a novel approach to reconstructing the complete eye region from a noisy partial eye scan. They describe the shortcomings of existing RGB-based approaches and point out that 3D semantic surface modeling can lead to a more practical use case, i.e. gaze estimation. The authors describe their methods and evaluation in sufficient detail and show that they achieve excellent performance for both tasks. They also propose a simple way to build a dataset for semantic eye completion based on UnityEyes meshes. The manuscript is relatively easy to read and makes its arguments well. The authors present evaluation results that are compelling in terms of performance and time, and, in the case of gaze estimation, accuracy. Overall, I found the manuscript to be very good.
This work proposes a point-cloud-based neural architecture for reconstructing eye geometry from noisy partial observations. Reviewers found the problem addressed interesting and the experimental results relatively compelling. Some of the reviews raised several questions that can be adequately addressed in the camera-ready version. Overall, the paper has received positive feedback that suggests acceptance.
The paper proposes a generative model for images which explicitly separates the within-class variation (the covariant part) from the across-class variation (the invariant part). Functionally, this achieves a similar result to various recent works on incorporating invariances in neural nets, but the fact that it is able to explicitly construct models for both parts of the distribution is nice. Results on MNIST are good, but of course this is a very simple dataset. It would be very interesting to see how the model performs on a more realistic problem. Admittedly, I am not an expert in generative models. This is a clean paper with a clear goal; it is hard for me to judge how original the idea is. "Covariant" might not be the best word to use here because it has a very specific meaning in the context of some other neural networks related to how quantities transform according to representations of a symmetry group. This is a potential source of confusion. <doc-sep>This paper is well written, and the quality of the figures is good. In this paper, the authors propose an invariant-covariant idea, which dates back at least to the bilinear models. The general direction is important and should be pursued further. However, the literature is not well addressed. Eslami et al. 2018 have been cited, but some very important and related earlier works like [1] Kulkarni et al. 2015, Deep Convolutional Inverse Graphics Network, and [2] Cheung et al. 2015, Discovering Hidden Factors of Variation in Deep Networks, were not discussed at all. The authors should certainly make an effort to discuss the connections and the new developments beyond these works. At the end of section 1, the authors argue that the covariant vector could be more general, but in fact these earlier works can achieve equivariance, which is much stronger than the proposed covariance. There is also an effort to compare this work to Sabour et al. 2017 and the general capsule idea. I would like to point out that the capsule concept is a much more fine-grained what & where separation, rather than a coarse-grained class & pose separation in one shot. In a hierarchical representation, what & where can appear at any level, as one class can consist of several parts, each with a geometrical configuration space. So the comparison of this work to the generic capsule network is only superficial if the authors cannot make the proposed architecture into a hierarchical separation. Besides the various capsule network papers, I found another potentially useful reference on fine-grained separation: [3] Goroshin et al., Learning to Linearize Under Uncertainty. In the paper, it is argued several times that the latent vector r_y contains a rich set of global properties of class y, rather than just its label, and the aim is that it learns what the elements of the class manifold have in common. But this point is not supported well, since a label and the latent vector r_y can always be made equivalent via a template. I think this point could be meaningful if we look at the r_y's for different y, where each of the dimensions may have some semantic meaning. Additional interpretation is certainly needed. Under equation (3), "Note that v is inferred from r_y" should be "inferred from both r_y and x", which is pretty clear from Fig. 5. Related to this, I could imagine an encoder that extracts the 'style' directly from x, but here both r_y and x are used.
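To fix ideas about the structure being discussed, here is a minimal schematic in the form I understand it from the text (v inferred from both x and r_y, decoder consuming both); all names, sizes, and layer choices here are my own assumptions, not the paper's:

```python
# Schematic of the invariant/covariant split as I read it: r_y is a learned per-class
# embedding (invariant part), v is a "style" code inferred from both x and r_y (covariant
# part), and the decoder consumes both. Sizes and layers are placeholders.
import torch
import torch.nn as nn

class CoVAESketch(nn.Module):
    def __init__(self, x_dim=784, n_classes=10, r_dim=16, v_dim=8, h=256):
        super().__init__()
        self.r = nn.Embedding(n_classes, r_dim)               # class-invariant part r_y
        self.enc = nn.Sequential(nn.Linear(x_dim + r_dim, h), nn.ReLU(),
                                 nn.Linear(h, 2 * v_dim))     # q(v | x, r_y)
        self.dec = nn.Sequential(nn.Linear(r_dim + v_dim, h), nn.ReLU(),
                                 nn.Linear(h, x_dim))         # p(x | r_y, v)

    def forward(self, x, y):
        r_y = self.r(y)
        mu, logvar = self.enc(torch.cat([x, r_y], dim=-1)).chunk(2, dim=-1)
        v = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # covariant "style" code
        return self.dec(torch.cat([r_y, v], dim=-1)), mu, logvar
```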
Even with this additional complication, I couldn't find any architectural guarantee that v only contains the 'style' information; could the authors comment on this? Equation (5) is not really a marginalization, and furthermore equation (6) may not be a lower bound anymore. This is probably a relatively minor issue and a little extra care is probably enough. The numbers in Table 2 seem a little outdated. To conclude, I like the general direction of separating identity and configuration. Natural signals have hierarchical structure, and the class manifold concept is not general enough to describe the regularities and provide a transparent representation; rather, it is a good starting point. If the authors could carefully address the related prior works and help us understand the unique and original contributions of this work, this paper could be considered for publication.<doc-sep>The paper presents a VAE that uses labels to separate the learned representation into an invariant and a covariant part. The method is validated using experiments on the MNIST dataset. The writing in this paper is somewhat problematic. Although it is hard to put the finger on a particularly severe instance, the paper is filled with vague and hyperbolic statements. Words like "efficiently", "meaningful", "natural", etc. are sprinkled throughout to confer a positive connotation, often without having a specific meaning in their context or adding any information. Where the meaning is somewhat clear, the claims are often not supported by evidence. Sometimes the claims are so broad that it is not clear what kind of evidence could support such a claim. A relatively large amount of space is used to explain the general concept of invariant/covariant learning, which, as a general concept, is widely understood and not novel. There are other instances of overclaiming, such as "The goal of CoVAE is to provide an approach to probabilistic modelling that enables meaningful representations [...]". In fact, CoVAE is a rather specific model (class), rather than an approach to probabilistic modelling. The paper is at times meandering. For instance, the benefits of and motivation for the proposed approach are not simply stated in the introduction and then demonstrated in the rest of the paper; instead the paper states some benefits and motivations, explains some technical content, mentions some more benefits, repeats some motivations stated before, etc. Many researchers working on representation learning hope to discover the underlying learning principles that lead to representations that seem natural to a human being. In this paper, labels are used to guide the representation toward the "right" one. It is in my opinion not very surprising that one can use labels to induce certain qualities deemed desirable in the representation. To conclude, because of the writing, limited novelty, and limited experiments, I think this paper currently does not pass the bar for ICLR.
The paper presents a new approach to learning separate class-invariant and class-equivariant latent representations, by training on labeled (and optionally additional unlabelled) multi-class data. Empirical results on MNIST and SVHN show that the method works well. Reviewers initially highlighted the following weaknesses of the paper: insufficient references to and contrast with related work (given that this problem space has been much explored before), limited novelty of the approach, and limited experiments (MNIST only). One reviewer also mentioned a sometimes vague, overly hyperbolic, and meandering writeup. The authors made a commendable effort to improve the paper based on the reviews, adding new references, removing and rewriting parts of the paper to make it more focused, and providing experimental results on an additional dataset (SVHN). The paper did improve as a result. But while attenuated, the initial criticisms remain valid: the literature review and discussion remain short and too superficial. The peculiarities of the approach which grant it (modest) originality are insufficiently justified, theoretically and empirically, and not clearly enough put in the context of the whole body of prior work. Consequently the proposed approach feels very ad hoc. Finally, the additional experiments are a step in the right direction, but experiments on only MNIST and SVHN are hardly enough in 2018 to convince the reader that a method has universal potential and is more generally useful. Given the limited novelty, and in the absence of theoretical justification, the experiments should be much more extensive, both in the diversity of data/problems and in the range of alternative approaches compared to, in order to build a convincing case.
This paper studies the reliance of deep learning models on spurious correlations. In particular, the authors look at the quality of feature representations learned by models trained via ERM versus models trained using group robustness methods. They evaluate these feature representations by utilizing the Deep Feature Reweighting (DFR) procedure: retraining the last layer of the model on a held-out set which would likely not contain the spurious correlations present in the training set. This procedure helps reveal how much information about causal factors is present in the learned representations. The authors further explore how these feature representations are influenced by the model architecture, pre-training task and strategy, regularization (via weight decay, data augmentation), training length, and whether or not the model has been trained on the target data. They find that the quality of these representations often depends heavily on the choice of data augmentation, model architecture, and pre-training strategy, while regularization and training time may not be as helpful in improving the quality of said representations. The authors share results on the CelebA, Waterbirds, WILDS-FMOW, CXR, MultiNLI, and CivilComments datasets. Overall, the paper presents interesting results and insights. I have some minor comments which I hope the authors will address during the response period.

### Strengths:
- This work looks at an interesting and well-motivated problem.
- The experimental setup is well designed, and the results offer insights that would be useful to future researchers.
- The paper is well written and organized.

### Weaknesses:
- The experiments on NLP datasets are based on a BERT model. While I understand that the goal here is not to create a state-of-the-art model but to analyze the representations learned by a model, significantly better models (DeBERTa, ERNIE, T5, etc.) are out there that the authors could have used.
- There are several works in NLP that have looked at the problem of spurious correlations ([1,2,3,4] are just a few examples), addressing them and understanding when models weigh causal features vs non-causal features. The paper currently does not position itself well in that literature.

### Additional comments:
- Section 3, Preliminaries (Lines 99-102): This appears to be incorrect in light of [1]. In fact, as most machine learning tasks are anticausal, models will rely on spurious correlations regardless. As [1] show in their anticausal setup, a model will rely on spurious factors most of the time (unless the spurious features exhibit higher noise compared to the causal features).

[1] Kaushik, D., Setlur, A., Hovy, E. H., & Lipton, Z. C. Explaining the Efficacy of Counterfactually Augmented Data. ICLR 2021.
[2] Eisenstein, J. Uninformative Input Features and Counterfactual Invariance: Two Perspectives on Spurious Correlations in Natural Language. arXiv preprint arXiv:2204.04487, 2022.
[3] Veitch, V., D'Amour, A., Yadlowsky, S., & Eisenstein, J. Counterfactual Invariance to Spurious Correlations in Text Classification. NeurIPS 2021.
[4] Kaushik, D., Hovy, E., & Lipton, Z. Learning The Difference That Makes A Difference With Counterfactually-Augmented Data. ICLR 2020.

As this is an analysis paper, it is hard to understand the limitations and potential negative social impact, but I would urge the authors to think about potential negative impacts arising from misinterpretation of their analysis.
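Since the analysis hinges on the DFR step described at the top of this review, here is a minimal sketch of my understanding of it (the balanced subsampling details and hyperparameters below are my assumptions, not the authors' exact recipe):

```python
# Sketch of last-layer retraining as I understand DFR: freeze the backbone, then refit
# only a linear head on a group-balanced held-out set so no (class, group) cell dominates.
import numpy as np
from sklearn.linear_model import LogisticRegression

def dfr_last_layer(feats_heldout, y_heldout, groups_heldout, C=1.0, seed=0):
    """feats: (n, d) frozen-backbone features; groups: spurious-attribute/group labels."""
    keys = list(zip(y_heldout, groups_heldout))
    cells = {k: np.flatnonzero([kk == k for kk in keys]) for k in set(keys)}
    m = min(len(idx) for idx in cells.values())          # equal count per (class, group)
    rng = np.random.default_rng(seed)
    sel = np.concatenate([rng.choice(idx, m, replace=False) for idx in cells.values()])
    head = LogisticRegression(C=C, max_iter=2000)
    head.fit(feats_heldout[sel], y_heldout[sel])
    return head                                           # replaces the final linear layer

# Worst-group accuracy is then measured per (class, group) cell on the test set.
```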
<doc-sep>This paper considers deep learning in the common case where the training data contains spurious correlations. The main takeaway is that empirical risk minimization (alone) is sufficient to obtain state-of-the-art performance; specialized group robustness methods do not appear to provide a significant benefit. This is demonstrated on six datasets spanning both vision and text problems. The effect of the architecture, pretraining strategy, and regularization is also considered. Spurious correlations are a concern when fitting neural networks, so this paper tackles an important problem. Overall, I found the presentation to be quite good and the experiments fairly convincing. My main issue with the work in its current form is that the scope (and therefore potential impact) of the work is more limited than the title and introduction suggest: the spurious correlations studied are *labeled* properties of the inputs, rather than latent spurious features. In most cases, one does not have access to labeled attributes (or even class labels!) when fitting a neural network, and therefore this work has a narrower scope than expected. This being the case, I think it is notable that specialized group robustness methods appear to perform no better than ERM when it comes to learning in the presence of spurious correlations. In addition, the empirical observations regarding regularization and other effects of the base model are interesting, although many of them rely on DFR, which AFAIK is not peer reviewed. The DFR procedure is somewhat similar to what's done in contrastive learning, e.g. "Supervised Contrastive Learning," except there the second stage is performed on the original dataset. This often results in improved performance thanks to the contrastive objective. I'm curious how supervised contrastive learning would impact the results, both with the original dataset and with the "reweighted" one. I would have liked the analysis of pretraining to include additional experiments with text / BERT. A smaller concern is that this paper leans heavily on Deep Feature Reweighting (DFR), which appears in a recent arXiv preprint (Kirichenko et al. [22]). Unfortunately, reading that preprint is necessary to understand this work; the short description in S3 was not sufficient to follow along. It would be better to make this paper self-contained, especially given how simple the DFR idea is (e.g., define the reweighting dataset). I would have liked the scope of the paper to be defined a bit more clearly. In addition, while both vision and text datasets are used, the majority of the experiments are only on the image datasets. <doc-sep>This is primarily an empirical paper about learning in the presence of spurious correlations. The authors run a lot of tests showing that training a model with empirical risk minimization (ERM) followed by deep feature reweighting (DFR) (which retrains the last layer on a held-out set that doesn't have spurious correlations) yields results that are not too different from group robustness training (Group DRO). Thanks to the authors for the hard work on this paper. I like it overall, and think it is a valuable contribution.

Strengths
------------
- A large amount of insightful experiments
- Clear writing
- Sensible comparisons and conclusions

Weaknesses
----------------
- The conclusion of this paper hinges on the idea that: "If two different methods get roughly the same held-out performance, then they must be learning the same kind of thing."
I don't think that's necessarily true, and your experiments don't really prove it. The conclusion in lines 207-211 could be made stronger with other types of analyses. To have more certainty that it really is "better weighting of the learned features rather than learning better representations of the core features", you could, for example, actually extract the representations learned by ERM+DFR and GDRO and see if they look similar. This experiment could be tricky, but if you make sure to have identical weight initialization and run the experiment ~10 times, you could show whether the learned feature spaces are really similar or not.
- Why is early stopping important for RWY, RWG and GDRO? Are there some simple experiments you could do to elucidate this? Not required here.
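One concrete way to run that representation comparison (my suggestion; any representation-similarity measure would do) is linear CKA between the penultimate-layer features of the two models on the same inputs:

```python
# Linear CKA between feature matrices of an ERM+DFR model and a Group-DRO model evaluated
# on the same examples. Values near 1 mean the representations are essentially the same
# up to a linear transformation.
import numpy as np

def linear_cka(X, Y):
    """X: (n, d1), Y: (n, d2) features of the same n examples from two models."""
    X = X - X.mean(0, keepdims=True)
    Y = Y - Y.mean(0, keepdims=True)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# e.g. over ~10 matched-initialization seeds, report the distribution of scores:
# scores = [linear_cka(feats_erm_dfr[s], feats_gdro[s]) for s in range(10)]
```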
The paper shows that empirical risk minimization is sufficient to obtain good worst-group accuracies, and that specialized group robustness methods do not appear to provide additional benefits. The reviewers pointed out that the current work depends on DFR, which seems to require some additional data compared to group robustness methods. The reviewers also noted that the NLP experiments did not use more recent models, and the authors addressed these issues. Generally, the reviewers think this is a well-executed paper on an important problem, and are unanimous in accepting it.
The paper focuses on understanding the effects of adaptive learning rates and momentum. In particular, it proves that an adaptive learning rate can escape saddle points efficiently but does not select flat minima as well as SGD does. It also shows that momentum helps the training process by passing through saddle points, without affecting minima selection. The paper also proposes a new adaptive algorithm, named Adai (Algorithm 2), which uses parameter-wise adaptive inertia to accelerate training and finds flat minima as well as SGD does. Finally, the paper provides extensive numerical testing showing the benefits of Adai. I believe that the main ideas of the paper are interesting. However, I find that the presentation of this work is not very clear and somewhat confusing. In particular, the structure of the paper and the results presented in sections 2 and 3 are difficult to absorb. See my comments below. I understand the motivation of the authors and what they tried to communicate, but I find that there is no satisfactory explanation of the results presented in sections 2 and 3. The authors assume that the reader is familiar with the closely related recent work on SGD diffusion theory and do not provide enough details on the framework. For example, they use terminology like "Fokker-Planck equation", "divergence operator", and "diffusion matrix" that is not really standard in the area of adaptive methods. In addition, Assumption 1 on the second-order Taylor approximation near critical points is given without examples of interesting problems where it is satisfied. Also, in section 3, Assumptions 2 and 3 are used without further explanation of what exactly they mean. The authors provide a few details in the appendix on what the quasi-equilibrium approximation and the low-temperature approximation are, but this is not sufficient. How are these assumptions related to standard concepts in the area? What are their mathematical expressions? How do they relate to the stochastic gradients and the noise? I also find it a bit surprising that there is no formal presentation of the problem we are interested in solving and of the assumptions required to prove convergence. A statement of the minimization (or maximization) problem under study, together with the main assumptions, is missing from the paper. One of the most important contributions of the paper is the analysis of the new algorithm, Adaptive Inertia Optimization (Adai), proposed in Section 5. However, if one focuses on Theorem 4, which provides the convergence guarantees of Adai, it is clear that the analysis holds under very strong conditions/assumptions. For example, the authors assume both bounded variance and a bounded gradient of the objective function, which rarely hold in practical scenarios. Note that these conditions have already been shown to contradict special classes of non-convex problems, such as functions satisfying the Polyak-Lojasiewicz condition (the combination of these assumptions leads to an empty set of problems). Thus the theorem cannot hold for all non-convex smooth problems. As I mentioned in the main review, I believe that the main ideas of the paper are interesting. However, I find that the presentation of this work is not very clear and somewhat confusing. In particular, the structure of the paper and the results presented in sections 2 and 3 are difficult to absorb. See my comments below.
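To make the point about Theorem 4's assumptions concrete (this is my own back-of-the-envelope argument, not taken from the paper): if the objective satisfies the Polyak-Lojasiewicz condition $\\|\\nabla f(\\theta)\\|^2 \\geq 2\\mu (f(\\theta) - f^*)$ and the gradients are uniformly bounded by $G$, then $f(\\theta) - f^* \\leq G^2 / (2\\mu)$ for every $\\theta$, i.e. the suboptimality is bounded over the whole domain; even a one-dimensional strongly convex quadratic violates this. So combining bounded gradients with bounded variance indeed shrinks the admissible problem class drastically, and the authors should state clearly which problems Theorem 4 actually covers.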
<doc-sep>This paper disentangles the effects of the adaptive learning rate and momentum in Adam's learning dynamics, and proves that the adaptive learning rate is good at escaping saddle points but not good at selecting flat minima, while momentum helps escape saddle points and matters little to escaping sharp minima. Based on the analysis, the authors propose a novel optimizer, Adai. Compared to SGDM, Adai parameter-wise adapts the momentum hyperparameter to the (approximated) Hessians of saddle points, and is proved to escape saddle points and sharp minima quickly. The theoretical analysis of the Adam optimizer is based on the SGD diffusion theory, and the results confirm and explain the observation that Adam can sometimes converge faster but generalize worse than SGDM. The proposed Adai optimizer is theoretically sound, and demonstrates slightly better generalization performance than SGDM (and significantly better than Adam) on image classification tasks. Despite estimating the moments in a similar way to Adam, the proposed Adai optimizer seems more akin to SGDM, with the only difference being its adaptive momentum; it doesn't use adaptive learning rates, which is a main feature of Adam. Moreover, as shown in Figs. 1 to 3 and 10, the training curves of Adai very much resemble those of SGDM. Therefore, to further improve this work, more comparisons should be made between Adai and SGDM rather than between Adai and Adam. In particular, it would be interesting to see if the performance gap between Adai and SGDM results from faster convergence (as suggested by the theory), and therefore a convergence comparison between Adai and SGDM, like the one conducted between Adai and Adam (Fig. 11), would be helpful. This paper provides new insights into the performance of Adam, and proposes a novel optimizer that both converges fast and generalizes well. Further improvements can be made by comparing the proposed method to SGDM more thoroughly. <doc-sep>This work analyzes the dynamics of momentum SGD and Adam in escaping saddle points and sharp minima, based on the diffusion theoretical framework proposed in (Xie et al. 2021b). The authors prove that momentum provides a drift effect around saddle points and does not affect flat minima selection (for SGD), and that while Adam escapes saddle points efficiently, it does not favor flat minima as well as SGD. The analysis explains some empirical observations about SGDM and Adam. Motivated by the analysis, the authors propose the adaptive inertia (Adai) method, which can approximately achieve Hessian-independent momentum drift (escaping saddle points fast) and favors flat minima as well as (momentum) SGD. This paper is generally well written. The diffusion theoretical analysis does provide some insight into the empirical performance of momentum SGD and Adam. The authors also put in effort to conduct numerical verifications of their theoretical statements, which is highly appreciated. However, I think that this work does not completely "disentangle" the effects of the adaptive learning rate and momentum, since the work analyzes Adam, which fuses these two algorithmic components. It would be better to discuss the effect of each component in Adam separately (probably by setting some parameters to zero). The authors then propose Adai, which achieves approximately Hessian-independent momentum drift without damaging flat minima selection (by the way, the proof of Proposition 3 is missing; if it is a direct consequence of Theorem 2, it would be better to say so).
The construction of Adai is interesting, and its effectiveness is justified by the empirical experiments. However, it seems to me that this contribution is a bit disconnected from the main story, as Adai does not use an adaptive learning rate. Some revision (probably changing the title?) might be good to make the story clearer and more fluent. Typos: - Missing reference on page 7, "Note that an existing “adaptive momentum” method (?)". - The last sentence, "better than popular Adam and SGD". This work provides some new theoretical insights for momentum SGD and Adam, which are interesting and important. The authors then propose adaptive inertia based on these insights, which shows good performance. Some revision is needed to make the story clearer (see main review). <doc-sep>This paper uses SDEs to study the behavior of some algorithms when the iterate is at a critical point. A variant of Adam is given at the end. The paper draws some conclusions about the adaptive learning rate and momentum, claiming that - QUOTE momentum matters little to escaping sharp minima UNQUOTE - QUOTE adaptive learning rate is not good at selecting flat minima UNQUOTE. Here are some of the questions and concerns: From my perspective, the theoretical analysis is not rigorous and skips a lot of details. For example, in Section 4 (page 5), the paper claims that the continuous-time dynamics of Adam can be written as equation (6). However, a detailed derivation to support the claim and a proof that the SDE is a valid one for Adam are missing from the paper. Indeed, this can be seen on page 5, where the authors say they are analyzing an "idealized" Adam. It is not clear why this is an idealized one, and the writing reads as if something is being hidden under the rug. We can see that some statements about Adam are made, but these conclusions are drawn from an SDE which does not really correspond to Adam. On the other hand, an SDE for Adam does exist in the literature, e.g. https://arxiv.org/pdf/2010.05627.pdf. If this paper really wants to claim/argue something about Adam, then a careful analysis of the discretization error between the solution of the proposed SDE and the discrete-time Adam should be provided in the paper. For another example, on page 3, it states QUOTE the diffusion matrix D is independent of $\\theta$ near critical points UNQUOTE. But the proof is not provided. How is it independent? It looks like some approximations were used there. The paper should prove this "independence". Another concern is about (8). The paper should provide a detailed derivation showing that (8) really describes how the distribution evolves when the underlying dynamic is (7). Currently equation (8) seems to come out of nowhere. How does Assumption 1 help to show (8)? Also, some places in the paper are not clear: (a) (Second paragraph on page 1) QUOTE all previous works have not touched the saddle-point escaping property of the dynamics UNQUOTE. Apparently there are quite a few works regarding saddle-point escape by SGD, SGD with momentum, and Adam. The authors might want to explain what they meant here. (b) There are two approximations in (3). It would be more helpful to explain in detail how the approximations are made. There are some descriptions below (3), but they are not very clear. (c) (Last paragraph in Section 3) QUOTE The momentum does not affect flat minima selection in terms of the escape time UNQUOTE. This is another confusing statement. What does "momentum does not affect flat minima selection" mean?
Does it mean that SGD with and without momentum converge to the same point? What is the definition of "flat minima"? (d) (Second-to-last paragraph in Section 4) QUOTE Adam has $\\log(\\tau)=O(H_{ae}^{-1/2})$ ... SGD and SGD+momentum both have $\\log(\\tau) = O(H_{ae}^{-1})$ UNQUOTE. It seems that the conclusion right below this sentence would be reversed if $|H_{ae}|>1$. Is something wrong here? (e) (Theorems 1 and 2) The authors show some guarantees about the variance at time $t$ when the iterate of the algorithm follows a Gaussian distribution. But does a higher variance of the Gaussian distribution imply a faster saddle-point escape? The authors might want to add some discussion of the connection to the notion of saddle-point escape in the literature (e.g., Jin et al. 2017). (f) After reading the paper, I am still not sure how the effects of the learning rate and momentum were "disentangled" in the analysis. I see some analysis of the behavior of SGD, Adam, and SGD+momentum at critical points. It would be more helpful if the authors could explain why the effects of the learning rate and momentum can be isolated. The presentation and statements are confusing in my opinion.
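To illustrate why I am asking about (d) and (e), here is a small toy experiment one could run (entirely my own construction, on a quadratic saddle; the "adaptive" update is an RMSProp-style stand-in, not the paper's idealized Adam), which makes the kind of escape-time comparison the paper discusses directly measurable:

```python
# Toy escape-time comparison on a 2-D saddle f(x) = 0.5*x0^2 - 0.5*h*x1^2 with small
# negative curvature h, starting slightly off the saddle along the unstable direction.
import numpy as np

def escape_steps(update, h=1e-3, lr=0.1, x1_0=1e-6, max_steps=2_000_000):
    x, state = np.array([1.0, x1_0]), None
    for t in range(max_steps):
        g = np.array([x[0], -h * x[1]])          # gradient of the saddle
        x, state = update(x, g, lr, state)
        if abs(x[1]) > 1.0:                      # escaped along the unstable direction
            return t
    return max_steps

def gd(x, g, lr, s):
    return x - lr * g, None

def momentum(x, g, lr, s):                       # heavy-ball drift
    v = 0.9 * (s if s is not None else 0.0) + g
    return x - lr * v, v

def rmsprop(x, g, lr, s):                        # adaptive (per-coordinate) step sizes
    v = 0.999 * (s if s is not None else g**2) + 0.001 * g**2
    return x - lr * g / (np.sqrt(v) + 1e-8), v

for name, upd in [("GD", gd), ("heavy ball", momentum), ("adaptive", rmsprop)]:
    print(name, escape_steps(upd))
```

Even on this toy, the adaptive step escapes orders of magnitude faster than GD, with heavy-ball momentum in between; relating the paper's variance-based statements to such directly measured escape times would address point (e).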
The paper aims to explain the perceived generalization gap of Adam compared to SGD. To this end, the paper decouples the effect of the adaptive per-parameter learning rate and the momentum aspect of Adam. The paper shows that while adaptive rates help escape saddle points faster, they are worse when considering the flatness of the minima being selected. Further, momentum has no effect on the flatness of minima but again leads to better optimization by providing a drift that helps evade saddle points. They also provide a new algorithm, Adai (based on inertia), targeted at better generalization for adaptive methods. The paper definitely provides an interesting perspective, and the approach of decoupling the effects of momentum and the adaptive learning rate and studying their efficacy in escaping saddle points and in the flatness of the selected minima seems a very useful one. The primary reason for my recommendation is the presentation of the paper in terms of the rigor of the assumptions used to establish the results. These aspects have been highlighted by the reviewers in detail. I suggest the authors carefully revisit the paper and improve the presentation of the assumptions, adding rigor as well as justifications where appropriate, especially in light of the non-standardness of these assumptions in the optimization literature.
This paper derives a counter term to the gradient flow ODE formulation that reduces the discretization error of Euler's method, which is gradient descent. When this correction term is expanded as a Taylor series, adding a select number of terms reduces the discretization order accordingly. This is then used to analyze the behavior of GD under symmetry constraints, specifically scale- and translation-invariant parameters. Specifically, this adds learning-rate-dependent correction terms to the decay rates of certain quantities, which matches gradient descent in practice.

Pros:
- Quite an interesting take which attempts to correct the theoretical ODE formulation in order to match practice (as opposed to bridging this gap by using higher-order solvers, for example).
- The motivation, process, and theoretical results are presented very well. I could follow and understand every result (just the results; not the proofs) despite not being an expert in the theory of gradient descent.

Cons:
- I imagine the theory (of estimating the discretization error of ODEs) has been done before, though perhaps not in this exact context. The zeroth-order term (Eq 9) has shown up in machine learning studies before.
- It's not clear to me if we gain anything from using the higher-order terms in the derivation, as the analysis and experimental results all use only the zeroth-order term. <doc-sep>The authors derive an Equation of Motion, i.e., a continuous differential equation that matches the discrete-time dynamics of gradient descent more closely. They do so by adding a counter term to Gradient Flow that cancels out higher-order discretization errors in DNNs; this counter term is derived using backward error analysis, and more precisely it is the solution to Equation 6 in the paper. Because they use backward error analysis, they are also able to quantify the discretization error of the GD approximation of GF with the counter term, and hence also provide a bound on the learning rate such that this discretization error is small. The authors apply their Equation of Motion to translation- and scale-invariant layers and show that their theoretical predictions better match GD. My main concern is that this paper does not provide any interesting new result. The main novelty of the paper is the general form of the counter term (derived in Theorem 3.3), as opposed to previous work, for example Barrett and Dherin (implicit gradient regularization), which only uses the first-order term as the regularization term. While the authors mention that they derive the discretization error (in Corollary 4.1), the precise formulation is not provided, and the rate is given as an upper bound (using big-O notation), which I believe is an artifact of standard series expansion results. Furthermore, for most of the results on the discretization error bounds and the upper bound on the learning rate, the authors assume that the counter term is either equal to zero or equal to the first-order counter term (i.e., the term in equation 9).
With the counter term equal to 0, the analysis matches a lot of the previous work (e.g., Elkabetz and Cohen [18]), and with the first-order term the main theoretical results are very similar to the ones already established in Barrett and Dherin (who introduce the first-order counter term as the implicit regularizer and discuss the error in their Theorem 3.1). Besides this, the authors do apply their analysis to characterize the learning dynamics of scale- and translation-invariant layers and show that, with the inclusion of the first-order term (adding higher-order counter terms is going to be computationally expensive), they are able to better predict the decay of the parameter norm, which is interesting but not that surprising. Yes, the authors have discussed the limitations of their work. <doc-sep>This paper deals with the discrepancy between actual discretized gradient descent and its continuous version, i.e., gradient flow, in order to describe the equation of motion of the learning dynamics more precisely. The discrepancy is formally introduced using the notion of backward error from numerical analysis. The authors derive a counter term which can compensate for this discrepancy of the gradient flow and can thus describe the actual discretized trajectories in a continuous manner. While the derived counter term is a complicated functional integral equation, it can be solved analytically (for all orders) by assuming the underlying solution is a power series. As an application, the authors use the derived dynamics with the proposed counter term to investigate scaling- and translation-invariant layers. [Note] Because I am not an expert on learning theory, my evaluation might not be exhaustive. Also, I did not read the proofs in the supplementary material carefully. [Strengths] To me, the derived counter term is novel and seems useful for predicting and interpreting the complicated learning dynamics of deep neural network models. Although there have been previous studies that incorporate some correction terms with respect to the backward error, to my knowledge they are restricted to the first-order compensation $\\frac{1}{4} \\nabla \\|\\nabla f(\\theta)\\|^2$, which is generally called implicit gradient regularization. The proposed counter term is generalized to higher orders and recovers the previous studies, as in (9). While the main result (8) seems to be a known technique in the numerical analysis field, I would like to give appropriate credit to the authors for the contribution of introducing such a technique to the deep learning field. [Weaknesses] The authors address only full-batch gradient descent. The authors also mention this limitation in the Conclusion and Limitations. I am not sure whether the approach can be easily generalized to the mini-batch stochastic gradient descent method. While the authors theoretically prove that high-order corrections are required to cancel the leading order of the discretization error, it would be great if the authors (1) experimentally showed the discrepancy between the GF with the proposed correction and that with a first-order correction, and (2) demonstrated that the former approximates GD well compared to the latter, e.g., in Figure 2 or Figure 4. The proposed method is not guaranteed for GD with a large learning rate, and thus cannot be used to explain some interesting phenomena, e.g., the regularization effect of an initially large learning rate.
However, I think this is not a crucial drawback of the paper, considering the essential assumption of GF. The paper is very dense and thus hard to read; a journal format might be more suitable for a clear presentation of this work. The authors discuss the limitations of the proposed method, e.g., the lack of analysis concerning mini-batch stochastic GD and other optimizers beyond GD, in the Conclusion and Limitations section. It would be nice if the authors also addressed the questions raised above. <doc-sep>This paper is concerned with a theoretical understanding of modelling the dynamics of gradient descent with a differential equation. Previous work (Gradient Flow) describes the differential equation as $\\frac{d\\theta}{dt} = -\\nabla_{\\theta} L(\\theta)$, whose Euler discretisation is Gradient Descent: $\\theta_{t+1} = \\theta_{t} - \\eta \\nabla_{\\theta}L(\\theta_t)$. However, discretisation error exists, such that Gradient Flow and Gradient Descent diverge. This paper derives a counter term to Gradient Flow, labelled by $\\xi$: $\\frac{d\\theta}{dt} = -\\nabla_{\\theta} L(\\theta) - \\eta\\xi(\\theta)$. The counter term is a functional integral; the paper approximates it with a series solution in $\\eta$, with a recursive relationship to get from term $k$ to term $k+1$. The new dynamics are called the Equation of Motion (EoM). A limit for the learning rate is also derived, which allows accurate simulation of gradient descent using the EoM with larger step sizes. Finally, these findings are tested on scale-invariant and translation-invariant layers, and the results support the theory.

Strengths: This is not an area I have significant expertise in; however, overall this paper is very good in my opinion. Specifically:
- The paper shows many impressive theoretical results
- The experiments are extensive and support the theory
- The motivation for the paper is very clear

Weaknesses: Again, I believe the paper is very good. I think it presents good theoretical results, with sufficient experiments to support them. The only weaknesses overall are in the writing style and presentation: I think the paper is quite math-heavy currently, which makes it less accessible. However, this is down to personal preference. For example, lines 239-245 (Definitions) feel like quite a complicated way of conveying their meaning. One definition is $\\alpha_{\\mathcal{A}} = \\alpha I_\\mathcal{A} + I_{\\mathcal{A}^{C}}$, but it is easier in my opinion to say $\\alpha_{\\mathcal{A}}$ is $\\alpha$ for the parameters in layer $\\mathcal{A}$ and $1$ for the others. This can be extended to most of the definitions in this paragraph. I think the best way the paper can be improved is by having as much intuition as possible in the main text, with theorems included, and then possibly having more mathematical detail in the appendix. Other small points concern the presentation of results. Figure captions don't have **Figure n** in line with the caption, but over to the left, which feels weird. Table 1 could also be improved: rather than listing the decay rates, it might be more informative to list the differences (and relative differences) in decay rates between GF & GD and EoM & GD. The authors have been upfront about the limitations of their work. These are given in the conclusion and provide a nice avenue for future research; they concern other optimizers and the fact that minibatches are not accounted for in the current work. I cannot think of any further limitations.
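As a sanity check of the headline claim (entirely my own toy construction on a quadratic loss, not the paper's experiments), one can verify numerically that adding just the first-order counter term shrinks the per-step gap between gradient descent and the continuous flow from second to third order in the learning rate. On $L(\\theta) = \\frac{1}{2}\\theta^T A \\theta$ the first-order term $\\frac{1}{4}\\nabla\\|\\nabla L\\|^2$ reduces to $\\frac{1}{2}A^2\\theta$:

```python
# Toy quadratic check: compare one GD step against plain gradient flow (GF) and against
# the flow with the first-order counter term, each evaluated at time t = eta.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T / 5 + np.eye(5)            # random symmetric positive-definite "Hessian"
theta0 = rng.standard_normal(5)

for eta in [0.04, 0.02, 0.01]:
    gd   = (np.eye(5) - eta * A) @ theta0                     # one GD step
    gf   = expm(-eta * A) @ theta0                            # gradient flow at t = eta
    eom1 = expm(-eta * (A + 0.5 * eta * A @ A)) @ theta0      # flow + first-order term
    print(f"eta={eta:.3f}  |GD-GF|={np.linalg.norm(gd - gf):.2e}  "
          f"|GD-EoM1|={np.linalg.norm(gd - eom1):.2e}")
# Halving eta shrinks |GD-GF| roughly 4x (second order) but |GD-EoM1| roughly 8x (third order).
```

A plot of exactly this kind of gap, with and without the higher-order terms, would directly address weakness (1) above.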
Reviewers were unanimous in recommending that the paper be accepted, and I accordingly recommend the same. I encourage the authors to take into account suggestions made by reviewers so as to further improve the text in the camera-ready version.
The paper proposes an approach that uses data relabelling in meta-RL for better sample efficiency and to enable training on sparse-reward environments. Specifically, the proposed method combines PEARL [1] with a modified version of HIPI [2], where the trajectories chosen for relabelling are those effective for adaptation, and not necessarily those high in reward themselves.

[1] Efficient Off-Policy Meta-RL via Probabilistic Context Variables (Rakelly et al.)
[2] Rewriting History with Inverse RL (Eysenbach et al.)

Strengths

1. Usefulness of relabelling
The problem considered is an important one: even though hindsight relabelling is standard in multi-task RL and has been shown to enable learning on sparse-reward environments (which are otherwise very difficult to solve), this approach hasn't been applied to the meta-RL setting yet. This is despite the fact that meta-RL also considers a multi-task distribution, and can benefit from explicitly using data from a different task and relabelling it under the corresponding reward function. The mathematical formulation of the approach closely follows HIPI [2], with the difference that the post-adaptation trajectory return is considered instead of the current trajectory return, to align with the meta-learning objective. The authors show experimentally that current meta-RL approaches do not begin to make progress on sparse-reward tasks, demonstrating the importance and effectiveness of relabelling.

2. Extent of Evaluation and Analysis
The paper includes an evaluation on 5 different sparse-reward environments and 3 dense-reward environments, which shows that the relabelling scheme offers benefits over current meta-RL approaches mainly in the sparse-reward setting. The authors also include ablations/analysis of specific components, such as using a learned reward instead of the true reward, using hardmax instead of softmax for sampling the relabelling task, etc. The paper is well written, and the presentation is clear and well motivated.

Weaknesses

1. Small performance gap with HIPI, simplistic environments
Out of the 8 experimental domains chosen, the performance of the proposed approach (HFR) is significantly better than HIPI on only two domains (ant-goal and sawyer-push). This indicates that, for most environments, the bulk of the benefit comes from the relabelling scheme, either because the adaptation procedure doesn't actually lead to better performance, or because the environments are too simple to require much adaptation to held-out tasks. Given that performance is much better than HIPI on the hardest environments (ant-goal and sawyer-push), I am inclined to think the issue is the latter rather than the former, which can be addressed by evaluating on harder environments. These could include other single-family robotic tasks from Meta-World (e.g., sawyer-door-open, sawyer-box-close, etc.). Even better would be meta-training across task families, using Meta-World ML-10 or ML-45. This would test adaptation to tasks that are semantically different, and would make the paper a lot more compelling.

The approach introduces relabelling (which has already been shown to be important in multi-task RL) into meta-RL, and shows superior performance on sparse-reward environments. The paper would be more compelling if it included evaluations on more challenging environments, to establish the importance of the adaptation component.
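For concreteness, here is my schematic reading of the relabelling step (not the authors' code): score each stored trajectory with an estimated post-adaptation return ("utility") for every candidate task and sample the relabel task from a softmax over those utilities. `utility_fn` below is a placeholder for the paper's Q-function-based estimate, and the temperature is my own knob for connecting the softmax and hardmax variants compared in the ablations.

```python
# Schematic hindsight relabelling by estimated post-adaptation return (utility).
import numpy as np

def relabel(trajectories, tasks, utility_fn, temperature=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # U[i, j] = estimated return after adapting with trajectory i on task j
    U = np.array([[utility_fn(tau, psi) for psi in tasks] for tau in trajectories])
    logits = U / temperature
    logZ = np.logaddexp.reduce(logits, axis=1, keepdims=True)   # log partition per trajectory
    q = np.exp(logits - logZ)                                   # q(psi | tau) proportional to exp(utility)
    picks = [rng.choice(len(tasks), p=row) for row in q]
    # each trajectory's rewards would then be recomputed under its sampled task
    return picks, q

# temperature -> 0 recovers the "hardmax" ablation; large values approach uniform relabelling.
```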
<doc-sep>This paper proposes a way to share data across different tasks in meta-reinforcement learning (meta-RL), where the data from one task is reused in another task by relabeling the rewards. Based on the HIPI method [1], the authors construct a relabeling distribution to relabel the pre-adaptation trajectories from one task to be used for another task. The relabeling probability of a trajectory is chosen to be proportional to the exponentiated utility function, which is defined as the expected return after the agent uses that trajectory to adapt. In practice, the post-adaptation return is approximated using the learned Q function. The authors apply this relabeling distribution to PEARL, an existing off-policy actor-critic style meta-learning algorithm. The authors conduct experiments on simulated robotics tasks. The results suggest that the proposed method outperforms prior methods on sparse-reward tasks, while performing roughly the same on dense-reward tasks.

References
[1] Eysenbach, Benjamin, et al. "Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement." arXiv preprint arXiv:2002.11089 (2020).

Overall I think this paper presents an interesting idea for sharing data between tasks of a meta-RL problem. The paper is well written and the ideas are presented clearly.

Pros:
1. I find the main insight of the paper simple and intuitive. The idea that we need to relabel not according to how much return we achieve but according to how much information we can gather for task identification gives us a clear distinction between multi-task RL and meta-RL. The derivation of relabeling according to the exponentiated post-adaptation return follows naturally.
2. The ablation study in the paper is very informative. It gives us clear comparisons, showing which components are more important. From the ablation study, it seems that using the partition function and the softmax relabeling distribution are the most important components.
3. The paper is well written. The insights, ideas, algorithms and experiments are easy to follow.

Cons:
1. I am somewhat skeptical about the approach of using the learned Q function to estimate return after adaptation. In the base meta-RL algorithm PEARL, the context encoder is trained to identify the task from a distribution of tasks, producing a posterior distribution over the context z corresponding to that task. This means that, given the relabeled trajectory, even if the context encoder predicts the context corresponding to a wrong task, as long as the produced context is within the distribution of tasks, the expected return will still be high, because the policy is trained to also do well on that wrong task. Therefore it is not clear to me why using the learned Q function is a good way to estimate return on that specific task. In order to verify this, I'd like to ask the authors to include the following experiments. First, train the proposed algorithm to convergence and freeze the weights of the context encoder. Then train an N-way classifier on top of the context encoder to classify the context into one of the N training tasks without using any relabeled trajectories, and report the accuracy of the classifier. Finally, relabel the trajectories according to the proposed method and report the classifier's accuracy on the relabeled trajectories.
If the relabeling mechanism using the learned Q function correctly captures the task information, we would see that the classifier's accuracy on the relabeled trajectories is comparable to that on the true trajectories. In fact, this experiment could also lead to an even simpler relabeling strategy: directly use the true task's probability under the classifier's prediction as the source of the relabeling signal.
2. The empirical performance of the proposed method does not seem very strong. Only on 3 of the 5 sparse-reward tasks does the proposed method significantly outperform the baselines, and the proposed method does not show much improvement on dense-reward tasks. Given these limitations, I'm leaning slightly towards not accepting the paper. I'd highly encourage the authors to conduct the experiment I suggested in order to verify that the proposed method is indeed capturing the task information correctly.

## Update After Author Response
The authors conducted the additional classifier experiments I requested, and the results suggest that using the learned Q function to estimate returns is highly informative about the task. Therefore my main concern about the proposed method has been addressed, and I'm now leaning towards accepting the paper. The paper presents an interesting idea about reusing data across tasks in meta-RL. The idea is very intuitive and the paper is well written. However, I'm not sure whether the approach used to implement the idea of the paper really does what the authors claim it does. Therefore I'd like to see more evidence before I can recommend accepting the paper. <doc-sep>This paper studies task relabelling in hindsight to increase the efficiency of meta-reinforcement learning. The authors propose a strategy for calculating a distribution over tasks for which a particular batch of data would be useful for adaptation, and sample from this distribution to construct a relabelled batch which augments the training data. The authors show empirically that this improves sample efficiency over more naive relabelling schemes, particularly for sparse-reward tasks. A series of ablations further justifies several design decisions or investigates robustness to hyperparameters. I like this paper overall. The motivation is sound: meta-RL is almost by definition slow, since it uses a slower timescale for meta-learning than for the fast learning or adaptation, so data-efficient methods are key. Task relabelling, as in multi-task or goal-conditioned RL, makes a lot of sense in this context. The particular proposed method seems reasonable, although I have some concerns about the detail of the exposition – I found section 4.1 fairly difficult to follow. First, it's not 100% clear to me how to map eq(1) to the objective written in terms of utilities, because eq(1) does not define where tau_pre comes from. Presumably this is just following pi_theta, and the conditioning of q(tau | psi) on psi only results in differing rewards in tau, not differing state-action sequences? Then, most importantly for understanding this section, I don't follow why the objective for (theta, phi) should be maximised by adjusting this q for fixed (theta, phi). The paper says this "facilitates alignment with the goals of the meta-learner" but I'm not sure what this means. The derivation then continues in a very brusque manner. I'm not a fan of "it is easy to show": in general, if it is easy, rather write it out yourself (in an appendix if need be for space) or cite appropriately.
Eventually we arrive at an ‘optimal’ relabeling distribution but I don’t understand in what sense it is optimal due the previous confusion. It could be I’m missing something simple or these things are all straightforward and clear to a reader with the right context. However, I encourage the authors to substantially clarify and elaborate this section to engage with a broad audience. The issue that I have more intuitively with the method is that the optimal task inference should depend on the true distribution of tasks. By altering this distribution through relabeling, it seems it would change the optimal (theta, phi). Can the authors elaborate on whether or not this should be a consideration, perhaps by clarifying the exposition given in S4.1? The implementation of the approach is quite neat. I like the use of PEARL’s particular type of value function to efficiently estimate the value of the post-adaptation policy without sampling any fresh transitions. I also like the empirical study. The performance gains seem substantial in several tasks, and I appreciate the credible baselines which are more naive but not just vanilla PEARL without any task relabeling. I appreciated the informative ablations. Minor comments or questions: - Paragraph 2 of the intro says meta-RL is "inherently on-policy": this is incorrect. - Why relabel with just one task sampled from q(psi|tau)? Why not several samples, or weighted samples? - In Algo 1, maybe use a different letter to distinguish N in GetLogPartition and in N in ComputeUtility - The paper would benefit from more details on the setup with a learned reward function. ---------------------- The authors were able to clarify the points that had confused me in my initial reading. I am persuaded that the optimal meta-learned solution will not be biased by the proposed relabelling; and that the derivation is sound. Optimising the relabelling distribution for the immediate post-adaptation returns makes sense as a somewhat myopic heuristic to accelerate meta-learning. I also appreciate the additional experiment carried out for HrwG. Further, while the connection to prior work is close in many ways, I believe the adaptation of the method for this context is sufficiently novel and effective to warrant acceptance. The work is well-motivated intuitively, but the mathematical justification for the specific method is difficult to follow (so I cannot quickly verify its soundness). The empirical study is well done overall, so I lean to accept the paper but would likely increase my score and confidence if the authors can clarify the theoretical motivation for their relabeling strategy. <doc-sep>The paper proposes a trajectory relabeling method for meta Reinforcement Learning (meta-RL), aiming to share some of the collected trajectories to improve sample efficiency during meta-training. The relabeling method is built on HIPI (Eysenbach et al., 2020). Instead of relabeling the trajectory based on the total reward as in HIPI, the paper argues that in meta-RL, the metric of interest for trajectories from different tasks is their usefulness for task-identification rather than returns. The paper further proposes a meta-RL algorithm based on PEARL (Rakelly et al. 2019). The experimental results on several sparse-reward tasks show that the method outperforms other relabeling methods as well as PEARL. (a). I am a little concerned with the novelty of the paper. 
The authors made an interesting point that compared to multi-task RL, the objective in meta-RL is to learn to learn a new task, so the metric of interest for relabeling trajectories in meta-RL is their usefulness for task-identification, rather than the returns like in multi-task RL (Section 4, page 4). The paper’s main contribution is a trajectory-relabeling algorithm in meta-RL setting based on this intuition. However, the proposed algorithm still seems to try to relabel trajectories based on their returns on other tasks. Specifically, the resulting learning objective Equation (8) is quite similar to that of HIPI (Eysenbach et al., 2020), as the utility function similarly aims to maximize the expected return of the trajectory on the new task, which I think would be exactly the total reward of trajectory on the new task in the multi-task setting. This seems to be contradictory to what the papers says about the difference of relabeling trajectories of previous work in multi-task rl and relabeling trajectories in meta-RL proposed in this work. Plus, the actual implementation looks like a straightforward combination of HIPI and PEARL (Rakelly et al., 2019) to me. (b). The proposed algorithm makes an important assumption that the actual reward function is known for each task in this meta-RL setting. This makes the meta-RL problem setting confusing as the paper also says the tasks share the same dynamics and only differ in the reward function. But like the paper mentions, there may exist some scenarios where this assumption is reasonable. The experimental results show that the proposed algorithm improves performance compared with other relabeling methods (HIPI and random) in such settings. First, I think there might exist methods that, under the same assumptions, are able to do better than meta-RL algorithms. For instance, one can relabel all the collected data with the new task reward and run some kind of offline RL algorithms on it (without meta-learning). In my opinion, that would be another good baseline to strengthen the author’s claim under the same assumptions. Secondly, in the experiment section the paper mentions a variant of the proposed method that also considers the scenario where the true reward function cannot be queried for individual transitions. This is a more interesting setting and I think the authors should elaborate on this part (e.g. how do you learn the reward functions? Is there anything specifically designed for meta-RL settings?) And it would be better to show more experimental results under these settings. For instance, the author can compare to the state-of-the-art meta-RL algorithms under such more common settings, especially in sparse-reward environments where the proposed algorithm is more competent. Potential baseline: 1. MetaCURE: Meta Reinforcement Learning with Empowerment-Driven Exploration, Zhang et al., ICML 2021; 3. Towards Effective Context for Meta-Reinforcement Learning: an Approach based on Contrastive Learning, Fu et al., AAAI 2021. (c). I have some minor comments listed below: 1. Equation (4), On the left hand side, $j$ should be superscript instead of subscript. 2. Figure 4, the actor $\\pi$ and critic $Q$ should be parameterized using different denotations, instead of jointly using $\\theta$. 3. Figure 6, could the authors explain why randomly relabeling the trajectories can achieve competitive performance or even better performance than HIPI and PEARL? 4. The adaptation procedures listed on page 6 before equation (9) confuse me. 
Could the authors provide an algorithm box for the meta-test phase (maybe in the appendix)? The idea in the paper is well presented and carefully investigated. The proposed method is simple and effective. However, I am not quite convinced about the novelty of the proposed idea and I think the experimental settings can be improved to strengthen the paper’s claim.
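To make the relabelling mechanism discussed in the reviews above concrete, here is a minimal illustrative sketch (not the authors' implementation; function and variable names are hypothetical) of a softmax relabelling distribution over tasks, where each task's utility for a trajectory would be an estimate of the post-adaptation return, e.g. obtained from the learned Q function:

```python
import numpy as np

def relabel_distribution(utilities, temperature=1.0):
    """Softmax over per-task utilities for a single trajectory.

    utilities: array of shape (num_tasks,), e.g. Q-based estimates of the
    post-adaptation return if the trajectory were relabeled for each task.
    Returns a probability vector q(psi | tau) over candidate tasks.
    """
    logits = np.asarray(utilities, dtype=np.float64) / temperature
    logits -= logits.max()               # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Toy usage: a trajectory that looks most informative for task 2.
utilities = [0.1, 0.3, 1.2, 0.2]         # hypothetical U(tau, psi_i) values
q = relabel_distribution(utilities)
relabeled_task = np.random.choice(len(q), p=q)
```

A trajectory would then be relabeled by sampling a task from this distribution, which concentrates probability mass on tasks for which the trajectory is most useful for adaptation.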
This paper proposes Hindsight Foresight Relabeling (HFR), an approach for reward relabeling for meta RL. The main contribution is a measure of how useful a given trajectory is for the purpose of meta-task identification as well as the derivation of a task relabeling distribution based on this measure. Reviewers agreed that the paper tackles an interesting problem and found the main insight to be simple and intuitive. While the initial reviews raised some concerns regarding novelty, the performance gap, and using the learned Q-function to estimate post-adaptation returns the rebuttal did a good job of addressing these concerns. Overall, the paper proposes a non-trivial extension of hindsight relabeling to meta RL and while the results could be stronger I think the paper provides useful ideas and insights so I recommend acceptance as a poster.
This paper introduces a task of joint visual-linguistic grammar induction from parallel image-text data, presents models and metrics for the task, and shows strong empirical results. ### Strengths - As far as I know, this is the first paper that proposes joint visual-linguistic grammar induction in a real-world setting (in contrast to synthetic settings; Hong et al., 2021). - The approach and the evaluation process are solid and make a lot of sense to me. - The visually grounded parsing results are quite impressive. ### Weakness - My major concern is about the model selection process and the potential unfair comparisons to existing work. - Model selection: If I understood correctly, for text parsing, the best models are selected w.r.t. to the parsing performance on a 1000-example dev set (Appendix F). \\ This is an unrealistic setting (see https://aclanthology.org/2020.emnlp-main.614.pdf for discussions; in short, for any fancy unsupervised parsing model that uses a labeled development set, a supervised model trained on these development examples should be considered as a strong baseline) -- introducing unsupervised criteria for model selection is more important than our initial impression. - Unfair comparison: CLIORA, the model proposed in this paper, uses DIORA as initialization, which uses ELMo to initialize word embeddings and the PTB labeled development set for model selection. This means that CLIORA has seen far more text than other baselines (VG-NSL, C-PCFG, VC-PCFG, and so on) and human language learners. \\ This issue also undermines the authors' arguments about potential links to how humans learn language. I expect either a CLIORA trained from scratch (without DIORA initialization) or weakened arguments about the relationship between the current CLIORA and human language learning. - There seem to be some confusion on basic linguistic concepts, e.g., nonterminal vs. terminal symbols, and a few typos that affects smooth understanding (please see also detailed comments below). ### Other comments and questions - Introduction: "These works, however, fail to consider a unified VL structure, nor have they demonstrated impact on visual understanding." \\ I don't think I necessarily agree with this statement, especially regarding Hong et al. (2021). Despite that there is a clear gap between their dataset and the real world settings, they are aligning the "visual grammars" to language grammars, yielding an arguably unified VL structure. - Introduction: "The non-terminal symbol of a conventional constituency structure is a category label from a limited set (e.g., the set of part-of-speech (POS) tags) (Hopcroft et al., 2001). " \\ Do you mean *terminal* symbols here? We usually refer to POS tags (to clarify, phrase tags are not POS tags) by preterminal or terminal (depending on whether the phrase-structure grammar is lexicalized, i.e., whether it's considering real words or just POS tags), and refer to the phrase nodes by nonterminal nodes/symbols (e.g., NP, PP). It seems that this is not a typo -- I have the same questions for the following task definition section on page 3. - Task definition, evaluation metrics: if I understood correctly, CCRA requires some extra annotation of critical concepts -- how did you collect such annotations to determine which NPs are critical? \\ (Very minor) based on the full name, CCRA should really be CCRR -- what does A stand for here? - Section 3.2, feature extraction: the Yoon Kim et al. 
(2019b) paper is not relevant to image features at all -- did you mean Shi et al., (2019)? - Table 1: what is the dagger after VGNSL-HI? - Section 4.3: did you mean "augments" by "arguments"? - Some more thoughts regarding motivation limitations: humans arguably learns how to parse concrete sentences first, and can then generalize to abstract domains that are not visually groundable. In this work, it seems that the model only works when both the text and image are available, as there is a need to infuse visual features into text spans. Do you have any thoughts on enabling a trained CLIORA model to parse pure text without grounding signals? ### Missing Reference Kojima et al. [1] has strengthened the VG-NSL model by simplifying the architecture, and argued that such visually grounded models are potentially biased towards concrete noun phrases. However, the paper neither cited it nor discussed the relevant issues. [1] https://aclanthology.org/2020.acl-main.234.pdf There have been a lot of relevant work earlier than 2019 on visual-semantic embeddings or structured visual understanding with text. To name a few, **Older work on structured image-text representations** [2] https://openaccess.thecvf.com/content_iccv_2015/papers/Ma_Multimodal_Convolutional_Neural_ICCV_2015_paper.pdf [3] https://openaccess.thecvf.com/content_cvpr_2018/papers/You_End-to-End_Convolutional_Semantic_CVPR_2018_paper.pdf **Contrastive loss for visual-semantic embeddings** [4] https://arxiv.org/pdf/1411.2539.pdf ### Minor Editing Comments - I was confused about what CCRA is when reading the abstract -- would be good to include the full name and give an intuitive description of the metric. - Yoon et al. $\\rightarrow$ Kim et al. - Shi et al. (2019) proposes $\\rightarrow$ Shi et al. (2019) propose - In my opinion, putting Section 3.4 before 3.3 would better streamline the paper. This paper introduces the task of joint visual-linguistic grammar induction, and presents models, metrics and empirical results on it. While I appreciate the impressive results, I am concerned about the unrealistic model selection process (comparing model outputs to a large set of ground-truth parse trees) and the unfair comparison (the proposed model has access to much more unlabeled text data than baselines). <doc-sep>This paper presents a new model for grammar induction for text, with help from the coupled images. The model was built on top of an existing unsupervised grammar induction model used for text without image information. The experimental results show the approach was effective. The work essentially demonstrates some effective ways of leveraging the additional image information for improving the grammar induction task. The paper also discussed some weaknesses of the approach and future work. The topic of grammar induction has been there for a very long time in NLP and is a very fundamental topic. The model was largely built based on an existing model for purely text-based grammar induction. The model essentially makes use of neural networks to learn good latent representations (using a reconstruction loss) where the latent representation is defined with neural networks which yield scores for constituents and vector representations of them. The approach adopts the classical inside-outside process for the computing of the scores. The paper essentially investigates what might be the effective methods for integrating image information into text for improved grammar induction. 
The execution of the paper was quite good, and the results are convincing. However, I feel the overall model is essentially a way to use image information to regularize the grammar induction process. Little can be said about in what precise manner the image is actively contributing to the induction process. Indeed, the authors also acknowledged something along the lines of what I thought in the final section. Nevertheless, I think it is an interesting piece that might inspire future research on multimodal processing (for image + language). I think this is a reasonable piece, with good writing and a nice set of experiments. It would be helpful for future research in this domain. <doc-sep>The paper proposes a new method, CLIORA, to do unsupervised parsing and vision-language grounding. CLIORA is based on the DIORA model. But different from previous unsupervised parsing methods, CLIORA also induces alignment between constituents and image regions. In order to train the model, the authors introduce a contrastive loss. Experiment results show that the proposed method outperforms baseline unsupervised parsing methods and it also induces meaningful alignment between image regions and constituents. Strengths: The idea of jointly inducing structure in natural language and grounding the constituents with real-world images is intuitively correct. The ablation study also shows that both feature-level fusion and score-level fusion (including the contrastive loss, if I understand correctly) help in improving the parsing results. Weakness: 1) The image features are only used for computing the inside pass. The image feature should contain information that can help predict the missing word, such that it could be used in the outside pass too. Selecting the best image region for predicting the missing word is also an intuitively correct way to build the vision-language alignment. 2) The computation of sim(I, c_ij) includes a max operator, which could lead to a biased gradient. 3) As the authors mentioned in the discussion section, the model doesn't consider the latent hierarchical structure of the image. For example, the sentence describes the entire image, while each phrase describes part of the image. Overall, the proposed method is interesting and inspiring. The idea should be interesting to both the unsupervised parsing and multimodal communities.
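As a concrete illustration of the span-to-image similarity with a max operator that the last review comments on, the following sketch (assumed shapes and names, not the paper's actual code) computes sim(I, c_ij) as the maximum cosine similarity between a span embedding and detected region features, and uses it in a simple hinge-style contrastive loss against a mismatched image:

```python
import torch
import torch.nn.functional as F

def span_image_similarity(span_vec, region_feats):
    """sim(I, c_ij): max cosine similarity between one span embedding (d,)
    and the region features of an image (num_regions, d)."""
    sims = F.cosine_similarity(region_feats, span_vec.unsqueeze(0), dim=-1)
    return sims.max()                      # max over regions (hard alignment)

def contrastive_span_loss(span_vec, pos_regions, neg_regions, margin=0.2):
    """Hinge loss pushing the matched image above a mismatched one."""
    pos = span_image_similarity(span_vec, pos_regions)
    neg = span_image_similarity(span_vec, neg_regions)
    return F.relu(margin + neg - pos)

# Toy usage with random features (d = 512, 36 detected regions per image).
span = torch.randn(512)
loss = contrastive_span_loss(span, torch.randn(36, 512), torch.randn(36, 512))
```

The non-differentiable argmax inside the max operator is exactly what the reviewer's "biased gradient" concern refers to: only the single best-matching region receives gradient.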
This paper proposes to perform unsupervised grammar induction over image-text pairs and to use shared structure between the modalities to improve grammar induction on both sides. Reviewers find the paper clear, creative, and interesting and recommend acceptance without hesitation.
This paper proposes a self-supervised idea for unsupervised anomaly detection. Specifically, this framework enables high-performance AD without any labels via SRR, an ensemble approach that proposes candidate anomaly samples which are then refined away from training. This allows more robust fitting of the anomaly decision boundaries and also better learning of data representations. Multiple examples are used to demonstrate the effectiveness and robustness of the proposed method. **Strength** 1. The authors fully explore the power of unsupervised AD and show improved results by using the proposed SRR scheme based on the GOAD framework. 2. The performance is improved by leveraging an ensemble of OCCs, which works well but may result in additional computational cost. 3. Enough details, such as the sensitivity of hyperparameters, are provided to reproduce the experiments on multiple tasks, including tabular and image datasets. **Weakness** 1. The idea sounds promising but may not be the first such work. The performance is enhanced by an ensemble of multiple tricks. It would be helpful to see a detailed ablation study, which may show more insights. 2. The baselines mainly consist of GOAD, OC-SVM, etc. Some recent SOTA baselines are missing, for example, NeuTraL [1] [1] Qiu, Chen, Timo Pfrommer, Marius Kloft, Stephan Mandt, and Maja Rudolph. "Neural Transformation Learning for Deep Anomaly Detection Beyond Images." ICML 2021. 3. The baselines vary case by case; for example, CutPaste is used for the MVTec dataset. Have you considered the SOTA performance, for example in this link: https://paperswithcode.com/sota/anomaly-detection-on-mvtec-ad (CutPaste only ranks 9th there); have you compared with the other SOTA baselines? I therefore have a concern: how do you choose the baseline methods for different datasets? I suggest choosing more rather than a specific one. Overall, the paper is well-written and the results and performance look solid. My major concern is the novelty, specifically compared with the GOAD method. In addition, the code is not uploaded, so I am not sure how it works for reproducibility and time cost. <doc-sep>This paper proposes an ensemble approach, called SRR (Self-supervise, Refine, Repeat), for robust unsupervised anomaly detection. The proposed approach trains an ensemble of K detectors together with a joint self-supervised feature extractor g on K disjoint subsets of the data. The ensemble is then used to filter the training data, keeping only the data points that are collectively deemed normal by the K detectors. This training-data filtering process is repeated until the self-supervised feature extractor has converged. A final detector is then trained using the refined data and converged self-supervised feature extractor. Experiments on tabular and image datasets are presented which show that the proposed ensemble approach is more robust at high anomaly contamination ratios than respective state-of-the-art single detectors. *Pros* + The experimental results demonstrate significant anomaly detection performance improvements for the proposed SRR approach, especially at high anomaly ratios. + The paper is overall presented and written well, as well as technically sound. + The paper is placed well into the existing literature, up to and including recent works. *Cons* - The methodological novelty of the proposed SRR approach is rather low (ensemble learning is standard to improve robustness and the individual detection method from Sohn et al.
(2020) is not new) - The experiments do not contain a comparison to any specifically robust AD approach (e.g. robust PCA for the tabular datasets and robust autoencoders for the image datasets). - While the paper includes ablation studies on key components and hyperparameters (ensemble size, data rejection confidence, updating the self-supervised feature extractor), I think there should also be a comparison to just using the final ensemble on the converged self-supervised extractor vs. training an additional final model. Is there much of a difference left? Some additional minor points: - I don't agree with the framing that most prior AD works "all depend on some labeled data" as expressed in the abstract and introduction. The bulk of AD research is on unsupervised methods (see Chandola et al. (2009) and the recent reviews by Ruff et al. (2021) and Pang et al. (2021)). However, I agree that most methods assume fairly clean training data. In my mind, this view should be updated. - The GDE abbreviation is used before it is explained. - p.6: "For the N setting [...]" Typo? Though I think the methodological novelty of the proposed approach is rather low, and that the experimental comparison should be somewhat extended (where I expect the improvements of SRR to hold up), I am overall positive towards accepting this work since robust anomaly detection is a relevant problem of high practical significance, for which the proposed SRR approach demonstrates significant improvements over current state-of-the-art methods. <doc-sep>The paper tackles an unsupervised anomaly detection problem where the training set contains an unknown portion of anomalies. When anomalies are contained in the training set, it is known that classical AD approaches' performance degrades. The idea is to filter out potential anomaly samples (data refinement) using an ensemble model. Each model in the ensemble is trained on a disjoint set of training data and then used as a classifier to determine potential anomalies. Then the data refinement process uses a hard assignment excluding anomalies from the training set. Refinement and ensemble training are repeated iteratively until convergence. The proposed framework is validated on four tabular datasets and four image datasets. [Strength] The effectiveness of the proposed framework is validated on top of contrastive learning-based models, which are state of the art. Extensive experiments on both tabular and image datasets support the effectiveness of the framework. Ablation studies decouple the effects of each hyperparameter. Also, the representation update study shows that re-training the representation with the refined dataset is important. [Weakness] - Although the proposed framework is tested on contrastive models, the idea of data refinement itself is independent of these models. The framework can be applied to other types of anomaly detectors but this is not shown. - Iterative refinement has been studied previously [1,2,3]; however, no comparison with or discussion of those approaches is provided. Although they use AE-based models, the idea of iterative refinement can be deployed on top of contrastive models as well. What makes SRR more effective than these methods? What is the main factor that makes SRR more competitive than these methods? [1] Xia, Yan, et al. "Learning discriminative reconstructions for unsupervised outlier removal." Proceedings of the IEEE International Conference on Computer Vision. 2015. [2] Beggel, Laura, Michael Pfeiffer, and Bernd Bischl.
"Robust anomaly detection in images using adversarial autoencoders." arXiv preprint arXiv:1901.06355 (2019). [3] Pang, Guansong, et al. "Self-trained deep ordinal regression for end-to-end video anomaly detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. - Hyperparameter selection requires tuning. Although it is shown in Figure 7, that any value of gamma improves over baseline, it is still important how to choose this value. The paper suggested Otsu's method as a solution to predict the approximate anomaly ratio of the dataset but it is not shown in the main experiments. (MVTec experiment is provided in A.5) In the main experiments, hyperparameter gamma is tuned with two times of anomaly ratio which this setting requires prior knowledge on anomaly ratio. I wonder how effective Otsu's method is in other scenarios. - The training requires heavy computation as the framework requires ensemble learning on top of contrastive learning. [Questions] - What is the convergence condition? Is it necessary to train with the framework until the data refinement gives marginal change to the data? How long does it take? - What is the important difference of SRR from the previous refinement methods? Is SRR more effective than the previous refinement approaches? - How does SRR perform when trained with Otsu's method rather than using true anomaly ratio? [Post rebuttal] The authors addressed all my concerns. I raise the score, assuming the rebuttal materials will be included in the revised version. Rebuttal materials here means, - OTsu's method experiment - detailed discussion on the difference between SRR and the previous iterative works - Convergence analysis (maybe this one in the appendix) The paper proposes a framework to refine data and train contrastive models for unsupervised anomaly detection problems. The extensive experiments show the effectiveness of the method. However, the main experiments require prior knowledge of the true anomaly ratio which is unavailable in real-world problems. Discussions about the important difference between the proposed method and the previous iterative methods would make the paper more convincing. [Post rebuttal] The authors addressed all my concerns. I raise the score, assuming the rebuttal materials will be included in the revised version. Rebuttal materials here means, - OTsu's method experiment - detailed discussion on the difference between SRR and the previous iterative works - Convergence analysis (maybe this one in the appendix) <doc-sep>The authors propose a data refinement approach combined with self-supervised representation to robust one-class classification, which is commonly used in the anomaly detection scenario. The proposed data refinement approach is designed based on an ensemble of one-class classifiers. The authors propose a novel AD framework to enable inspect defects with one-class, which is called SRR and applicable on unlabeled datasets.SRR employs an ensemble of multiple OCCs to give the potential anomaly refined samples from training. SRR brings the advantages of making the anomaly decision boundaries more robust and giving better data representations. The proof and experiment results are well organized. The paper is ready for acceptance.
The paper worked on fully unsupervised anomaly detection and proposed to use self-supervised representation learning to improve the performance of one-class classification. This is a borderline case, close to acceptance but not quite making it. Specifically, it is useful, but its novelty is the main issue, since it is not surprising that self-supervised representation learning can improve one-class classification without representation learning (this part is still very much to the taste of ICLR) and an ensemble of multiple models can improve upon a single model (which is just "bootstrap aggregating" or "bagging", used every day in practice and known to the machine learning and statistics communities for a very long time). After the rebuttal, the concerns were not really addressed well and the issues were only partially resolved. Thus, the paper is unfortunately not strong enough to warrant acceptance to ICLR.
This paper addressed the interesting problem of sparsifying convolution kernels to obtain efficient CNN models, which is important and attracts a lot of research work. However, the methods are not well justified. For example, in Section 3.1, the authors mentioned that "Specifically, in normal CNNs it is quite common to have multiple stages/blocks which contain repeated patterns such as layers or structures." It is still unclear why it is better to replace these so-called repeated patterns. The defined "information field" is not clearly explained and the benefits are also not demonstrated.<doc-sep>Standard dense 2D convolution (dense in space and channels) may waste parameters. This paper points out the many ways that sparser convolutional operators (“kernels”) may be combined into a single combined operator that may be used in place of dense convolution. The paper waxes grandiose about the exponentially many ways that operations may be combined but then defines and tries only four. While trying four alternatives may be quite interesting, the paper could have avoided grandiose language by just stating: “We tried four things. If you restrict yourself to kernels with 3x3 receptive field and no repeated operations <and probably other assumptions>, there are only four unique combinations to be tried.” Perhaps a page of text could have been saved. The paper also defines “information field” as the product of the operator’s (spatial) receptive field and the number of channels that each unit can see. The authors proceed to make broad claims about how the information field is an important concept that predicts performance. While this may indeed turn out to be an important concept, it is not shown as such by the paper. Claims: “…we identify a easily measurable quantity named information field behind various sparse kernel designs, which is closely related to the model accuracy.” “During the process to reduce the design space, we find an unified property named information field behind various designs, which could directly indicate the final accuracy.” But the paper does not substantiate these claims. Since the information field is defined as the product of the receptive field and the number of channels seen, it would seem necessary to show, say, at least some experiments with varying receptive field sizes and numbers of channels. Then it might be shown, for example, that across a wide array of network sizes, widths, and depths, holding all but the information field constant, the information field is predictive of performance. But these experiments are not done. Receptive fields: the paper *only ever tries 3x3 receptive fields* (Tables 2, 3, 4). So absolutely no support is given for the relevance of two out of the three components (i size, j size) comprising the information field! Number of channels: as far as I can tell, Tables 2 and 3 contain the only results in this direction. Reading off of Table 2: for networks of the same depth (98), info size 256 works a bit better than 128*, and 512 works a bit better than 256. * (see also Table 3, where lines 4 vs 5 show the same 256 vs 128 effect.) Cool. But *two comparisons* are not even close to enough to support the statement “we find an unified property named information field behind various designs”. It is enough to support the statement “for this single network we tried and using 3x3 receptive fields, we found that letting units see more channels seemed to help.” Unfortunately, this conclusion on its own is not a publishable result.
To make this paper great, you will have to close the gap between what you believe and what you have shown. (1) You believe that the information field is predictive of accuracy. So show it is predictive of accuracy across sufficiently many well-controlled experiments. (2) It may also be that the PWGConv+DW+PWGConv combination is a winning one; in this case, show that swapping it in for standard convolution helps in a variety of networks (not just ResNet) and tasks (not just ImageNet). Other minor notes: - Equations are critical in some parts of some papers, but e.g. triple nested sums probably aren’t the easiest way of describing group convolution. - The part about regexes seemed unnecessary. If 1000 different designs were tried in a large automated study where architectures were generated and pruned automatically, this detail might be important (but put it in SI). But if only four are tried this detail isn’t needed: we can see all four are different. - Figure 1 is a great diagram! - How efficient are these kernels to compute on the GPU? Include computation time. - “Efficiency given the total amount of parameters.” These equations and scaling properties seemed to miss the point. For example, “It can be easily verified that given the total number of parameters the greatest width is reached when the best efficiency is achieved.” This is just saying that standard convolution scales poorly as F -> infinity. This doesn’t seem like the most useful definition of efficiency. A better one might be “How many params do you need to get to x% accuracy on ImageNet?” Then show curves (# of params vs accuracy) for variants of a few popular model architectures (like ResNet or Xception with varying width and depth). - 3.3.2: define M and N <doc-sep>The paper considers sparse kernel design in order to reduce the space complexity of a convolutional neural network. Specifically, the proposed procedure is composed of the following steps: 1) remove repeated layers, 2) remove designs with large degradation, and 3) further remove designs for better parameter efficiency. The paper proposed the composition of group convolution, pointwise convolution, and depthwise convolution for the sparse kernel design, which seems pretty promising. In addition, the authors discussed the efficiency of each convolution composition. I failed to appreciate the idea of the information field; I didn't understand the claim that "For one output tensor, sizes of information fields for all activations are usually the same". When introducing a new concept, it's very important to make it clear and friendly. The authors could consider a more intuitive, high-level explanation, or some graphical demonstrations. Also, I couldn't see why this notion is important in the rest of the paper. Personally I'm quite confused by the theorem. It looks like a mathematical over-claim to me. It claims that the best efficiency is achieved when M N = C. However, is that always the case? What if M N \\neq C? What does the theorem mean for real applications? All the reasoning and derivation assume a 3 x 3 spatial area and a 4-way tensor. I would assume these constants are not important; the paper could be much stronger if there were a clear notion of general results.
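For reference, here is a minimal PyTorch sketch of the PWGConv+DW+PWGConv composition mentioned above (group counts and channel sizes are illustrative, not the paper's exact configuration). Note that without a channel shuffle between the group convolutions each output only sees C/groups of the input channels, i.e. a reduced information field:

```python
import torch
import torch.nn as nn

class PWGConvDWPWGConv(nn.Module):
    """Pointwise group conv -> 3x3 depthwise conv -> pointwise group conv,
    one possible sparse replacement for a dense 3x3 convolution."""
    def __init__(self, channels, groups=4):
        super().__init__()
        self.pw1 = nn.Conv2d(channels, channels, kernel_size=1, groups=groups, bias=False)
        self.dw = nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                            groups=channels, bias=False)
        self.pw2 = nn.Conv2d(channels, channels, kernel_size=1, groups=groups, bias=False)

    def forward(self, x):
        # A channel shuffle between the group convs would be needed for each
        # output activation to "see" all input channels.
        return self.pw2(self.dw(self.pw1(x)))

block = PWGConvDWPWGConv(channels=256, groups=4)
y = block(torch.randn(1, 256, 32, 32))
dense_params = 256 * 256 * 9                            # standard dense 3x3 conv
sparse_params = sum(p.numel() for p in block.parameters())  # far fewer parameters
```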
This paper points out methods to obtain sparse convolutional operators. The reviewers have a consensus on rejection due to clarity issues and a lack of support for the claims.
** Paper Summary ** This paper proposes a simple regularization technique for domain generalization tasks, termed MixStyle, based on the observation that domains are determined by image styles. By mixing styles of different instances, which generates synthesized domain samples while preserving the content features, the proposed method improves the generalizability of the trained model. MixStyle was applied to numerous applications, such as category classification, instance retrieval, and reinforcement learning, and attained state-of-the-art results. MixStyle is relatively simple to implement, yet effective. ** Paper Strength ** + Simple methodological design, so it is easy to implement. + Understanding the domain shift problem as a style variation makes sense. + Randomizing the styles might be the solution to alleviate the domain generalization problem, but searching all the possible styles and applying them would be challenging and not feasible. So, using different instance samples to extract the styles was nice. + It makes sense to introduce the \\lambda to mix an instance's own style with the styles of different instances. + The paper is well organized and written. ** Paper Weakness ** I have no major comments on this paper, but minor comments as follows: - Even though the authors have shown the ablation study to analyze the levels where MixStyle should be applied, it is not clear to me yet. The authors applied MixStyle after the 1st, 2nd, and 3rd residual blocks for category classification problems, but applied MixStyle after the 1st and 2nd residual blocks for the instance retrieval task. In the 3.4 analysis, they only showed the ablation studies on category classification. Thus, one might think the optimal combinations vary according to the application. In addition, another combination, e.g., conv34 or conv25, would be more interesting. - Fig 4 is hard to understand; what do the corresponding style statistics mean? Why does (d) only represent different legends? - In Table 1, some experimental settings, e.g., Cartoon or Photo, show that MixStyle w/ random shuffle was better. A discussion of this might be interesting. <doc-sep>This work proposes a technique for domain generalization by mixing styles of images from different domains. This work adopts a mixup-style approach [A] for domain generalization. Different from [A], the paper proposes to conduct mix-up in the intermediate layers, in particular, instance normalization layers. The proposed approach diversifies the data implicitly and the experimental results show that the mix-style can improve domain generalization. Overall the paper is well-written with plenty of details. I also appreciate the experimental analysis in Sec 3.4 and the variance reported in Table 1. However, I have several concerns regarding the paper: - The technical novelty seems rather incremental. This method is an extension of [A] to the instance normalization layer. Similar strategies have been discussed in other works such as [B] and [C]. However, these works are not discussed in terms of main similarities/differences. - I also found the experimental validation not fully sufficient to grant publication. Currently the validation is only conducted on PACS, and the improvement also seems limited. I believe validation on more datasets (such as Digits and Office-Home as used in L2A-OT) can further confirm the effectiveness of the proposed method.
- I suspect that interpolating the style parameter might cause performance drop on the domains that have been seen during training. Would it be possible to report performance on the domains that have been seen in the training? [A] Vikas Verma et al. Manifold Mixup: Better Representations by Interpolating Hidden States. In ICML 2019. [B] Rui Gong et al. DLOW: Domain Flow for Adaptation and Generalization. In CVPR 2019. [C] Seonguk Seo. Learning to Optimize Domain Specific Normalization for Domain Generalization. In ECCV 2020. --- I have read authors' response and other reviews. Some of my concerns are addressed in the response. Especially the added discussion with related work is helpful. Thus I would increase my rating to 6. <doc-sep>**Summary:** The paper proposes a simple method for domain generalization where multiple source domains are given for a certain task (like image classification) and testing happens on an unseen domain. The authors are inspired by normalization-based style-transfer techniques (Adaptive InstanceNorm) and propose to mix the styles of different source domains to effectively increase diversity of domains during training. **Pros:** - Overall, this is a well written paper with a clear idea that is simple but intuitive. - The idea is well described, put into context of prior work and empirically validated to improve results over various baselines. - It is good to see experiments outside of plain image classification to validate the proposed idea. - The analysis where to apply MixStyle is good and makes intuitive sense. **Cons:** - The relation to MixUp needs to be explained in more details. While related to the proposed MixStyle, MixUp creates a convex combination of both input and output spaces. I can believe that MixUp as a standard data augmentation gives worse results than a vanilla CNN (Table 1) but I would not fully agree with the statement "... which demonstrates the advantage of mixing style statistics at the feature level over mixing images at the pixel level" from page 4. MixUp also interpolates the output label space, so the advantage cannot be only attributed the placement of the mixing within the network instead of at the pixel level. - As an additional baseline, one could use MixUp with a sampled lambda that is larger than 0.5 in all cases (like in [FixMatch. Sohn et al. NeurIPS'20]) but keeping the label from sample $x$ rather than interpolating with $\\hat{x}$. - I do not understand why the suffix "_x" is added to the analysis in Table 3. Is MixStyle applied after each convolutional layer or after each block in a ResNet architecture? Specifically, for "conv234_x", how often is the MixStyle layer added? (3 times or 3 * num_convs_in_block times?) - For the ReID experiments, I think it should be better highlighted that the cross-dataset setup is the key difference to evaluations in prior work. This somehow gets almost unnoticed because the default setting of ReID is already considered a valid domain generalization task due to the new label space and camera views. This left me a bit confused about how RandomErase can be a widely used data augmentation technique for ReID when it gives worse results in the experiments from Table 2. This became clear to me only after reading the discussion in the last paragraph of Section 3.2. - I would not make the statement that "... mixing is CLEARLY better than replacing" on page 7 (see Table 4) while also stating that "... with alpha increasing from 0.1 to 0.4, the accuracy SLIGHTLY slides from 82.8% to 81.7%". 
That "slight" change is larger than the "clear" gap before. **Other notes and open questions:** - MixUp was used successfully as regularization for semi-supervised learning (SSL) [MixMatch. Berthelot et al. NIPS'19]. Can MixStyle also be used for SSL?
All three reviewers recommend acceptance after the rebuttal stage, and the AC found no reason to disagree with them. The proposed method is simple and effective, and the concerns raised about experimental validation and novelty seem well addressed in the rebuttal.
This paper unifies several variants of the graph convolutional networks (GCNs) into a regularized quadratic optimization framework. Basically, the function to be optimized considers both to preserve node information and to perform graph Laplacian regularization, whose optimal solution gives a convolutional layer. The unification is given by equations (3) and (18) and elaborated in section 3, which includes several methods including GCN, graph attentions, residual connections, concatenation, etc. This is not surprising: as a GCN layer (without activation) is a linear transformation, surely it is the optimum of a quadratic function. Broadly, any linear layer can be trivially formulated as a quadratic optimization problem. Still, I appreciate the authors' delicate work on unifying these diverse methods from an optimization perspective, which is useful and could lead to new methods. From a technical perspective, the main novelty is that the authors further extend this framework by adding another feature variance term, so that the learned features have a certain variance. This is similar to the idea of batch normalization. This is reasonable because GCN tends to correlate the learned features with the graph Laplacian embedding (the optimal solution of the 2nd term in the authors' framework). This is interesting but empirical. I would like to see how this additional regularization can be equivalent to transforming the original graph with some formal arguments. Unfortunately, this technic is mainly introduced as a heuristic and more detailed analysis is missing. As in any regularization framework, there is an additional parameter involved that is the regularization strength (\\alpha_3 in 21). Therefore the performance improvement is not surprising as the model is "enlarged". In the experiments or supplementary material, there should be a sensitivity study of this parameter. On three citation graphs (that are commonly used to evaluate graph neural networks) and semi-supervised node classification tasks, the authors showed that the regularizer can bring marginal performance improvement. Regarding Clarity, there are some typos in several places and rarely used phrases. Overall, I don't feel excited after reading the article (although the contents are useful), as a large part of this work is on summarizing existing literature. The "new bit" is mainly on the additional regularization term that is introduced as a heuristic. Based on the novelty, a more proper venue for publishing this work could be relevant journals. Overall this submission presents a borderline case and I recommend weak acceptance. As a minor comment: Equation (21) why not set \\alpha_1=1? ---- After rebuttal: Novelty: my assessment remains the same. It is not non-trivial enough to combine several linear operators into a unified optimization framework. Although the unification is useful, it is not a major novelty. Thank you for the additional experiments on testing the hyper-parameter. As you mentioned instability, it is worth to have some toy example to demonstrate the instability and study the cause of such instability and show how to avoid such instability using the proposed regularizer. Clearly (19) is bounded. When \\alpha_3 is large enough, the solution will be trivial. Regarding non-linearity: the authors' framework is for unifying a graph convolution operator (that is one layer in a graph neural network). Nonlinear activation is another operator. This is not a major problem from my perspective. 
Overall, I think this work has some value (although the novelty is not strong) and still recommend weak acceptance.<doc-sep>This paper presents a unified framework for graph convolutional neural networks based on regularized optimization, connecting different variants of graph neural networks including vanilla, attention-based, and topology-based approaches. The authors also propose a novel regularization technique to approach the oversmoothing problem in graph convolution. Experiments on the standard settings of node classification on Citeseer, Cora, and Pubmed prove the effectiveness of the proposed regularization techniques. Overall, this is a very interesting paper, proposing a unified framework for different variants of convolution-based graph neural networks. However, I also have a few concerns: (1) The proposed framework is mainly designed for GNNs without considering the nonlinear transformation matrix. What if we have to consider the nonlinear transformation? Is the whole framework able to unify different GNNs? (2) In the case of linear GNNs (without nonlinear transformation matrix), it is actually not surprising formulating GNNs as a regularized optimization problem. Such a regularization framework has already been discussed in the original GCN paper (Kipf et al. 2016). (3) In the case of linear GNNs, the overall framework is also very similar to the traditional label propagation framework (Zhou et al. Learning with Local and Global Consistency). Could you explain the difference? (4) The new novel regularization technique seems to be similar to the one proposed in PairNorm (Zhao et al. 2020). Could you also explain the difference? <doc-sep>Summary: The paper shows that several graph networks (GCN, attention GCN, PPNP, residual) can be unified under a common framework of Laplacian-regularised optimisation. Subsequently, different types of regularisation are combined to propose a new method for graph transduction, which is then empirically evaluated. Significance: Laplacian regularisation is a classical approach for formulating/justifying graph transduction algorithms (multiple papers by Mikhail Belkin and Xiaojin Zhu around 2004-06). It is interesting to see that so many graph networks can also be unified in the same framework. A unified framework does aid in both theoretical analysis and implementation of GCNs. However, the claims and derivation do not seem to account for the non-linear activation in the networks, and hence, significance of the work seems limited. Quality: As noted above, non-linearity is not considered which makes the derivation significantly simpler. Moreover, the first-order approximation is quite misleading since even the proof do not seem to consider non-linear activation. Since the proposed method combines multiple types of regularisation, it is expected to perform better than other networks. However, it is not clear if the training time increases due to the complex regularisation. Clarity and orginality: The paper is otherwise well written / organised, and the theoretical contributions (although technically straightforward) seem original and somewhat interesting. <doc-sep>The paper introduces a unified framework for graph convolutional networks by interpreting filters as regularizers in the graph Fourier domain. In particular, this framework allows to establish the relationships between standard, attention-based and topology-based GNNs. 
Furthermore, the authors propose a regularization technique based upon the proposed framework that tackles the oversmoothing problem of GNNs and achieves clear benefits on standard (small) benchmark datasets. The paper is mostly well-written, although tough to understand on a first read. I especially liked that it tries to establish a systematic view of different GNN models and their relations, which is welcome work in the field of graph representation learning (especially with the sheer amount of GNN models available in the literature). In my opinion, the proposed framework has the potential to improve our understanding of GCNs and inspire better models in return. On the other hand, it is not exactly clear to me how the proposed regularization technique differs from PairNorm (which is built upon similar insights, preventing node embeddings from becoming too similar). I would very much welcome a discussion of the key differences and similarities between the two approaches. Furthermore, the authors should consider comparing the proposed regularization technique against related approaches, e.g., PairNorm and DropEdge. Overall, the empirical evaluation feels a bit shallow by only evaluating on small benchmark datasets, but might be sufficient for a work that has mostly theoretical contributions. Minor comments: * It is not exactly clear to me how ECC can be viewed as an attention-based GNN since this operator learns a weight matrix conditioned on the edge features (instead of performing weighted normalization). Does this operator really fit into the proposed unified framework? * A GCN baseline is missing on the PPI dataset. ============== Post Rebuttal Comments ================= I would like to thank the authors for their insightful rebuttal and clarifications. Sadly, I cannot find the newly added section regarding the non-linearity analysis in the revised manuscript and therefore cannot judge the findings of the authors. Hence, my rating will stay the same.
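For readers less familiar with the optimization view the reviews refer to, one common way to write such a Laplacian-regularized quadratic objective and its closed-form solution is (an illustrative formulation, not necessarily the paper's exact one):

$$ \\min_{Z} \\; \\alpha_1 \\lVert Z - X \\rVert_F^2 + \\alpha_2 \\, \\mathrm{tr}\\big(Z^{\\top} \\tilde{L} Z\\big), \\qquad Z^{\\star} = \\big(\\alpha_1 I + \\alpha_2 \\tilde{L}\\big)^{-1} \\alpha_1 X \\approx \\big(I - \\tfrac{\\alpha_2}{\\alpha_1} \\tilde{L}\\big) X, $$

where $X$ holds the input node features and $\\tilde{L} = I - \\tilde{D}^{-1/2}\\tilde{A}\\tilde{D}^{-1/2}$ is the normalized Laplacian with self-loops. With $\\alpha_2/\\alpha_1 = 1$ the first-order approximation recovers the familiar GCN propagation $\\tilde{D}^{-1/2}\\tilde{A}\\tilde{D}^{-1/2} X$, onto which a learnable weight matrix and non-linearity are then applied; a feature-variance regularizer of the kind discussed in the reviews would add a third term with weight $\\alpha_3$ to this objective.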
Four reviewers have reviewed and discussed this submission. After the rebuttal, two reviewers felt the paper is below the acceptance threshold. Firstly, Rev. 1 and Rev. 2 were somewhat disappointed in the lack of analysis regarding non-linearities despite the authors suggesting this was resolved in the revised manuscript; e.g. Rev. 2 argued that without such an analysis the paper is too similar to existing 'linear' models, e.g. APPNP, SGC, and so on. While Rev. 3 was mildly positive about the paper, they also noted that combining several linear operators is somewhat trivial. Overall, all reviewers felt there is some novelty in the proposed regularization term but also felt that the contributions of the paper could have been stronger. While the AC sympathizes with this submission and hopes that the authors can improve this work, in its current form it appears marginally below the acceptance threshold.
This paper mainly studied how negative samples can affect model performance in self-supervised contrastive instance discrimination (CID) works. Through the experiments, this work reaches a few interesting findings, including that the majority of negative samples are not important for model learning and that only a small subset of hard samples determines model performance. These hard examples are also closely related to the positive samples (more semantically similar). We can see from the experiments that it's very important to treat negative samples appropriately in these learning tasks. However, no framework is proposed to help improve the learned representation or speed up training. In general, readers are more interested in solutions after the experiments establish the importance of negative-sample treatment. It would be necessary to include corresponding solutions that automatically select these negative samples in CID-related tasks.<doc-sep> This paper argues that in contrastive self-supervised learning, different negative instances have different importance. This importance is relevant to the ``difficulty" of negative instances. On ImageNet and MoCo2, the authors show that using the most difficult 5% negative instances can achieve similar performance compared with using all negative instances. However, the most difficult 0.1% of negative instances yield bad performance. I recommend rejecting this paper due to the following major concerns: 1) the study is performed on a single dataset, which is not convincing; 2) the study is performed on a single method, which casts doubt on whether the conclusions hold for other methods; 3) this study does not seem to have practical value. While this study is interesting, it lacks rigor, in the following aspects. 1. The study is only performed on a single contrastive self-supervised learning method: MoCo2. It is unclear whether the conclusions hold for other contrastive SSL methods, such as BYOL and many others. 2. The study is conducted on a single dataset: ImageNet. It is unclear whether the conclusions hold for other datasets. 3. Another concern is this study does not seem to have practical value. In each iteration during training, finding the hardest examples for a query needs to calculate the inner product between this query and all other training examples, which is computationally very heavy. 4. In the author's measure of difficulty, the difficulty is a function of network weights. In the early stage of training, the network weights are random, which implies that the calculated difficulty may be meaningless. Can the authors comment on this? However, the paper does have a few strong points. 1. The paper is well-written. The organization is clear and the paper is easy to follow. 2. The studied problem is interesting and novel. Other comments. 1. Figure 5a is difficult to interpret. The authors may consider reorganizing it. 2. In Figure 3, only three temperature values were considered, which may not be very convincing. ----------------------------------------------------------------------------------------------------------------------------------- Update: I read the authors' rebuttal. The authors didn't address my concern "The study is conducted on a single dataset: ImageNet. It is unclear whether the conclusions hold for other datasets." sufficiently. I would like to keep my original rating.
<doc-sep> The findings of this work are that for contrastive learning, most of the negatives deemed easily separable are unnecessary, that the most important negatives are somewhere in the top 5% closest to the positive sample, and that some of the exceedingly hard examples are detrimental. -In general, I felt the main findings of this work to be roughly in line with what we already know about contrastive learning. We can easily look at this work's findings with respect to the soft SVM margin, in that only the examples close to the decision boundary should matter (max margin), but some difficult examples (the aforementioned exceedingly difficult ones) make the data inseparable, so we allow some violation (slack terms). While I'm not suggesting that slapping a soft SVM here would solve the problem, there is a large body of SVM-based detection/classification literature that precedes the findings of this work. -Validity of WordNet as a measure of semantic similarity: Section 4 uses WordNet distances to estimate the semantic similarities between classes by finding their shared subtree root. The deeper the subtree, the more semantically similar. While I do not dispute the claim that the hardest negatives come from semantically similar classes, different parts of the WordNet synset tree have semantic hierarchies of varying levels of coarseness. A 2 hop distance in one subtree could easily be more of a semantic jump than a 3 hop distance in another. -There exist prior works dealing with the neglected semantic hierarchies in ImageNet by setting up hierarchical classifiers. An example is [1]. -I would further argue that there's some nuance in the correlation between semantic similarity and example hardness, in that it really depends on your choice of feature representation. Visual features will naturally correlate with closer semantic levels in visually-defined categories. However, this will not necessarily hold for semantic categories defined by function, in that two visually distinct items may fall under close semantic labels. -The related works section claims object detection works have not "explicitly involved negative examples as in CID." I have to imagine this statement is poorly phrased, as [2] (also cited in this paragraph) very explicitly mines for face-like non-face patterns. There is a very long list of hard-negative mining works in object detection. Overall, I value the empirical impact of this work, in that the rather detailed analysis may lead to improvements to future versions of the contrastive feature learning task. However, I do not find the findings of this work to be sufficiently novel for this conference, and therefore cannot recommend this work for acceptance in its current state. [1] Yan et al. HD-CNN: Hierarchical Deep Convolutional Neural Networks for Large Scale Visual Recognition. ICCV 2015 [2] Sung and Poggio. Example-Based Learning for View-Based Human Face Detection. TPAMI 1998<doc-sep>In this paper, the authors carried out a series of experiments to analyze the impact of negative samples in contrastive learning (instance discrimination - CID). In particular, they try to identify which difficulty range is important for representation learning. Of the many recent self-supervised learning approaches, they chose MoCo v2 as the testbed. They trained the MoCo model from an ImageNet pre-trained one. Various settings, which correspond to various ways of filtering out hard or easy negatives, were used. Hardness of samples is measured based on embedding distance to the query. I.e.
ones with a large distance are easy. Their main findings for negative samples are: 1) using the 5% hardest is enough for downstream tasks, 2) the easiest 95% of them are unnecessary and insufficient, 3) the hardest 0.1% are harmful, and 4) hard negatives are more semantically similar to the query. In general, in my opinion, this is a paper in which the authors tried to answer many interesting practical questions. The authors provided experiments and convincing evidence for a number of insights. My main reservations with this paper are: 1) Most of the points are not new and are elaborations of what was pointed out elsewhere before, for example, in semi-hard mining for distance metric learning. 2) The empirical results are only within the context of MOCO2 and for a linear classification task. It is not clear how numbers such as 0.1%, 5% or 95% would change when adopting other frameworks such as BYOL or SwAV… The reported gains seem a little sensitive to the temperature parameter of MOCO. 3) The sample hardness is measured based on embedding distance, which itself evolves during training. It is not clear how accurate it is, especially in the early stage of training. My suggestion for improvement is either to empirically show that their findings (numbers) are consistent across a number of frameworks and downstream tasks, or to provide some theoretical justification for their findings if only MOCO v2 is used.
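Since several of the reviews above refer to ranking negatives by their similarity to the query, here is a minimal sketch of what that kind of filtering could look like in PyTorch. It is an illustrative reconstruction under assumed conventions (cosine similarity as the hardness score, a MoCo-style key queue, a fraction `hard_frac` of hardest negatives kept); the function name and defaults are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def hardest_negative_logits(query, keys, hard_frac=0.05, temperature=0.2):
    """Keep only the hardest `hard_frac` of negatives for a batch of queries.

    query: (B, d) L2-normalized query embeddings
    keys:  (K, d) L2-normalized negative-key embeddings (e.g. a MoCo-style queue)
    Hardness is similarity to the query: a larger inner product
    (i.e. a smaller embedding distance) means a harder negative.
    """
    sims = query @ keys.t()                      # (B, K) cosine similarities
    n_keep = max(1, int(hard_frac * keys.shape[0]))
    hard_sims, _ = sims.topk(n_keep, dim=1)      # per-query hardest negatives
    return hard_sims / temperature               # logits for the InfoNCE denominator

# toy usage
q = F.normalize(torch.randn(8, 128), dim=1)
neg = F.normalize(torch.randn(4096, 128), dim=1)
logits = hardest_negative_logits(q, neg)
print(logits.shape)  # torch.Size([8, 204])
```

The full similarity matrix is still computed here, which is exactly the cost concern raised in the second review above; the filtering only changes which negatives enter the loss.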
This paper empirically studies the impact of different types of negatives used in recent contrastive self-supervised learning methods. Results were initially shown on Mocov2, though after rebuttal simCLR was also added, and several interesting findings were reported, including that only the hardest 5% of the negatives are necessary and sufficient. While the reviewers saw the benefit of rigorously studying this aspect of recent advances in self-supervised learning, a number of issues were raised including: 1) The limited scope of the conclusions, given that only two (after rebuttal) algorithms were used on one dataset, 2) Limited connections drawn to existing works on hard negative mining (which is very common across machine learning including metric learning and object detection), and 3) Limited discussion of some of the methodological issues such as the use of measures that are intrinsically tied to the model's weights (hence being less reliable early in training) and WordNet as a measure for semantic similarity. Though the authors provided lengthy rebuttals, the reviewers still felt some of these issues were not addressed. As a result, I recommend rejection in this cycle, and that the authors bolster some of these aspects for a submission to future venues. I would like to emphasize that this type of work, which provides rigorous empirical investigation of various phenomena in machine learning, is indeed important and worth doing. Hence, the lack of a new method (e.g. to address the selection of negatives) was not the basis of the decision. While the paper clearly does a thorough job at investigating these issues for a limited scope (e.g. in terms of datasets), a larger contribution is expected for empirical papers such that 1) we can ensure the generality of the conclusions (across methods and datasets), 2) we have a conceptual framework for understanding the empirical results especially with respect to what is already known in adjacent areas (e.g. metric learning and object detection), and 3) we understand some of the methodological choices that were made and why they are sufficiently justified.
This paper provides a benchmark to evaluate approximators for Wasserstein-1 distances as loss functions in the generative adversarial network setting. - **{S1}** While previous works use discrete distributions for benchmarking solvers, this work suggests continuous distributions, which is a novel aspect for benchmarking W_1. - **{W1}** The benchmark contains only one image dataset with a single mode (faces). The addition of more image datasets, especially multi-modal ones (e.g. CIFAR-10), would improve the versatility of the benchmark and extend it to conditional models. <doc-sep>The authors propose a generic methodology to construct benchmark pairs with a ground-truth OT plan, OT cost, and OT gradient. We can use this tool to evaluate the performance of neural dual OT solvers approximating the Wasserstein-1 distance or the gradient of the Wasserstein-1 distance. Specifically, the authors employ 1-Lipschitz MinFunnel functions to compute transport rays and define the ray monotone map. With them, we can define a target distribution $\\mathbb{Q}$ and compute the OT cost and OT gradient based on the original distribution $\\mathbb{P}$. The authors provide an elaborate introduction to the Wasserstein-1 distance and its neural dual OT solvers, followed by compact mathematical proofs about their benchmark pairs. The experiments are also reasonable. It is also a good point of view to consider the gradient of the Wasserstein-1 distance. Some minor concerns: Is it hard to tune hyperparameters for this method? For example, when you compute the high-dimensional benchmark pairs, you choose $b_n \\sim \\mathcal{N}(0,0.1)$ and p = 8. How do you choose these? How long does the hyperparameter search take? The dimension of images, in reality, is higher than $2^7$. Can this tool handle higher dimensions? If we carefully choose the MinFunnel function u, instead of randomly picking it, will the performance be better? What will be the effect of increasing N and D? The paper mentions "in WGANs, the solvers move the generated distribution (bad images, $\\mathbb{Q}$ in our construction) to the real distribution (good images, $\\mathbb{P}$)". However, $\\mathbb{P}$ is the synthetic distribution and $\\mathbb{Q}$ is the computed ground-truth 'real image' distribution in the case of the images benchmark. Why do the solvers move $\\mathbb{Q}$ to $\\mathbb{P}$, instead of the opposite? The authors mention that the solvers MM and MM:R take longer to train, compared with GP, SO, and LP. Is the time gap significant? <doc-sep>Motivated by the lack of benchmarks for W1 dual methods (other than perceptual measures such as FID or IS), this paper proposes to create a (semi-)synthetic set of benchmark datasets with known optimal transport plans, maps, and distances. To do this, the paper first develops theory about maps that are optimal by construction. Then, the paper proposes concrete methods for constructing the necessary functions and computing the necessary plans, maps and gradients. Finally, synthetic dataset pairs are generated from truncated Gaussian data and CelebA data at various dimensionalities and used to evaluate and discuss many existing W1 methods. - Provides a good overview of W1 methods. - Proves theoretical results about how to construct maps that are optimal w.r.t. W1. - Proposes a novel way to construct ground-truth (semi-)synthetic benchmarks for evaluating Wasserstein-1 dual solvers. - Provides code and datasets for the benchmarks and algorithms. - Evaluates the gradient of the W1 w.r.t.
the parameters, which is actually most important for most generative methods. - Only one real-world dataset (CelebA) is considered, and the synthetic datasets are quite simple (i.e., truncated Gaussians). It seems that including more real-world datasets (even MNIST or CIFAR10), or interesting real-world tabular data for smaller dimensions (e.g., even something like iris), would be useful. - (This limitation is mentioned in the text but does seem to be the main limitation.) It seems the benchmark only considers maps where the samples are grouped more closely together (or the reverse). Maps that expand parts of the space, or where some parts expand and some contract, would be better. It is unclear whether the benchmark maps properly represent real-world OT maps. - (Minor but nonetheless important for the final paper) All result tables are in the appendix, and the figures are in odd places with nonstandard captions. At least some summary table of the results and your recommendations for suggested methods based on context would be important to include. What methods would you recommend and why? The answer may be a combination of ease of use, convergence behavior, and overall performance. <doc-sep>This paper proposes a benchmark to evaluate methods for computing the Wasserstein-1 distance. The authors construct 1-Lipschitz functions and use them to build ray monotone transport plans, which yield pairs of continuous benchmark distributions in high-dimensional spaces. Some WGAN dual-form solvers are evaluated using these benchmark pairs. 1. This paper proposes a benchmark to evaluate methods for computing the Wasserstein-1 distance. The problem is interesting to the community. 2. This paper is well-written and technically sound. The method uses 1-Lipschitz functions to construct pairs of continuous distributions, which is well designed. 3. This paper thoroughly evaluates popular WGAN dual-form solvers in high-dimensional spaces using these benchmark pairs. 1. The title of this paper is ambiguous and may lead to inappropriate reviewers. 2. The theoretical analysis and the intuition of the proposed method are weak. It is unclear why the proposed method works better than previous methods. 3. Evaluating the Wasserstein-1 distance does not directly validate the superiority of the methods on specific tasks, which may need more explanation. <doc-sep>This paper proposes a benchmark for computing the Wasserstein-1 distance. The authors first propose to use 1-Lipschitz functions to build ray monotone transport plans and obtain known OT maps. These ground-truth maps are then used to benchmark dual OT solvers used in particular in the Wasserstein GAN framework. - This paper proposes a method to build **known** OT maps using 1-Lipschitz MinFunnel functions. This choice is clearly justified as these functions are universal approximators of 1-Lipschitz functions (Prop. 2). Having known OT maps allows a faithful comparison of the OT solvers. - They carefully build the transport rays of these functions. - The paper is well written and easy to follow. - The authors tackle an interesting problem, and having more comparisons like this one is crucial. - I regret that the results of the benchmarks are only available in the Appendices. I would recommend that the authors include some of them in the main paper since those are the main results of the paper. - The restriction to 1-Lipschitz *MinFunnel* functions seems to be a main limitation of this work. - It seems that in the experiments only one random start is considered.
Is there any reason why the authors did not perform multiple runs? This makes it difficult to assess the method's stability and robustness with regard to the random start and the parameters $a_n$ and $b_n$ in the *funnel*. <doc-sep>This paper proposes a benchmark for methods computing the Wasserstein-1 distance. Section 1 summarizes background information on computing W1, often with the dual in eq (4) and (5), and how the W1 is used in GAN training. Section 2 summarizes methods estimating the dual potentials and transport maps. Section 3 describes the benchmark distributions, and Section 4 shows the results of evaluating the methods on these benchmarks, which are quantified in Section D of the appendix. + Approximating W1 computations is widely used and a difficult setting to benchmark because the ground-truth transport maps and distances are often not known. I am not aware of an established W1 benchmark, and papers often have to rely on downstream tasks (such as inception scores) to justify an algorithmic improvement to the W1 approximation. + This paper presents non-trivial settings where the ground-truth transport map is known and uses it to evaluate existing dual solvers. + The experimental results are thorough and the paper strongly shows that minimax methods solve the benchmark tasks in most settings, at least for obtaining a gradient that approximates the true gradient. + While the paper proposes a new benchmark for approximating the W1, it unfortunately does not present results in established GAN settings, as the ground-truth maps are not known there. Thus research that is ultimately focused on improving W1 computations in settings such as GANs may be able to use these benchmarks for preliminary experiments, but these benchmark tasks may not reflect the true difficulties faced by these established and powerful methods. + It is not clear how "solved" W1 OT is, how much work remains in the field, and how many new directions this benchmark will enable. In other words, better solutions to this benchmark will not directly enable new methods (or new GAN results).
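To make the construction discussed in these reviews more concrete, below is a minimal sketch of a min-of-funnels 1-Lipschitz function together with a numerical check of its Lipschitz constant. The exact MinFunnel parametrization of the paper is not reproduced here; the functional form u(x) = min_n (a_n + ||x - b_n||), the name `min_funnel`, and the parameter choices are assumptions for illustration only. The check relies on the fact that each funnel is 1-Lipschitz and a pointwise minimum of 1-Lipschitz functions is again 1-Lipschitz.

```python
import numpy as np

def min_funnel(x, a, b):
    """Pointwise minimum of funnels u_n(x) = a_n + ||x - b_n||.

    x: (d,) point, a: (N,) offsets, b: (N, d) funnel centers.
    Each funnel is 1-Lipschitz, so their minimum is 1-Lipschitz as well.
    """
    return np.min(a + np.linalg.norm(x - b, axis=1))

# quick numerical check of the 1-Lipschitz property on random pairs
rng = np.random.default_rng(0)
d, N = 16, 8
a, b = rng.normal(size=N), rng.normal(scale=0.1, size=(N, d))
for _ in range(1000):
    x, y = rng.normal(size=d), rng.normal(size=d)
    assert abs(min_funnel(x, a, b) - min_funnel(y, a, b)) <= np.linalg.norm(x - y) + 1e-9
print("empirically 1-Lipschitz on random pairs")
```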
This paper proposes a new benchmark to evaluate the solution of optimal transport problems. The reviewers concur that the benchmark is well-executed and novel. Some are concerned that a better benchmark for OT problems will not drive progress, as the successes of Wasserstein GANs occur despite their failure to solve OT. However, it seems like a useful intermediate check to deepen understanding of why Wasserstein GANs (and models to come!) work by (at least) eliminating non-explanations.
Overall, I found the paper well-written, the methods appear reasonable, and the math appears correct. I think the case made for the importance/significance of the method could be improved, but I think the paper should be accepted regardless. --- Comments --- 1. I thought the introduction did a good job setting up the high-level problem, but did not really establish why a new method is needed. I got to the end of the intro and wondered why I couldn't use any one of a dozen DRO-type methods to get the type of robustness described. The experiments do a reasonable job of demonstrating the utility of the proposed method, but I recommend adding something to the intro like: Methods have been proposed that promote robustness to distributional shift; however, these methods fail to capture XYZ shifts because ABC. 2. I really liked the examples of the different types of shifts that the authors are interested in, but thought the paper could do a better job arguing why these specific shifts are important or relevant. Perhaps swapping the "red chair" example for something more compelling might do the trick. In particular, it is worth giving a compelling reason why the drop in average performance might be worth an improvement in, for example, anomaly detection. 3. I found the early parts of Section 3 a bit confusing because it was unclear at that point in the paper where the "majority" and "minority" groups were coming from. I recommend adding a few sentences to the beginning of that section like: Suppose that we knew the collection of non-semantic features and thus knew the relevant majority and minority groups. This will not, in general, be the case and we will show how to derive such groups from the data, but we first establish our method as if such groups were known. 4. I found Equation (10) very hard to follow. First, the notation has *many* problems (superscripts from earlier in the section are changed to subscripts; no domain is given for $\\alpha$; it is not clear what it means to index $\\alpha$ by $e$; $\\mu$ is not defined; $\\gamma$ is not defined). Second, I would give a few sentences explaining what the pieces of this objective are doing and why it achieves the goal of splitting the data into relevant majority and minority groups. --- Minor comments --- 1. Page 2, par 3, line 1: Not clear what "modeling bias" refers to here or why it affects the dataset. 2. Equations (1) and (2) don't really seem necessary for the rest of the work. In particular, $\\mathcal{C}$ isn't referenced anywhere else in the paper. I would just define $h_s$ and $h_n$ and move on. 3. I found the $\\not\\sim$ notation a bit confusing. Does it mean "sampled from any distribution other than $p$"? Based on the description, it seems more like it means "sampled from outside the support of $p$", which matches the examples given in Fig 1. 4. Equation 6: $\\ell$ is not defined. 5. Equation 9 and surrounding text: I would change reverse-KL to just KL since, in this context, there is not an obvious forward/reverse direction. 6. Text under Equation 9: It doesn't really make sense for a categorical distribution to be multi-modal since there is no inherent ordering to values in the support (unless there are literally two or more equivalent modes). I would recommend changing unimodal/multi-modal to peaked/flat or low-entropy/high-entropy.<doc-sep>Summary: the paper studies a setting where there are simple correlations with the target variables (that are not, however, robust) and more complex but robust features.
The simple correlation is usually such that in most cases the feature is descriptive of the label, but at times it takes values that are part of a "minority group" that is not descriptive of any one class in particular. Systematic-shift generalization is tested using the same spurious features that are present in training, in all combinations except the usual pairing of spurious feature and class. Non-systematic shifts are tested using novel spurious features. Anomaly detection is tested with unseen robust features. Neural networks trained with standard ERM notoriously pick up on all features that correlate with the label, and the paper compares several methods (including IRMv1, REx, and GroupDRO) to the proposed Predictive Group Invariance. The latter is shown to work better than the baselines (even when using class conditioning) on systematic shifts and anomaly detection, and often on non-systematic shifts. I found the paper to be mostly very well written (with few exceptions), and especially sections 1 and 2 very easy to read and understand. The experimental setting is clear and not at all trivial. My main questions and concerns: - The description of PGI could be improved; I would suggest adding an algorithm box that sums up what is described. - Are "environments" and "partitions" the same thing for PGI? How are "environments" chosen for the baselines that need them? With the same partition networks used for PGI? If not, it might be hard to disentangle the role of using the partition networks from the KL objective of PGI. - The part about partitioning is not detailed enough. P13: "We use a separate network for each object category". What is an object category? From page 3 I understood that there might be 2 categories (easy and hard); is this the case? Is it correct to think of the role of these partition-predicting networks as a sort of clustering that focuses on the most easy-to-find features? - I haven't seen any mention of early stopping in the experiments. It sounds like the performance reported is the one at the end of all training epochs. This might explain why, for example, ERM on COCO-on-Colours only achieves a 1.10% accuracy on Systematic shift. If this is correct, why not use early stopping, given how prone to overfitting these networks might be? Overall, based on the results, the proposed PGI seems to improve the accuracy over the baselines, even though it seems to be more of an incremental improvement than a substantial and conceptual one. I look forward to a constructive discussion with the authors and hope my questions and concerns will be clarified. Minor: - I think the blanket statement "it has been reported that highly competitive performances can often be achieved with a baseline model on such domain generalisation benchmarks (Gulrajani & Lopez-Paz, 2020), similarly as in Table 1." is a bit too vague, and since it questions the validity of the results in all papers mentioned before, it should be either properly justified or adjusted to make sure that what it says actually applies to all of them. - Figure 3 is referenced in the text instead of Figure 2 (perhaps it was not referenced with \\ref). <doc-sep>This paper shows that group invariance methods across inferred partitions show better generalization in (non-)systematic distributional shifts and anomaly detection settings. It also suggests a new invariance penalty and empirically shows that it works better on three synthetic datasets, viz. coloured-MNIST, COCO-on-colours, and COCO-on-places.
The paper is written well and starts off by giving an intuition of why IRM-like methods are important by presenting the results of a simple experiment on coloured-MNIST (table 1). It then goes on to talk about (non-)systematic generalization before introducing the proposed method. The authors use a reverse KL divergence between the group distributions as the penalty and use prior work to partition the datasets into groups. The results look promising across datasets, though performance is slightly lower in the 'in-distribution' setting. I am happy to see that they also talk extensively about hyperparameter selection, especially in the case where they assume no access to validation sets with a distributional shift. Overall, I like the work and would like to see it presented at the conference. One minor point: cite work the first time you introduce something, not later on. It can be a little confusing for the readers. I wondered if I missed something. For example: "We find that a recently proposed method can be effective at discovering...", "IRMv1", etc. <doc-sep>Summary: This paper studies the behaviour of deep neural networks in situations where simple but irrelevant correlations exist between input and output, and dominate more complex but relevant correlations. The authors conduct experiments on synthetic datasets (like coloured MNIST) and show that an invariance penalty helps the network focus on relevant correlations. Pros: - The paper studies neural network behaviour with respect to systemic biases that are likely faced by most neural networks in some form or the other. To make the study tenable, the authors make use of meaningful synthetic datasets, and propose an intuitive regularization to overcome the systemic biases. - The analysis done in the paper is very methodical, and the presentation is very clear. - The numerical simulations are comprehensive and convincing. Cons: - It would be nice to see how this would be applicable to real-world datasets. The paper is interesting even without it, and I also appreciate that the authors are honest about it - so I would not hold it against the authors. But it would further strengthen the paper if some basic experiments are done on real-world datasets. For instance, will one be able to find a partition on ImageNet? Comments: - Section 5.1: Minimization is spelt incorrectly. - Equations (6-7): I am not entirely sure what is happening with respect to the constraint on \\theta. What does capital \\theta correspond to? And if \\theta itself is the result of an optimisation (argmin), then why is there another optimisation over the same \\theta in the loss function? - In the text that appears before equation (3), it is mentioned that the predicted features f_\\theta will be matched for the two partitions, but equation (7) matches the predicted output post-softmax. Could you please clarify?
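For context on the penalty debated in these reviews (a reverse KL between group predictive distributions, Equation 9 of the paper), here is a minimal sketch of a generic per-class group-invariance KL term computed from softmax outputs. It is an illustrative reconstruction, not the paper's exact objective; the function name, the direction of the KL, and the per-class averaging are assumptions.

```python
import torch
import torch.nn.functional as F

def group_invariance_penalty(logits, labels, group_ids):
    """KL between per-class average predictive distributions of two inferred groups.

    logits:    (B, C) classifier outputs
    labels:    (B,)   class labels
    group_ids: (B,)   0 = majority partition, 1 = minority partition
    """
    probs, penalty, n_terms = F.softmax(logits, dim=1), 0.0, 0
    for c in labels.unique():
        maj = probs[(labels == c) & (group_ids == 0)]
        mino = probs[(labels == c) & (group_ids == 1)]
        if len(maj) == 0 or len(mino) == 0:
            continue  # class missing from one group in this batch
        p, q = maj.mean(0), mino.mean(0)                        # average predictive distributions
        penalty = penalty + torch.sum(p * (p.log() - q.log()))  # KL(p || q)
        n_terms += 1
    return penalty / max(n_terms, 1)

# hypothetical usage:
# total_loss = F.cross_entropy(logits, labels) + lam * group_invariance_penalty(logits, labels, group_ids)
```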
All reviewers seem in favour of accepting this paper, with the majority voting for marginally above the acceptance threshold. The authors have taken special heed of the suggestions and improved the clarity of the paper. From examination of the reviews, the paper achieves enough to warrant publication. My recommendation is therefore to accept the manuscript.
This paper takes an interesting nonconvex optimization perspective on the continual learning problem. More specifically, the authors pose continual learning with episodic memory as a smooth nonconvex finite sum problem. They then consider the requirements for a theoretical proof of convergence to a stationary point for previously learned tasks. This results in the proposed NCCL method that leverages these ideas to modulate learning rates for the current and previous tasks to prevent escape from the feasible region. Overall, the strength of this paper is its theoretical analysis and I find the idea of connecting continual learning with the associated nonconvex optimization problem compelling. I am not an expert in nonconvex optimization, but my understanding is that the analysis itself is not that unique for the field. Rather, what is novel is the interesting application of the ideas to the continual learning problem. I find the theoretical aspect of this paper strong, but still lean towards rejection in its current form as I am very skeptical that the idea is at all validated by the experiments. This potentially suggests that the theory may lack relevance in these domains. There are some comparisons to baselines and prior work that I found a bit questionable. On the bottom of page 6, the authors state that existing GEM-based algorithms only focus on canceling the negative direction, but actually maximizing transfer even when gradient dot products align was explored in [1]. The authors also suggest in section 4.2 that, despite worse empirical results, the NCCL approach is superior to GEM because of GEM's inefficient quadratic program computation. However, this was already addressed in A-GEM [2], so it is not so clear that there is a significant computational advantage to NCCL. I would think that the authors should actually compare compute times in line with prior work. I also am almost 100% sure that the comparison to reservoir sampling is incorrect. If you look at results in [1] and [3] you see that reservoir sampling consistently performs right around GEM and sometimes better than GEM on exactly these same benchmarks. The 10% number seems unfathomable to me and at the very least needs an explanation about how this could be true. [1] "Learning to Learn Without Forgetting By Maximizing Transfer and Minimizing Interference" Riemer et al., ICLR 2019. [2] "Efficient Lifelong Learning With A-GEM" Chaudhry et al., ICLR 2019. [3] "On Tiny Episodic Memories in Continual Learning" Chaudhry et al., 2019. This last point is related to my biggest overall concern, which is that it is not clear that the learning rate weighting scheme proposed in this work actually helps in comparison to generic replay. For example, it would be a really important ablation to try the very same buffer setup but with no learning rate modulation. My experience leads me to believe that the gap between the GEM-based approaches and NCCL is likely larger than the gap between these approaches and vanilla replay. As a result, I am very skeptical that the learning rate modulation component adds value based on the current results. Additionally, it would be very interesting to look deeper into how the model is working to understand its effect on learning. For example, the authors should detail patterns in the chosen modulated learning rates over time. While I appreciate the theoretical analysis of this paper, I think the experiments section is too short and leaves many important questions unexplored.
Unfortunately, I feel that I must support rejection of this paper in its current form as my doubts about the experiments leave me unsure that the approach works at all in practice. After The Rebuttal: I really appreciate the author response and it is a shame that the revisions do not seem to have been correctly uploaded. Unfortunately, the responses to my comments rely heavily on references to the revision that I cannot see, making it impossible for me to validate if my concerns were actually adequately addressed. The other reviewers have mentioned some very valid concerns about the submitted draft as well. As such, I continue to lean towards rejection of the submitted paper as significant revisions are certainly needed. <doc-sep>**Summary of paper** This paper analyses the convergence of episodic memory-based continual learning methods by looking at it as a nonconvex optimisation problem. They analyse the convergence rates for the case where all memory from past tasks is stored, and then consider the case where there is only a subset of past data, leading to overfitting on the episodic memory. They then introduce a method that scales the learning rates of their update method, with the goal of tightening the bound obtained in the convergence analysis. Finally, experiments are shown on different benchmarks, and the proposed method is compared to some competing baselines. **Summary of review** I am recommending rejecting this paper. Although the goal of the paper is commendable (convergence analysis for nonconvex episodic memory-based continual learning), I feel like there are many parts of the paper that can be improved (see later in the review). **Pros of paper** 1. The paper attempts to analyse the convergence of continual learning methods theoretically (especially Section 3.1). This is very important to do, so that we can understand the problem of nonconvex continual learning better. This has not been attempted enough in the literature, partly because this is a very difficult problem. 2. The work appears to be well-positioned with related work on convergence rates (as far as I am aware). 3. The paper builds nicely, from Introduction to Preliminary Work to Theoretical Results to Experiments. **Cons of paper (/questions for the authors)** 4. Although the aim of the paper is great, it appears to me as if the methods the paper mentions are not instances of the update that the paper analyses (Equation 6). Specifically, GEM and EWC (mentioned in the first paragraph of Section 3.1): GEM has a different optimisation technique (a quadratic programming algorithm), and EWC does not store any episodic memory (it only stores previous model parameters). 5. I am struggling to see the significance of Section 3.2 ("Overfitting to Episodic Memory"). It appears like the authors are just pointing out that there is a bias introduced by storing only a subset of past data, without sufficiently commenting on the effects or significance of this bias. 6. Appendix A (proof of Theorem 1) is incomplete. 7. Something seems wrong to me with the BWT metric in Section 4.1: a) My own experience with Fine-tune and EWC strongly suggests that both methods should have BWT<0. This is because the methods first learn the task well and then forget it slowly over time, which is fully expected from such algorithms. However, the authors report BWT>0. b) Fine-tune on Permuted-MNIST (Table 1) has an ACC of 2.43% but a BWT of 12.10%. Surely, by definition, BWT<=ACC always (Equation 18)?
c) A final point on BWT: A BWT<0 does not "imply that catastrophic forgetting happens" (final paragraph page 7). Although it does imply *forgetting*, this is not necessarily *catastrophic forgetting*, which is only when BWT is extremely negative. For example, the concept of *graceful forgetting* will still have BWT<0 (but is usually distinguished from catastrophic forgetting). (A brief sketch of the standard ACC/BWT definitions is given after these reviews.) 8. Can the authors comment on why the proposed method performs better with 1 epoch per task than with 5 epochs per task (Tables 1 vs 2, Permuted-MNIST)? This result appears to indicate that, despite the correction terms of the method, the method is forgetting tasks as it trains for longer. **Additional (minor) feedback** 9. I would strongly recommend proof-reading the paper (or else asking a native English speaker to do so). 10. Figure 1 is a nice sketch visually, but I did not see how it shows the benefit/key idea of NCCL specifically (which is about finding optimal learning rates). There is no visual/diagrammatic element of how those learning rates might be chosen. (Alternatively put, a similar figure could be used to describe e.g. GEM.) **Update to review** Thanks to the authors for responding. They did clear up point 5 (above) for me. However, I shall keep my score of 4. Unfortunately I cannot see the new revision of the paper that the authors refer to, meaning I cannot change my score. <doc-sep>This paper attempts to provide a convergence analysis for nonconvex continual learning with episodic memories, and tries to theoretically show the degradation of backward transfer caused by overfitting to memorized samples. It further proposes an algorithm for learning rate scheduling in nonconvex continual learning based on these results. The reason for my score is that the theoretical proof is, in my understanding, wrong and cannot support the main contribution claimed in this paper; the main problems are as below. The proof of the main theorems is questionable regarding the nonconvex assumption, which is the most important contribution claimed in this paper. Regarding the inequality in Eq. (5), in my understanding it holds only when f is a convex function [1]. And the theorems are based on this inequality, which cannot hold for the nonconvex case if this inequality is not true for nonconvex functions. If I'm wrong, the authors should please provide a proof of how to obtain Eq. (5) for L-smooth nonconvex functions. Moreover, in the proof of Theorem 1, Eq. 19 (Appendix A), the inequality of the last step cannot hold unless the inner product of gradients $< \\Delta f, \\Delta g >$ is always positive, which cannot be guaranteed. Otherwise, there would be no reason to develop gradient-based approaches in continual learning, such as GEM [2] or A-GEM [3]. So even if Eq. (5) can hold for the nonconvex case, the theorem is still questionable. Therefore, the main claim of this paper is highly suspicious to me. If the authors cannot clarify these issues, this paper would be considered to have significant flaws. Besides the questions on the main theorem, the assumption on the initial state of the model is quite strong, as it assumes the initial values of the parameters are close to the optimal values, which is not very practical unless a pre-trained model is applied. So the significance of this paper is further limited. As the theoretical part is incorrect, I haven't reviewed the experiments part of this paper. If the authors can clarify all of the above main concerns, I'm willing to make another round of review. [1] Nesterov, Yurii. "Introductory lectures on convex programming volume I: Basic course." Lecture notes 3.4 (1998): 5. [2] Lopez-Paz, David, and Marc'Aurelio Ranzato. "Gradient episodic memory for continual learning." Advances in Neural Information Processing Systems. 2017. [3] Chaudhry, Arslan, et al. "Efficient lifelong learning with A-GEM." arXiv preprint arXiv:1812.00420 (2018). ############################ Feedback to authors' response ############################ I'm aware that the non-convex setting is valid, but since the corrected proof of the theorem is not uploaded, I will raise my score to 3. <doc-sep>In this paper, the authors provide theoretical justifications for memory-based continual learning (CL) methods and propose a learning-rate-scaling method, NCCL, to improve practical performance. The results look quite exciting (theoretical papers for CL are quite scant); however, after looking into the details of the paper, I was confused in many places and would say the authors need to further improve their manuscript in order to qualify for the ICLR standard. 1. The theoretical analysis is not very impressive. The theory just splits out the catastrophic forgetting term C and demonstrates that performance degradation depends on C. However, where C comes from (I know it is an additional term directly from the mathematical derivation, but what is its meaning and intuition?) is not clearly discussed. Also, the theorem is based on the unrealistic assumption that e_t is unbiased (Assumption 2), which can never happen in memory-based CL methods. The authors do mention approaches such as NCCL without Assumption 2, but no theory is provided. Probably Section 3.2 is meant to be the theory without Assumption 2; then please provide a complete theorem instead of just waving hands. 2. Moreover, there are many flaws in the proofs; I just list a few of them here (or correct me if I misunderstand). - In the proof of Theorem 1, second inequality of eq (19), why does the cross-product term disappear? I.e., why $||\\nabla f + \\nabla g||^2 <= ||\\nabla f||^2 + ||\\nabla g||^2$? - Why does $C_t = E_I (\\tilde{C}_t)$ when taking an expectation over $I_t$? $C_t$ is defined in eq (7), and there is no randomness over $J_t$ (already with $E_J$). But $E_I (\\tilde{C}_t)$ still has randomness over $J_t$. - Why is $E||e_t||^2$ written as $||e_t||^2$? We also have randomness in $e_t$ over $I_t$; see the definition of $e_t$. - In the proof of Lemma 1, why is $E(||\\nabla f||) = O(E(||\\nabla g||))$, i.e., how do we get the second equality? - How do we get the relation $E||\\nabla g||^2 = O(\\beta^2\\delta / \\sqrt{T})$? I see it is directly assumed in Corollary 1 (the expected stationarity of $g$ is $\\delta/\\sqrt{T}$). But I think we should derive this instead of simply making an assumption. Actually, $f$ and $g$ are equivalent and interchangeable; if we assume $g$ has already converged, does that mean $f$ is also assumed to converge? But if we directly apply results derived from $f$, this will be circular reasoning. So I am not sure; the authors had better discuss this more. 3. For practical performance, if we compare NCCL (68.52 accuracy in Table I) with GEM (89.50), A-GEM (89.10), GSS (77.30), or even EWC (68.30), there is no performance improvement at all. The authors further claim their method is faster in computation; then please also include a time comparison, instead of just mentioning it.
Overall, I am afraid that such work does not have sufficient theoretical or algorithmic contributions, and I doubt the true value of designing a new method without any performance improvement. However, I still appreciate the motivation of the paper and will be more tolerant since theory papers for CL are quite scant. So I would be happy to adjust my rating if all my concerns were properly addressed. If there is any misunderstanding, please also let me know. Update: Thanks for the response. However, there is no updated revision in the revision history of this paper. Based on the flaws that I have previously pointed out, it is impossible for me to validate if my concerns were actually adequately addressed without seeing the updated version. I will keep my score unchanged.
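To ground the ACC/BWT discussion in the reviews above, here is a minimal sketch of the standard GEM-style definitions of average accuracy and backward transfer computed from a task-accuracy matrix. Whether the paper's Equation 18 matches these exactly is not settled here; variable names are illustrative, and the sketch assumes the common convention R[i, j] = accuracy on task j after training on task i.

```python
import numpy as np

def acc_and_bwt(R):
    """Average accuracy and backward transfer from a (T, T) results matrix.

    R[i, j] = test accuracy on task j after finishing training on task i
    (the usual convention from the GEM paper).
    """
    T = R.shape[0]
    acc = R[T - 1].mean()                                           # final accuracy over all tasks
    bwt = np.mean([R[T - 1, j] - R[j, j] for j in range(T - 1)])    # negative values indicate forgetting
    return acc, bwt

# toy example: a learner that forgets earlier tasks
R = np.array([[0.95, 0.10, 0.10],
              [0.60, 0.94, 0.10],
              [0.40, 0.55, 0.93]])
acc, bwt = acc_and_bwt(R)
print(f"ACC = {acc:.3f}, BWT = {bwt:.3f}")  # BWT < 0 here, indicating forgetting
```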
This work proposes to analyse the convergence of episodic memory-based continual learning methods by looking at this problem through the lens of nonconvex optimisation. Based on the analysis, a method is proposed to scale learning rates such that the bounds on the convergence rate are improved. Pros: - I agree with the reviewers that this is an interesting and novel perspective on continual learning Cons: - Reviewers point out concerns/issues with the clarity of the manuscript with respect to several parts: - reviewers raise concerns with respect to the significance of the evaluation - reviewers point out that the theoretical analysis itself is somewhat standard and not novel in itself, and 2 reviewers raise concerns with respect to the analysis made Unfortunately the authors seem to have missed the upload of the revised version. The reviewers have nevertheless considered the rebuttal by the authors and the consensus is that this manuscript is not ready yet in its current form.
This paper provides theory on the PAC-learnability of out-of-distribution (OOD) detection. OOD detection is a classification task in which test data may come from unknown classes. If test data come from classes known during training, we want to classify them into those classes; otherwise, we need to detect that they belong to unknown classes. The authors provide a series of theorems about conditions for OOD detection in several interesting setups. Their results imply that we should not hope to find an OOD detection algorithm that works in general cases, but we can still design algorithms for special cases. # Strengths - The paper provides rigorous theory on an important machine learning task. - The paper is excellently written and easy to follow despite its technical content, although all the proofs are in the supplemental material. - The scenarios that the authors consider are not too technical but highly relevant to practical OOD detection methods. Hence, it gives useful insights for practitioners as well. # Weaknesses - Most results are negative ones showing the impossibility of OOD detection in general cases, and the paper does not provide concrete algorithms. The theory only handles a few special combinations of distributions and hypothesis spaces, although I do not consider this a very strong limitation because they cover many common practical situations. <doc-sep>The out-of-distribution detection problem is defined as follows: after training on an ID joint distribution $D_{X_{I}Y_{I}}$ with random variables from $\\mathcal{X}$ and labels in $\\mathcal{Y}$, we need to learn a classifier which can detect a test sample as OOD if the sample is drawn from outside of $D_{X_{I}Y_{I}}$, while predicting the correct label if the test sample is drawn from the ID distribution. This paper mainly answers the question of agnostic PAC learnability of out-of-distribution detection in different scenarios, which is known as an open problem in out-of-distribution learning theory. This paper first defines the basic concepts of agnostic PAC learnability of OOD detection, which are natural extensions of agnostic PAC learnability in supervised learning. Then, considering the imbalance issue of OOD detection, the author proposes the prior-unknown spaces and indicates that researchers should focus on agnostic PAC learnability of OOD detection in the prior-unknown spaces. By discovering a necessary condition (Condition 1), the author shows that the condition cannot hold in the total space and the separate space. Based on this observation, the paper proves that in the most general settings (total space and separate space), OOD detection is not agnostic PAC learnable. Next, the author proves necessary and sufficient conditions to show that the separate space can be learnable if and only if the hypothesis space contains almost all classifiers, while the paper proves that in the finite-ID-distribution space, Condition 3 is the necessary and sufficient condition for the learnability of OOD detection. The paper also proves that under the realizability assumption, OOD detection is learnable in the density-based space. Lastly, the author considers OOD detection in some practical hypothesis spaces: FCNN-based and score-based. The paper shows that in the separate space, OOD detection is learnable in FCNN-based spaces or score-based spaces iff the feature space is finite. In Theorem 11, the paper shows that Condition 1, Condition 3, the realizability assumption, and learnability are equivalent.
In Theorem 12, the author also reveals that overlap will lead to the failure of OOD detection. This paper is important for understanding when and how OOD detection can work in real applications, as it also gives insight and guidance for OOD detection algorithm design. Strengths: 1. The issue is definitely relevant to NeurIPS as well as ICML, ALT and COLT. When OOD detection can be learnable is an open issue in OOD learning. Due to the missing necessary information from the OOD data, the learnability of OOD detection is very difficult to establish. Despite plenty of applied work, there is still little theory established for this issue. Addressing it requires the author to dig out and discover unknown necessary conditions from scratch. This paper does make an effort to address this problem and makes great progress. 2. This paper is sound. I am interested in this topic, but the paper is long, so I spent several days checking the proofs carefully. All of the results in this paper are supported by proofs. From what I have checked, all proofs are correct. 3. The paper answers both negatively and positively the question of agnostic PAC learnability of OOD detection, and introduces sufficient assumptions to recover it (such as Assumption 1). These assumptions are practical and mild, and can be satisfied in many practical cases, for example, FCNNs, CNNs and kernel spaces. Therefore, the theory can be tightly connected with practical applications. 4. Plenty of applied work has been proposed to address OOD detection, but theoretical work discussing when OOD detection works is lacking. The paper theoretically shows when OOD detection can work in practical cases. I think the contributions are significantly important and this work can give good guidance for the development of OOD detection. This paper has the potential to achieve a long-term impact on the OOD learning field. 5. The paper is written well enough to understand. Weaknesses: 1. The appendix is long and the proofs are complicated. Although I have checked almost all important proofs and believe they are correct, I still spent three days checking them. It would be better for the author to provide proof sketches and intuitions for the important theorems. 2. It seems that the description of Theorem 4 in the main text is slightly different from the description of Theorem 4 in the appendix. I have checked it and found that the description of Theorem 4 in the appendix is more rigorous. Although you have explained why they are different (because of the space limitation) in Appendix G.2, I still suggest that the author use the description of Theorem 4 in the appendix to replace Theorem 4 in the main text, because the description in the appendix is correct. 3. Typos/grammar: 1) In line 305, $K$ should be $\\lambda$. 2) In line 340, $D_{XY|Y}^{in}$ should be $D_{X_{I}Y_{I}}$. 3) In line 171, $D_{X_{I}}$ should be $D_{X_{I}Y_{I}}$? 4. After checking your proof, I think Condition 2 can be removed from Theorems 7 and 10. Although Condition 2 is weak and meaningful, I still think it is better to remove it. The idea for how to remove Condition 2 can be motivated by the proof of Theorem 9 (the second part). The paper focuses on theory for OOD detection and gives the first theoretical support for understanding when OOD detection can work. There is no potential negative social impact.
Based on the PAC learning theory, the paper proved several impossibility theorems for the learnability of OOD detection under some scenarios, and finds some conditions that OOD detection is PAC-learnable. Also, the paper demonstrate the theory in real practice using FCNN and OOD scores as examples. Recently there are loads of papers proposed empirical methods for OOD detection, but the theory is rarely explored.This paper is the first to investigate the theory of OOD detection so thoroughly, which is meaningful to this field. Strengths: - The paper is clear and well-written. And the proofs are generally correct. - This paper is one of the few theoretical works focusing on OOD detection, which plays a significant role in this field. - The theory is intuitive and have some practical impacts. It can somewhat guide the design of OOD detection algorithms. Weakness: - Some notations and expressions can be refined in Section 2. For example, $S$ or $D_{XY}^n $ in eq.2 can be explained (minor). - Some typos. In section 2 Definition 1. "if there exist an algorithm" -> "if there exists an algorithm". - Some experiments can be added to show the correctness of the theorems. - The practical impacts may not be large enough. Yes. <doc-sep>Recently, reliable AI plays important role in designing an intelligent machine learning system. How to let AI system tell “do not know” is critical for reliable AI systems, which is the focus of this paper. In this paper, the authors consider a practical scenario where out-of-distribution data (the system should not know) is unseen during the training process. In this scenario, the authors want to investigate if the OOD detection is learnable. The theoretical part is easy to follow. I find that the theoretical contributions are completed and interesting. At first, this paper shows that OOD detection is not learnable in the most general case, which does make sense due to the unavailability of OOD data. Then, this paper points out a necessary condition (sometimes as a necessary and sufficient condition) of the learnability of OOD detection, which directly induces a lot of necessary and sufficient conditions of learnability of OOD detection. In my opinion, this is a significant contribution to the field. Finding necessary and sufficient conditions is always a core and the most important part when studying a problem. From the practical part, several theorems are considered using networks or finite in-distribution domains, making the whole paper also fit the taste of practitioners. In many practical scenarios, we cannot expect OOD data is the ones we have already seen, which is exactly the problem this paper studies. Besides, the theorem regarding finite ID distributions is also practical. If I understand correctly, in this practical scenario, this paper gives a better result, which is very interesting to me and significant to the field (we often only have finite ID distributions in practice). Pros: 1. This paper is the first to characterize the learnability of OOD detection, which makes a significant contribution to the field. There are many OOD detection papers targeting the problem this paper considers. The problem is very difficult yet very important in practice. Previously, no theoretical works are proposed for this problem. In this paper, a completed theory is proposed for this problem, including when OOD detection will fail and when OOD detection will succeed. A lot of necessary and sufficient conditions of learnability of OOD detection are exciting to this field. 2. 
For practitioners, this paper relieves some big concerns regarding existing OOD detection methods. Before this work, one could intuitively think that OOD detection is not learnable (which is true in the most general case, yet our common datasets are not that general). However, this paper gives a theoretical boundary between learnability and unlearnability of OOD detection by proving some necessary and sufficient conditions. Thus, we can know on what kinds of datasets OOD detection is learnable. This contribution is significant and meaningful. 3. Fig. 1 is very helpful in understanding the key necessary condition of OOD detection, and it seems that it can motivate a bunch of papers in this research direction. 4. I can see that there are three research topics regarding letting AI say "I don't know": 1) classification with a reject option; 2) PQ learning; and 3) OOD detection. The first two already have some theory, but the last one does not. This paper fills this gap, making OOD detection methods (which might be more practical than the other two) possible in theory. 5. Although the proofs of this paper are not easy to follow, the logic and organization of the proofs are clear. I have read most proofs and have not found unrecoverable errors in important results. The proofs are sound. Cons: 1. I have read some papers regarding PQ learning and feel that PQ learning is totally different from OOD detection. PQ learning focuses on scenarios where OOD data are somehow available, yet OOD detection focuses on the opposite scenario. However, it would be better to discuss their differences more deeply. Does PQ learning have limitations when meeting different OOD data in the future? I am interested to see some discussion regarding this part. 2. Similar to PQ learning, classification with a reject option could be compared to OOD detection more deeply instead of just comparing the two in plain words. I know they are very different and OOD detection theory is more difficult, but giving a more detailed comparison would improve the paper. 3. I have some questions regarding Figure 1, which I hope the authors can confirm with me. In my opinion, the solid line is the ground-truth line. Do we expect the estimated lines (dashed lines) to get closer to the solid line? If so, when overlap exists, why is the solid line not straight? Can you point me to the specific part regarding this? It seems that the solid line will be straight if there are no overlaps, which makes OOD detection learnable. Is that correct? 4. More explanation, like Figure 1, could be added to help understand the theorems better. Brief proofs might also be useful. 5. In line 26, there are too many separate citations. In my opinion, this is not necessary. 6. Line 148 should not be a new paragraph. 7. The density-based space is very important and interesting. In particular, Theorem 11 is one of the spotlights. Can you give more explanation or applications regarding the density-based space (Theorems 9 and 11)? 8. The mathematical expression in Definition 1 about PAC learnability is different from the normal expression of PAC learnability. Although line 118 tells us that they are equivalent, and I also realize that they are equivalent via papers [21, 30] (exercise 4.5 in [21] can prove it?), the paper would be improved and clearer if a brief proof of the equivalence were given in the final version. It is a purely theoretical paper, so I think there are no negative social impacts.
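Point 8 of the review above contrasts the paper's Definition 1 with the "normal expression" of PAC learnability. For reference, a standard statement of agnostic PAC learnability in the supervised setting (which the paper's definition extends to OOD detection) is sketched below; the notation is generic and not taken from the paper.

```latex
% Standard agnostic PAC learnability of a hypothesis class H (supervised form).
% A is the learning algorithm, S an i.i.d. sample of size n, R_D the risk under loss l.
\[
\forall \varepsilon,\delta\in(0,1)\ \exists\, m(\varepsilon,\delta)\ \text{s.t.}\
\forall D_{XY},\ \forall n\ge m(\varepsilon,\delta):\quad
\Pr_{S\sim D_{XY}^{\,n}}\!\left[
  R_D\big(\mathbf{A}(S)\big)\le \inf_{h\in\mathcal{H}} R_D(h)+\varepsilon
\right]\ge 1-\delta,
\]
\[
\text{where } R_D(h) \;=\; \mathbb{E}_{(x,y)\sim D_{XY}}\big[\ell\big(h(x),y\big)\big].
\]
```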
This paper studies generalization and learnability questions in the realm of out-of-distribution (OOD) detection. Specifically, it applies PAC learning to the theory of OOD detection. The contributions include new conceptual definitions of agnostic PAC learnability of OOD detection. Then, the authors argue for studying prior-unknown spaces under certain necessary conditions. This leads to a number of novel results, both in theory and in terms of possible practical impact (e.g., when OOD detection will succeed vs. fail). The reviewers found the paper sound, insightful, clearly-written, and novel. This paper benefits the community because it is one of the few theoretical studies of OOD detection. For the final version, the reviewers have many comments regarding definitions, terminology, and some of the technical details. I encourage the authors to incorporate as much of this feedback as possible to make the paper easier to read for future audiences. For example, please - add the full proof of how Eq. (2) relates to PAC-learnability, - add and clarify the realizability assumption in the revision, - use the description of Theorem 4 in appendix G.2 to replace Theorem 4 in main text. The authors should also provide proof sketches for the main results (either in the main paper or the appendix). This paper contains many theoretical results, as well as ways to unpack them in the context of more practical scenarios. All of this would benefit from clear exposition. There are also a handful of typos to fix (in the notation/equations and in the exposition). Given the large number of small questions/issues, it is important to address these in the final version of the paper. The reviewers all vote positively toward acceptance of this paper, and therefore, I also recommend acceptance.
This paper proposes an alternative to softmax attention using the sinc function. The authors are well motivated in using the sinc function from the Fourier integral estimator and provide theoretical support for the approximation error. They present experiments on two datasets showing improvements over softmax attention. Strengths: 1. The paper is a pleasant read and clear to understand. 2. The established connection between the non-parametric kernel density estimator and self-attention. 3. The authors have provided well-motivated intuition for their proposed approach. 4. Related work has been moderately covered. 5. The evaluation is convincing in showing the benefit of sinc-based attention. Weaknesses: 1. The authors only experimented with one choice of $\\phi$. It would be great to see what other suitable candidates for $\\phi$ are possible. 2. How do they determine the value of $R$? Is it dataset-specific? 3. There are no quantitative results on the runtime of the proposed attention mechanism. The authors have not adequately commented on the known limitations. <doc-sep>The authors demonstrate the FourierFormer, a new class of transformers in which novel generalized Fourier integral kernels replace the dot-product kernels. The FourierFormer can capture correlations between query features and key self-attention vectors. The authors empirically corroborate the advantages of FourierFormers over the baseline transformers in various practical applications, including language modeling and image classification. Strengths: The ideas that the authors put forward are novel, and the mathematical arguments are complete and ingenious. Weaknesses: The experiments in this paper are insufficient and, therefore, not convincing enough to demonstrate the effectiveness of the FourierFormer. The experiments only involve two basic tasks based on WikiText-103 and ImageNet. Although the authors have given detailed mathematical proofs, due to the poor interpretability of the Transformer itself, I still need to see more experimental results to agree with their point of view. The current experimental results are insufficient and not persuasive. <doc-sep>In this paper, the authors provide a new perspective for interpreting the self-attention mechanism in Transformers. In particular, with the assumption that the query and key vectors are normalized, the self-attention mechanism coincides with the well-known nonparametric kernel regression with kernel density estimation. Motivated by this, the authors instead use the generalized Fourier Integral Theorem to build more powerful estimators for capturing the interaction between features in different dimensions. Experiments on some benchmarks are conducted. **Strengths** - The interpretation of the self-attention mechanism as using isotropic Gaussian kernels for kernel density estimation and nonparametric regression estimation seems to be novel, and it provides a new perspective for the community to understand the behavior of self-attention. - The motivation to use the generalized Fourier Integral Theorem to capture the feature interaction, instead of using multivariate Gaussian kernels with proper covariance matrices, seems reasonable. - The theoretical analysis is thorough, including the approximation error of the generalized Fourier density estimator (Theorem 1) and the generalized Fourier nonparametric regression estimator (Theorem 2).
**Weaknesses** - **Regarding the background**: the authors should consider adding a preliminary section to introduce the background knowledge on nonparametric kernel regression, kernel density estimation, and the generalized Fourier Integral theorem, which could help the readers easily follow the derivation of Section 2 and understand the motivation to use the Fourier Integral theorem as a guide to developing a new self-attention mechanism. - **Regarding the experimental evaluation**: the issues are three-fold. 1) Since the authors provide an analysis of the approximation error between estimators and true functions (Theorems 1 and 2), it would be informative to provide an empirical evaluation of these quantities on real data as further verification. 2) The experiments should be more comprehensive and general. For both the language modeling task and the image classification task, the model size is limited and the baselines are restrictive. 3) Since the FourierFormer needs customized operators for implementation, the authors should also provide memory/time cost profiling compared to popular Transformer architectures. Based on these issues, the efficiency and effectiveness of the FourierFormer are doubtful. -------After Rebuttal------- Thanks to the authors for the detailed response. Most of my concerns have been addressed. I have updated my score to 6. No negative societal impact. <doc-sep>This paper proposes the FourierFormer, in which the dot-product kernels are replaced by the generalized Fourier integral kernels. Unlike the dot-product kernels, where we need to choose a good covariance matrix to capture the dependency of the features of data, the generalized Fourier integral kernels can automatically capture such dependency and remove the need to tune the covariance matrix. This paper theoretically proves that the proposed Fourier integral kernels can efficiently approximate key and query distributions and verifies this point through experiments on two transformer-based tasks. 1. This paper introduces a new angle to interpret transformers and their key module. This work provides a nonparametric regression interpretation to study self-attention in transformers and formulates self-attention from the viewpoint of kernel regression. 2. This work adopts the generalized Fourier integral estimators to replace the traditional dot-product self-attention and provides theoretical guarantees for the estimator. 3. Overall, the paper is well organized and technically sound. The experimental results on multiple transformer-based tasks verify the efficiency of the proposed FourierFormer. Weaknesses: 1. The derivation process and the presentation need to be improved. Some important symbol annotations or explanations are missing in the algorithm description, which makes it hard for readers to follow the derivation process. For example, in equation (9), some important symbol annotations are missing, e.g. ‘s’, ‘R’. It is difficult for readers to follow the derivation, and the derivation of p(k) is crucial to the following interpretation. 2. Some pitfalls in the paper: a) in line 100, “are i.i.d samples from.”; b) in equation (6) one of the $\\psi$ is written as $\\phi$; c) line 185, the C’ in the text is in the wrong format. The paper didn't address the limitations and potential negative societal impact of the work.
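To ground the kernel-regression reading of attention running through these reviews, here is a minimal sketch contrasting softmax attention with a product-of-sinc (Fourier-integral-style) kernel. The scaling, normalization, and choice of R used in the paper may differ; this is an illustration of the idea only, not the authors' implementation.

```python
import torch

def softmax_attention(q, k, v):
    # standard scaled dot-product attention, the baseline the reviews compare against
    w = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    return w @ v

def sinc_product_attention(q, k, v, R=1.0):
    # Attention weights from a per-dimension sin(R x)/x factor multiplied over
    # feature dimensions, in the spirit of the generalized Fourier integral
    # estimator; signs and normalization are handled more carefully in the paper.
    diff = q.unsqueeze(-2) - k.unsqueeze(-3)              # [..., Lq, Lk, d]
    kern = torch.where(diff.abs() < 1e-8,
                       torch.full_like(diff, R),          # limit of sin(Rx)/x at x = 0
                       torch.sin(R * diff) / diff)
    w = kern.prod(dim=-1)                                  # product over feature dims
    w = w / (w.sum(dim=-1, keepdim=True) + 1e-8)           # Nadaraya-Watson-style normalization
    return w @ v

# usage: batch of 2 sequences of length 5 with head dimension 8
q, k, v = (torch.randn(2, 5, 8) for _ in range(3))
out = sinc_product_attention(q, k, v, R=2.0)
```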
Overall, the reviews about this paper are very positive. The authors spent great effort engaging in discussions and improving the paper with clarifications and additional experiments. We recommend accepting the paper.
The paper details a way of investigating the space of pre-images that lead to a particular output from a ReLU layer, with the goal of using the inverted representations as a way to understand deep neural networks. Three experiments are proposed where the authors claim that the computed pre-images help interpret the network decision making. Overall the paper is interesting; however, I am not certain of the novelty, as some related work is not discussed. Additionally, although the practical application of the method is interesting, the clarity could be improved for the last experiment. Positives: * Understanding the invariances of neural networks can potentially lead to more interpretable models, and one way to investigate this is by looking at the preimages for a network. * The paper is a nice mix of theoretical results which lead to practical applications. Questions and Concerns: * The authors state that maxpool can be rewritten in terms of a linear component and a ReLU, but this is non-obvious. If this is true, a mathematical formulation should be explicitly included in the paper. * The paper is missing some potentially related references. Previous work has investigated how multiple stimuli can get mapped onto (approximately) the same point in recognition networks by inverting the representations via iterative gradient descent (Mahendran & Vedaldi 2015, and recent work including invariance-based adversarial examples in Jacobsen et al. 2019 or model metamers in Feather et al. 2019). How does the proposed preimage computation help improve model interpretability beyond this previous work, especially given the authors' statement that the method is intractable for large networks? * The paper does not discuss invertible networks which have a bijective mapping between the input and output (i.e., Invertible Residual Networks in Behrmann et al. 2019). Discussing this work seems relevant if the goal is to make models such that one can start with hypothetical outputs and understand the inputs that lead to them. * The final example of using this method in practice for ACAS systems is interesting, but it is difficult to follow what “success” would mean for this experiment. Minor points: * The following sentence on page 3 seems to be missing something: “Preimages are most insightful and useful when the inputs and outputs have definite interpretation – application areas where the need for massive networks is less.”. * There is a typo in the last sentence of page 3 (“bu”->”but”) <doc-sep>There are many issues in the paper that can be improved. The title is not appropriate, as this work does not address safety applications. It is worth noting that the word safety is not defined and not used in the main body of the paper. It is difficult to follow the presentation of the paper, since mainly the applications are presented and then some contributions given, in the same presentation as the abstract. A major issue is that the paper is missing some important theoretical analysis. Of particular interest is the existence of the preimages, because not all outputs have inputs. Moreover, the uniqueness of the solution needs to be studied. These properties should depend on the nonlinearities used and the architecture of the neural network. There are many spelling and grammatical errors, such as “suprising”, “have been been”, “Coincidentlly”, “configuations”.<doc-sep>The paper presents a method to verify whether a NN is performing as expected in sensitive applications.
Although the general area is very important in machine learning, the paper is not very well presented: the problem is not well stated, the approach is not very clear, and the results are not well justified. - The presentation and the writing of the paper should be improved. Unfortunately, with the current format it is hard to glean the idea of the paper. There are some typos (e.g., 'bu' on page 3, 'plot' on page 4, etc.). - There are some concepts that are not defined early on, and maybe never in the paper. For example, what is the problem that the paper tries to solve, mathematically? It is not very clear. What is the mathematical definition of a preimage? - The authors say: "Preimages are most insightful and useful when the inputs and outputs have definite interpretation – application areas where the need for massive networks is less". It's hard to fully understand, but it seems that the method suffers from scalability issues. Can this be formally analyzed? What is the complexity of the algorithm in time and space? Why is there a scalability issue? Is it a fundamental problem? How does this limit the scope and applicability of the method? Also, what does "definite interpretation" mean? - The NNs used in the experiments are very tiny. I would consider experiments that reflect more realistic situations in the real world. The current setup significantly limits the scope of the method. - It is not clear how to verify the performance of the method. The results in Figures 1 and 2 do not show us the quality of the method: is it doing well or badly? I found the results in Figure 1 surprising, as the moon data is fairly symmetric while the preimage is biased towards one class. Is there a reason for that?<doc-sep>Deep neural networks are known to be brittle, and can lead to dangerous consequences if left unverified. Forward reach set computation can be used as a basic primitive to verify properties of deep neural networks used in a robotic setting. There has been a rising interest in verifying larger neural networks used in safety-critical settings. In this paper, the authors propose a way to compute reachable sets for a neural network in a backward sense: starting from the outputs of the neural network and then working its way back to the inputs. This is an interesting way to look at the problem itself, but as the authors point out, it is an intractable problem. My concern about this paper is that I don't see the use of a pre-image computation algorithm as being very useful. A forward reachability tool works pretty well for the size of neural networks considered in the paper. Pre-image computation does not provide any advantage in terms of scalability, as is apparent from the experiments. Moreover, almost any safety constraint that needs to be verified with system dynamics in the loop should ideally work forward in time, and thus for the neural network controller from the inputs to the outputs. Cartpole example: The authors come up with rules about which output behaviors are correct for a few of the input regions, and then use these as a specification for the verification algorithm. But the very specifications come from reasoning about the forward behavior of the system dynamics itself. The idea of forward reach set computation would therefore generalize much better to a wide range of examples, without the need to come up with such handcrafted rules. The authors do make a convincing case for the ACASXu example. But this example is less interesting given the amount of attention it has received recently.
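For context on the forward-reachability primitive that this reviewer contrasts with preimage computation, here is a minimal interval-bound-propagation sketch through a single linear+ReLU layer. This is a generic illustration of forward reach set over-approximation, not any of the specific tools referenced in the reviews.

```python
import numpy as np

def relu_layer_reach(W, b, x_lo, x_hi):
    """Propagate the axis-aligned input box [x_lo, x_hi] forward through
    y = relu(W x + b), returning an over-approximating output box."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    pre_lo = W_pos @ x_lo + W_neg @ x_hi + b   # lower bound of W x + b over the box
    pre_hi = W_pos @ x_hi + W_neg @ x_lo + b   # upper bound of W x + b over the box
    return np.maximum(pre_lo, 0.0), np.maximum(pre_hi, 0.0)  # ReLU is monotone

# usage: a 2-input, 3-output layer and the unit box around the origin
W = np.array([[1.0, -2.0], [0.5, 0.5], [-1.0, 1.0]])
b = np.zeros(3)
lo, hi = relu_layer_reach(W, b, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
```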
Thank you for your submission to ICLR. The reviewers and I unanimously felt, even after some of the clarifications provided, that while there was some interesting element to this work, ultimately there were substantial issues with both the presentation and content of the paper. Specifically, the reviewers largely felt that the precise problem being solved was somewhat poorly defined, and the benefit of the proposed preimage technique wasn't always clear. And while the ACAS system was a nice application, it seems to be difficult to quantify the real benefit of the proposed method in this setting (especially given that other techniques can similarly be used to verify NNs for this size problem). The answer that this paper provides seems to be something along the lines of "ease of visual interpretation" of the pre-image conditions, but this needs to be quantified substantially more to be a compelling case.
This paper develops a mean field theory for batch normalization (BN) in fully-connected networks with randomly initialized weights. There are a number of interesting predictions made in this paper on the basis of this analysis. The main technical results of the paper are Theorems 5-8, which compute the statistics of the covariance of the activations and the gradients. Comments: 1. The observation that gradients explode in spite of BN is quite counter-intuitive. Can you give an intuitive explanation of why this occurs? 2. In a similar vein, there are a number of highly technical results in the paper, and it would be great if the authors provided an intuitive explanation of their theorems. 3. Can the statistics of activations be controlled using activation functions or operations which break the symmetry? For instance, are BSB1 fixed points good for training neural networks? 4. Mean field analysis, although it lends an insight into the statistics of the activations, needs to be connected with empirical observations. For instance, when the authors observe that the structure of the fixed point is such that activations are of identical norm and equally spread apart in terms of angle, this is quite far from practice. It would be good to mention this in the introduction or the conclusions.<doc-sep> This paper investigates the effect of batch normalization in DNN learning. The mean field theory from statistical mechanics is employed to analyze the evolution of variance matrices between layers. As a result, batch normalization itself is found to be the cause of gradient explosion. Moreover, the authors point out that near-linear activation functions can mitigate such gradient explosion. Some numerical studies were reported to confirm the theoretical findings. The detailed analysis of the training of DNNs with batch normalization is quite interesting. There are some minor comments below. - On page 3, two lines above eq. (2): what is delta in the variance of the multivariate normal distribution? - The notation q appears in the middle part of page 3, before the definition of q is given in the last paragraph of p. 3. - The randomized weights are not very practical. Though this may be the standard approach in mean field analysis, some comments would be helpful to the readers. <doc-sep>This paper provides a new dynamic perspective on deep neural networks. Based on Gaussian weights and biases, the paper investigates the evolution of the covariance matrix along the layers. Eventually the matrices reach a stationary point, i.e., a fixed point of the dynamical system. Local behavior around the fixed point is explored. Extensions are provided to include batch normalization. I believe this paper may stimulate some interesting ideas for other researchers. Two technical questions: 1. When the number of layers tends to infinity, the covariance matrix reaches a stationary (fixed) point. How should one understand this phenomenon? Does this mean that the distribution of the layer outputs will not change too much if the network is deep enough? This somewhat conflicts with the common belief that "the deeper, the better." 2. Typos: the weight matrix at the end of page 2 should be N_l times N_{l-1}. Also, the x_i's in the first line of page 3 should be bold.
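The counter-intuitive gradient-explosion claim raised in the first comment can be probed empirically in a few lines. The snippet below is a generic sanity check at random initialization, not the authors' experiment; names are illustrative, and how pronounced the effect is will depend on depth, width, and the nonlinearity.

```python
import torch
import torch.nn as nn

depth, width, batch = 50, 256, 64
layers = []
for _ in range(depth):
    layers += [nn.Linear(width, width), nn.BatchNorm1d(width), nn.ReLU()]
net = nn.Sequential(*layers)

x = torch.randn(batch, width)
loss = net(x).pow(2).mean()   # arbitrary scalar readout
loss.backward()

# compare weight-gradient norms at the first and last linear layers
g_first = layers[0].weight.grad.norm().item()
g_last = layers[-3].weight.grad.norm().item()
print(f"grad norm, first linear layer: {g_first:.3e}, last linear layer: {g_last:.3e}")
```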
This paper provides a mean-field-theory analysis of batch normalization. First there is a negative result as to the necessity of gradient explosion when using batch normalization in a fully connected network. They then provide further insights as to what can be done about this, along with experiments to confirm their theoretical predictions. The reviewers (and random commenters) found this paper very interesting. The reviewers were unanimous in their vote to accept.
Summary: This paper proposes a new algorithm for solving neural ODEs. Each numerical solver step of the neural ODE is implemented as an invertible neural network via a variant of the asynchronous leapfrog integrator. While still computing an accurate gradient, this allows memory savings by discarding intermediate data from the numerical integration steps, since it can be reconstructed using the inverse. A theoretical stability analysis is provided. The experimental results show that the algorithm achieves similar performance to previous methods (e.g. ACA) while using less memory. Strengths: + Identifies a nice connection between invertibility and memory efficiency. Beyond neural ODEs, this could enable the use of larger models where invertible networks are useful (e.g., normalizing flows). + The theoretical analysis of stability is useful to build intuition. Concerns / weaknesses: - Most experiments in the paper use damping_factor $= \\eta = 1$. Since, theoretically, this is not stable, it would be nice to see if the empirical improvements hold up for $\\eta < 1$, where stable regions do exist. - The naive method seems too naive. Why are all results from all $m$ evaluations being saved? It is obvious that $m-1$ of these are unnecessary for gradient computation since they don't affect $z(T)$. The related claim about the computation graph being deeper for the naive method also seems incorrect. Other comments: - In Algorithm 1, shouldn’t $error\\_est = \\infty$ be inside the while loop? - In Algorithm 4, shouldn’t $a(T)$ be the partial derivative of $L$ wrt $z(T)$ instead of the total derivative? - In Theorem 3.2, what is $\\sigma$? Should it be $\\sigma_i$? - In various locations, notation like $O(N+1)$ is used. Should this just be $O(N)$ since I assume $N$ is at least $\\Omega(1)$? - It seems a bit strange that we have to do a local forward and local backward pass in Algorithm 4. Could this be solved by making each layer of f invertible? In the same vein, it seems that the adjoint method needs to do a separate solve of the reverse ODE because of loss of information. If we were to assume invertibility of the forward map, is there a way to modify the adjoint method to exactly retrace the path backwards?<doc-sep>Summary: This paper presents a memory-efficient asynchronous leapfrog integrator for numerically solving neural ODEs, referred to as MALI. The method comes with a constant memory guarantee (like the adjoint method) and also guarantees reverse-time accuracy (like the adaptive checkpoint adjoint (ACA) method). The authors also give a rigorous theoretical analysis of MALI, and also discuss a "damped" version with an increased stability region. The method is evaluated on a variety of tasks, which include classification, dynamical modelling, and generative modelling. Pros: - The theoretical analysis looks correct, noting that I haven't worked out all the details. - The experimental evaluation is very exhaustive, and MALI achieves near-best performance in all tasks. - The method is proven to be as accurate as standard numerical ODE solvers. Thanks to its reduced memory cost (compared to ACA), MALI can then be treated as an off-the-shelf replacement. Cons and Questions: - Looking at the results, I'm having difficulty seeing any significant improvement upon ACA. Then the main contribution (in addition to the theoretical analysis) is the reduced memory consumption, which makes me rethink whether ICLR is a suitable venue.
- Although the memory costs of the adjoint method and MALI are $O(N_f)$ and $O(N_f+1)$, this doesn't really reflect in Figure 4c, where the blue bar doubles the red one. I'd be happy if the authors can briefly explain why - Looking at Table-2, why does the test performance of a NODE trained with MALI increase when we switch from MALI to RK4? It would be much nicer to see some error variance estimate. - I would be happy to see an experimental evaluation of the "A-stability". As mentioned by the authors, the stability analysis is asymptotic and T could be arbitrarily small in, e.g., continuous-time flows. However, that's not the case in time-series modelling. So I wonder if the stability claim can be verified on a scenario in which, e.g., 100 observations arrive uniformly in time with 10 secs gaps. - To generate Table-2, did you train a ResNet without any differentials/integrals involved and try to evaluate the test performance using an ODE solver (simply using the trained ResNet as the drift function)? If so, I don't think this makes any sense except Euler-1 solver, and the entire ResNet row in Table-2 could go away. Additional comments: - Figure-4 caption could include some more detail (at least mentioning the experiment name) - Why is there a "local forward" step within the for loop in the backward routine in Alg.4? - It would be nice to see a brief description of the mujoco dataset. - Typo in the title of section B.3.2. Note: After rebuttal, I increase my overall score from 6 to 7.<doc-sep>1) Summary The manuscript proposes a reversible integration scheme for approximately estimating the gradient of neural ordinary differential equations. These ODEs represent DNNs with continuous rather than discrete values for the number of layers. The solver is theoretically analysed and empirically compared to other solvers. 2) Strengths + The paper is mostly well written. + The reversibility property of the solver leads to a memory footprint that does not depend on integration time. + The model is applied to standard datasets. 3) Concerns - The concept of Neural ODE models, their scope and their expected usefulness should be better motivated. It is not obvious which role these models play and what they offer as potential strengths. - The integrations scheme seems already known and well established. - It does not seem that the paper makes code and data available to the public. 4) Remarks/Questions a) Algorithm 1: It seems that the stepsize h should be initialized upon every step. Otherwise the steps can only get smaller. b) References: capitalization not correct e.g. "neural information processing systems", "ode-net", "lennard-jones" c) What benefits does the neural ODE model have in the context of image classification? What is the intuition behind the "continuous depth" idea in this scenario?<doc-sep>**Paper summary** There are typically two methods for estimating the gradients with respect to the loss for neural ODEs. The naive method directly backpropagates through the steps of the ODE solver leading to accurate gradients but very large memory cost. The adjoint method in contrast does not store the entire trajectory in memory, but has reverse trajectory errors (i.e. the numerical solution in the reverse direction will not be the inverse of the numerical solution in the forward direction). In this paper, the authors propose a method that is both reverse accurate and has low memory cost. To achieve this the authors take advantage of the asynchronous leapfrog solver. 
This numerical method is reversible: solving an ODE numerically in the reverse direction is the inverse of solving the ODE numerically in the forward direction. This is not generally true for the ODE solvers typically used (RK4 and so on) in the neural ODE literature. As the numerical solver is explicitly invertible, the authors can (from only the final state and not the entire trajectory) locally reconstruct each ODE solver step to get the local gradient of the loss with respect to the parameters. They can then calculate these gradients along the entire reverse trajectory to obtain an accurate estimate of the gradient with respect to the parameters. As each step of the numerical solver is reversible, they do not need to store the entire trajectory. The authors analyse the stability and numerical error of their proposed method and provide a toy example to show how well their method estimates gradients compared to the naive, adjoint, and adaptive checkpoint methods. The authors then perform experiments on a variety of tasks to test their model. They test their model on image classification experiments, both on CIFAR10 and ImageNet, and achieve good results compared to the baselines. In addition, they perform adversarial robustness experiments on ImageNet and also show good performance. Finally, the authors test their method both for time series modeling and continuous normalizing flows, again showing good performance compared with naive integration methods. **Positives** - The motivation and core idea of the paper are clear. Numerical solvers are in general not reversible, and this can lead to inaccurate gradient estimates when using the adjoint method. The authors explain this clearly and then propose a method that effectively solves this. - The experimental results are quite impressive. The model performs on par with the adaptive checkpoint method in terms of accuracy but is much more memory efficient (and notably the memory is independent of the number of integration steps). This allows the authors to run their model on large-scale datasets like ImageNet, which was not previously possible with most neural ODE methods. Further, the authors achieve good performance on quite a wide variety of tasks (image classification, adversarial attacks, time series modeling, generative modeling), which is nice. - The authors perform a thorough analysis of the runtime of their integration method compared to others, which is very helpful. **Negatives** - The presentation of the method and results is not always very clear. For example, the section about damping for the ALF integrator is not clear. The authors mention that ALF is not stable for eta=1, but (as far as I can tell) never mention what value of eta they use in practice and whether choosing this value is difficult. Further, it is not clear if ALF is still reversible with this eta parameter. Presumably you would have to use 1/eta in the reverse step for it to remain invertible, in which case the reverse is not stable? The authors should be more clear about this. - The toy example is confusing. How come the integration time starts from t=20? Is this because the error only grows after t=20? As you use T=1 for all experiments (and the rtol and atol are also roughly the same for all experiments), it would be nice to see if this actually makes a difference also for t<1. In Figure 4, the authors also mention the derivative dL/dy_0, but this derivative is never mentioned in the text. Do you mean dL/dz_0? The plots of memory consumption are nice and clear though.
- The ALF solver already exists, so the main contribution of the paper is simply to apply the ALF solver to neural ODEs. This means that the novelty of the method is somewhat limited, but I do not think that this is a major issue as the method works well and is clearly motivated. - The section about using different numerical solvers for ResNets does not make much sense. ResNets are not designed as flows and do not behave as flows in practice, so we should not expect them to work at all with other numerical solvers than Euler with timestep=1. I don’t really think these experiments show anything interesting and should be removed for clarity. **Recommendation** Overall the paper has a clear motivation, provides a nice and simple solution to an interesting problem and has good experimental results. However, there are some clarity issues which make some aspects of the model and method confusing. I therefore recommend a weak accept but would increase my score if the clarity issues are solved. **Questions** The model achieves extremely good bits/dim on MNIST (0.87). However, it seems from the appendix that the samples are fairly poor (compared to vanilla FFJORD for example). Log likelihood and sample quality are not always correlated, but the difference seems particularly jarring here. Do you know why this is? **Typos and small comments** - In many places the authors use latex math to write words that should either just be written in italics (w.r.t) or using \\text{} in math mode (e.g. atol, rtol). - There are several typos in the script, so I think it would be a good idea for the authors to read through the script again to fix those. - In several places, the authors write O(N_f + 1) which instead should be O(N_f) - The authors often write “constant memory cost with respect to integration time”. I think it would be more helpful to say “number of solver steps” or something along those lines as integration time typically refers to the upper limit of the integral when solving an ODE.
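To make the reversibility underlying MALI's memory savings concrete, here is a minimal sketch of an asynchronous-leapfrog-style step and its exact algebraic inverse. It is a sketch of the integrator family only, not the authors' implementation, which additionally handles adaptive step sizes and the damping factor $\\eta$ discussed above.

```python
def alf_step(f, t, z, v, h):
    # one forward step of an asynchronous-leapfrog-style update
    z_half = z + 0.5 * h * v                # half-step in the state
    u = f(t + 0.5 * h, z_half)              # single vector-field evaluation
    v_new = 2.0 * u - v                     # reflect the auxiliary velocity
    z_new = z_half + 0.5 * h * v_new        # complete the state update
    return z_new, v_new

def alf_step_inverse(f, t_new, z_new, v_new, h):
    # exact algebraic inverse: recovers (z, v) from (z_new, v_new) without
    # storing anything from the forward pass
    z_half = z_new - 0.5 * h * v_new
    u = f(t_new - 0.5 * h, z_half)
    v = 2.0 * u - v_new
    z = z_half - 0.5 * h * v
    return z, v

# During the backward pass one can walk from (z(T), v(T)) back to (z(0), v(0))
# step by step, re-executing each local forward step to backpropagate through it,
# so the memory cost does not grow with the number of solver steps.
```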
This paper introduced a new ODE integration scheme that allows constant-memory gradient computation. I was concerned that the low order of convergence of this method would make it impractical, but the authors performed extensive experiments and got impressive results. Overall the paper addresses one of the main practical difficulties with large neural ODE models. The authors satisfactorily addressed the reviewers' concerns in the discussion.
The paper revisits the design of ensemble critics in offline RL. The authors argue that the common design, where critics in the ensemble share the same pessimistic target function during learning, can lead to actually optimistic critics. The authors analyze this phenomenon theoretically under the NTK assumption, and present toy simulation examples. The authors further use this insight to design an offline RL algorithm, MSG, which gives SoTA results on common offline RL benchmarks. In these experiments, they show that separating the targets is key to the algorithm's superior performance. ********** Comments for Rebuttal ************* Thanks for the rebuttal and the clarification. While they address some of my concerns, my main concern stays the same. As stated in the original review, the main issue I have is "whether the proposal here is sufficient to design a full offline RL algorithm or just provide an important note on implementation choice". The rebuttal also points out "it is the objective of our work to advocate for relying on uncertainty estimation as the main source of pessimism for offline RL". Let's examine this question from two aspects based on the paper and rebuttal. From the theoretical side, the paper provides Theorem 3.1, which compares the iterates of $Q_{LCB}$ of Independent Targets and Shared Targets. However, it does not show "how good" the pessimistic estimate of Independent Targets is. In the review, I mentioned "in general optimizing a pessimistic critic or being more pessimistic does not imply good performance in offline RL." because whether a pessimistic critic is useful or not depends on how "tight" the underestimation is and where it is pessimistic. Being more pessimistic is not always good, e.g., estimating all values as $V_{min}$ is pessimistic but obviously useless. The current theory does not provide enough insight into how good the pessimistic estimate is, or how good the learned policy based on such a value estimate will be. This is why I said "the significance of the theoretical results are rather limited" in the review. On the empirical side, I think that to demonstrate the authors' claim, it is necessary to show that SoTA can be achieved with $\\alpha=0$. However, the current results do not support that fully. While I agree that in Figure 3, $\\alpha=0$ is among the best-performing results, I also think that Figure 3 does not provide a conclusive answer, as it is missing results for larger $\\alpha$ values for $\\beta=0$, where there is an increasing trend. This was pointed out in the review. It's also hard to compare Fig 3 and Fig 2 directly. I think the failure of using $\\alpha=0$ in simpler mujoco domains actually shows that the proposed approach "alone" might not be sufficient to provide enough pessimism "broadly". I do not agree with the rebuttal's statement that these environments can be solved by behavior cloning. All the methods the authors listed there are not pure behavior cloning, which mimics all actions, as all of them perform some reasoning about which actions are better based on the rewards. Therefore I don't think this is a good justification for using the CQL term here. Yes, I agree that the proposed method works with $\\alpha=0$ on Antmaze, which is considered a harder domain, but is it because of the reason that the authors mentioned? If this is the case, why does it not perform well in simpler domains? Or is it because of something that is related to the structure of the Antmaze environment and dataset?
Currently, we don't have sufficient evidence to tell. Thus, I think that currently the paper provides insufficient results to show that the proposed uncertainty estimation "alone" can achieve SoTA offline RL results. Nonetheless, I also think that this paper provides more than sufficient reasons to show that it would be a good design choice to improve an existing value-based offline RL algorithm. Therefore, I keep my original recommendation. Strengths: 1. This paper presents an overlooked finding. 2. It provides some theoretical reasons to back it up. It also provides empirical validation. 3. The writing of this paper is clear and the experimental results are thorough. Weaknesses: 1. While the authors provide explanations of why the common usage of Shared Targets may lead to optimism, the current results are not conclusive. In particular, while existing offline RL implementations use Shared Targets, the pessimism of Shared Targets is not the main source of pessimism but rather an implementation detail. Therefore, it is unclear whether the proposal here is sufficient to design a full offline RL algorithm or just provide an important note on implementation choice. This factor limits the significance of the paper. 2. It is unclear what data assumptions are needed for the proposed method to work properly as an offline RL method. The authors discuss limitations well, such as the extra complexity needed by the proposed method. However, in my view, the results here are more limited than what the authors claim, for the reasons above. <doc-sep>This paper studies ensemble-based pessimism in offline RL from both theoretical and empirical aspects: - Giving a mathematical analysis through the NTK to show that shared pessimistic targets can paradoxically lead to Q-estimates which are in fact optimistic. - Proposing MSG, which trains each Q-network independently, and conducting experiments on D4RL and RL Unplugged tasks. Strengths - Formally analyzes offline RL methods based on Q-ensemble pessimism on infinite-width neural networks, and shows the pessimism term may become positive with shared targets. - Empirically verifies the effectiveness of the algorithm by combining the Q-ensemble with CQL. Weaknesses - The NTK assumptions in section 3.1 and the Gaussian assumptions in section 3.2 seem limited in broader value iteration. - The Q-ensemble needs to be combined with CQL to obtain reasonable performance. The use of CQL makes it difficult to analyze the source of performance improvement. However, there exist several methods (e.g., EDAC in NeurIPS 2021 and PBRL in ICLR 2022) that perform Q-ensembles for offline RL purely with uncertainty. - The experimental results are not complete for the D4RL benchmark. N/A <doc-sep>This paper discusses uncertainty estimation in RL, which is an alternative way to induce pessimism in offline RL. Though uncertainty estimation through ensembles has been proposed in the offline RL literature, this paper points out a critical flaw in how the LCB is incorporated in actor-critic based algorithms. It shows theoretically that previous algorithms, which regress different Q functions to shared pessimistic target values and do policy evaluation based on the LCB, can sometimes lead to over-estimation of the Q function. To address this, the paper proposes a simple fix: in the Bellman backup stage, instead of regressing the different Q values to the shared LCB estimate, regress them to independent Q targets.
Empirically, it shows better performance in challenging tasks that require stitching. Beyond this, the paper also examines how different efficient ensemble methods work in the RL setting, and it seems there still exists a large gap in performance compared with deep ensembles, which opens up more interesting questions about efficient ensemble methods in the RL setting. Strengths: [1]. The paper is well-written and very easy to follow. [2]. It discusses a major flaw in how to incorporate the LCB estimate in offline RL algorithms, which seems to have been overlooked in the literature but empirically seems to make a big difference. The theoretical claim is well-supported. [3]. It has a comprehensive set of empirical studies, which cover various aspects of the applicability of the method, such as the ensemble size and the hyper-parameter sensitivity. I do appreciate the authors' effort in discussing how to transfer efficient ensemble methods from the supervised learning setting to the RL setup, to make it more computationally efficient, though there are some negative results there. Weaknesses: [1]. Maybe I am misunderstanding something, but I do feel some of the claims and findings are not explained in a super clear way; see the Questions section for details. [2]. The experiment section does show that incorporating the independent targets leads to better performance in the challenging tasks; it would be great to see that this results from a better LCB estimate. We see the overestimation issues in the toy task, and it would be really helpful to see, in the challenging tasks, that the shared target does lead to over-estimation, which is the reason that the method helps. [3]. Section 4.2 seems a little bit out of place. As it seems alpha=0 works great in most cases, and the authors state that it might help in some narrow data regimes, is there any empirical study supporting this? Yes. <doc-sep>The paper observes a problem in existing pessimism estimation in offline RL using ensembles: using shared targets for all ensemble updates. The paper instead proposes to update each ensemble member individually and apply the pessimism at policy updates. The paper derives the update form of both methods in the NTK setting and shows that the update method with shared targets could even result in optimism, which is also shown with some synthetic simulation data. Finally, the paper evaluates the proposed method on several offline RL benchmarks and shows its empirical competitiveness. ## Strengths 1. Overall the paper is well written, easy to follow, and the technical part seems correct. 2. The paper makes a good observation about the existing methods for offline RL when they update the Q-values for ensembles: in hindsight, one really does not need to incorporate the pessimism into the function update procedure, but can instead just apply pessimism during the policy update. This also seems to agree with theoretical RL algorithms: one can just perform the regular Bellman updates (or perform elimination in version space algorithms) and define the policy with the LCB or take the minimum over the remaining set of functions (for pessimism) (for example, [1]). 3. Although it may not be obvious under what kind of conditions, in the NTK setting, using the shared target could result in optimism, the following subsection provides good evidence that this indeed could happen. It could be better to provide a more intuitive scenario or even a closed-form construction. 4.
The paper provides extensive and convincing experiments, including a) good ablation experiments which contains the different kinds of shared target updating methods (such as shared-LCB Ens., Shared-Min Deep Ens, and with a different number of ensembles) . b) The paper tries many different hyperparameters for the baselines, so the baselines seem to be fine-tuned for the final presentation of the results. c) The experiments are performed on extensive benchmarks. ## Weakness 1. The theoretical results provide very good intuition into the problems of the previous pessimism estimation in offline deep RL methods, but since the result is based on the NTK setting, it still has some gap between the practical situations. 2. The result presented in table 1 has different hyperparameter for different tasks, which likely undermines the empirical merits of the proposed algorithm. ### references [1] Xie, Tengyang, et al. "Bellman-consistent pessimism for offline reinforcement learning." Advances in neural information processing systems 34 (2021): 6683-6694. 1. The overall algorithm has good intuition and motivation, but the introduction to the additional term in section 4.2 looks irrelevant to the rest of the paper. From the experiments, this term seems crucial to a good performance of the algorithm and thus unavoidable in the current version. Although it makes sense that some kind of regularization may be needed for unseen action, this additional term indeed undermines the overall message a little bit. 2. The ablation of using a more condensed surrogate for ensemble is a good experiment, and as the paper already suggests, it would be better if a more efficient way of pessimism could be derived, which seems beyond the scope of this paper.
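To make the shared-vs-independent target distinction at the center of these reviews concrete, here is a minimal PyTorch-style sketch. Names are hypothetical, and the authors' actual losses (including the CQL-style regularizer debated above) are omitted.

```python
import torch

def ensemble_targets(q_targets, r, s_next, a_next, gamma, beta, shared):
    """Build Bellman regression targets for an ensemble of critics.
    q_targets is a list of target Q-networks mapping (state, action) batches
    to Q-value batches; this is an illustrative helper, not the paper's code."""
    with torch.no_grad():
        qs = torch.stack([q(s_next, a_next) for q in q_targets])   # [N, batch]
        if shared:
            # shared pessimistic target: every member regresses to the same LCB,
            # the design the reviews argue can paradoxically become optimistic
            lcb = qs.mean(dim=0) - beta * qs.std(dim=0)
            return [r + gamma * lcb for _ in q_targets]
        # independent targets (MSG-style): member i bootstraps only from itself
        return [r + gamma * q_i for q_i in qs]

def policy_objective(q_ensemble, s, a_pi, beta):
    # in both variants the actor maximizes an LCB of the current ensemble
    qs = torch.stack([q(s, a_pi) for q in q_ensemble])
    return (qs.mean(dim=0) - beta * qs.std(dim=0)).mean()
```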
The paper identifies a common flaw in pessimistic algorithms related to the use of shared targets, and propose an alternative based on independent targets that mitigate the overly-optimistic estimates. The rebuttal has addressed a number of concerns raised by the reviewers, and in particular, the negative reviewer qbbN acknowledged that > ... the proposed idea here would make an existing algorithm that uses e.g. double Q networks (which is quite common) and also other main pessimism (like value penalty or closeness to behavior policy) to perform better. Thus, the insight here can be quite useful in practice. That said, the reviewer is still concerned about the framing of the work > the paper does not provide sufficient evidence (theoretically or empirically) that the proposed pessimistic estimate based on Independent Training "alone" is sufficient to design a SoTA offline RL algorithm [which the paper claims to]... I think that the paper needs to provide stronger evidences or changes the framing. Given the strong support from other reviewers, the AC is leaning towards acceptance, but strongly recommend that the authors change the framing of the paper to honestly reflect the contributions of the work.
The paper suggests a method for approximating the 2-Wasserstein gradient flow for the relative entropy. The proposed particle-based method uses a neural network function approximation-based approach to estimating the necessary density ratios. Experiments verify reasonable performance compared to MALA and ULA. The paper appears to miss the fact that the 2-Wasserstein gradient flow for the relative entropy defines a Markov process, which is exactly the Langevin dynamics; this can be seen by comparing Eq. (4) to the Fokker–Planck equation for an Ito diffusion (e.g., Eq. (4.1) in [1]; see also section 3.5 in [2] for a relevant discussion from the ML literature). Indeed, the Fokker–Planck equation determines the diffusion uniquely because the differential operator in the Fokker–Planck equation is the adjoint of the infinitesimal generator of the diffusion. Thus, the proposed algorithm is a rather unorthodox way of approximating a Langevin distribution. The paper makes the said approximation more difficult than it should be by using a particle approximation that requires estimating density ratios, a notorious tricky problem. The algorithm ends up having some ability to handle multimodality because the use of weighted particles and density ratio estimation allows the algorithm to effectively compute the relative volumes of different modes of the distribution. The proposed method for estimating the density ratios appears to be the same as Geyer’s reverse logistic regression method [3], with a neural net replacing the inner product. Thus, I expect similar results could be obtained more directly, and with only a single round of volume estimation, by (1) running MALA many times and (2) then estimating the relative volume of each chain (note there are numerous other methods for doing this other than reverse logistic regression). Such an approach should work well on the kinds of low-dimensional examples considered in the numerical experiments. Further issues arise in the experimental evaluation. First, the experiments seem to show very similar performance to MALA, all within the standard errors when provided (e.g., in Table 2). Second, I’m concerned about the quality of the MALA implementation, for which code was not included. The lack of convergence in one example suggests MALA was not run with appropriate step size adaptation targeting the optimal acceptance rate [4,5], as is standard in the literature. If so, then the comparison is not appropriate. Third, for a fair comparison, the MALA chains should also be reweighted based on volume estimates for each chain, as described above. Is it possible there are some gains from using the proposed method on multimodal distribution? Yes. But I remain skeptical. Moreover, if the goal is prediction, I expect combining MCMC with stacking will be more effective [6,7]. [1] Pavliotis, G. A. Stochastic Processes and Applications. (Springer, 2014). [2] Liu, Q. Stein Variational Gradient Descent as Gradient Flow. In NeurIPS (2017). [3] Geyer, C. Estimating normalizing constants and reweighting mixtures. Technical Report (1994). [4] Roberts, G. O. & Rosenthal, J. S. Optimal scaling of discrete approximations to Langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 60, 255–268 (1998). [5] Roberts, G. O. & Rosenthal, J. S. Optimal scaling for various Metropolis-Hastings algorithms. Statistical Science 16, 351–367 (2001). [6] Yao, Y., Vehtari, A. & Gelman, A. 
Stacking for Non-mixing Bayesian Computations: The Curse and Blessing of Multimodal Posteriors. arXiv.org, arXiv:2006.12335 (2020). [7] Yao, Y., Vehtari, A., Simpson, D. & Gelman, A. Using Stacking to Average Bayesian Predictive Distributions. Bayesian Analysis 13, 917–1007 (2017). The paper seems to have a fundamental misunderstanding of the Wasserstein gradient flow for the relative entropy, and the experimental evaluations may not be appropriate. <doc-sep>This paper considers the problem of sampling from an unnormalized distribution. The unnormalized target distribution can be regarded as a stationary point of the Wasserstein gradient flow of the corresponding relative entropy functional, which can be equivalently identified from a microscopic perspective by defining a time-varying velocity field of the particles. Since the exact time-varying velocity field is not available, the authors propose to estimate it by approximating the corresponding logarithmic density ratio through minimizing the Bregman score. Such an approximation requires only samples from the variable distribution, which can be obtained by simulating particles following the estimated velocity field. Wasserstein gradient flow has proved to be a useful tool for sampling from an unnormalized distribution. Sections 2 to 4 of this work follow the standard derivation of work along this research line and explain the particle evolution strategy well. Since the underlying velocity field of the Wasserstein gradient flow requires access to the variable distribution $q_t$, which is in general not available, a key step in methods along this research line is to estimate such a quantity. To estimate it, this work proposes to estimate the log density ratio between the potential function and the variable distribution $q_t$ by minimizing the Bregman score, which is described in Section 5. However, I find Section 5 difficult to understand. It would be very helpful if the authors could explain the intuition behind the Bregman score. In fact, I think there should be a dedicated section in the preliminaries that describes the Bregman score and all the statements below equation (15) so that the reader can follow this very important step. I think Section 5 is the part that distinguishes this work from previous work like [1] and is where the novelty of this paper lies. It needs to be very clearly explained. [1] Degond, Pierre, and Francisco-José Mustieles. "A deterministic approximation of diffusion equations using particles." SIAM Journal on Scientific and Statistical Computing 11, no. 2 (1990): 293-310. This paper leverages the microscopic equivalence of the Wasserstein gradient flow of the relative entropy to sample from an unnormalized distribution, but the derivation of the key step in the proposed approach is not well explained. <doc-sep>The authors propose a novel way to sample from unnormalized distributions. The main idea is to gradually transform particles by following the gradient flow of the relative entropy in the Wasserstein space of probability distributions. It is known that the flow converges to the target distribution, and the paper introduces a variational characterization of the discretized steps. The main benefit of this characterization is that it bypasses the need to know the normalizing constant as well as being amenable to estimation by using a combined particle evolution.
The benefits of the new algorithm are demonstrated through several numerical simulations. I enjoyed reading the paper. It is well written, the motivation is clear, and it is easy to follow the main ideas. However, I find it hard to assess the actual contribution of the paper. On one hand, while the proposed algorithm makes sense, there is no guarantee in the paper, either for the sampling accuracy or even for the fact that the algorithm will converge to the target measure. For example, is there any guarantee that the discretized flow does not add bias to the obtained measure? There are many tunable parameters in the algorithm: $s$, the discretization step; $K$, the time horizon; $n$, the number of particles. What is the interplay between those parameters, and, given some distribution, how should I choose them? I would have expected to see a bit more of the underlying theory behind this algorithm. On the other hand, from the perspective of actual results, I find that the numerical experiments are somewhat restricted and artificial. Coupled with the computational overhead, it's not clear to me when one will actually prefer to use the new algorithm. The idea is elegant and interesting, but the paper lacks evidence for its usefulness, both from theoretical and applied perspectives. <doc-sep>The paper addresses the issue of sampling from an unnormalized distribution. The sampling problem is cast as the numerical simulation of the gradient flow associated with the KL divergence between the target unnormalized distribution and the approximating distribution. The challenging part is to estimate the density ratio that appears in the gradient term. The authors propose to use a deep neural network to estimate the density ratio. Numerical results show the usefulness of the proposed method. Overall, the paper is well written: contributions are clearly stated, the relation to previous work is presented, the proposed method is well explained, and a somewhat extended comparative numerical evaluation of the proposed method is given. I find the overall approach interesting - the difficulty of sampling being cast as a density ratio estimation problem. This new problem, however, is not easy to solve, and your approach of using a neural network to estimate the density ratio seems to work, at least in the considered examples. That being said, I do have some issues with the paper. You mention in the conclusion that you hope to establish the convergence properties of the proposed method. I understand that it is not trivial to establish convergence. However, not presenting at least some intuition about the convergence of the algorithm is a strong drawback. The way I see it is that there are two sources of error that could hamper convergence: the discretization error of the numerical implementation of the continuous gradient flow and the approximation error of the density ratio. I didn't find anything about either in the paper. If not a proof, at least some intuition about how they affect the results, about how they interact, etc. In the numerical experiments section, you present a fair number of examples which show that the proposed method is capable of outperforming the competing algorithms. The results are interesting; however, there are no results to show how the performance of the proposed method varies with the different parameters, notably the number of considered particles, and the choice of distribution w.
Such results would cast some light into the inner workings of the proposed method and would be useful for anyone interested in using it. In section 6.4 you do mention that the improved performances of the proposed method come with a higher computational cost. However, you do not perform any analysis of the trade-off between computation time and performances. It would have also been interesting to compare the performances of the different algorithms for the same computational budget. Another aspect that struck out to me is the choice of competing algorithms. The algorithms that you choose as competitors are valid, however their choice is questionable. My first remark was why didn't you choose the SMC algorithm as a competitor? It also uses particles in order to estimate the unnormalized target distribution. Also, HMC could have also been considered. I spotted some typos here and there, for example chians instead of chains on page 8 in the bottom paragraph "... denote the ULA and MALA with k chians", repeats instead of repeat on page 9 "We repeats the random partition 10 times.". Another small issue is with figure 4, it's hardly readable. I understand that there is a limit on the page count, however, that's not a justification for having figures that are hard to read. More so, as there are some redundancies in the text that could have been eliminated, for ex. equations (11) and (13) are the same, is the presence of both necessary for understanding the idea that is presented in section 4? I find the approach interesting. However, there are some issue with the paper as it is, both theoretical and empirical. From a theoretical point of view, there is no discussion about conditions for convergence of the algorithm. From an empirical point of view, the numerical experiments are not complete enough, with respect to the comparison analysis that is carried out, but also with respect to compensating missing theoretical analysis. Overall, the paper is interesting, but in its current form is not ripe enough for publication.
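For reference, the identity invoked by the first reviewer above is the standard one: for a target $\\pi(x) \\propto e^{-U(x)}$, the 2-Wasserstein gradient flow of $\\mathrm{KL}(q_t \\| \\pi)$ is $\\partial_t q_t = \\nabla \\cdot ( q_t \\nabla \\log (q_t/\\pi) ) = \\nabla \\cdot ( q_t \\nabla U ) + \\Delta q_t$, which is exactly the Fokker–Planck equation of the Langevin diffusion $dX_t = -\\nabla U(X_t)\\,dt + \\sqrt{2}\\,dW_t$. This is the sense in which the flow the paper discretizes coincides, at the level of distributions, with Langevin dynamics.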
The paper proposes a sampling technique for unnormalized distributions. The main idea is to gradually transform particles by following the gradient flow of the relative entropy in the Wasserstein space of probability distributions. The paper tackles an important problem and provides an interesting new perspective. However, even putting aside the concerns on the theoretical analysis raised by the reviewers, the experimental evaluation does not seem sufficient to demonstrate the benefits of the proposed approach.
This paper proposes a novel graph representation, the Circuit Graph, integrating heterogeneous circuit information from logic synthesis and placement to facilitate the EDA design process. The proposed graph structure considers both topological (cell connections in the netlist) and geometric information (positioning of the standard cells on the layout). A corresponding graph neural network (GNN) structure is proposed for extracting circuit representations for various downstream tasks. The experimental results demonstrate the effectiveness of the graph in congestion and net wirelength prediction tasks with efficient NN computation. Strengths: 1. Heterogeneous information fusion across multiple EDA design stages. Typically, circuit designs are divided into multiple phases. Each phase may have its own unique representation for the same underlying circuit. The proposed circuit graph brings two representations (netlist and cell placement) into a unified graph representation, which provides a more informative data structure embedding knowledge from multiple EDA design phases. 2. The proposed circuit graph is general enough to be extended to inspire future work. The paper only touches on congestion and net wirelength prediction tasks for detailed routing, and the graph featurization contains only related basic topology information and simple geometric information. The reviewer believes the proposed graph can inspire more work in EDA areas. For example, by adding standard cell delay as a new feature in the cell node, the proposed graph may also help with the timing analysis of the circuit. 3. The overall GNN structure follows the design of the circuit graph, which sounds promising. The topological and geometric message passing structures preserve the structure of the original circuit graph. Weaknesses: 1. The paper didn't touch on how representative the extracted GNN features are. The two tasks (congestion prediction and net wirelength prediction) in the paper are evaluated independently. Although these two tasks have different readouts, they share the same input graph features and extracted GNN feature representations. It would be interesting to check if the knowledge can be transferred from one task to another using the proposed GNN. 2. Although the overall GNN structure sounds promising, some detailed formulations or design choices of the GNN need to be further justified. Detailed comments are made in the questions. The paper mentioned that one limitation is testing the proposed method with commercial products and in more complex scenarios. The reviewer appreciates that the authors bring this up and understands the difficulty behind it. <doc-sep>This work constructs a modeling framework that aims to solve various problems in the circuit design process. This work incorporates: 1. A novel circuit graph that is able to jointly integrate topological and geometrical information and is claimed to be the first unified circuit representation approach that is easily compatible across EDA tasks and stages. 2. A novel message-passing paradigm, CircuitGNN, that is tailored towards the aforementioned graph dataset structure. The structure can conduct message-passing on topological and geometrical edges separately and then fuse the messages to update cell and net representations. 3. Extensive experiments that validate the merits of the proposed methods in terms of both task accuracy and execution speed. Strengths: 1.
This work does a good job on analyzing and illustrating the tasks and problems of circuit EDA in light of the machine learning methods. 2. The methodology is described in much detailed but straight-forward way. 3. Overall this work provides decent improvements over the existing methods. Just per the results alone, it is impressive. 4. The code provided in the supplementary materials is certainly a plus, contributing to the transparency and reproduction of the works in the fields. Weakness: 1. Apart from the improvements on the message passing methods, one of the key contributions of this work is to be able to jointly integrate the topo and geom information in one model. However, I do not see clearly the motivation for this point from both the application and results perspectives. For actual application, is there a significant disadvantage of simply using two sets of models or even methods respectively for logical synthesis and place-and-routing? One of the reasons, I would perceive, is that a joint model may yield a better task performance due to the complementary information. However, as Table 2 suggests, the joint model's improvements against the proposed method with only geom message passing. Yes, it's addressed. <doc-sep>The authors propose a unified way to construct graphs in different phases of EDA flow, and develop a general GNN model for downstream EDA tasks. Specifically, the proposed approach first constructs a heterogeneous graph by incorporating cell-cell connections (geometrical information) and cell-net connections (topological information). The node and edge features are generated based on physical properties of cells, pins, and nets. Then, a circuit GNN model is proposed to apply message passing on cell-cell and cell-net connections separately, which produces the representations of cells and nets for downstream tasks. The experimental results show that the proposed method increases 16.7% accuracy on congestion prediction and reduces 16.9% error on wirelength prediction. **Key Strength** - The paper is clearly written. All the technical steps are easy to follow. - The proposed method can be used to solve multi-stage EDA tasks. **Key Weakness** Although the proposed circuit graph construction and GNN model are all reasonable, they lack some technical significance. For example, - For circuit graph construction, it is straightforward to construct a bipartite graph based on cell-net connections from netlist, in order to produce representations of cells and nets for downstream tasks. Hence, the contribution is limited for the graph construction, especially in logic synthesis stage where placement information is not available. - For GNN model, it is a common way to apply message passing individually per edge type for handling heterogeneous graphs (e.g., [2]). Thus, the novelty of the proposed model is limited. Although the experiments show promising accuracy gains for downstream EDA tasks, further clarification could make the improvements more convincing: - Missing strong GNN baselines: The chosen baselines (i.e., GCN, GraphSAGE, and GAT) only consider node features. Since edge features are important in this paper, authors should compare the proposed model against stronger baselines (e.g., MPNN[1]) that incorporate edge features, on the same input graph. Without a stronger baseline, the contribution of the proposed GNN model is unclear. - Not tuning hyperparameters for baselines: Authors choose the default hyperparameters for baselines from their original papers. 
Since the datasets used in those papers (e.g., [3]) are different from this paper, hyperparameter tuning is necessary. - Not comparing against DREAMPlace: The purpose of wirelength prediction is to speedup EDA design closure. Nonetheless, there are no results of the runtime comparison between the proposed model and the placement method DREAMPlace, which is a very fast placement method by exploiting GPUs. Without this comparison, it's unclear about the motivation of wirelength prediction in placement. [1]: Gilmer et al. "Neural message passing for quantum chemistry." ICML'17. \\ [2]: Zhang et al. "Heterogeneous graph neural network." KDD'19. \\ [3]: Xie et al. "Pre-Placement Net Length and Timing Estimation by Customized Graph Neural Network." TCAD'22. Thanks authors for mentioning potential limitations of this work. One key challenge of deploying ML models into commercial EDA tools is the model generalizability. Authors can evaluate the trained model on more unseen designs to see if it is truly generalizable.
This paper proposes a GNN approach to EDA using the construction of a circuit graph that combines geometric and topological information, as well as features generated from physical properties of circuit components. While reviewers have raised certain concerns (some addressed already in rebuttal), they all settled (post rebuttal) on recommending weak accept of the paper. I agree with them and think the NeurIPS audience would benefit from the inclusion of this work in the program, and therefore I recommend acceptance. I would like to encourage the authors to take into account the comments and discussion with the reviewers, as well as incorporate materials presented in their responses, when preparing the camera ready version.
Overall I like this direction since this is an important, open problem in RL that does not seem to be widely known (I was unaware of it until I looked into the related work) and could lead to improved algorithms. I encourage the authors to continue to pursue this line of research. However, I have a few clarifications and questions regarding the experiments which make it unclear how meaningful the results are. For now, I vote to reject this work but am willing to change my opinion based on the rebuttal. Strengths: - The paper investigates and draws further attention to an important open problem that does not seem to widely known. Based on my reading of Nota and Thomas, it appears most major papers in the field today do not acknowledge the discrepancy of the missing discount factor. - The paper includes many experiments especially in the Appendix each with a robust 10 seeds. I do have some issues with the experimental setup that I will detail later but I appreciate the variation in experiments. - I also think the representation learning experiments in Scenario 1 using FHTD are an interesting approach to study the effect of learnt representations. - The experimental setup and methods used are clearly described and it appears the code will be made available in the final version thereby potentially making the experiments highly reproducible. Issues/Points of clarification: - Most of the study is done in the setting where \\gamma=1 (Scenario 1 in the paper). This corresponds to the undiscounted objective where the current time index must be included in the state for correct estimation of the value function. However the setting that is most widely used in existing literature involves a discount factor<1. For instance, all of the methods cited in the Methodology section: Henderson et al., 2017; Ilyas et al., 2018; Engstrom et al., 2019; Andrychowicz et al., 2020, Fujimoto et al., 2018, Haarnoja et al., 2018 use a discount<1 (Andrychowicz et al. do not include a discount of 1 in their sweep over discount factors either). This is dubbed Scenario 2 in the main text and includes only one experiment on the Ant task. It is fine to try to draw insights and focus on Scenario 1 as long as it is well motivated. However I do think it is misleading to claim ‘we believe our empirical results are relevant to most practitioners’ when most of the study does not involve a setting that is actually used by said practitioners. - My second concern is with the method used to choose hyperparameters for the experiments. In particular, the learning rate is chosen based on the ‘Ant’ experiment and then the best performing parameters are fixed and transferred to the others. While I appreciate the motivation behind this approach, I’m not certain how well these transfer to some tasks. In particular, the HumanoidStandup task seems to involve returns which are an order of magnitude greater than the other tasks. I think at least for this one task a small sweep is essential to be confident of the claims. - There are a few points in the paper where correlation seems to be misinterpreted as causation. For instance Figures 11-13 in the paper indicate that: a) a discounted critic (\\gamma_c<1) performs better on all tasks; b) biased updates using TD instead of empirical returns performs better on some tasks. These two statements alone are insufficient to claim that the advantage of a discounted critic (\\gamma_c=1) is therefore partly due to bias. 
Looking at Figures 11 and 13, I think a figure similar to Figure 12 comparing TD and empirical returns can be generated for any discount factor (e.g. \\gamma=0.995). Perhaps I am missing something here and if so clarification from the authors would be much appreciated! - These discrepancies combine in Figure 1 where for \\gamma_c=0.99, different values of extra transition samples (N) are plotted. Ostensibly, increasing N should reduce the variance even further. However quite a few of the curves choosing N=2 or 4 performs significantly worse. Could the authors clarify why they think this happens? Interestingly, the only task where the effect of N seems to not matter is the Ant task for which a hyperparameter sweep was completed. Additionally the task where increasing N impacts performance the most is the HumanoidStandup task where the returns are quite significantly different. To me, this result stresses that there might be more at play here and a more detailed study is required to tease apart the various confounding factors. In summary, while I think the approach is quite interesting, there are concerns in some of the claims made in the text. I appreciate the effort that went into the current set of results and the experimental setup. With that in mind, I would be willing to accept this submission if my concerns above are clarified and if the conclusions drawn from the results are tempered given the evidence. Finally there are minor points of clarification that did not affect my overall review but I nonetheless list below: - In the discounted infinite horizon setup of Scenario 2, the timestep no longer needs to be added to the state. However the text indicates that this is still done even in this case. I think this does affect bootstrapping and thus learning the value target. Specifically it may be easier to learn a consistent value function that in this setting when the time index is not included in the state. Could the authors clarify this point? - As a minor point for readability, it would be good if the algorithm boxes for PPO-TD and PPO-TD-Ex etc included colours to highlight the changes to PPO (Algorithm 1) since these overlap quite a bit. This is purely from a presentation perspective of course. References: Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. arXiv preprint arXiv:1709.06560, 2017. Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. A closer look at deep policy gradients. arXiv preprint arXiv:1811.02553, 2018. Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. Implementation matters in deep rl: A case study on ppo and trpo. In International Conference on Learning Representations, 2019. Marcin Andrychowicz, Anton Raichuk, Piotr Stanczyk, Manu Orsini, Sertan Girgin, Raphael ´ Marinier, Leonard Hussenot, Matthieu Geist, Olivier Pietquin, Marcin Michalski, et al. What ´ matters in on-policy reinforcement learning? a large-scale empirical study. arXiv preprint arXiv:2006.05990, 2020. Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. arXiv preprint arXiv:1802.09477, 2018. <doc-sep>The authors examine the commonly used paradigm of not discounting in the policy gradient objective. They propose two hypotheses relating to discounting. (1) discounting the critic improves representation learning. 
(2) undiscounted policy gradient is similar to discounting + an auxiliary loss. These hypotheses are studied through a series of empirical tests in the MuJoCo domain with PPO. Strengths: - I believe this paper is asking the right type of questions about common setups. There are a lot of choices made in deep RL algorithms which don't align with theory and are otherwise unstudied and empirical studies are an important. - Some of the approaches used to answer these questions are quite unique. - Overall, there a lot of experiments both in the paper and the appendix, which is detailed. This is a paper which will benefit from the additional page of content as a lot of key figures can be shifted to the main body. Weaknesses: Given the empirical nature of this study, it is really important to have robust experimentation to really answer the hypotheses the paper raises. I think the paper falls short at this aspect and I wasn't convinced by the arguments made for either hypothesis. Furthermore, the conclusions that could be drawn from the results are generally not that surprising. - I'm not sure PPO is the best algorithm to analyze many of these questions. For example, Engstrom et al., 2019 showed a lot of very minor implementation level details had a large impact on the performance. Consequently, it may be difficult to disentangle the actual causative factors in performance. This is problematic as many of the claims in the paper are supported by empirical tests where the performance is not strikingly different. For example, Figure 1 is meant to justify that for $\\gamma_c = 0.99$ additional transitions improved performance, but on several environments increasing $N$ to 2 or 4 seems to hurt performance, going against our intuition about variance reduction. Figure 2 shows that for $N \\neq 0$ there is a large performance drop, but all values of $N \\neq 0$ achieve a very similar performance rather than trending downwards as $N$ increases. To me this suggests a very brittle algorithm. - For section 3 the bias-variance trade-off is evident from prior work (as referenced by the authors) so the result is of course not novel. I think analyzing it in a deep RL setting is important but because of the problems mentioned prior, I didn't find that these results provide anything solid to add to our understanding. - The results for Figure 3 aren't convincing (1) because they are overfit, by selecting the best possible H for each it seems likely to always arrive at a high performing agent. (2) This more suggests that these environments don't require the full horizon to achieve a high performance. Consider a simple cartpole problem which is optimal using greedy actions but has a horizon of 1 million time steps. Since were in an approximate setting with deep networks, it isn't surprising that the agent can achieve a high performance without considering the full horizon. - The results from the toy MRP experiment and distributional RL do suggest some kind of connection to representation learning, but isn't considering a longer horizon simply a more difficult learning problem? Is the representation necessarily an important aspect here? I didn't find that the authors answered this question. - The conclusion from Section 4 is that $\\gamma_A=1$ is an inductive bias that all transitions are equally important seems entirely self-evident from the mathematical definition given it applies equal weight to all transitions. At the same time the main question of hypothesis 2 seems unanswered. 
Shouldn't AuxPPO $\\approx$ PPO, rather than DisPPO if this was true? - A single environment for Figure 9 is not enough to draw any meaningful conclusions. I did not find the discussion in B.1. convincing that the other environments were not suitable. Simply change $t_0$ for the other environments. From personal experience the horizon of Ant is generally large (near 1000) as the terminal condition is hard to achieve meaning the difference between Ant and the fixed length environments should be small. Additional Comments: 1. I do wonder if this paper is better off as two separate documents where each hypothesis is provided much more significant attention/experimentation. For example, hypothesis 1 isn't actor-critic specific and is also applicable to Q-learning based methods. These experiments could be simplified by looking at algorithms with significantly fewer components and more settings. 2. For the PPO-TD-Ex experiment I think it's also worth considering extrapolation error (Fujimoto et al., 2019) in TD learning. Since $S^i_{t+1}$ is sampled from a single transition rather than a full trajectory it is not necessarily contained in the batch. As a result, $\\hat v$ is not trained on $S^i_{t+1}$ and produces an erroneous TD target. My first impression was that the performance drop for $\\gamma_c=1$ was not surprising but the performance gain from $N=1$ for $\\gamma_c=0.99$ was, and I think are are unanswered questions here. Another important reference is Bengio et al., 2020 which showed TD(0) generalizes worse than TD($\\lambda$) and there is clearly a related result here. 3. Given MuJoCo environments are time-limited to 1000 time steps, 1024 heads for PPO-FHTD seems like a mistake/oversight. 4. Why does PPO-FHTD with H=1024 produce different results for the different parametrizations? 5. Is Figure 6 surprising since the value function needs to consider a large space of solutions as the horizon increases? 6. Given distributional RL provides a large performance gain (which to the best of my knowledge, we are still missing a conclusive reason as to why), I'm not sure PPO-C51 > PPO-TD is a significant result. 7. It would be clearer if DisPPO was described before mentioning Figure 15. 8. Figure 15 seems like an important conclusion and should be contained in the main body of the paper. However, the y-axis of Figure 15 also conflicts with the description in the main body so I'm not sure what the correct interpretation is. 9. I wonder if the result from Figure 9 is reproducible if the flipping was done in a different way. In the MuJoCo environments is the agent is rewarded mainly for velocity and the behavior of the agent in these cases would be enlightening. Does the agent run forward and then attempt to terminate? Can it move backwards? Conclusion: I think the authors present a lot of interesting ideas and experimental approaches to answer their underlying questions. However, I felt that the experimentation was not sufficiently robust to justify their conclusions and I cannot recommend acceptance. References - Engstrom, Logan, et al. "Implementation Matters in Deep RL: A Case Study on PPO and TRPO." 2019. - Fujimoto, Scott, et al. "Off-policy deep reinforcement learning without exploration." 2019. - Bengio, Emmanuel, et al. "Interference and Generalization in Temporal Difference Learning." 2020. 
** Edit (Nov 23): I have slightly increased my score due to the improvements made to the paper (mainly reorganization) & some clarifications made by the authors, but I still don't feel like my main concerns were addressed. <doc-sep>***Summary*** The paper proposes an empirical study of the discount factor as a regularization parameter in the actor-critic architectures. Specifically, the paper considers the case in which the actor and the critic employ different values of the discount factor. Two scenarios are considered. First, the paper analyzes the case in which the true objective is undiscounted and a discount factor is employed in the critic (like in TRPO and PPO). Second, the case in which the true objective is actually discounted but the discount factor is ignored in the update of the actor. A quite large suite of experimental results is reported. ***Major issues*** - (Organization) The paper presents an extensive experimental evaluation that is split between the main paper and the appendix. However, in the main paper, there are a lot of references and discussions related to experimental results that are provided in the appendix only. This happens both in Section 3 and in Section 4. Sometimes these results (presented in the appendix only) seem to be some fundamental claims of the paper, like for Figures 11, 12, and 13. I think this choice makes affects negatively the readability and clarity of the paper. Indeed, the reader has to continuously jump between the main paper and the appendix. Similarly, the pseudocodes are reported in the appendix only, but I think that this is less relevant compared to the plots. I think that the paper would greatly benefit from a reorganization, making it more self-contained. - (Bias-Representation Trade-off) One of the main claims of the paper is that using a discount factor < 1 in the critic when the true objective is undiscounted has a regularization effect not only on the variance but also on the learnability of the value function itself. I have to admit that the paper has not convinced me on this point. It is hard to say that the representation of the value function becomes more complex as the discount factor approaches one or, similarly, as the horizon increases. In general, I think that is possible to devise MDPs in which the value function representation becomes simpler as the horizon increases as well as MDPs in which it becomes more complex. I can imagine that for a class of tasks the statement can be true, but the paper does discuss the properties of these tasks. Can the authors elaborate more on this point? - (Auxiliary task perspective) The paper proposes a perspective of the critic update without a discount factor for a discounted objective as a sum of two terms. However, I have some concerns about the application of the clipping technique independently for the two terms. Why not perform the clipping just once to the original discounted objective? ***Minor issues*** - In Section 2, the MDP model is introduced assuming finite state-action spaces. Is this assumption really necessary? The experiments are carried out on Mujoco tasks that are characterized by continuous state-action spaces. - The plots are very small, including the ticks and labels on the axis. Moreover, they are not readable when printing the paper in grayscale. I suggest using different linestyles or markers. ***Overall*** I think that the paper addresses a relevant problem that is surely important to bridge the gap between theory and practice. 
However, I have some concerns about the organization and about the conclusions (especially regarding the bias-representation trade-off) that the paper draws from the presented results. For these reasons, I think that the paper is currently not ready for publication at ICLR.<doc-sep>In this paper, the authors focus on the discounting mismatch in the Actor-Critic algorithm. From comprehensive experiments, the authors claim that this mismatch is either a bias-variance representation tradeoff or an auxiliary task for the actor update. Since the discounting mismatch problem is a well-known gap between the theoretical analysis and the application, their work, especially the experiments, might have some impact on how to understand this gap. However, since it does not provide any new analysis technique or practical model to improve the performance of the AC algorithm. I would encourage the authors to do more analysis of the choice of $\\gamma$, like how to choose $\\gamma$ might lead to a good performance (either experimentally or theoretically). And I believe that would have more impact on both the theoretical analysis and practical algorithm design. and Meanwhile, since in the first scenario, the mismatching of $\\gamma$ is considered to reduce the variance, it would be interesting if the authors could compare this kind of variance reduction with the stochastic variance reduction on the policy-gradient algorithms [1] [2] [3]. Therefore, though this paper lacks a theoretical analysis or a ground-breaking experimental performance, this paper has an interesting and comprehensive experimental survey and proposes some new hypothesizes on this problem, I will suggest borderline accept for this paper. I might consider modifying my suggestion after discussion with other reviewers and the author's response. [1] Papini, Matteo, et al. "Stochastic variance-reduced policy gradient." arXiv preprint arXiv:1806.05618 (2018). [2] Xu, Pan, Felicia Gao, and Quanquan Gu. "Sample efficient policy gradient methods with recursive variance reduction." arXiv preprint arXiv:1909.08610 (2019). [3] Yuan, Huizhuo, et al. "Stochastic Recursive Momentum for Policy Gradient Methods." arXiv preprint arXiv:2003.04302 (2020).
This paper studies the effect of the discount mismatch in actor-critics: the discount used for evaluation (often 1), the discount used for the critic and the discount used for the actor. There’s notably a representation learning argument supported by a series of experiments. The initial reviews pointed out that this paper addresses very relevant research questions, sometimes in a quite original way, with a large set of experiments. However, they also raised concerns about the organization/clarity of the paper, and possible weaknesses about the experimental studies. The authors provided a rebuttal and a revision, that clarified some points and triggered additional discussions. However, if the revision improved the initial submission, the shared assessment is that the clarity and experiments themselves are still somewhat lacking. As such, the AC cannot recommend accepting this paper. Yet, this work does have interesting ideas, and the problem considered is of interest for the community and under studied. The authors are strongly encouraged to submit a revised version to a future venue.
This paper tries to prove that there is a bottleneck in feature learning for long-tailed classification and data augmentation can help relieve the issues in long-tail feature space. Three major experiments were done to prove that feature space 1) is more biased than balanced feature space, 2) is more disused and less compact than balanced feature space, and 3) less localized in terms of feature centroids. And data augmentation can help alleviate all three issues. The weakness: 1) The second objective of this paper is to discuss "why data augmentation helps in representation learning". However, in the paper, only positive effects from data augmentation were shown, the reasons and mechanisms were not fully discussed. 2) The overall paper is based on the unserious term "good enough". What is this term defined? How good is good enough? Good enough in terms of what? Generalization and robustness compared to full balanced data sets or in terms of knowledge transfer? If it is the first one, of course, long-tailed representations are less generalized and robust compared to balanced representations. It is not a new idea and is already discussed in [1]. If it is the second one, then the later experiments don't make any sense. And I think when people say long-tail representations are "good enough" in studies like [2], it is more like it is good enough for long-tail learning rather than comparing it to balanced learning. 3) All the experiments seem unfair to me. For example, D_LT are representations from long-tailed data sets, and D* are representations from balanced data sets. Balanced data sets always have much more training samples compared to corresponding long-tailed counterparts. How do you know these inferior results were not caused by the lack of training samples? 4) In the "adding unseen samples" experiments (e.g., Fig 6, Fig 7, Fig 8), only results on D_LT were reported. I want to see results when unseen samples are added to D* as well. Only by doing this can you prove D* is less diffused and better localized. 5) Fig 7 needs a more detailed legend. So many components don't have explanations. 6) By looking at Figure 5, I don't see a significant difference between D_LT and D* in Cifar100-LT and ImageNet-LT. [1] Liu, Z., Miao, Z., Zhan, X., Wang, J., Gong, B., & Yu, S. X. (2019). Large-scale long-tailed recognition in an open world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2537-2546). [2] Kang, B., Xie, S., Rohrbach, M., Yan, Z., Gordo, A., Feng, J., & Kalantidis, Y. (2019). Decoupling representation and classifier for long-tailed recognition. arXiv preprint arXiv:1910.09217. I think the intuition of this paper is not clear and the experiments are not persuasive. <doc-sep>This paper poses an interesting and important question - where are the bottlenecks in long-tailed classification. The authors use empirical experiments to show their observations: (1) representation is more critical than classifier, (2) data augmentation is helpful. Three datasets (CIFAR-10 LT, CIFAR-100 LT and ImageNet-LT) are employed to work with ResNet-32 and ResNet-10 models to demonstrate their observations. Strength: 1. The topic is interesting and the papers pose some unique observations after extensive empirical analysis and experiments 2. The paper defines several simple mathematical and statistical metrics to measure the differences between representations. Weakness: 1. 
The posed questions are not well addressed The paper shows some observations but did not provide a concrete and reasonable solution such that the long-tailed classification issue can be addressed. The useful insight from this paper is limited. 2. The empirical observations are not solid and rigorous The paper only provides some simple metrics but did not explain why the metric is necessary and what's the high-level intuition. I did not get the motivation why the authors come up with these metrics to show the differences between representation. In addition, there is no rigorous mathematical proof or statistical analysis. 3. Lack of careful related work discussion The related work section is hard to follow and the authors did not explain their contributions and differences from existing works. 4. The writing needs to be improved. Many typos result in additional difficulties to read. The citation format is not consistent. For example, Cui et al.. in section 3 does not have a year, and "He et al (2015)" followed by Resnet-32 should be "(He et al, 2015)". The "google enough" and "Normal" in the abstract should be corrected. An overall feeling is that the paper is an ongoing work and needs to be carefully written and improved. My recommendation is to reject it in the current form. <doc-sep>The authors study the long-tail dataset problem in order to determine the true bottleneck for the task. After performing many ablations and experiments on 3 benchmark datasets they establish that contrary to common belief the bottleneck is in data representation rather than the classifier itself. I really enjoyed reading the paper. It has a very clear direction from the beginning with good experiments to back it up. The writing is clear as well. I believe long-tailed classification is an interesting problem with clear real-world applications, so studying it in-depth is necessary for the community. Overall, I don't see any major drawbacks or shortcomings as the experiments and ablations combined with the analysis are solid. That said I have few questions: The difference in representation between D* and D_{LT} is clearly visible. However, apart from difference in the shape of distribution (long-tail vs balanced) there is also difference in the amount of data between those two which might play an important role, especially when considering learned representations. It would be good to see what is the difference as well between two datasets that have equal number of examples, but different distributions: normal ad LT. Since otherwise the authors conclusion might raise a question. Additionally, Yin et al. [1] performed a related analysis of classifier magnitude difference which was depending on the distribution of classes. The authors analysis seems deeper here, however it would be good to address any similarities/differences. On top of that, few methods [2, 3] used adversarial examples in order to modify the learned representations instead of the classifiers in LT task. It would be good to see what the authors think about such direction and what impact on the feature space and measured statistics it would have. And finally, apart from the analysis, what are the conclusions here for the future researchers - any thoughts on proposed directions/approaches that could originate from the performed analysis? [1] Yin, Xi, et al. "Feature transfer learning for face recognition with under-represented data." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. 
[2] Kim, Jaehyung, Jongheon Jeong, and Jinwoo Shin. "M2m: Imbalanced classification via major-to-minor translation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020. [3] Kozerawski, Jedrzej, et al. "BLT: Balancing Long-Tailed Datasets with Adversarially-Perturbed Images." Proceedings of the Asian Conference on Computer Vision. 2020. Good analytical paper on interesting subject of long-tailed classification. It would be good to see authors thoughts on impact of the amount of data in training, impact of adversarial augmentations, and proposed directions stemming from the analysis. <doc-sep>This paper seeks to study what is the bottleneck in long-tailed learning. Based on extensive experiments, the authors propose that representation learning is the bottleneck in long-tailed classification. Also, this paper analyzes representation learning from the perspectives of intra-class compactness and inter-class separation, as well as the influence of data mixup on long-tailed representation learning. Positive points: 1. This paper seeks to empirically investigate the importance of representation learning, which may provide a new understanding of deep long-tailed learning to the community. 2. This work shows the effectiveness of intra-class compactness and inter-class separation on long-tailed representation learning. 3. This work also analyzes the influence of data augmentation on long-tailed representation learning, which provides a better understanding of data augmentation in deep long-tailed learning. Negative points: 1. This paper mentioned that "a commonly held belief in deep long-tailed classification is that performance bottleneck is the classification head atop the representation learner". However, such a belief may not be common. Note that many recent long-tailed learning studies focus on improving representation learning [1], e.g., KCL [2], Hybrid [3], PaCo[4] and DRO-LT [5]. Moreover, in the conclusion, this paper stated that "the results suggest that the primary problem in long-tailed classification may in fact be the few-shot learning problem on the tail classes, rather than the imbalance problem". However, this argument is too strong and the obtained results cannot support it. It would be better if the authors had written all the arguments more rigorously and verified them more completely. 2. As mentioned by the above question, there are many representation learning-based long-tailed studies, e.g., [1-5]. Therefore, it would be better if authors can review mire representation learning based long-tailed learning methods in related work. 3. The vital problem in this paper is the used balanced set: i.e., CIFAR-10/100 and ImageNet-1K. Please note that the data number of CIFAR-10/100 or ImageNet-1K is much more than their long-tailed variants, i.e., CIFAR-LT and ImageNet-LT. Considering using more training samples will lead to significant improvement in representation learning and model performance, most empirical comparisons in this paper (especially Table 1 and Figure 2) are unfair and the corresponding arguments are unpersuasive. The experiments would have been more persuasive, if the balanced training set is a variant of the long-tailed training set with a similar total data number but each class has the same/similar data number, like [1,2]. For example, a balanced set of ImageNet-LT can be obtained at https://github.com/Vanint/Awesome-LongTailed-Learning/tree/main/resources/data_txt/ImageNet_LT. 4. Please discuss more of Figure 5. 
On ImageNet-LT and CIFAR100-LT, the variance trends of long-tailed representations and ideal representations are quite consistent. Such observations seem different from the conclusion of Sec 2.1. Since I am confused about the results, I guess other readers may also do so. Therefore, I suggest the authors explain them more. Minor suggestions: 1. Figure 3 is not clear enough. It would be better if the authors can explain it more in the captions. Moreover, what are "degrees" in Fig.6-8? Please make them more clear. 2. In Lines 4-5 of page 2, ImageNet-LT appears twice. In Line 1 of page 6, there should be a "full stop" before In practice. References: [1] Deep long-tailed learning: A survey. ArXiv, 2021. [2] Exploring balanced feature spaces for representation learning. In ICLR, 2021. [3] Contrastive learning based hybrid networks for long-tailed image classification. In CVPR, 2021. [4] Parametric contrastive learning. In ICCV, 2021. [5] Distributional robustness loss for long-tail learning. In ICCV, 2021. Overall, I like the goal of this paper, i.e., analyzing the bottleneck of long-tailed learning. However, I cannot champion this paper since the data number of the used balanced set is much larger than the long-tailed set, which makes the empirical comparisons unfair and the corresponding finding unpersuasive. Moreover, the arguments in this paper should be written more rigorously. I am glad to see the response of the authors.
This paper investigates the role of representation learning when the distribution over the feature space has a long tail. The main motivation is to determine how much of the overall learning, in this case, is bottlenecked specifically by representation learning. The main findings are that vanilla learning gives brittle long-tailed representations, harming overall performance. The paper suggests a form of data augmentation to remedy this. Reviewers acknowledge that this investigation is worthwhile. However, many concerns were raised as to whether experiments support the drawn conclusions. A more principled approach to the data augmentation methodology is also needed. The authors address some of these, providing further experiments, but these were not enough to sway reviewers. Since results are fundamentally empirical in nature, this shortcoming indicates that the paper is not ready to share with the community just yet. Stronger experiments with clearer evidence are needed to fully support the thesis of the work.
The paper proposes a Bayesian model comparison based approach for quantifying the semantic similarity between two groups of embeddings (e.g., two sentences). In particular, it proposes to use the difference between the probability that the two groups are from the same model and the probability that they are from different models. While the approach looks interesting, I have a few concerns: -- Using the Bayesian model comparison framework seems to be an interesting idea. However, what are the advantages compared to widely used learned models (say, a learned CNN that takes as input two sentences and outputs the similarity score)? The latter can fit the ground-truth labels given by humans, while it's unclear the model comparison leads to good correlation with human judgments. Some discussion should be provided. -- The von Mises-Fisher Likelihood is a very simplified model of actual text data. Have you considered using other models? In particular, more sophisticated ones may lead to better performance. -- Different information criteria can be plugged in. Are there comparisons? -- The experiments are just too simple and incomplete to make reasonable conclusions. For example, it seems compared to SIF there is not much advantage even in the online setting. <doc-sep>The authors propose a probabilistic model for computing the sentence similarity between two sets of representations in an online fashion (that is, they do not need to see the entire dataset at once as SIF does when using PCA). They evaluate on the STS tasks and outperform competitive baselines like WMD, averaging embeddings, and SIF (without PCA), but they have worse performance that SIF + PCA. The paper is clearly written and their model is carefully laid out along with their derivation. My concern with this paper however, is that I feel the paper lacks a motivation, was it derive an online similarity metric that outperforms SIF(without PCA)? A few experimental questions/comments: What happens to all methods when stop words are not removed? How far does performance fall? I think one reason it might fall (in addition to the reasons given in the paper) is that all vectors are set to have the same norm. For STS tasks, often the norms of these vectors are reduced during training which lessens their influence. What mechanism was used to identify the stop words and does removing these help the other methods (I know in the paper, stop words were removed in the baseline, did this unilaterally improve performance for these methods)? Overall I do like the paper, however I do find the results to be lackluster. There are many papers on combining word embeddings trained in various ways that have much stronger numbers on STS, but these methods won't be effective with this type of similarity (namely because embeddings must have unit norm in their model). Therefore, I think the paper needs some more motivation and experimental evidence of its superiority over related methods like SIF+PCA in order for it to be accepted. PROS - Probabilistic model with clear design assumptions from which a similarity metric can be derived. - Derived similarity metric doesn't require knowledge of the entire dataset (in comparison to SIF + PCA) CONS - Performance seems to be slightly better than SIF, WMD, and averaging word embeddings, but below that of SIF + PCA - Unclear motivation for the model, was it derive an online similarity metric that outperforms SIF(without PCA)? - Requires the removal of stop words, but doesn't state how these were defined. 
Minor point, but tuning this could be enough to cause the improvement over related methods.<doc-sep>Main contribution: devising and evaluating a theoretically-sound algorithm for quantifying the semantic similarity between two pieces of text (e.g., two sentences), given pre-trained word embeddings (glove). Clarity: The paper is generally well-written, but I would have liked to see more details regarding the motivation for the work, description of the prior work and discussion of the results. As an example, I could not understand what were the differences between the online and offline settings, with only a reference to the (Arora et al. 2016) paper that does not contain neither "online" nor "offline". The mathematical derivations are detailed, which is nice. Originality: The work looks original. It proposes a method for quantifying semantic similarity that does not rely on cosine similarity. Significance: I should start by saying I am not a great reviewer for this paper. I am not familiar with the STS dataset and don't have the mathematical background to fully understand the author's algorithm. I like to see theoretical work in a field that desperately needs some, but overall I feel the paper could do a much better job at explaining the motivation behind the work, which is limited to "cosine similarity [...] is not backed by a solid theoretical foundation". I am not convinced of the practicality of the algorithm either: the algorithm seems to improve slightly over the compared approaches (and it is unclear if the differences are significant), and only in some settings. The approach needs to remove stop-words, which is reminiscent of good old feature engineering. Finally, the paper claims better average time complexity than some other methods, but discussing whether the algorithm is faster for common ranges of d (the word embedding dimension) would also have been interesting.
This paper presents a novel family of probabilistic approaches to computing the similarities between two sentences using bag-of-embeddings representations, and presents evaluations on a standard benchmark to demonstrate the effectiveness of the approach. While there seem to be no substantial disputes about the soundness of the paper in its current form, the reviewers were not convinced by the broad motivation for the approach, and did not find the empirical results compelling enough to serve as a motivation on its own. Given that, no reviewer was willing to argue that this paper makes an important enough contribution to be accepted. It is unfortunate that one of the assigned reviewers—by their own admission—was not well qualified to review it and that a second reviewer did not submit a review at all, necessitating a late fill-in review (thank you, anonymous emergency reviewer!). However, the paper was considered seriously: I can attest that both of the two higher-confidence reviewers are well qualified to review work on problems and methods like these.
Summary of the paper: This work presents a novel method for similarity function learning using non-linear model. The main problem with the similarity function learning models is the pairwise component of the loss function which grows quadratically with the training set. The existing stochastic approximations which are agnostic to training set size have high variance and this in-turn results in poor convergence and generalisation. This paper presents a new stochastic approximation of the pairwise loss with reduced variance. This is achieved by exploiting the dot-product structure of the least-squares loss and is computationally efficient provided the embedding dimensions are small. The core idea is to rewrite the least-squares as the matrix dot product of two PSD matrices (Grammian). The Grammian matrix is the sum of the outer-product of embeddings along the training samples. The authors present two algorithms for training the model, 1)SAGram: By maintaining a cache of all embedding vectors of training points (O(nk) space)$, whenever a point is encountered it's cache is replaced with it's embedding vector. 2) SOGram: This algorithm keeps a moving average of the Grammian estimate to reduce the variance. Experimental results shows that this approach reduces the variance in the Grammian estimates, results in faster convergence and better generalisation. Review: The paper is well written with clear contribution to the problem of similarity learning. My only complain is that, I think the evaluation is a bit weak and does not support the claim that is applicable all kinds of problems e.g. nlp and recommender systems. This task in Wikipedia does not seem to be standard (kind of arbitrary) — there are some recommendation results in the appendix but I think it should have been in the main paper. Overall interesting but I would recommend evaluating in standard similarity learning for nlp and other tasks (perhaps more than one) There are specific similarity evaluation sets for word embeddings. It can be found in following papers: https://arxiv.org/pdf/1301.3781.pdf http://www.aclweb.org/anthology/D15-1036<doc-sep>This paper proposes an efficient algorithm to learn neural embedding models with a dot-product structure over very large corpora. The main method is to reformulate the objective function in terms of generalized Gramiam matrices, and maintain estimates of those matrices in the training process. The algorithm uses less time and achieves significantly better quality than sampling based methods. 1. About the experiments, it seems the sample size for sampling based experiments is not discussed. The number of noise samples have a large influence on the performance of the models. In figure 2, different sampling strategies are discussed. It would be cool if we can also see how the sampling size affects the estimation error. 2. If we just look at the sampling based methods, in figure 2a, uniform sampling’s Gramian estimates is the worst. But the MAP of uniform sampling on validation set for all three datasets are not the worst. Do you have any comments? 3. wheter an edge -> whether an edge. <doc-sep>This paper proposes a method for estimating non-linear similarities between items using Gramian estimation. This is achieved by having two separate neural networks defined for each item to be compared, which are then combined via a dot product. The proposed innovation in this paper is to use Gramian estimation for the penalty parameter of the optimization which allows for the non-linear case. 
Two algorithms are proposed which allow for estimation in the stochastic / online setting. Experiments are presented which appear to show good performance on some standard benchmark tasks. Overall, I think this is an interesting set of ideas for an important problem. I have two reservations. First, the organization of the paper needs to be addressed in order to aid user readability. The paper often jumps across sections without giving motivation or connecting language. This will limit the audience of the paper and the work. Second (and more importantly), I found the experiments to be slightly underwhelming. The hyperparameters (batch size, learning rate) and architecture don’t have any rationale attached to them. It is also not entirely clear whether the chosen comparison methods fully constitute the current state of the art. Nonetheless, I think this is an interesting idea and strong work with compelling results. Editorial comments: The organization of this paper leaves something to be desired. The introductions ends very abruptly, and then appears to begin again after the related work section. From what I can tell the first three sections all constitute the introduction and should be merged with appropriate edits to make the narrative clear. “where x and y are nodes in a graph and the similarity is wheter an edge” → typo and sentence ends prematurely.
This paper presents methods to scale learning of embedding models estimated using neural networks. The main idea is to work with Gram matrices whose sizes depend on the length of the embedding. Building upon existing works like SAG algorithm, the paper proposes two new stochastic methods for learning using stochastic estimates of Gram matrices. Reviewers find the paper interesting and useful, although have given many suggestions to improve the presentation and experiments. For this reason, I recommend to accept this paper. A small note: SAG algorithm was originally proposed in 2013. The paper only cites the 2017 version. Please include the 2013 version as well.
The paper proposes a method to improve the generalization of neural networks by training them to be robust to adversarial perturbations in the statistics of the batch normalization (BN) layers. The approach combines gradients computed on unperturbed BN statistics with gradients computed on perturbed statistics. Perturbations or noise in the BN statistics are obtained through 1) signed gradients from the first update and 2) reductions in the batch size for the second update. Experiments demonstrate improvements over standard training, especially in the case of smaller-scale datasets, i.e., CIFAR and Time-ImageNet. The method can also be combined with other techniques, such as Mixup and SAM optimization, typically leading to further improvements. Strengths: - The method benefits the generalization of neural networks trained on smaller datasets considerably - The technical presentation of the method in Section 3.2 is detailed and sufficiently clear - The method can be combined with other training methods, such as SAM. Weaknesses: - The paper claims to bridge the gap between robustness and generalization. Experiments are focused mainly on the generalization ability of the learned networks, and robustness experiments are restricted to perturbations of the BN statistics. This is quite limited, and it is unclear if the learned networks are robust to various other adversarial attacks. Indeed, it is unclear what the relevance of Sections 4.4 and 4.5 are regarding the robustness of the networks in practice. - Another contribution of the paper is "a new AT paradigm, termed model-based AT." It appears that the main idea of perturbing model parameters has been explored in various prior works (e.g., [8, 28]). It is not clear what the generic formulation in Eq 2 contributes or what novel insights are provided. - The benefits of the method seem to disappear during large-scale experiments on ImageNet. This is somewhat concerning, and it might be good to investigate this issue further. - Section 3.3 is somewhat confusing: L206 claims \\mathcal{R}=0, but then \\mathcal{R} appears in the perturbation computation of (7). It is also unclear if a term similar to g_\\phi exists in this case. As mentioned above, it might be good to further address the performance on larger scale datasets if this turns out to be a limitation. Also, depending on how robust the method is to other adversarial perturbations, this could also be mentioned in the limitations. <doc-sep>While Adversarial Training is one of the most successful methods to increase robustness, it usually degrades performance of the models on clean images. The authors attribute this to distributional discrepancy in Batch Norm statistics. They propose Adversarially Perturbed bAtch noRmalizaTion (APART) to achieve robustness against BN statistics noise, and to bridge the gap between models’ generalization and robustness. They perform backward passes twice over each batch of clean samples. The first backward pass produces two gradient computations: a normal gradient that helps update parameters of model, and a statistics gradient that is used to perturb the statistics parameters in BN. The second pass is performed to generate the defensive gradient that helps the model resist the adversarial statistics perturbation. The normal and defensive gradients are combined to improve both generalization and robustness of the model. Experiments are performed on CIFAR, Tiny-ImageNet and ImageNet, and show improved clean accuracy over standard training and SAM [28]. 
**Originality and Significance**: - The paper presents a new way of bridging the gap between models’ generalization and robustness. It is known in the literature that there is discrepancy between Batch Norm statistics of clean and adversarial examples [13] (as well as the statistics from different batches). AdvProp proposes using two batch norm statistics, one for clean images and one auxiliary for adversarial examples [13]. Rather than creating a separate layer to deal with this discrepancy, the paper attempts to make the models robust to the BN statistics noise. This approach is interesting and novel to the best of my knowledge. - The method can be combined with other augmentations to further boost performance. The proposed combination with SAM (as one of the state-of-the-art methods) is particularly promising. **Quality**: - Overall the paper is well-structured and well-written. - The proposed approach is sound, and is described clearly. - Experiments are performed on various datasets including CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet. Overall, experimental results are convincing. They demonstrate improvements on clean accuracy over the baselines as well as robustness against perturbed BN statistics. Comparing with baselines using the same budget is important given the additional cost of the proposed approach. - The authors report detailed experimental results in the supplementary material, and show that ARAPT is relatively insensitive to hyper-parameters. **Clarity**: - Scalability of ARAPT to large datasets and models is not clearly supported in the experiments. The authors use the relatively small ResNet-18 model on ImageNet-1K. ARAPT underperforms standard training on ImageNet at 2x budget and outperforms it at 4x budget (Table 2). The authors note that "APART employed on the large-scale dataset requires more steps to show its promise", but do not provide further explanation or experiments on this. - All experiments are performed on the ResNet family. On ImageNet the achieved accuracy of 72.14% (Table 2) is far from the state-of-the-art. It'd be good to include experiments on other architectures (e.g. EfficientNet), and see if the gains are significant. - The authors have addressed limitation of the work in terms of suffering from potential degeneration in case of the combination with other training methods implicitly involving BN. - The authors can address potential limitation of their work on large-scale datasets and models. - There are no potential negative societal impact that need to be specifically addressed. <doc-sep>This paper proposes to add adversarial noise on the BN statistics to improve classification accuracy on in-distribution images. Strength: 1. The paper is well-written and easy to follow. The related works are thoroughly discussed. Weakness: 1. The novelty is limited. The proposed method is almost identical with AdvBN [15] (NeurIPS'21). Although the authors mentioned three differences in related work section, I still think they are all minor differences. 2. In experiments, no results on [11] or [15] are reported. This makes it hard to evaluate whether the proposed method can outperform previous works. Please see above <doc-sep>This paper introduces an ‘Adversarially Perturbed Batch Normalization’ to improve the model’s generalization and robustness. Experiments on CIFAR, Tiny-ImageNet, and ImageNet show that the proposed methods can improve the models’ performance, compared with the baseline model. 
Strengths: Compared with the previous AdvBN [15], the proposed APART is more applicable and easier to train. The paper is well-written, and the theoretical analysis is clear. Weaknesses: Experiments. From the reviewer’s view, the experiments in this paper are not sufficient. (1) In Lines 62-63, the authors mention that they want to bridge the gap between the model’s generalization and robustness. The reviewer thinks experiments on ImageNet-C or Stylized ImageNet are needed to show the advantages in robustness. (2) The comparison with other methods is missing. The reviewer thinks a comparison with normalization methods [18-20] and adversarial methods [11, 15] is needed. (3) ‘Mix-Up’ experiments on ImageNet are missing. (4) Similar to the experiments on CIFAR-10 and CIFAR-100, the authors are encouraged to conduct experiments on one more backbone on Tiny-ImageNet and ImageNet. For me, the current experiments are not sufficient. The authors are encouraged to add more experiments to show the advantages of their paper.
The paper presents a new way of bridging the gap between models’ generalization and robustness, by combining gradients computed on unperturbed BN statistics with gradients computed on perturbed statistics. The main goal is to improve standard generalization, but the authors should clarify their definition of "robustness", as it seems to confuse all reviewers (e.g., questions about adversarial attacks). Moreover, the method itself is very simple, and the idea of using adversarial perturbation to stabilize model training isn't new (AdvProp, etc.). Reviewers are further concerned about the lack of large-scale experiments or experiments on state-of-the-art architectures. Besides, there are no comparisons with some of the competing methods such as AdvProp. Therefore, I find insufficient grounds to recommend acceptance of the paper in its current shape.
This paper tackles the challenge of generating adversarial perturbations for a target model - with no access to the model, or the model's training data (i.e. target domain). Using a trained model and data from a source domain (ImageNet), the authors train a generator to craft perturbations which maximize the cosine distance between the intermediate features of clean and adversarial images. This generator is then assisted by two techniques - random normalization of the input image, and spatial attention on intermediate-layer features (used for the cosine distance). Experiments show that this method outperforms prior methods in the black-box setting (no access to target domain or model) as well as the white-box setting. ### Strengths: 1) The problem setting (no access to target data) is of importance - in practice, access to data is as hard as, if not harder than, access to the model. 2) The experiments are extensive, and clearly show a significant improvement in black-box attack capability. 3) The code provided with the paper, along with the appendix, helps gain a clearer understanding of the method (conversely, they further emphasize readability issues of the manuscript). ### Weaknesses: 1) The manuscript is poorly written - grammatical and semantic mistakes are plentiful. Some phrases are the opposite of what the method actually does. Section 3.4 states "Specifically, we apply a channel-wise average pooling to the feature maps at layer L", whereas the actual operation is cross-channel average pooling (refer to [1]). Other mistakes are highlighted below. 2) The novelty of the method is limited. In Wen Zhou et al., 2018 [2], intermediate feature disruption is used to increase black-box transferability. In Weibin Wu et al., 2020 [3], attention is used for increasing transferability. The paper does not mention these works. 3) The claims of section 4.2 are only weakly supported. Statement: "The downsampling module has an essential impact on the resulting adversarial examples". I fail to see how this can be inferred from visualizing the cross-channel attention outputs. ### Other weaknesses: 1) Since all the experiments only concern ImageNet -> target datasets, it is unclear how well the method will perform if the source dataset is different (especially if the source dataset is small). 2) The metrics do not include the standard deviation across multiple random runs. Evaluating the standard deviation in at least one setting would elucidate the significance of the results. 3) Results from combining the two proposed techniques (RN and DA) *should* be present in the main draft. This is an important question that the manuscript only deals with in the appendix. The manuscript would benefit from a discussion of the fact that using these two techniques in tandem is challenging and fails to consistently outperform using just a single module. ### Text Errors: #### Abstract: 1) transferability nature ==> transferable nature. 2) the only knowledge ==> only the knowledge 3) the coarse-grained domain ==> coarse-grained domains #### Introduction: 1) possible to the spotlight ==> possible 2) transparent ==> opaque 3) the query ==> querying 4) but more threatening ==> and more threatening 5) the generator ==> a generator #### Method: 1) they can subject to the ==> they can be modeled as samples from the standard normal distribution. 2) even the inputs are not ==> even if the inputs are not #### Experiments: 1) in the Torchvision. ==> in the Torchvision library.
2) another seven ==> seven other ### Questions for the authors: 1) How will the generator network perform if it is trained with all source models at once? (See experiments - Table 3 in Konda Reddy Mopuri et al., 2017 [4].) I suspect that it should further increase the transferability. 2) Have the authors tried to increase the RGB jittering when comparing to existing methods? I suspect that with significant jittering, augmentation may perform similarly to random normalization. ### References: [1] Network In Network, Min Lin, Qiang Chen, Shuicheng Yan, arXiv 2013. [2] Transferable Adversarial Perturbations, Wen Zhou, Xin Hou, Yongjun Chen, Mengyun Tang, Xiangqi Huang, Xiang Gan, Yong Yang, ECCV 2018. [3] Boosting the Transferability of Adversarial Samples via Attention, Weibin Wu, Yuxin Su, Xixian Chen, Shenglin Zhao, Irwin King, Michael R. Lyu, Yu-Wing Tai, CVPR 2020. [4] NAG: Network for Adversary Generation, Konda Reddy Mopuri, Utkarsh Ojha, Utsav Garg, R. Venkatesh Babu, CVPR 2018. The method proposed in this paper outperforms existing methods, and targets an important setting (no access to target domain or model). However, the writing is error-ridden, and the proposed method is only marginally novel w.r.t. existing works. Therefore, I rate the paper as marginally above the acceptance threshold, conditional on the authors correcting the mistakes highlighted above. <doc-sep>This work first identifies a more practical threat model for black-box transfer adversarial attacks, where the target model's domain remains unknown, and the attacker's surrogate model may be trained in another domain. Then, the BIA attack is proposed to enhance transferability, whose key idea is to distort low-level features captured by a DNN's intermediate layers instead of perturbing the domain-specific features in the output layer. Two modules, DA and RN, are further proposed to improve the attack success rate. Experimental results demonstrate that BIA is more effective than existing methods. See the pros/cons below. ### Pros 1. Considering a more practical threat model is certainly helpful and important for transfer attack research. 2. The results indeed demonstrate a large improvement of BIA in terms of error rate. ### Cons 1. I'm a little concerned about the "cross-domain" statement made in this work. To me, the target datasets considered in this work (CIFAR, STL, CUB, Stanford Cars) still come from the same natural imagery "domain" as ImageNet, even though they have different label spaces. In particular, CUB is known to have overlap with ImageNet [1], in which case the "cross-domain" claim certainly does not hold. An example case that is more "cross-domain" would be to transfer from an ImageNet model to a ChestX-ray model (in a similar sense to Naseer et al.). 2. The specific methodology of BIA seems not new. It is known that perturbing intermediate-layer features can yield more transferable adversarial examples (e.g., [2]). In fact, the formulation of BIA appears very similar to the one proposed in [2] (while BIA minimizes the cosine similarity between intermediate-layer features of the clean and adversarial examples, [2] maximizes the Euclidean distance, which essentially is the same). Feature space attacks are also shown to be more powerful than decision space attacks in more strict black-box transfer scenarios [3], but Sec. 3.2 fails to recognize these existing works. Clearly identifying the difference between BIA and [2] might help address this concern. 3.
If my above judgement of BIA not being new is correct, then my further concern comes from the results side. In Table 2, it seems that the performance gain can be largely attributed to BIA (essentially a feature space attack) itself rather than to the DA and RN modules. This hurts the empirical novelty to some extent, as previous works have shown the superiority of feature space attacks in either standard or more strict transfer settings ([2,3]). [1] http://www.vision.caltech.edu/visipedia/CUB-200-2011.html [2] Feature Space Perturbations Yield More Transferable Adversarial Examples [3] Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability This paper indeed identifies a more practical threat model, but the experiments do not closely match the proposed "cross-domain" scenario, and the performance gain seems to largely come from an existing technique (perturbing the feature space instead of the decision space). These issues prevent me from recommending acceptance. <doc-sep>This paper focuses on transferability towards black-box domains. In real life, we do not know the relevant information about the deployed model, so transfer attacks on black-box domains can better evaluate the vulnerability of deployed models. Therefore, Beyond ImageNet Attack (BIA) is proposed to investigate the transferability towards black-box domains (unknown classification tasks) with only knowledge of the ImageNet domain. From the perspective of data and model, the authors propose a random normalization (RN) module and a domain-agnostic attention (DA) module to narrow the gap between the source and target domains. Finally, BIA achieves state-of-the-art performance in black-box domain settings. Strengths: 1. BIA focuses on disrupting low-level features to improve transferability. 2. This work proposes a random normalization (RN) module to handle the various distributions between source domains and target domains. 3. This work proposes a domain-agnostic attention (DA) module to produce a more robust feature representation. Weaknesses: 1. The RN module and DA module are not always mutually reinforcing. The reason behind this has not been analyzed. 2. Among the competitors, the diverse inputs method (DIM) is not new. From this perspective, why not use the more powerful MI-DI-TI-FGSM or a newer transfer attack? 3. Table 2 and Table 3 show the transferability comparisons on classification tasks. However, the effects of DA and RN seem to depend on the model. To understand this more deeply, it is necessary to analyze why different modules have different effects on different models. Minor questions: 1. What are the experiments on fine-grained and coarse-grained classification meant to prove? Why distinguish between fine-grained and coarse-grained? There is no clear explanation in this work. 2. Do the compared methods differ greatly in training costs? I tend to accept this paper because it focuses on more realistic black-box attack settings and proposes two modules to improve performance. The design of the modules is insightful and effective, but the proposed modules are not always effective under some models, which limits their application and requires more adequate analysis.
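For concreteness, the core objective that the reviews above describe — training a generator so that intermediate-layer features of the perturbed image move away from those of the clean image, measured by cosine similarity — can be sketched as follows. The tanh-based perturbation bound, the toy modules, and `eps` are my own illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def bia_like_loss(feature_extractor, generator, x, eps=10 / 255):
    # Bound the perturbation with a tanh residual -- one simple projection
    # choice for this sketch, not necessarily the paper's exact scheme.
    x_adv = torch.clamp(x + eps * torch.tanh(generator(x)), 0.0, 1.0)
    f_clean = feature_extractor(x).flatten(1)
    f_adv = feature_extractor(x_adv).flatten(1)
    # Training the generator to minimize this value pushes the intermediate
    # features of the perturbed image away from those of the clean image.
    return F.cosine_similarity(f_adv, f_clean, dim=1).mean()

# Toy usage with stand-in modules (shapes only; not the actual BIA networks).
feat = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU())
gen = torch.nn.Sequential(torch.nn.Conv2d(3, 3, 3, padding=1))
x = torch.rand(4, 3, 32, 32)
bia_like_loss(feat, gen, x).backward()  # gradients reach the generator's parameters
```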
This paper considers that the model's training data may not be accessible when learning the attacking model, and thus a more practical black-box attack scheme, the Beyond ImageNet Attack (BIA) framework, is designed. All the reviewers agreed that the setting in this paper is important and helpful when designing attack methods. However, the method is not totally new. Nevertheless, considering the importance of the problem investigated in this paper, the nice design of the overall framework, and the extensive experiments, the AC recommends acceptance of this paper.
This paper proposes a new threat model called the stability attack. The goal of a stability attack is to hinder the model from becoming robust to adversarial attacks. The author proposes the hypocritical perturbation as a method for the stability attack and shows that hypocritical perturbations are harmful in terms of adversarial robustness in a simple Gaussian mixture setting. Finally, the author shows that adversarial training with twice the perturbation budget is enough to protect against the stability attack. Pros: - As far as I know, this is the first work that studies the robustness of adversarial training against training-data poisoning, termed the stability attack. - As [1] noted, the hypocritical perturbation is known to be a weak attack in terms of standard accuracy, but this paper illustrates that it can be a strong attack in terms of robust accuracy. - The intuition for how stability attacks make adversarially trained models vulnerable to adversarial attacks is well demonstrated. Cons: - It seems that the hypocritical perturbation is the main content, but it is not well described. In particular, a comparison between adversarial perturbations and hypocritical perturbations is lacking. - It seems that 'Stability Attack' and 'Adversarial Poisoning' in Table 2 are identical to 'Hyp.' and 'Adv.' in Tables 3 and 4, respectively. Consistent naming of the attacks is necessary. - The description of the poisoning methods in Table 2 is not given. - In Table 3, only Adv. and Hyp. are considered as data poisoning methods. It would be more appropriate to evaluate against the various attacks as in Table 2. - The final message of the paper is that by using a larger perturbation size, the hypocritical perturbation can be defended against. However, this message seems obvious, and so the novelty is not high. [1] Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, and Songcan Chen. Better safe than sorry: Preventing delusive adversaries with adversarial training. In NeurIPS, 2021. - As mentioned in the manuscript, the standard accuracy of a robustly learned prediction model based on training data poisoned by the stability attack is better than the standard accuracy of a robustly trained model based on training data poisoned by other methods. In this sense, the stability attack is less serious than other poisoning methods, which degrade the standard accuracy as well as the robust accuracy simultaneously. <doc-sep>This paper presents the stability attack against the conventional adversarial training process, aiming to reduce the eventual robust accuracy of the resulting model. Specifically, the corresponding hypocritical perturbations are applied to the training data as a training-time attack. Theoretical analysis is provided to support the idea of hypocritical perturbations. Experimental results on commonly used classification datasets like CIFAR-10 demonstrate the effectiveness of the proposed method. # Strengths 1. This paper is technically sound and clear around the theoretical analysis. Experimental results are significant and support the theory well. 2. The writing quality of the paper is good overall. Specifically, the background of the problem is smoothly introduced. 3. The proposed method is sufficiently evaluated. To be specific, key attacks like FGSM, PGD, CW, and AutoAttack are included for robustness evaluation. Meanwhile, the experiments cover four datasets, namely CIFAR-10/100, SVHN, and Tiny-ImageNet, which are sufficient to demonstrate the effectiveness of the proposed method. 4. The proposed method successfully compromises adversarial training methods.
However, a countermeasure (adaptive defense) is also proposed and evaluated. # Weaknesses 1. Overall this is a good paper and I did not find many problems. Limitations are discussed as pointed out in the checklist. <doc-sep>This paper introduces the problem of adversarial training when it faces a new type of attack called stability attacks. Stability attacks aim to compromise robust availability by slightly manipulating the training data. Most existing methods neglect the test robustness of adversarially trained models when they are under training-time availability attacks. Under the threat of stability attacks, the authors demonstrate that an adversarially trained network with an epsilon perturbation budget is not enough to defend against epsilon-bounded adversarial perturbations. The authors argue that it is necessary to enlarge the epsilon perturbation budget when conducting adversarial training. Strengths: 1. Clear writing. Easy to understand and well-organized paper. 2. Important experimental result. The test robustness of adversarially trained networks against evasion attacks when they are under delusive attacks is an intriguing result in adversarial research. Weaknesses: 1. Low originality: Missing comparison with the recent key reference [Ref_1], which is a somewhat similar method of training-time availability attack. The attack generation algorithm with training-time and test-time perturbations is similar to [Ref_1]. Furthermore, [Ref_1] achieved state-of-the-art attack performance against adversarially trained networks on clean data. Thus, it is unclear whether this paper made a fair comparison with current SOTA poisoning attacks on adversarially trained networks. In this regard, the novelty of the threat model of attacking adversarially trained networks by perturbing the training data is marginal. 2. Insufficient explanations on the relationship with non-robust features. This paper passes the buck to non-robust features for the experimental results of increased standard accuracy and decreased robust accuracy under stability attacks. The authors emphasize the responsibility of non-robust features in the title, abstract, and throughout the paper. However, I am not convinced that non-robust features are the only reason for the experimental result in Table 2. Natural test robustness has increased and test robustness under evasion attacks has decreased when models are adversarially trained under stability attacks. But, considering the fact that there always exists a trade-off between standard accuracy and robust accuracy [Ref_2], non-robust features can’t be solely blamed. As the authors followed the theoretical analysis process of [Ref_3], they need to present additional empirical evidence, feature-level analysis, or visualizations to explain the relationship with non-robust features, as [Ref_3] did. 3. Confusing usage of terms for similar concepts. It is very confusing when the terms ‘Hyp’, ‘stability attacks’, and similar concepts appear throughout the paper. [Ref_1] Fu, S., He, F., Liu, Y., Shen, L., & Tao, D. (2021, September). Robust unlearnable examples: Protecting data privacy against adversarial learning. In International Conference on Learning Representations. [Ref_2] Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., & Jordan, M. (2019, May). Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning (pp. 7472-7482). PMLR.
[Ref_3] Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., & Madry, A. (2018). Robustness may be at odds with accuracy. arXiv preprint arXiv:1805.12152. Yes, the authors have addressed the limitations and potential negative societal impact of their work. <doc-sep>### Summary: This paper introduces a novel data poisoning attack against adversarial training called *stability attacks*. The goal is to tamper with the training data such that the robust performance of adversarial training over this manipulated dataset is degraded. To construct this attack, a *hypocritical perturbation* is built: unlike *adversarial perturbations*, the aim of *hypocritical perturbations* is to reinforce the non-robust features in the training data. These perturbations can be generated by negating adversarial example generation objectives. **Motivation:** The paper motivates stability attacks from the perspective of robust vs. non-robust features. Specifically, a simple binary classification task over a mixture of Gaussians is considered. Statistical analysis on this task shows that adversarial training over *hypocritically perturbed data* is destructive to adversarial robustness. Moreover, it is shown that a larger perturbation magnitude is needed to guard adversarial training against stability attacks. **Implementation:** The effectiveness of stability attacks against adversarial training is demonstrated through extensive experimental results. ### Strengths: - The paper is clear, and it guides the reader skillfully. - The paper is well-motivated. The statistical analysis of the binary classification task is thorough, and the implications of the theoretical results are discussed comprehensively. - This paper sheds light on the implications of robust vs. non-robust features from a novel perspective and utilizes these studies to introduce a new threat against adversarial training. - The experimental settings are discussed in detail, and the effects of different hyper-parameters and architectures on the performance are investigated. ### Weaknesses: - While the theoretical justifications of *hypocritical perturbations* on the binary classification task are discussed, the relationship of these results with the attack generation process (Eq. (10)) is obscure. Although the given example for the logistic loss and the binary classification task is appreciated, the origins of the objective function in Eq. (10) need a better justification. - Furthermore, a thorough discussion on the relationship of this work with existing works on the trade-off between the clean and robust accuracy of neural networks with adversarial training seems missing. As the experimental results suggest, the implications align with the observations of Tsipras et al. [63] on the trade-off between the clean and robust accuracy (e.g., see Table 4). From this perspective, it seems like stability attacks are somehow just exploiting this trade-off to pose their threat to adversarial training. Thus, a comprehensive discussion on the differences between this work and prior work in this area is required. A potential discussion on the real-world negative impacts of the current work is missing. This reviewer would encourage the authors to discuss this matter explicitly.
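To make the attack under discussion concrete, here is a PGD-style sketch of a hypocritical perturbation as I read the summaries above: the usual adversarial objective is negated, so the loss on the true label is minimized and the example is made to look "easier". The step size, iteration count, and budget are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def hypocritical_perturbation(model, x, y, eps=8 / 255, step=2 / 255, iters=10):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= step * grad.sign()               # descend: the negated adversarial step
            delta.clamp_(-eps, eps)                   # stay inside the L_inf ball
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep the poisoned image valid
    return (x + delta).detach()
```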
This paper proposes a new threat model called the stability attack, which aims to hinder a model from becoming robust to adversarial attacks. The authors propose the hypocritical perturbation as a method for the stability attack and show that hypocritical perturbations can indeed decrease the adversarial robustness of a model trained in a simple Gaussian mixture setting. The reviewers agree that the problem being studied is interesting, the proposed method is well motivated, and the experiments are mostly convincing. The authors are encouraged to merge the new results from the rebuttal into the publication and to discuss the efficiency of the proposed method in more detail.
Summary: This paper proposes a new 'baseline' for attribution methods tailored to deep neural networks. DNN attribution methods like integrated gradients, DeepLIFT and others require a baseline to compare to as part of the computation. The choice of a baseline has been controversial in the literature, and a good method to select a baseline remains an open problem. This paper seeks to address that problem. Specifically, this paper seeks to develop a baseline for one-vs-one explanations as opposed to one-vs-all explanations. Consider an MNIST model: a one-vs-one attribution would attribute why an input is, say, a '2' and not a '4', i.e., it is contrastive against a particular target class and not all classes. This paper proposes to use a StarGAN for generating these baselines. The paper then evaluates explanations derived using the new baseline and shows that these explanations 'perform' better. Overall, I think the paper tackles an important problem, but I have several concerns with the motivation, the appropriateness of the baseline definition in this work, and the evaluation. I'll expand on these concerns in the later part of the review, so I am not recommending an accept in its current form. Significance/Quality: The paper tackles an interesting and potentially challenging problem. However, the motivation is still somewhat unclear, and there are critical problems with the evaluations used as justification here. I go into these at the end of this review. Clarity/Writing: The paper is generally easy to follow. A problem I had reading it is that there are a few sentences that are stated as fact without any justification. For example, the paper notes, "The minimum distance training sample in section 2.2 is a true class-targeted baseline proposed in the past." What is a 'true' class-targeted baseline? Such statements should probably be reformulated. Minor changes: Last paragraph of section 2.3: "has posted a challenging"; posted is probably not the desired word here. Questions and Concerns: - Motivation: it is still not clear to me why a one-vs-one attribution is desirable. More should probably be done here to motivate this. The biggest need though is motivation for the one-vs-one attribution baseline. In several statements, the paper alludes to properties of baselines used in expected gradients and other methods, stating the reasons why these baselines are undesirable. I agree, but why should a one-vs-one baseline be preferable to these? Ideally, the paper would set out a list of desirable properties and then show that the baseline derived from GANMEX satisfies these. It is still not clear to me why a notion of minimum distance in a different target class is the right one. Can the authors say more about why this should be the case? - Evaluation: I'll preface my concerns here with the fact that I think evaluating model attributions or explanations in general is a difficult and open problem. This said, I don't think any of the evaluations presented in this paper can be taken as showing that the GANMEX baseline is the desirable one. First, the perturbation-based evaluation does not provide consistent rankings (see: Tomsett et al. (AAAI, Sanity checks for saliency metrics)). I suspect the Gini index will have the same problems as those discussed in the Tomsett et al. paper. The sanity checks themselves, i.e. the cascading randomization, will tell you if a method should be ruled out and not whether a method is effective. Consequently, I don't think the sanity checks can say much in judging baselines.
Having said all of this, I think the way to evaluate a baseline is to take a task where the ground-truth rankings are known a priori, and train a model to respect and align with the true ground truth. Now one can compare attributions from such a model for a normal baseline and a baseline from GANMEX. Assuming the attribution method itself is a reliable one, one can then quantify improvements due to the GANMEX baseline. A paper that might be related to this work that also incorporated generative modeling: https://arxiv.org/abs/1807.08024.pdf Overall, the concerns above make me hesitant about this current draft; however, I am happy to revise my assessment if the authors think I am wrong. <doc-sep>Summary: This paper looks to use GANs to generate baselines for attribution methods. The focus on one-vs-one feature importance explanations is novel, to the best of my knowledge. The paper attempts to make progress on the baseline selection problem that has plagued the feature importance community. Strengths - As far as I know, the authors' contribution of one-vs-one attribution (compared to one-vs-any attribution) is novel. Whilst other works have alluded to this or run heuristic experiments, this paper does a good job of formalizing the notion. - The ability of GANMEX to live on top of any other attribution method makes it an attractive addition to existing attribution methods. Thank you for visualizing the baselines generated by GANMEX, quite helpful :) Weaknesses - A computational complexity analysis is required to gauge the practical utility of generating baselines with GANMEX. Also, it would be nice to give the complexity of GANMEX compared to FIDO, EG, and simple nearest neighbor baselines. - In addition to the visual comparisons provided, it would have been helpful to evaluate explanations using existing evaluation criteria in the attribution literature (e.g., faithfulness, sensitivity, monotonicity, etc.). This paper has the opportunity to broadly assess the effects of various baselines on attributions. Questions - While GANs seem like an attractive choice of deep generative model (DGM) for this problem, can you comment on or experiment with other DGMs (e.g., VAEs or specifically VAEACs [1])? However, any DGM that has latent class separation should suffice. You would be able to perform optimization in the latent space [2, 3, 4] and achieve similar class separation, as described in Figure 1. - The attributions in Figure 3.E seem like noise, while the zero baseline seems visually appealing -- can you provide some intuition for why this occurs? The GAN feels like overkill for MNIST, but might be suitable for other high-dimensional problems wherein the baseline needs to pick up on small nuances in the data. [1] https://openreview.net/forum?id=SyxtJh0qYm [2] https://arxiv.org/abs/1806.08867 [3] https://arxiv.org/abs/2006.06848 [4] https://arxiv.org/abs/1807.08024 <doc-sep>Paper Summary: This paper considers the less-explored baseline selection issue in attribution methods for one-vs-one explanations of multi-class classifiers. The key insight is to construct the closest realistic target-class baseline. To this end, an existing image-to-image translation GAN model, namely StarGAN, is leveraged to transform an input example into another example that is in a target class yet close to the input.
This baseline can be integrated with a variety of attribution methods, including integrated gradients, DeepLIFT, Occlusion, and DeepSHAP, and shows consistent improvements over the zero baseline and the minimum distance training sample for one-vs-one explanations. The experiments are conducted on three datasets – MNIST, SVHN, and apple2orange. Paper Strengths: This paper addresses an important yet overlooked baseline selection problem. The way the authors address this problem, by leveraging GAN models, is interesting. Empirical evaluations demonstrate the effectiveness and generalizability of the proposed approach. Paper Weaknesses: 1) The main weakness of this paper to me is the evaluation section. The proposed approach is only validated on simple datasets like MNIST and SVHN. It would be more convincing to show the effectiveness of the proposed approach on natural images and a large number of classes, like CIFAR and ImageNet, as used in previous work such as IG. 2) Following the comment in 1), prior works, such as IG and DeepLIFT, have been used to analyze other types of models and have been evaluated on other types of data, such as genomics and neural machine translation. In addition to images, would the proposed approach also apply to these domains? 3) As illustrated in Figure 1, the key assumption of the proposed approach is that a GAN model (StarGAN) is able to generate examples that are much closer to the input examples than the training examples (i.e., the minimum distance training sample). Under what conditions would such an assumption hold? 4) Following the comment in 3), it would be interesting to show and analyze some failure cases. 5) In the proposed approach, the StarGAN directly uses the already trained model classifier as its discriminator. What if the StarGAN trains its own discriminator without using the model classifier? 6) It would be interesting to show the hyper-parameter (different trade-off lambdas) sensitivity. 7) I understand that the authors focused on one-vs-one explanations. But I am interested to hear the authors’ thoughts on how to extend the proposed approach to one-vs-all explanations. After Rebuttal: I thank the authors for the rebuttal. I have also read the other reviewers’ comments. Unfortunately, the rebuttal is unconvincing and sometimes vague. I keep my original rating. <doc-sep>The paper claims to present a novel GAN-based model explainability approach for generating one-vs-one explanations, by incorporating the to-be-explained classifier as part of the GAN. They use GANs to produce a baseline image which is a realistic instance from a target class that resembles the original instance. Positive aspects: - a novel approach for generating one-vs-one explanation baselines leveraging GANs - the proposed approach improves the saliency maps for binary classifiers Negative aspects: - the paper lacks clarity - the approach is demonstrated on cherry-picked examples; I have doubts about its generalization capability Please find below some of my concerns: 1. Your claim: "...we use GANs to produce a baseline image which is a realistic instance from a target class that resembles the original instance". Why do you need a GAN? Why don't you use a network to generate a confusion matrix to analyze the performance of the classifier? And based on this analysis you could explain why, for instance, the digit '0' is classified as a '6'. 2. Related to the previous point, your analysis is very limited. You assume '0' is classified as '6'. Could '0' be classified as an '8' or '9'?
It is not clear from your analysis. There are no comments on these cases. It looks like the examples used to defend your approach are cherry-picked. 3. I am not sure how to interpret Figure 2. Some other comments: 1. The paper lacks novelty. The authors' contribution is not clear. 2. The experimental validation is limited and not convincing. The authors use just some simple datasets (MNIST, SVHN). What about more complex datasets, like CIFAR10, LSUN, etc.? Could your approach explain the misclassifications in these cases?
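Since much of the discussion across these reviews turns on the role of the baseline in attribution methods, a plain integrated-gradients sketch with a pluggable baseline may be useful for orientation. As I understand the paper, GANMEX only changes how `baseline` is produced (a StarGAN-generated instance of the contrast class); the attribution rule itself is standard. The step count and the toy model below are arbitrary choices of mine.

```python
import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    # Straight-line path from the baseline to the input, right Riemann sum.
    alphas = torch.linspace(0.0, 1.0, steps + 1)[1:].view(-1, *([1] * (x.dim() - 1)))
    path = (baseline + alphas * (x - baseline)).detach().requires_grad_(True)
    score = model(path)[:, target].sum()
    grads, = torch.autograd.grad(score, path)
    avg_grad = grads.mean(dim=0, keepdim=True)
    return (x - baseline) * avg_grad  # approximately sums to f(x) - f(baseline)

# Toy usage: the '2 vs 4' style contrast mentioned in the first review.
model = torch.nn.Sequential(torch.nn.Linear(784, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
x = torch.rand(1, 784)
gan_baseline = torch.rand(1, 784)  # stand-in for a StarGAN-generated target-class instance
attribution = integrated_gradients(model, x, gan_baseline, target=2)
```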
This work investigates the choice of a 'baseline' for attribution methods. Such a choice is important and can heavily influence the outcome of any analysis that involves attribution methods. The work proposes (1) doing one-vs-one attribution in a sort of contrastive fashion and (2) generating baselines using StarGAN. The reviewers have brought up a number of valid concerns about this work: 1. One-vs-one attribution appears to be novel, and distinct enough from the more prevalent "one-vs-all" formulations. I am perhaps more optimistic than the reviewers that such a formulation is in fact useful, but I can see where the hesitancy can come from. 2. It's not clear that the evaluation shows that the proposed method is in fact superior to the others. All the reviewers touched upon this one way or another. 3. Somewhat simplistic datasets are used for evaluation (it is noted that there are CIFAR10 results in the rebuttal). This was more borderline than the scores would indicate. I thank the authors for the extensive replies and extra experiments. I encourage them to incorporate more of the feedback and resubmit to the next suitable conference. I do believe that doing experiments on ImageNet (like previous work does, such as IG) would be quite worthwhile and convincing. I suspect the computational expense could be mitigated by re-using pretrained networks, of which there are many available for ImageNet specifically.
This work provides a unifying review of hardness measures for MDPs that have appeared in previous RL theory bounds in tabular settings. The authors have also developed a benchmark with easily estimable values for these hardness measures to be used for empirical investigation of RL theory. The performance of four standard algorithms is measured in the environments. *Originality:* The work provides a unifying perspective on what has thus far been seen as disparate notions of hardness, with a qualitative comparison of the strengths and weaknesses of each. While no new notion of hardness is investigated here (which would have made this a very strong paper), the perspective offered is novel and worthwhile. The development of a standard tabular benchmark for RL theory is again a work of synthesis from previous papers. While valuable for RL theory practitioners, there is less novel insight here, as these environments are well known. *Quality:* The paper is clearly well thought through and well constructed, with unifying insight provided. I think if the environments weren't chosen from the literature, but instead chosen such that the different aspects of hardness described in the paper were easier to control independently with respect to the different policy and environment parameters described in the paper, the benchmark would lead to even more meaningful insight. *Clarity:* The paper is easy to follow, and plots and charts are easy to read. The visualisations of the environments give good intuition as to their structure, and the experiments give a good characterisation of their hardness. *Significance:* The paper is significant, and well-poised to enable future work in unified hardness and empirical evaluation of RL theory results. As suggested above, the insight here is mainly unifying, with few specific novel contributions, either in terms of environments or in terms of analysis. New measures of hardness, or previously unseen environments with useful properties, would make the paper very strong. <doc-sep>The paper introduces Colosseum, a Python package that allows empirical investigation of MDP hardness along estimation complexity and visitation complexity for tabular reinforcement learning. It surveys various existing hardness measures and argues why many of these do not capture the above-mentioned complexities properly. The ones which come closest to capturing these are the Environmental value norm for estimation complexity and the Diameter for visitation complexity, as I understand from the paper. It also implements agents and a benchmark for what the authors claim are the four most widely studied tabular reinforcement learning settings - which I believe are: (a) Episodic ergodic. (b) Episodic communicating. (c) Continuous ergodic. (d) Continuous communicating. They also perform experiments examining the hardness measures under various changes to the MDPs (Fig. 1) as well as with the agents in the mentioned settings (Tab. 1 and Fig. 2). I believe they motivate the approach well, and the benchmark is more comprehensive than existing benchmarks for tabular RL. The quality of the work also seems high. They provide code and collect it in a single package, which should be of great significance to the community, especially but not only the tabular RL community. I assume the code quality is also good based on a quick walk through the Jupyter notebook: Colosseum_main_tutorial.ipynb. The analysis plots also look good.
However, with respect to clarity, I feel like the paper would be much better suited to the journal format. I feel like many of the questions I ask below are probably clarified in the Appendix (which I only glanced through); however, I feel some of these rather belong in the main paper. The motivations for the different MDP families were not described in the main paper. There were some statements such as: >The ergodic settings seem generally slightly easier than the communicating settings. But the evidence was not explicitly pointed out. I felt like this in many places, and it feels like this was done to save space. Because of such space-saving measures, I feel the paper is better suited to a journal. In various places the clarity can be improved, e.g.: >The diameter also increases almost linearly with the number of states. When p_rand is relatively small, an approximately linear relationship can still be observed. This was a little confusing because the sub-figures being referenced are not mentioned, especially for the 2nd sentence; I can't see the linear relationship when p is "small" in Fig. 1a because it's not zoomed in enough. What values of p were meant by "small"? >"Sum of the reciprocals of the sub-optimality gaps": This measure of hardness is not particularly apt at capturing estimation complexity, since it focuses solely on optimal policy identification. It also underestimates the increase in hardness induced by an increase in visitation complexity. I understand the arguments. However, in Fig. 1a-1d, to be honest, "Sum of the reciprocals of the sub-optimality gaps" actually seems the closest to the cumulative regret of the tuned near-optimal agent. Could the authors please add some reasoning as to why this measure ends up being seemingly the closest to the cumulative regret of the tuned near-optimal agent even though it is not suited either for visitation complexity or estimation complexity? >For Q-learning and PSRL (Figs. 2b and 2c), the diameter seems to have a generally smaller influence on the average cumulative regret. I can't quite see this in the figures. >(efficiently computable) hardness measures. Adding a table with the computational complexity of calculating the measures would be highly appreciated. Yes. <doc-sep>This paper first presents a survey of existing hardness measures and results from the MDP literature. Their main contribution is the introduction of `Colosseum`, a benchmark for empirically validating hardness results, which they use to compare various existing hardness measures. # Originality It seems to me that section 2 (Hardness in Theory) is a review of existing literature, so the original contributions of this paper would be limited to the `Colosseum` package and the empirical investigations provided. For these, the work is original as far as I can tell, although some of the claims seem to be a bit inflated (e.g. "a pioneering Python package", "the most exhaustive in tabular reinforcement learning", "invaluable analysis tools", etc.). # Quality The authors have done a reasonably thorough survey of the hardness literature and evaluated these measures using the various environments in their package. There are a few issues regarding clarity and correctness that I include in the questions below. The code for `Colosseum` seems to be well-written and well-documented, which I consider to be a core part of this paper's contribution. # Clarity The paper is very well written and motivated reasonably well.
Some of the plots and tables are hard to digest; specifically, it's often not clear _what_ is being said with them. There's a lot going on in Figure 1 (and even more in Table 1), and even though the main takeaways do seem to be discussed in the text, they're mostly lost in the paragraphs in page 7 (it seems that the last sentence is the main takeaway for each hardness measure). I would suggest highlighting these in a more streamlined manner (to draw the reader's attention directly to the takeaways) and leave the descriptive text until after. Table 1 is a bit overwhelming, it's not clear what we're supposed to be looking for. I would also suggest rewriting this section so there are clear takeaways and insights for the readers; currently it reads just as a verbal description of the (many) numbers in the table. Although it is claimed that in Figure 2 "there is generally a positive relationship between both of these hardness measures and the average cumulative regret", it seems almost like points uniformly spread on the plane (i.e. I don't see a clear relationship at all). There are a few other issues I mention in the questions below. # Significance This, to me, is the weakest point of the paper. Although I appreciate the authors' effort to produce a nice package for benchmarking theoretical results, it's not clear how significant this will be. Empirical evaluations on toy environments (for theoretical results) are typically meant to highlight characteristics or subtleties of the theory introduced, but are not the end goal in itself. In particular, whether the empirical results suggest sub-linear or linear growth, say, does not in any way change the theoretical results. Thus, it is not clear what the added value would be to have a "theory benchmark". Something that I think could make this package more impactful is to try to go beyond tabular. One suggestion would be to look at bsuite (which the authors do cite), as they include both tabular and continuous environments. In particular, it would be interesting to evaluate both as they may allow one to empirically investigate how the hardness measures vary when moving from tabular to larger systems. I acknowledge that in non-tabular systems it may not be possible to compute all of them in closed form, but there may be approximations; alternatively, continuous variants of the tabular systems considered could provide a nice middle ground (e.g. by "smoothing out" each tabular state). Another aspect that could increase the significance of this work is to evaluate non-tabular methods, for instance with linear function approximators. A lot of RL theory does exist for linear approximators (and tabular, of course), so it would be interesting to evaluate how the dynamics of the empirical evaluations change (or not) when moving from tabular to non-tabular methods. Along these lines, in line 366 it says "The development of such measures is theoretically and empirically important.". It would be nice to provide some concrete examples, such as a theoretical bound dependent on one of these hardness measures, or something like that. Otherwise it's not clear why these hardness measures are important. Some limitations are provided (mostly related to future work). It would be nice to have some discussion regarding the significance/impact (or lack thereof) of this work, in line with some of the comments I made above regarding significance. No discussion of potential negative societal impact was provided. 
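For readers who want to make the hardness measures being debated here concrete, below is a small NumPy sketch of one of them, the sum of the reciprocals of the sub-optimality gaps, computed via value iteration for a toy tabular MDP. The discounted formulation, the tolerance, and the example MDP are my own assumptions, not Colosseum's exact definitions or API.

```python
import numpy as np

def suboptimality_gap_measure(P, R, gamma=0.99, tol=1e-8):
    # P: transition tensor of shape (S, A, S); R: reward matrix of shape (S, A).
    S, A = R.shape
    Q = np.zeros((S, A))
    while True:                                  # value iteration to obtain Q*
        V = Q.max(axis=1)
        Q_new = R + gamma * (P @ V)
        if np.abs(Q_new - Q).max() < tol:
            Q = Q_new
            break
        Q = Q_new
    V = Q.max(axis=1)
    gaps = V[:, None] - Q                        # Delta(s, a) = V*(s) - Q*(s, a)
    return (1.0 / gaps[gaps > 1e-6]).sum()       # ignore (near-)optimal actions

# Tiny two-state, two-action example.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.8, 0.2], [0.2, 0.8]]])
R = np.array([[1.0, 0.0],
              [0.0, 0.5]])
print(suboptimality_gap_measure(P, R))
```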
<doc-sep>This paper reviews and categorizes some hardness measures on tabular MDPs, and proposes a new tabular RL benchmark named "Colosseum" that enables exact computation of these measures in empirical evaluations. Extensive experiments are conducted to assess the performance of existing tabular RL methods on the proposed benchmark, which spans diverse environments focusing on different hardness measures. ### Strengths 1. The paper suggests an interesting viewpoint of connecting the theory and practice of tabular RL by taking into account the hardness measures that are typically used only in theoretical analysis when evaluating and comparing tabular RL algorithms. 2. The "Colosseum" benchmark is of high quality and flexible, and comes with sufficient docs and tutorials. 3. The empirical evaluations are thorough. 4. The paper is well organized and presented. ### Weaknesses 1. The benchmark and the hardness measures only apply to tabular RL. In contrast, most of the current empirical work in RL is devoted to the non-tabular setting, and there is also an increasing number of theoretical works that explore non-tabular RL. 2. The theoretical claims made by this paper are somewhat vague (see the following for details). No concern.
The reviewers' opinions are quite consistent towards a weak accept. I'm not confident about the big title "Hardness in Markov Decision Processes: Theory and Practice". This paper is more like a survey + benchmark review than a research article. Neither the theory part nor the practice part is novel enough for a research article. It's a bit thin as a survey paper. I personally tend towards weak reject, but I respect the reviewers' weak accept.
This paper conducts an extensive set of experiments on RWS and compares it against a set of benchmarks such as GMM and IWAE. The main contribution of the paper is the fact revealed by these experiments, that RWS learns better models and inference networks with increasing numbers of particles, and that its benefits extend to continuous latent variable models as well. The performance of RWS increases significantly if we increase the number of particles. The experimental part is written in an inspiring way, and I enjoyed reading it. However, stronger baselines should be incorporated, for example, https://arxiv.org/abs/1805.07445. Also, I think the authors could try to place more emphasis on the shortcomings of RWS discovered by the GMM experiments, and on how defensive importance sampling fixes them. There are several other parts of the paper that indicate interesting facts; diving deeper into them could possibly lead to more interesting findings. In all, I consider these comparison results important to have somewhere in the literature, but because of the lack of rigorous analysis and explanation for the observations, I personally think these observations alone are not novel enough for an ICLR paper. <doc-sep>This manuscript investigates the performance of the Reweighted Wake-Sleep (RWS) framework for learning deep generative models with discrete latent variables. It gives a clear introduction to variational autoencoder-based models for scenarios with discrete latent variables, including IWAE and also models based on continuous relaxations of discrete variables. The paper performs several experiments, which suggest that RWS is more appropriate for discrete latent variables than other methods such as IWAE. In particular, unlike for IWAE, increasing the number of particles always enhances the performance of RWS. While this paper investigates an important problem, and also offers interesting observations, it lacks a rigorous analysis of why the RWS performance is consistently better than that of IWAE. More precisely, the propositions should be stated in more formal language and they should be accompanied by a minimal rigorous justification. <doc-sep>Main idea: This paper studies a problem of the importance weighted autoencoder (IWAE) pointed out by Rainforth 18, that is, tighter lower bounds arising from increasing the number of particles improve the learning of the generative model, but worsen the learning of the inference network. The authors show that the reweighted wake-sleep algorithm (RWS) doesn't suffer from this issue. Moreover, as an alternative to control variate schemes and the reparameterization trick, RWS doesn't suffer from high-variance gradients; thus it is particularly useful for discrete latent variable models. To support the claim, they conduct three experiments: 1) on ATTEND, INFER, REPEAT, a generative model with both discrete and continuous latent variables; 2) on MNIST with a continuous latent variable model; 3) on a synthetic GMM. Clarity issues: 1. "branching" has been used many times, but AFAIK, this does not seem to be standard terminology. What do "branching on the samples", "conditional branching", "branching paths" mean? 2. zero-forcing failure mode and delta-WW: I find this part difficult to follow. For example, the following sentence: "the inference network q(z|x) becomes the posterior for this model which, in this model, also has support at most {0, . . . , 9} for all x".
However, this failure mode seems like an interesting finding, and since delta-WW outperforms the other methods, it deserves a better introduction. Questions: 1. In Fig 1 (right), how do you estimate KL(q(z|x) || p(z|x))? 2. In Sec 4.2, why do you say IWAE learns a better model only up to a point (K = 128) and suffers from diminishing returns afterwards? 3. In Fig 4, why doesn't WS achieve better performance as K increases? Experiments: 1. Since the motivating story is about discrete latent variable models, better baselines should be compared, e.g., RBM, DVAE, DVAE++, VQ-VAE, etc. 2. All experiments were either on MNIST or on synthetic data; at least one large-scale experiment on discrete data should be conducted to verify the performance of RWS.
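For context on the quantities being compared in these reviews, here is a generic sketch of the K-particle IWAE bound and the RWS wake-phase objective for the inference network, written for per-particle log-joint and log-proposal values. The interfaces are stand-ins for illustration, not the paper's code.

```python
import math
import torch

def iwae_and_rws_objectives(log_p_xz, log_q_zx):
    # log_p_xz, log_q_zx: shape (batch, K) -- per-particle log p(x, z_k) and log q(z_k | x).
    log_w = log_p_xz - log_q_zx                               # log importance weights
    K = log_w.shape[1]
    iwae_bound = torch.logsumexp(log_w, dim=1) - math.log(K)  # K-particle lower bound on log p(x)
    w_bar = torch.softmax(log_w, dim=1).detach()              # self-normalized weights, treated as constants
    wake_phi_loss = -(w_bar * log_q_zx).sum(dim=1)            # RWS wake-phase objective for the inference network
    return iwae_bound.mean(), wake_phi_loss.mean()

log_p = torch.randn(8, 5)                      # e.g., K = 5 particles
log_q = torch.randn(8, 5, requires_grad=True)
bound, phi_loss = iwae_and_rws_objectives(log_p, log_q)
```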
The paper presents a well-conducted empirical study of the Reweighted Wake-Sleep (RWS) algorithm (Bornschein and Bengio, 2015). It shows that RWS performs consistently better than alternatives such as the Importance Weighted Autoencoder (IWAE) for the hard problem of learning deep generative models with discrete latent variables acting as a stochastic control flow. The work is well-written and extracts valuable insights supported by empirical observations: in particular the fact that increasing the number of particles improves learning in RWS but hurts it in IWAE, and the fact that RWS can also be successfully applied to continuous variables. The reviewers and AC note the following weaknesses of the work as it currently stands: a) it is almost exclusively empirical, and while reasonable explanations are argued, it does not provide a formal theoretical analysis justifying the observed behaviour; b) experiments are limited to MNIST and synthetic data, and confirmation of the findings on larger-scale real-world data and models would provide more complete and convincing evidence. The paper should be made stronger on at least one (and ideally both) of these accounts.
The paper tackles a very challenging problem and provides a novel approach. The authors have an in-depth understanding of the related works and provide a detailed review. The theoretical contributions of this paper are solid, and the experiments are quite thorough. The assumption of binary data seems a bit strict. Non-stationarity seems to be the most critical foundation of this paper and is worth more explanation and intuition, like "what is the connection between the number of segments and the number of observed/latent variables for model identification". In the experiments, what is the reason that you only run each case 3 times? I think one may need to provide the computational complexity of the proposed algorithm. Could you give more discussion about how to extend your method to address the discrete setting? <doc-sep>1. This paper is well written and the authors are good at providing intuitive examples for further explanation. The binary ICA problems they focus on, including the identifiability and estimation methods, are important and potentially useful. 2. The authors found non-identifiability of the binary ICA model in the two-variable case, which is somewhat surprising, but they showed empirically that the model is identifiable when the dimensionality is higher. Further, they employed correlation identifiability to derive a practical algorithm for the estimation. I think overall the authors did interesting research, but I have some concerns listed below. Since the title of this paper is “Binary Independent Component Analysis via Non-stationarity”, I expected that the authors would use some information about non-stationarity (e.g., the invariance) to help estimate such a non-stationary model, but the authors do not follow this direction. It seems to me that the segment variable $u$ (or the number of segments $n_u$) is given, as shown especially in Algorithm 1. And the authors only estimate the binary ICA model segment by segment. Thus it is unclear to me whether they focus on handling the non-stationarity problem or not. I have some concerns listed below: - What is the relationship between the identifiability of the binary causal discovery model and the identifiability of the binary ICA model? It might be helpful to discuss the applications of the binary ICA model in the paper in more detail. - On Page 2, “add independent noise $\\epsilon $ form $\\mathcal{N}$” -> “add independent noise from $\\mathcal{N}$”. <doc-sep>Independent Component Analysis via Non-stationarity is an important issue. The identifiability of the proposed model is discussed in detail, and the proposed MLE is efficient. Why use a specific link function like \\Phi(\\sqrt{\\pi\\over 8} y|0,1)? It seems that this setting is critical to the identifiability of the proposed model. More motivation should be given for this setting. <doc-sep>The paper is well written with clear motivation and goals. The presented model is simple but provides a sound answer to a practical problem. The simulation study is convincing. Some parts of the methodology would benefit from further explanation, especially how the non-stationary part is handled. Also, the "regularization" step of the BLICA algorithm would benefit from further justification. More generally, it is better from a scientific perspective to discuss "related work" at the beginning of a contribution and not at the end. As explained in the previous section, it would be beneficial to give more details about the "u" component.
It might be clear to a reader familiar with the literature in the field, but not to a general audience such as the UAI community. For instance, two major points could be clarified: in practice, are the segments pre-defined or do you need to estimate them from the data before applying ICA? How does the non-stationarity increase the model identifiability? On a related topic, Figure 3 is difficult to understand: it would be nice to have a short sentence recalling that a lower value of $\\log_{10}(1-\\mathrm{MCS})$ implies better performance and that a model is considered identifiable for values below, say $-3$. Some details about BLICA could be better explained / justified: - the full MLE approach does not seem intractable for the dimensionalities considered in the paper. It might be necessary to use parallelization for reasonable computing time. This might be impractical, but it would be very interesting as a comparison point to quantify the performance loss incurred by BLICA. - also for the full MLE, how would you parametrize the correlation matrix to ensure that its estimate satisfies the necessary properties? - for BLICA, the so-called "regularization" step seems a bit awkward. First, why is it called "regularization"? In general in ML, this term refers to penalty terms penalizing model complexity, while in this context it seems to be closer to an attempt to project onto the closest positive definite matrix. Second, the pairwise estimates do not ensure at all that the matrix is well defined, and that is the reason why such "regularization" is needed. Could the authors elaborate on why, beyond empirical evidence, this estimate is consistent? Minor comments: - p 2 c.1: use $\\left( \\text{ and } \\right)$ in equations. - p.5 c.1: "then we fit those correlation": what do you mean? - p.6 c.1: maybe add a sentence to say that iVAE is introduced to be used as a benchmark reference.
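On the "regularization" question above: my guess (an assumption on my part, not something stated in the paper) is that the step amounts to the usual eigenvalue-clipping projection of the pairwise-assembled matrix onto the cone of valid correlation matrices, roughly:

```python
import numpy as np

def project_to_correlation(R, eps=1e-6):
    """Project a symmetric matrix assembled from pairwise estimates onto the set of
    valid (positive-definite, unit-diagonal) correlation matrices by eigenvalue clipping."""
    R = (R + R.T) / 2                      # enforce symmetry
    vals, vecs = np.linalg.eigh(R)
    vals = np.clip(vals, eps, None)        # clip negative / zero eigenvalues
    R_pd = vecs @ np.diag(vals) @ vecs.T
    d = np.sqrt(np.diag(R_pd))
    return R_pd / np.outer(d, d)           # rescale back to a unit diagonal

# toy example: a pairwise-filled "correlation" matrix that is not positive definite
R = np.array([[ 1.0, 0.9, -0.9],
              [ 0.9, 1.0,  0.9],
              [-0.9, 0.9,  1.0]])
print(np.linalg.eigvalsh(project_to_correlation(R)))   # all strictly positive
```

If that is indeed what BLICA does, the consistency question raised above reduces to whether the pairwise estimates converge to a jointly valid correlation matrix, in which case the projection is asymptotically the identity.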
Meta Review: I had trouble with this paper and I have to say that I am more skeptical than the reviewers, who were generally positive. Some of the concerns were raised by the reviewers and everybody seemed happy after the rebuttal, so I will not push this any further, although I expect that the authors can clarify considerably, e.g., on the use of segments in their set-up. My main concern is that the title is misleading in many ways. First, it suggests that non-stationarity is handled in some special way in this paper, but it is not. Second, such a general title as "binary ICA" suggests that they came up with a canonical way of dealing with ICA for binary data. However, their approach is quite specific. The choice of this specific link function is not well motivated in the paper (it is in the replies). I would add that in classical ICA the mixing matrix has a concrete physical interpretation, but here such an interpretation is missing. This is of course an easily fixable concern and I hope the authors will adjust their title. My other concern is about the identifiability results. I am not saying that the results are wrong but that the authors do not have a good understanding of the identifiability issue in this particular scenario, and here the paper looks underdeveloped. But after addressing the comments of the reviewers and a title adjustment, I think this could be an interesting paper.
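On the link-function question raised above: one standard motivation for exactly this scaling (my own reading; the meta-review notes the motivation appears only in the author replies, which are not quoted here) is that $\\Phi(\\sqrt{\\pi/8}\\, y \\mid 0,1)$ is the classic probit approximation of the logistic sigmoid, obtained by matching slopes at the origin:

$$\\sigma'(0) = \\tfrac{1}{4}, \\qquad \\frac{d}{dy}\\,\\Phi\\big(\\sqrt{\\pi/8}\\,y\\big)\\Big|_{y=0} = \\frac{\\sqrt{\\pi/8}}{\\sqrt{2\\pi}} = \\tfrac{1}{4},$$

so the scaling makes the Gaussian-CDF link behave essentially like a logistic link while keeping the Gaussian machinery of the model.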
This paper studies the sample complexity for learning heuristic functions for GBFS/A* search on a graph with a fixed number of nodes $n$. The analysis uses the PAC learning framework, and the main results show upper and lower bounds on the pseudo-dimension of a class of utility functions in which each utility function associates a search task to a scalar value between 0 and $H$. The paper also continues to provide upper bounds on the expectation of gaps between the optimal costs and the suboptimal costs, where the expectation is taken over search tasks sampled from some distribution $D$, and the bounds are given in terms of the number of samples and the number of nodes. Strengths are the mathematical analysis of the sample complexity for learning heuristic functions for graph search tasks using GBFS/A*. Weaknesses are that this analysis emphasizes theoretical aspects and is missing practical implications of the upper bounds. I think this work is not relevant to this section. <doc-sep>The paper presents bounds on the sample complexity required for learning heuristic functions to guide greedy best-first search and A* search for solving the shortest path problem in a given graph. The classical approach to best-first search (and heuristic search in general) is to provide it with a handcrafted heuristic (which is typically obtained by solving a relaxed version of the original problem) in order to guide it more effectively towards the optimal solution. However, more recent work aims to learn the guiding heuristic directly from some training data, which could be more appealing in some cases. Therefore, deriving bounds on how much data is required to learn a heuristic function with certain guarantees is called for. The paper is fairly well written and organised. The quality of the presentation is overall very good and therefore the paper is relatively easy to follow. Most of the concepts and technical details are introduced and discussed in a fairly clear manner. I think the paper needs a more detailed running example. Otherwise it's not very easy to follow the details, especially for a reader who's not very familiar with this research area. Minor comments: - Definition 1: there is a typo, $h(y_i) \\geq t_i$ instead of $h(y_i) \\geq z_i$. See above <doc-sep>This theoretical paper presents sample complexity bounds for learning heuristics for A* and best-first search. It shows an $O(n \\log n)$ upper bound on the pseudo-dimension of BFS and $O(n^2 \\log n)$ for A*, with $\\Omega(n)$ lower bounds for both. It shows that the upper bounds are nearly tight, but can be improved for A* when bounding edge weights and variable degrees. Moreover, when learning a potentially suboptimal heuristic function, the paper gives an upper bound on the suboptimality. The paper is relatively straightforward, in the sense that it gives clear questions and clear answers. It is well written, and explains the weaknesses of the results, namely the relatively big gap between the bounds on the pseudo-dimension of A*, as well as gives some explanation of why it is hard to bridge them. I don't see any major weaknesses. I would suggest that another interesting direction here is looking at A* for planning. The graph is obviously exponentially large, so the bounds here are useless, but it has a compact representation (e.g. the STRIPS model). Could some heuristics be learned efficiently in that setting? ---------- Typos, etc: Defn: you use t_1, ..., t_N for the values in the text and z_1...z_N in the formula. 107: disrtibution. 154: gaurantees. No direct societal impact.
Strong paper studying the sample complexity of learning heuristic functions for GBFS and A*. The reviewers were especially impressed with the theoretical results and found the paper a worthwhile contribution to this conference.
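As background (this is the generic uniform-convergence template from standard learning theory, an assumption on my part rather than the paper's exact statement), pseudo-dimension bounds of this kind control estimation error roughly as

$$\\sup_{h\\in\\mathcal{H}} \\Big| \\mathbb{E}_{x\\sim D}[u_h(x)] - \\tfrac{1}{N}\\textstyle\\sum_{i=1}^{N} u_h(x_i) \\Big| \\;\\le\\; O\\Big(H\\sqrt{\\tfrac{\\mathrm{Pdim}\\cdot\\log N + \\log(1/\\delta)}{N}}\\Big)$$

with probability at least $1-\\delta$, so the gap between the $O(n \\log n)$ (GBFS) and $O(n^2 \\log n)$ (A*) upper bounds translates directly into how many sampled search tasks are needed for the same guarantee.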
This paper proposes a compression method for Transformer-based encoder-decoder or language models. The key idea of the proposed method is to decompose the standard parameters into a much smaller shared parameter matrix and independent parameters for each original matrix. Then, the method can approximately recover the original Transformer models by simple additions and multiplications. The experiments are conducted on three MT tasks, one summarization task, and one language modeling task. Experimental results show that the proposed method seems to reduce model sizes and computations successfully while preventing considerable performance degradation (in some cases, the proposed method appears to improve the performance). The idea of the proposed method is interesting, but there are a few concerns in terms of the presentation. Therefore, it is hard to judge whether this paper has enough contribution for publication as a conference paper. The following are my concerns in the current version. ### 1. Technical novelty * The idea of the proposed method is interesting and might be effective. However, the idea itself of sharing parameters is not very innovative. I think that sharing parameters for compressing DNNs is a standard technique nowadays. Therefore, the authors need to clarify the contributions of the proposed method, such as the unique properties that previous similar compression methods cannot achieve. Currently, I do not find any strong properties in the proposed method. * If my understanding is correct, the proposed method is a reconstruction method. Therefore, we need a trained model to apply the proposed method. This means the proposed method requires additional computation. I do not fully understand why this paper compares the computational cost with the standard Transformer. ### 2. Notation and equations * The notation is incredibly messy and hard to understand. The authors need to make the notation much simpler so that readers can understand it better. ### 3. L1 constraint * If my understanding is correct, the relaxed L1 constraint does not guarantee finding a solution that satisfies the threshold on the number of non-zero factors. This paper does not seem to explain what happens if such a situation occurs in the solution. ### 4. Typo or misconfiguration? * In Table 1, it reports the results for WMT De-En and WMT Fr-En. However, at the beginning of Section 4, the experiments are conducted on WMT "En-De" and "En-Fr," which are not "De-En" and "Fr-En." ### 5. Confirmation of model sizes * According to the original Transformer paper [1], the numbers of parameters of Transformer (base) and (large) are 64M and 213M, respectively. However, in the experiments, the model size of the baseline Transformer is 3.6M (as shown in Table 1) for WMT En-De. Moreover, I checked previous papers, such as the "Lite Transformer" paper (Wu et al., 2020) and the "Pay less attention" paper (Wu et al., 2019). However, I could not find the precise experimental settings used in this paper. I recommend clearly showing the model configurations and hyper-parameter settings to ensure reproducibility. Otherwise, the reproducibility of the proposed method may not be sufficient. [1] Vaswani et al., Attention Is All You Need, In Proc. of NIPS-2017. ### 6. Inconsistent results in Tables 1 and 3 * I thought that the ablation study of Table 3 was based on the results (settings) of Table 1. However, the numbers of parameters shown in Tables 1 and 3 differ entirely, so I do not understand the meaning of the ablation study in Table 3.
Please confirm it and clarify the configuration difference between Tables 1 and 3. Moreover, explain the results of the baseline Transformer and the proposed method corresponding to the ablation results in Table 3. * Additionally, it seems that there is no description of what the "Improved-Embedded" shown in Table 3 is. If I missed the description, please let me know. If the paper lacks the explanation, this is a clear problem for this paper in terms of completeness. The idea of the proposed method is interesting and might be effective. However, the idea itself of sharing parameters is not very innovative and is rather incremental. The experimental settings are ambiguous and seem to be very weak. <doc-sep>This paper describes a technique for reducing the size and computation of a Transformer model by projecting and factoring weight matrices. Experiments on MT, summarization, and language modeling show improved results over competing techniques, and even over standard Transformers, despite using significantly fewer parameters and less computation. The paper contains a lot of substance, but it is very dense and hard to follow. The core “dictionary” technique isn’t really explained at a high level before the paper plunges into the details. It seems to be something like the approach in [1] but it’s difficult to be sure (I gave up on section 3 after a while). The results in section 5 are very impressive, but some intuition about why a compressed approach like this could beat a much larger baseline on large data settings really needs to be provided. [1] Kaiser, Lukasz, et al. "Fast decoding in sequence models using discrete latent variables." International Conference on Machine Learning. PMLR, 2018. Details: - The “first line of research”: it would be good to add a word or two saying how these papers reduce computational complexity. - Figure 1 is really great, but you should say where these stats come from. - Figures 2 and 3: captions crash into text. - This is hard to understand: “ In this paper, the #Params omit word embedding size that would highly dependent on the sentence length and would significantly differ for various tasks. The total #Params in this paper includes the model size of word embedding.” - It’s difficult to align table 3 with figure 5. You should include a line corresponding to the point in figure 5 with highest BLEU (higher than anything that appears in table 3). Potentially a great paper, but if so it deserves to be much better explained. <doc-sep>This work proposes a modification of the original Transformer architecture by replacing attention layers and layers in its Feed-Forward Networks across all of its blocks with learned shared dictionaries. The proposed model, called DictFormer, has a smaller number of parameters and uses a smaller amount of computational operations when compared to the original Transformer and some of its variations. When evaluated against these models on popular machine translation, summarization, and language modeling benchmarks, DictFormer achieves comparable or better performance. ### Strengths - The proposed modification to the Transformer architecture reduces the number of model parameters and computational operations while sustaining competitive performance on various downstream tasks. - To the best of my knowledge, the idea of replacing layers of the Transformer with shared dictionaries is novel.
### Room for Improvement *Shared-dictionary Attention* - I might be missing something but why is it stated that the unshared linear projection $\\tilde{W_{i}^{Q_{j}}}$ is approximately equal to $W_{i}^{Q_{j}}$? My understanding is that this is not directly optimized for in the model. *Group-wise Shared Dictionary FFN* - The motivation behind dividing columns of the dictionary into groups is a bit unclear. What is meant by “high-quality performance” of the shared dictionary projection? Also, have the authors considered using a larger number of dictionary elements $m$ to increase the “flexibility” of the model? - How is the number of groups $G$ determined? *Training the DictFormer* - Since the sparse matrix Z is initialized using values in $C$, how are coefficients $C$ initialized? *Results tables* - Missing confidence intervals. Were the experiments run with multiple seeds? *Suggested related work* - How is this work related to work on Sparse Transformers (e.g. [1], [2]) or fixed attention such as [3], [4]? [1] Child R, Gray S, Radford A, Sutskever I. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509. 2019 Apr 23. [2] Correia, G.M., Niculae, V. and Martins, A.F., 2019. Adaptively sparse transformers. arXiv preprint arXiv:1909.00015. [3] You, W., Sun, S. and Iyyer, M., 2020. Hard-coded gaussian attention for neural machine translation. arXiv preprint arXiv:2005.00742. [4] Raganato, A., Scherrer, Y. and Tiedemann, J., 2020. Fixed encoder self-attention patterns in transformer-based machine translation. arXiv preprint arXiv:2002.10260. *Additional questions* - Is it necessary to have the dictionary size less than the embedding size, namely $m < d$? Would it not be feasible to have a large dictionary ($m > d$) but keep the number of selected components $t$ small (i.e. $t < d$) through a sparsity constraint? - Have the authors tracked whether all columns of the dictionaries are used in practice? - Have the authors tracked what percentage of the $t$ coefficients are non-zero on average? *Nitpicks* Typos: - p. 2, first line: “few unshared linear projection*s*” - p. 3, “Overview” paragraph: “given a*n* accuracy threshold” - p. 4, paragraph starting with “The reason why...”: “C_{i}^{x}” - should not $x$ be capitalized? - p. 5, “Group-wise Shared-dictionary FFN” paragraph: “a $N d \\times d$ weights” -> “$N$ weights of size $d \\times d$” - p. 6, Figure 4: “training sparse coefficients” -> “we train sparse coefficients” - p. 6, first sentence of “Training DictFormer via Constraints and Relaxation” paragraph: “linear projections of *a* dictionary” - p. 7, last paragraph of “Architecture and Evaluation” paragraph: switch first sentence to present tense; “total #Params *i*n...” - p. 8, “Machine Translation” paragraph: “DictFormer obtain*s* more compact” - p. 8, “Sensitive Study” paragraph: rename to “Sensitivity Study” - p. 9 , first paragraph: “coefficient size is fixed *to* 60” - p. 9 , “Ablation” paragraph, first sentence: missing space after period - p. 9, “We will release code and data...”: Is there data to be released? The proposed modification to the Transformer architecture is novel and I believe would be interesting for the community but the methodology and motivation could be explained more clearly and provided with more context, including more details on the hyperparameter selection and on how the DictFormer is trained. The experimental results would be even more convincing if confidence intervals are provided. 
#### Updates during paper discussion Based on the authors' responses to the reviewers' questions and updates to the manuscript (including clarifying some of their methodology and statements and including confidence intervals in the results section), I've decided to increase my score. <doc-sep>The authors proposed an efficient transformer layer based on a dictionary of shared parameters instead of standard self-attention. The goal is to reduce redundant parameters in transformer models. The main contributions are: a lite transformer model, modification of the self-attention parameters, and evaluation on language downstream tasks. The proposed transformer model outperforms related work on the machine translation and language modelling tasks. Strengths - Clear description of background knowledge. - Clear exposition of the proposed model. - The authors perform a comprehensive comparison on different downstream tasks, such as machine translation, summarization, and language modeling. - The findings show that the proposed transformer model outperforms related work on the machine translation and language modeling tasks. Weaknesses - It is not clear how the initialisation of hyper-parameters affects model performance. Questions to the Authors. Please address the following questions during the rebuttal: - Could parameter initialization affect model performance? A possible extra contribution is to perform multiple random runs and report variance. However, how expensive could this exercise become? - Please speculate on how attention representations behave across layers. For example, in Abnar, and Zuidema Quantifying Attention Flow in Transformers or Voita, et al. The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives - By using other pre-training objectives in the language modelling task (e.g., next sentence prediction), would it change any findings or results? I recommend acceptance given that the paper clearly describes related work and the proposed model. The authors proposed an efficient transformer model that can be trained with fewer resources. The authors perform an evaluation of the proposed model with different language downstream tasks, and the model outperforms related work on machine translation and language modelling.
DictFormer is a method to reduce the redundancy in transformers so they can be deployed on edge devices. In the method, a shared dictionary across layers and unshared coefficients are used in place of weight multiplications. The authors propose an L1 relaxation to train the non-differentiable objective to achieve both higher performance and lower parameter counts. All reviewers ended up giving the paper a score of 6 after increasing their scores during discussions. While the results are strong (better performance at much lower parameter counts), the paper is not clearly written. Several reviewers noted that the paper is difficult to understand and has a few unresolved points. For example, the method also ended up performing better than the base transformer model that DictFormer is supposed to compress. There seems to be a lack of understanding about what part of the model delivered the improvements. One reviewer said that this is potentially a great paper that deserves to be better explained. The basic concept of sharing a dictionary across layers should be simple enough to explain well and deserves a better explanation than Eq. 5. The authors promise to release the code, which would be necessary for a full dissemination of this work. I recommend acceptance.
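To make the shared-dictionary idea concrete for readers, here is a minimal numerical sketch of my reading of it (the shapes, the factorization $W \\approx DC$, and the sparsity pattern are illustrative assumptions, not the paper's exact Eq. 5): one dictionary is shared by every layer, each layer stores only a sparse coefficient matrix, and the shared product can be computed once and reused.

```python
import numpy as np

d, m, t = 512, 240, 60        # hypothetical sizes: model dim, dictionary atoms, non-zeros per output unit
rng = np.random.default_rng(0)

D = rng.standard_normal((d, m)) / np.sqrt(d)   # dictionary shared across all layers
C = np.zeros((m, d))                           # per-layer coefficients: each column mixes only t atoms
for j in range(d):
    idx = rng.choice(m, size=t, replace=False)
    C[idx, j] = rng.standard_normal(t)

x = rng.standard_normal((8, d))                # a batch of token representations
shared = x @ D                                 # computed once, reusable by every layer sharing D
y = shared @ C                                 # cheap per-layer mixing, equivalent to x @ (D @ C)

assert np.allclose(y, x @ (D @ C))
dense_params, dict_params = d * d, d * m + t * d
print(f"dense layer: {dense_params} params, dictionary layer: {dict_params} values (+ indices)")
```

The parameter saving comes from storing a single $d \\times m$ dictionary plus $t$ coefficients per output unit per layer, instead of a dense $d \\times d$ matrix per layer; the open question the meta-review highlights is why this constrained form sometimes also improves accuracy.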
This paper is a continuation of an original associated learning paper by Kao & Chen (2021). It attempts to propose a new learning approach, associated learning (AL), as an alternative to back-propagation. On top of the original paper, it discovers more interesting properties and extends AL to CNNs, LSTMs and transformers (though lacking sufficient details). The authors have resolved my concerns on the technical details of how AL is applied to RNNs and Transformers. ================================ The paper is well written. Experiments show that in text classification and image classification, the proposed method outperforms BP in some basic architecture settings. Here are my concerns: - It is unclear how AL is applied to RNNs and Transformers. Section 2.1.1 only describes them very briefly, and I could not figure out some of the details. For example, how is the temporal data processed in the LSTM? - In CNNs, flattening the hidden representation also loses the spatial information in the feature maps. Furthermore, how can $s_i$ be converted to $s_i'$ if $t_i$ is also a 3D feature map, once the spatial information is lost? - From the description in Section 2, it seems that AL introduces around double the parameters for a given neural network. What is the impact of the increased parameter count on computation cost? - The experiments use relatively simple network architectures for text classification. Do the same benefits carry over to large transformer models, and do they still beat currently popular models like BERT? - The architecture information on the CNN in section 3.3 is missing. - If my understanding is correct, the proposed architecture would not work for sequence generation tasks, the way LSTMs and transformers can. Right? In summary, I think that though the paper proposes the AL framework as an alternative to BP, it is actually a simple extension of previous work and does not propose substantially new ideas. Some details are missing, and the experiments are not extensive enough to cover state-of-the-art architectures. <doc-sep>Associated Learning puts forth a template that can be applied to almost any network to achieve faster training and inference. They apply their template to several existing deep learning models and perform experiments that show they can achieve comparable if not better results with less training time. The paper clearly lays out the advantages of associated learning: faster inference, dynamic layer accumulation, and pipelining. The paper is clearly written with good figures. The experiments appear to be easily reproducible, too. The decrease in epochs needed for LSTMs is particularly impressive. I found the biological basis a little lacking. Perhaps some type of curriculum learning or more exploration on what the various shortcuts are doing could make this argument stronger. The related works section neglects to mention other gradient-isolated methods like https://arxiv.org/abs/1905.11786. I think in some ways this work can be seen as encoder-decoder with additional regularization, too? I would recommend accepting this paper. While there are several issues, the empirical results are strong (particularly the LSTM reduction in epochs). I think there is a lot more to explore with the dynamic layer accumulation and gradient isolation, too, that would be interesting to other researchers. <doc-sep>This paper studies and benchmarks an alternative to back-prop named associated learning. They analyze the pros and cons. Thank you for this read. The results and the methodology are definitely compelling.
The reasons I cannot accept the manuscript as is are: · The motivation is not clear enough. It is clear with respect to why BP is not ideal. But it is not clear how you landed on this method specifically, as compared to many other attempts at finding better neural network optimization methodologies. · Section 2 is very difficult to follow. I would spend some more effort explaining how your method works in the manuscript. · It would be nice to include an algorithm describing how to implement AL. A selection of minor comments: · Some typos throughout the manuscript, e.g., in the abstract "associate" and paragraph 4 in the introduction "in Section 4 We". · Notation must be introduced, e.g., f, y, etc. in Section 2 are not introduced properly in relation to Figure 1. · It is difficult to follow the difference in notation when using h, b, and f. I recommend you spend some more time on making this very clear to the reader. · I find the Table 2 epochs for AG News difficult to follow. There is a clear pattern that AL is faster, but then things change radically for AG News? Some further analysis of this would be nice. With some more clarification on how you ended up with this methodology and a clear algorithm for how to implement AL, the reviewer would be happy to accept the manuscript. <doc-sep>This paper proposes associated learning (AL) for CNNs, RNNs, and transformers. Different from back-propagation (BP), AL decomposes BP’s global end-to-end training strategy into several small local optimization targets such that each sub-network has an isolated gradient flow. To achieve this, the paper proposes to map input $x$ and output $y$ into intermediate AL layers and to perform metric learning (e.g., $t_1=b_1(s_1)$) and auto-encoder learning ($t_1=t_1'$), as shown in Figure 2. Moreover, each AL layer can be optimized locally. The idea is interesting. The experiments demonstrate the effectiveness on IMDB Review, AG’s News corpus, DBpedia Ontology, the Stanford Sentiment Treebank, CIFAR10, and Fashion-MNIST. First, as in Figure 2, the paper proposes to map input $x$ and output $y$ into a latent space; metric learning ($f(x)=g(y)$) and auto-encoder learning ($y=h(g(y))$) are also investigated in multi-label classification [r1, r2], which is not discussed in this paper. In my opinion, the main difference is the design of multiple latent spaces compared with these multi-label classification methods. Second, in traditional machine learning, we often map a high dimensional space to a low dimensional space for metric learning. It is unclear why the target $y$ is mapped to the intermediate layers in this paper. Given a high dimensional space (e.g., images), the inference model extracts useful features and filters unrelated features for metric learning. However, in this paper, I find that the authors conduct experiments on some single-label classification tasks (e.g., CIFAR10 and Fashion-MNIST). In this case, $y$ is a scalar or a one-hot vector, and I am curious about the exact form of $g_1$, $g_2$, $g_3$ in Figure 2. Does the proposed method map a low dimensional latent space to a high one? What is the motivation for expanding the representation space? If $g_1(y)$ and $g_2g_1(x)$ are still in a low dimensional space or the $g_i$ are very simple, do we really need inverse transformations from $Y$ to the AL layers? In this case, we can simply fuse different AL layers into the top layers for metric learning. For example, we can move $y$ after $t_3$ and remove $h_1$, $h_2$, $h_3$ in Figure 2.
Since $y$ is a specific label, it is unclear why we need to map it to a high dimensional space. The design in multi-label classification is reasonable to me because the target $y$ is complex (e.g., the multi-label vectors could miss some labels) and the multi-labels could be in a high dimensional space. In this case, one can map the high dimensional spaces $X$ and $Y$ into a low dimensional latent space for metric learning. [r1] Learning Deep Latent Spaces for Multi-Label Classification, AAAI 2017. [r2] Multi-label Classification via Feature-aware Implicit Label Space Encoding, 2014. ... Third, it would be better to set a baseline by moving $y$ after $t_3$ and removing $h_1$, $h_2$, $h_3$ in Figure 2 for comparison. Fourth, the architecture of the AL layers is similar to Ladder networks. It is suggested to analyze the differences. [r3] Semi-Supervised Learning with Ladder Networks, 2015. (1) The proposed method can be optimized locally and achieves competitive results. The proposed framework can be used for CNNs, RNNs, and transformers. The idea is interesting. (2) More analysis of the motivation for, and the necessity of, the inverse transformation from $Y$ to the latent space is needed. (3) The analyses and discussions of related works, such as multi-label classification and ladder networks, are missing. (4) Some experiments are suggested to support the authors' opinions (e.g., (2) and (3)) if possible.
The authors propose a method for associated learning as an alternative to back-propagation-based learning. The idea is interesting. The coupling between layers is broken down into local loss functions that can be updated independently. The targets are projected to previous layers and the information is preserved using an auto-encoder loss function. The projections from the target side are then compared with the projections from the input side using a bridge function and a metric loss. The method is evaluated on text and image classification tasks. The results suggest that this is a promising alternative to back-propagation-based learning. Pros + A novel idea that seems promising + Evaluated on text and image classification tasks and demonstrated utility Cons - The impact of the number of additional parameters and the computation is not clarified (even though the number of epochs is lower) The authors utilized the discussion period very well, running additional experiments that were suggested (especially ablation studies). They also clarified all the questions that were raised. In all, the paper has improved substantially from the robust discussion.
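To make the mechanism described above concrete, here is a rough PyTorch-style sketch of one associated-learning block as I understand it (the module names f, g, b, h and the use of MSE losses are illustrative assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ALBlock(nn.Module):
    """One associated-learning block (illustrative): f advances the input path,
    g advances the target path, b bridges the two, h decodes the target back."""
    def __init__(self, s_dim, s_out, t_dim, t_out):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(s_dim, s_out), nn.ReLU())   # s_i -> s_{i+1}
        self.g = nn.Sequential(nn.Linear(t_dim, t_out), nn.ReLU())   # t_i -> t_{i+1}
        self.b = nn.Linear(s_out, t_out)                             # bridge: input path -> target path
        self.h = nn.Linear(t_out, t_dim)                             # decoder for the auto-encoder loss

    def forward(self, s, t):
        s_next, t_next = self.f(s), self.g(t)
        loss = F.mse_loss(self.b(s_next), t_next) + F.mse_loss(self.h(t_next), t)
        # detach: no gradient flows between blocks, so each block trains on its local loss only
        return s_next.detach(), t_next.detach(), loss

# toy usage: two blocks, each with its own optimizer, trained on purely local losses
blocks = [ALBlock(32, 64, 10, 16), ALBlock(64, 64, 16, 16)]
opts = [torch.optim.Adam(b.parameters(), lr=1e-3) for b in blocks]
s, t = torch.randn(8, 32), torch.randn(8, 10)
for block, opt in zip(blocks, opts):
    s, t, loss = block(s, t)
    opt.zero_grad(); loss.backward(); opt.step()
```

Stacking several such blocks and giving each its own optimizer step yields the layer-wise, gradient-isolated training the reviews describe; at inference time only the input path and the final bridge/decoder need to be evaluated, which is presumably where the claimed faster inference comes from.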
The authors present a unifying framework for object-centric learning, bringing together a wide array of distinct methods under a single framework. I personally find this flavor of manuscripts particularly useful, as they've previously helped me better understand fields of research (for example Cunningham and Ghahramani (2015) is in a similar spirit albeit for a different set of problems). I think the current manuscript will be a valuable addition to the workshop and serve to generate useful discussion within the community. However, one aspect of the manuscript which I felt could potentially be improved (perhaps in future iterations of the work) is the insights that can be gained from the proposed interpretation of object-centric learning. For example, given that the authors propose to interpret object-centric learning as nested optimization, perhaps there are relevant methods from the (nested) optimization literature which could now be more easily ported over and used to improve object-centric learning. Or instead, perhaps the proposed framework can be used to further outline similarities/differences between current work. Minor/typos: - Abstract: "promising results in unsupervised decomposition simple visual scenes .." -> "promising results in unsupervised decomposition of simple visual scenes.." References: - Cunningham, John P., and Zoubin Ghahramani. "Linear dimensionality reduction: Survey, insights, and generalizations." The Journal of Machine Learning Research 16.1 (2015): 2859-2900. <doc-sep>The paper aims to identify the underlying computational problem that existing iterative approaches to object-centric learning are trying to solve. Specifically, the paper classifies existing approaches into two categories: those that meta-learn posterior inference and those that meta-learn parameter estimation. The paper then proposes an optimization problem that unifies these two categories, where the inner layer optimizes the ELBO with respect to the per-datapoint parameters (e.g., slot representations, cluster assignments), and the outer layer optimizes the task objective (e.g., reconstruction, classification) with respect to network weights (e.g., encoder and decoder). The paper also suggests some connections to other fields. Pros - The paper is well-motivated. A unified problem formulation can shed light on ways to improve the existing methods. Cons - The clarity of the paper can be improved. For example, I didn't understand the key difference between the two proposed categories. In particular, why can't Slot Attention fit in the first category? - I am not sure whether the proposed framework is general enough. In particular, why does the inner objective have to be the ELBO? The paper mentioned that the soft k-means algorithm is known to monotonically improve the ELBO. However, in Slot Attention the soft k-means algorithm is replaced by learnable updates. It is unclear whether the learnable updates are still optimizing the same objective.<doc-sep>The authors unify the iterative algorithms in object-centric learning methods into a particular nested optimization problem whose inner step solves a maximization of the ELBO. They map both meta-learned posterior inference and meta-learned parameter estimation in existing methods to the same nested optimization problem and interpret it as the essence of object-centric learning. Although unification is always what scientists pursue, a simple combination is not enough. Some pivotal questions should be answered in this paper. 1.
Why do we need unification in object-centric learning methods? What is the advantage of regarding these practical algorithms as a theoretical optimization formulation? 2. Are there any detailed examples showing that your idea can make a difference to recent research?
This paper is relevant to the workshop and outlines an interesting connection between iterative object-centric representation learning approaches and nested optimization. As such, I believe it can provide a valuable contribution to this workshop, despite not having shown an immediate practical advantage of this unification, as pointed out by reviewer iYd1. We encourage the authors to take the reviewers' feedback into account when preparing the camera-ready version of the paper.
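Schematically, the nested problem described by the reviewers can be written as follows (the notation is mine: $\\lambda$ for per-datapoint parameters such as slot representations or cluster assignments, $\\phi$ for the encoder/decoder weights):

$$\\min_{\\phi}\\ \\mathbb{E}_{x\\sim\\mathcal{D}}\\big[\\mathcal{L}_{\\mathrm{task}}(x, \\lambda^{*}(x;\\phi))\\big] \\quad \\text{s.t.} \\quad \\lambda^{*}(x;\\phi) = \\arg\\max_{\\lambda}\\ \\mathrm{ELBO}(x, \\lambda; \\phi),$$

with the inner $\\arg\\max$ approximated by a few iterative (possibly learned) refinement steps, e.g. soft k-means or Slot Attention updates.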
The paper proposes a new method of updating deep neural networks for combinatorial optimization problems during search using reinforcement learning. In particular, the authors show that by updating only part of the network, better results can be achieved at lower cost. They describe and evaluate their method on different combinatorial optimization problems, comparing to other machine-learning-based approaches as well as "traditional" solvers. The paper presents an interesting idea that seems to have a large impact. The evaluation is thorough and fair, and the results are convincing. This is a good paper that should be accepted. There are a few minor points that were unclear to me and might warrant further discussion. The results for the TSP in Table 1 show that Concorde is often the fastest solver. This is somewhat counter-intuitive, especially compared to LKH, as it is a complete solver. The proposed method is also often much slower. Some explanation of this would be helpful for the reader to understand what exactly is going on there. The most nebulous part of the proposed method to me is the placement of the new layer, which sounds like it might be quite difficult in practice and potentially require expensive evaluation of different alternatives. A more in-depth discussion of how the authors determined this for their experiments, along with some recommendations on how to do this in a new setting, would make the paper stronger and more applicable in practice. Interesting method with promising results. <doc-sep>The paper deals with end-to-end learning of heuristics for combinatorial optimization problems. The authors propose an extension of the active search method of [Bello et al 2016], where only part of the model parameters are updated at test time for each instance. They propose three ways of applying this idea that consist in fine-tuning part of the instance embeddings, the parameters of an additional layer, or directly the prediction scores of the model. Applied to the POMO method [Kwon et al 2020] for the TSP and CVRP and the L2D method of [Zhang et al 2020] for the JSSP, the proposed efficient active search leads to significant improvements on instances of the same size as and larger than the training ones. **Strengths** 1. The paper is clear, well organised and well written 1. The presented approach seems applicable to any constructive method as long as it has an encoder/decoder type of architecture 1. In the experiments, the proposed approach is applied to 2 models for solving 3 different problems and the results are consistently positive, which hints at the generality of the proposed approach. 1. It improves the performance of the underlying model on test instances from the same distribution as the training instances as well as on larger instances (from the same distribution), effectively addressing the well-known difficulty of standard learning-based models to perform well on larger instances 1. Nice discussion in Sec 4.4 to explain possible reasons why one of the proposed variants works best for each problem.
For CVRP, results are indeed provided for other distributions. But for each family of instances, the authors say that the model is trained for 3 weeks and then tested on similar instances for each family. Could EAS be helpful to learn good solutions starting from the same model? **Recommendation** I would vote for accepting the paper. The contribution is interesting as a middle ground between fine-tuning a whole model for each instance (active search) and other non-learning based search strategies (beam-search, sampling, etc). The proposed approach is illustrated on 2 models for 3 standard CO problems and consistently shows good results. **Questions** 1. What is the motivation of adding a new layer to fine-tune (EAS-Lay) versus fine-tuning some existing layers of the model? 1. In Tables 1 and 2, why are there no entries for most of the learning-based baselines for N >= 125 ? Since there is no strict time-limit in this setting, I guess with a small-enough batch size for the models to fit into memory, all these methods would provide some results for N up to 200. 1. In Appendix Table 4, have you tried simply applying EAS to the model trained on the uniform distribution CVRP100? That would be a natural test of the impact of EAS on generalization. 1. Looking at Figure 3, it seems the value of the best lambda depends on the problem and the range of potential values is quite wide (0,01-100). Have you checked the scale of the different losses and could it help explain such a difference? 1. If one were to apply EAS to another model/CO problem, could you deduce from your experiments a general kind of rule of thumb of which variant would work better for which situation? **Additional feedback** * In Introduction * “these methods do not react towards the solutions seen so far, i.e., the underlying distribution from which instances are sampled is never changed throughout the search”. Not clear to me. Do you mean …from which solutions are sampled? * “..wide adaption” —> adoption * Sec 3.Background: the decoder is introduced with parameter $\\omega$, but this one is only defined as the embeddings in the next paragraph * Sec 3.1: In the definition of the total gradient right after equation (2), shouldn’t there be a minus before one of the gradients? Since one gradient corresponds to the minimisation of the cost and the other to the maximisation of the likelihood. * Figure 2: y-axis is the average costs. Optimality gap would be more relevant (and consistent) especially if instances have different sizes * The paper provides an interesting contribution to learn to search for high-quality solutions at test time, nicely completing end-to-end learning pipelines for solving CO problems. The proposed approach could be applied to any model that has an encoder-decoder type of architecture, and is experimentally validated on 2 models and 3 problems. A limitation is that it is not clear if it could help a model generalize to instances that are much larger than the training ones, or with significantly different characteristics. I vote for accepting the paper. ### Update after rebuttal I thank the authors for precisely answering all my questions and concerns. I am happy to confirm my initial recommendation of accepting the paper. <doc-sep>The paper studies machine learning-based methods for combinatorial optimization. The paper builds upon Bello et al. (2016) on using reinforcement learning to generate solutions for combinatorial optimization problems (e.g., TSP). 
The novelty of the paper is to optimize only a subset of the model parameters. The paper then proposes three different implementations based on this idea. Significance: One limitation of existing RL-based approaches for combinatorial optimization is their resource requirements. As demonstrated in Table 1, the active search technique in Bello et al. (2016) takes 5 days to solve 10,000 test instances of TSP. The paper aims to tackle this limitation by proposing to optimize only a subset of the model parameters. I think the paper is making a good and meaningful contribution towards research in the field. Novelty: The paper extends the active search method in Bello et al. (2016). The three proposed implementations are based on one general idea of optimizing only a subset of the model parameters. The novelty of the proposed technique is therefore limited. However, if the method performs well, its simplicity could be of high interest. Presentation: The paper is well-written. The related literature is discussed in detail. The experimental results are clearly presented, with an ablation study and a trajectory analysis. There are some minor ambiguities in presenting the proposed techniques, as elaborated below. There are some ambiguities in the paper: 1. On page 4, below figure 1, the paper proposes the first strategy: update the embeddings using the loss functions $J_{RL}$ and $J_{IL}$. These loss functions are not explicitly specified anywhere in the paper. Only their gradients w.r.t. the embeddings are presented in Eq. (1) and Eq. (2). Readers who are not familiar with the RL/Imitation Learning literature may not know what $J_{RL}$ and $J_{IL}$ are. It would be great if the authors could be more explicit about the loss functions before presenting their gradients. 2. In Table 1, the authors provide wall-clock time for the proposed algorithms and other baselines on a set of 10,000 TSP instances. EAS achieves competitive performance while taking only 5-7 hours to run, as compared to 5 days using the original active search. Is this improvement due to: (a) EAS uses less memory, and hence we can solve more instances in parallel, or (b) EAS is computationally more efficient, i.e., it uses less CPU time to achieve the competitive performance, or (c) a combination of the above? It would be better if the authors reported the CPU time (instead of wall-clock time), and separately reported the space (memory) and time (CPU) usage of these algorithms. Minor comments: In figure 3, the x-axis should be labeled $\\lambda$. The paper proposes a simple extension to an existing RL-based method for combinatorial optimization. Its effectiveness is demonstrated empirically. However, I feel that the results of the experiments should be reported in greater detail, i.e., comparing with the original active search on different performance metrics such as memory and CPU time usage.
The authors propose three techniques, which adjust (1) the normally static embeddings of the problem instance that are generated by the encoder model, (2) the weights of additional instance-specific residual layers added to the decoder, and (3) the parameters of a lookup table that directly affect the probability distribution returned by the model. Strengths: - This paper studies an exciting area—machine learning for combinatorial optimization—where machine learning has the potential to make a big impact. - From the experiments (especially Tables 1 and 2), it looks as though the proposed approaches are much faster than the competitor, active search [Bello et al. ‘16], which (from my understanding) searches for ways to adjust all parameters of the trained model at test time. In contrast, the proposed approach only searches for ways to adjust specific subsets of model parameters, which makes the approach faster. - I appreciate that the authors evaluate their approach on a few different types of combinatorial optimization problems: two different types of routing problems and a scheduling problem. For the scheduling problem, the improvement over active search is a bit more modest. Weaknesses: - I found the problem description somewhat hard to follow. In Section 3, it would be helpful to clarify what exactly an “action” corresponds to in this setting. One way to do this would be to summarize the combinatorial problems studied in the experiments section and explain what an action corresponds to and what the state $s_{t+1}$ corresponds to after applying an action $a_t$. - In terms of solution quality, the improvements over problem-specific baselines are sometimes really small (e.g., in Tables 1 and 2, a fraction of a percentage point). On such a small scale, I wasn’t sure if I could trust the superiority of any particular method. Confidence intervals would really help here. Detailed comments: - Page 2: I’m not sure that “exemplary” is the right word here; I would remove it. - Equation (1): I’m not sure what you mean when you say that $b_0$ is a baseline used to reduce variance. Can you say more? Overall, I’m leaning toward acceptance because the proposed approach seems to provide a notable improvement over prior methods (in particular, active search by Bello et al. ‘16) in terms of runtime.
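As a rough illustration of the second technique above (test-time fine-tuning of an added layer while the pretrained policy stays frozen), the following sketch shows the general shape of such a loop. The model interface (`model.rollout`, `model.hidden_dim`) is hypothetical, and imitating only the best sample of the current batch is a simplification of imitating the overall incumbent solution:

```python
import torch
import torch.nn as nn

def efficient_active_search(model, instance, steps=200, samples=64, lam=0.1, lr=1e-3):
    # Freeze the pretrained policy; only the small instance-specific adapter is trained.
    for p in model.parameters():
        p.requires_grad_(False)
    h = model.hidden_dim                      # hypothetical attribute of the pretrained model
    adapter = nn.Sequential(nn.Linear(h, h), nn.ReLU(), nn.Linear(h, h))
    opt = torch.optim.Adam(adapter.parameters(), lr=lr)

    best_cost = float("inf")
    for _ in range(steps):
        # hypothetical API: sample `samples` solutions for this one instance,
        # returning their log-probabilities (with grad) and costs (no grad)
        logp, cost = model.rollout(instance, adapter=adapter, n=samples)
        rl_loss = ((cost - cost.mean()) * logp).mean()   # REINFORCE with a mean baseline
        il_loss = -logp[cost.argmin()]                   # push probability toward the best sample
        opt.zero_grad()
        (rl_loss + lam * il_loss).backward()
        opt.step()
        best_cost = min(best_cost, cost.min().item())
    return best_cost
```

The per-instance state that must be stored and updated is only the adapter, which relates to the question raised above about whether the wall-clock gains come from lower memory use enabling more instances to be processed in parallel.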
This paper gives a framework for using learning in combinatorial optimization problems. In particular, active search is used to learn heuristics. The reviewers thought the paper made nice conceptual contributions to this approach and that the results would be very interesting to the community.
This paper proposes a method for the detection of adversarial examples based on identification of critical paths (called "effective paths") in DNN classifiers. Borrowing from the analysis of execution paths of control-flow programs, the authors use back-propagation from the neuron associated from the final class decision to identify a minimal subset of input synapses accounting for more than a threshold proportion ("theta") of the total input weight. The identification process is then recursively applied at the preceding layer for those neurons associated with the selected minimal subset of synapses, forming a tree of synapses (the "effective path"). The authors then propose to compare the effective paths (actually, unions of paths) of different examples using simple structural dissimilarity measures, which they extend to allow comparison to a typical (aggregated) path for multiple examples drawn from a common class. In their experimentation with their measure, they noted that examples generated by a number of adversarial attacks tend to be less similar to their first-ranked estimated class than normal examples are to their own first-ranked classes. Similarly, they note that these same adversarial attacks tend to be *more* similar to their second-ranked classes than normal examples are to their own second-ranked classes (as the authors point out, this is likely due to the increased likelihood of the second-ranked class of adversarial examples being the true class for the original example from which it was perturbed). The authors then propose the difference between these two similarities (that is, first-ranked dissimilarity minus second-ranked dissimilarity) as a characterization of adversarial examples. The idea of using critical paths in the DNN to detect adversarial examples is interesting, and the authors deserve credit for showing that these critical paths (as defined in this paper) do show differences from those of normal examples. However, the originality of the approach is undercut by the recent work of Wang et al. (CVPR, 2018), which the authors acknowledge only in the discussion of experimental results. Although the details are different as to how critical paths are identified, and how adversarial examples can be detected using them, the strategies are definitely related - a more detailed explanation of this should have been given in the introduction of the paper. More troubling is the fact that a head-to-head experimental comparison is not provided, neither with Wang et al. nor with other state of the art detectors, other than a qualitative assessment of the capabilities of some detectors in Table 1. Note that even this qualitative discussion does not include some of the recent detection approaches, such as BPDA (Athalye et al., ICML 2018) or LID (Ma et al., ICLR 2018). The question of how best to define critical paths and their similarities is still very much open - the authors' approach is rather simplistic and straightforward. For example, is their similarity measure biased towards the contributions from early layers? Can a layer-by-layer weighting of contributions improve the performance? The authors do not always interpret their own experimental results correctly. For example, their results in Figures 7i and 7j don't really support their conclusion that performance "remains almost unchanged" when theta is in the range 0.5 - 1.0. 
Also, Figure 4 does not show that their effective path similarity is not *directly* "a great metric to distinguish between normal and adversarial" examples, because a large proportion of adversarial examples have scores that fall in the typical range for normal examples (however, there are differences in tendency which can be exploited, as the authors do show). The organization of the paper is in some need of improvement. For example, the discussion of densities of "effective paths" (Section 2) comes well before the details of the choice of threshold value theta used to generate them (Section 4.1). To summarize: Pros: * A good case is made for the use of critical paths as a way of differentiating adversarial examples from normal examples. * The reported improvement in similarity of adversarial examples with respect to their second-ranked classes is particularly intriguing. * The paper is generally well written and easy to follow. Cons: * The experimental treatment is insufficient; in particular, a more carefully considered experimental justification is needed with respect to other detection strategies. * The question of how best to define critical paths and their similarities is still very much open. * The authors do not always interpret their own experimental results correctly. * The organization of the paper is in some need of improvement.<doc-sep>The authors propose the notion of effective path, for the purpose of identifying neurons that contributes to the predictions and being able to detect adversarial images in the context of image classification. Overall the paper is well written except that the authors are mixing two highly related but still different topics: explanation and adversary detection so that the motivation is confusing. The experimental results indeed show promises that effective path can help understand class similarities and network efficiencies but doesn’t really show how the proposed work is adding value to the field. It lacks the experimental comparison with previous methods but only include discussion in texts. This paper could turn out to be a stronger paper but it is not ready yet. Below are some more detailed comments. 1) The authors motivate by stating that the vulnerability of NN to input perturbations is due to the lack of interpretability (Section Introduction & Abstract). I can understand that we want more interpretability, and we want less vulnerability, but I can’t agree that vulnerability is caused by lack of interpretability. Also, the authors are trying to accomplish both tasks, interpretability and adversary detection, by showing data analysis of how the findings coincide with prior knowledge (eg. Class of digit 1 is the most different from other classes in MNIST task), and by showing detecting adversary images. However, neither has valid quantitative comparison with previous work; actually for the interpretability topic, the authors didn’t really provide a tool or a generalizable method. Thus, I would suggest to choose one of the two topics (ie. adversarial image detection) and focus on it by adding thorough comparison with other methods; in the discussion and result section, include the interpretability analysis to justify why the proposed adversary detection method is behaving in certain ways. 2) One topic that is missing from the paper is the time complexity of the proposed method. 
At a naïve estimate, it would require tracking and finding the minimum set of effective neurons with threshold $\\theta$, and thus, per instance, at least $O(m \\log m)$ is required at the prediction phase, where $m$ is the number of features; for $n$ instances, the asymptotic complexity is $O(nm \\log m)$. How does this compare to the other adversary detection methods? 3) Page 3 mentions that the work on critical routing paths (Wang et al. 2018) requires re-training for every image; this statement is not really true without more context. Also, the authors discuss this work again very briefly on Page 8 due to the high similarity in methods and motivation with the proposed method, but the authors don’t show any quantitative comparison. After all, both methods are trying to identify neurons that contribute the most to the prediction, so some more concrete comparison would be nice. 4) Page 3 mentions that the derived overall effective path is highly sparse compared to the original network and the effective path density for five trained models ranges from 13% to 42%, which conforms with the “80%” claim from another paper. Together with the other similar statements, it would be really nice to note what $\\theta$ is used for such statements; how do such statements change with different $\\theta$? Also, some discussion would be nice about what such sparsity implies. Specifically, does the sparsity suggest an opportunity for feature selection, or does it suggest a way of detecting overfitting? 5) Page 5 shows the path similarity between the normal and the adversarial examples; from Figures 5a and 5b, we can see that on the first layer the means deviate between normal and other examples, but why do they almost reach the same point at the last layer? It seems it is the middle layer that distinguishes the normal from the adversarial examples the most. Some more discussion would be good. 6) Some justification of why $\\theta=0.5$ is chosen would be good on Page 6. 7) On Page 7, the authors are discussing the performance of the proposed method; however, there is no real comparison with other methods. Rather, the authors state “better accuracy”, “AUC… is better…” by comparing different evaluation scenarios. I don’t find such discussion helpful in showing the contribution of the proposed method. Also, in the parameter sensitivity analysis, it would be nice to add the analysis of the effective path density and see if it still conforms with the “80%” claim under different $\\theta$. 8) Page 1: need to add citations for the statement “… and even outperformed human beings.” 9) Minor issue: Page 1 “such computer vision…” should be “such as computer vision…”. <doc-sep>This paper proposes a measure (“effective path”) of which units and weights were most important for classification of a particular input or input class. Using the effective path, the authors analyze the overlap between paths across classes for CNNs and between adversarially modified and unmodified images. Finally, the paper proposes an adversarial defense method based on the effective path, which detects adversarially manipulated images with high accuracy and generality to a variety of settings. Overall, this paper is interesting and provides several novel observations. The clarity of the exposition is generally good, but can be improved in several places (mentioned below). As for significance, effective path is likely to inform future analyses of neural networks, and the adversarial defense may prove impactful, though ultimately, its impact will depend on if and when the defense is broken.
However, there are several important controls missing from the analysis, several claims which are unsubstantiated, and experimental details are lacking in a few places. As such, in its current form, I can only weakly recommend this paper for acceptance. If in the revision the controls requested below are included, additional evidence is provided for the unsubstantiated claims (or if those claims are toned down), and exposition of missing experimental details is included, I’d be happy to raise my score. Major points: 1) While the observation regarding path specialization is very interesting, one cannot gauge whether or not the degree of overlap observed between class-specific paths signals path specialization or simply high input-to-input path variance (which is similar both within and across classes). In order to distinguish between these possibilities, a measure of intra-class path similarity is necessary. In addition, an experiment similar to that in Figure 2 with CIFAR-10 would be quite helpful in evaluating whether this phenomenon exists in more natural datasets (the ImageNet results are difficult to interpret due to the large number of classes). 2) Several claims in the path specialization section are unsubstantiated. 2a) In particular, the claim that ‘1’ has the highest degree of specialization “because of its unique shape” is made without evidence as is the similarity between ‘5’ and ‘8’. ‘6’ is also similar to ‘8’ and yet does not show the same similarity in the path specialization. These differences may very well simply be due to chance. 2b) The claim that the path specialization in ImageNet matches the class hierarchy is made only based on the rough non-linearity of Figure 3. Please either measure the overlap within and across class categories or soften this claim. 3) The similarity analysis for adversarial images is also very interesting, but a comparison between unmodified and randomly perturbed images with matched norms to the adversarially perturbed images is necessary to establish whether this effect is due to noise generally or adversarial noise. It’s unclear how the effective path is calculated when negative weights are involved. Further exposition of this aspect would be helpful. Minor points/typos: 1) There are several places where confusing concepts are introduced in one paragraph but explained several paragraphs later. In particular, the distinction between synapses and weights is introduced halfway through page 2 but explained on page 3 and the fact that the coefficients for the defense metric are learned is unclear until page 4 even though they’re introduced on page 3. 2) Typos: 2a) Section 1, fourth paragraph: “...and adversarial images, we uncover...” should be “...and adversarial images, and we uncover...” 2b) Section 1, fourth paragraph: “...by small perturbation, the network…” should be “...by small perturbations, the network…” 2c) Section 2, first paragraph: “...the black-boxed neural…” should be “...the black-box neural…” 2d) Section 2, first paragraph: “In the high level…” should be “At a high level…” 2e) Section 4, first paragraph: “...as it does no modify…” should be “...as it does not modify…” 2f) Title, should be "Neural Network"?
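To make the computational question concrete (the second review's naive per-instance estimate of O(m log m), and the third review's question about how negative weights are handled), here is a minimal sketch of one plausible way to extract the "effective" inputs of a single neuron with a cumulative-contribution threshold theta. This is a hypothetical reconstruction for illustration only, not the authors' procedure; in particular, clipping negative contributions to zero is my own assumption, which is exactly one of the points the reviewers ask about. The sort over the m input contributions is where the m log m factor comes from.

```python
import numpy as np

def effective_inputs(weights, activations, theta=0.5):
    """Sketch: minimal set of input indices whose (positive) contributions
    to one neuron sum to at least theta of the neuron's total contribution.

    weights:     (m,) weight vector feeding the neuron
    activations: (m,) input activations for one example
    """
    contrib = np.maximum(weights * activations, 0.0)   # assumption: ignore negative contributions
    total = contrib.sum()
    if total <= 0:
        return np.array([], dtype=int)
    order = np.argsort(-contrib)                       # O(m log m) sort, largest first
    cum = np.cumsum(contrib[order])
    k = int(np.searchsorted(cum, theta * total)) + 1   # smallest prefix reaching theta * total
    return order[:k]

# Toy usage: one neuron with 5 inputs
w = np.array([0.8, -0.2, 0.5, 0.1, 0.4])
a = np.array([1.0, 2.0, 0.5, 3.0, 0.0])
print(effective_inputs(w, a, theta=0.5))   # -> [0]
```

Repeating this for every neuron in every layer, per instance, is what the O(nm log m) back-of-the-envelope estimate in the review refers to.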
The paper presents an approach to estimate the "effective path" that examples take through a network to reach a decision, and uses it to analyze whether examples might be adversarial. Reviewers think the paper lacks some clarity and experiments. They point to a confusion between interpretability and adversarial attacks, ask questions about computational complexity, and point to some unsubstantiated claims. The authors have not responded to the reviewers. Overall, I concur with the reviewers to reject the paper.
The author proposes to use a competitive multi-agent setting for encouraging exploration. I very much agree with most of previous reviewers, and their constructive suggestions. However, I find a major issue with this paper is the lack of baseline comparisons. The paper shows that CER + HER > HER ~ CER. I do not think CER should be compared to HER at all. CER to me attacks the exploration problem in a very different way than HER. It is not trying to "reuse" experience, which is the core in HER; instead, it uses 2 agents and their competition for encouraging visiting new states. This method should be compared to method that encourages exploration via some form of intrinsic motivation. There are methods proposed in the past, such as [1]/[2] that uses intrinsic motivation/curiosity driven prediction error to encourage exploration. Note that these methods are also compatible with HER. I'd suggest comparing CER with one of these methods (if not all) both with and without HER. Minor: In the beginning paragraph of 3.1, the paper states: " While the re-labelling strategy introduced by HER provides useful rewards for training a goal-conditioned policy, it assumes that learning from arbitrary goals will generalize to the actual task goals. As such, exploration remains a fundamental challenge for goal-directed RL with sparse reward. We propose a relabelling strategy designed to overcome this challenge. " I think overcoming this particular challenge is a bit overstating. The method proposed in this paper is not guaranteed to address the "fundamental challenge" either --- i.e., why can you assume that learning from arbitrary goals that results from the dynamics of two agents will generalize to the actual task goals? I will change my rating accordingly if there are more meaningful comparisons made in the rebuttal. [1] Curiosity-driven Exploration by Self-supervised Prediction, Pathak et. al. [2] Large-Scale Study of Curiosity-Driven Learning. Burda et. al.<doc-sep>The authors propose a new method for learning from sparse rewards in model-free reinforcement learning settings. This is a challenging and important problem in model-free RL, mainly due to the lack of effective exploration. They propose a new way of densifying the reward by encouraging a pair of agents to explore different states (using competitive self-play) while trying to learn the same task. One of the agents (A) receives a penalty for visiting states that the other agent (B) also visits, while B is rewarded for visiting states found by A. They evaluate their method on a few tasks with continuous action spaces such as ant navigation in a maze and object manipulation by a simulated robotic arm. Their method shows faster convergence (in some cases) and better performance than comparable algorithms. 
Strengths: Attempts to solve a long-standing problem in model-free RL (effective exploration in sparse reward environments) Clear writing and structure, easy to understand (except for some minor details) Novel, intuitive, and simple method building on ideas from previous works Good empirical results (better than state of the art, in terms of performance) on some challenging tasks Weaknesses: Not very clear why (and when) the method works -- more insight from experiments in less complex environments or some theoretical analysis would be helpful It would also be useful to better understand the conditions under which we can expect this to bring significant gains and when we can expect this to fail (or not help more than other methods) Not clear how stable (to train) and robust (to different environment dynamics) the method is Main Comments / Questions: The paper makes the claim that their technique “automatically generates a curriculum of exploration” which seems to be based more on intuition rather than clear experiments or analysis. I would suggest to either avoid making such claims or include stronger evidence for that. For example, you could consider visualizing the visited states by A and B (for a fixed goal and initial state) at different training epochs. Other such experiments and analysis would be very helpful. It is known that certain reward shaping approaches can have negative consequences and lead to undesired behaviors (Ng et al., 1999; Clark & Amodei, 2016). Why can we expect that this particular type of reward shaping doesn’t have such side effects? Can it be the case that due to this adversarial reward structure, A learns a policy that takes it to some bad states from which it will be difficult to recover or that A & B get stuck in a cyclic behavior? Have you observed such behaviors in any of your experiments? Do you train the agents with using the shaped reward (from the exploration competition between A and B) for the entire training duration? Have you tried to continue training from sparse reward only (e.g. after the effect ratio has stabilized)? One problem I see with this approach is the fact that you never directly optimize the true sparse reward of the tasks, so in the late stages of training your performance might suffer because the agent A is still trying to explore different parts of the state space. Can you comment on how stable this method is to train (given its adversarial nature) and what potential tricks can help in practice (except for the discussion on batch size)? Please make clear the way you are generating the result plots (i.e. is A evaluated on the full task with sparse reward and initial goal distribution with no relabelling?). In Algorithm 1, can you include the initialization of the goals for A and B? Does B receive identical goals as A? It would also be helpful to more clearly state the limitations and advantages of this method compared to other algorithms designed for more efficient exploration (e.g. the need for a resettable environment for int-CER but not for ind-CER etc.). Minor Comments / Questions: You might consider including more references in the Related Work section that initializing from different state distributions such as Hosu & Rebedea (2016), Zhu et al. (2016), and Kakade & Langford (2002), and perhaps more papers tackling the exploration problem. Can you provide some intuition on why int-CER performs better than ind-CER (on most tasks) and why in Figure 1, HER + int-CER takes longer to converge than the other methods on the S maze? 
In Figure 4, why are you not including ind-CER (without HER)? Have you considered training a pool of agents with self-play (for the competitive exploration) instead of two agents? Is there any intuition on expecting one or the other to perform better? Plots: What is the x-axis of the plots? Number of samples, episodes, epochs? Please label it. Please be explicit about the variance shown in the plots. Is that the std? It would be helpful if to have larger numbers on the xy-axes. It is difficult to read when on paper. Can you explain how you smoothed the curves -- whether before or after taking the average and perhaps include the min and max as well. I believe this could go in the Appendix. Notation: I don’t understand the need for calling the reward r_g instead of r. I believe this introduces confusion since the framework already has r taking as argument the goal g (eq. 1) while the g in the subscript doesn’t seem to refer to a particular g but rather to a general fact (that this is a reward for a goal-oriented task with sparse reward, where the goals are a subset of the states) (eq. 4) Please use a consistent notation for Q. In sections 2.1 and 2.2, at times you use Q(s,a,g), Q(a,s,g) or Q(s,a). Typos: Page 6, last paragraph of section 4.1: Interestingly, even the … , is enough to support … Page 7, last paragraph of section 4.3: Interestingly, … adversely affects both ... <doc-sep>The authors propose a states relabeling strategy (CER) to encourage exploration in RL algorithms by organizing a competitive game between a pair of agents. To verify their strategy, they extend MADDPG as their framework. Then, they compare the performance of agents trained with HER, and both variants of CER, and both variants of CER with HER. The experiments show that CER can improve the performance of HER with faster converge and higher accuracy. My major concerns are as follows. 1. The authors may want to conduct more experiments to compare CER with other state-of-the-art methods such as PPO[1]. As illustrated in Figure 1, the performance of HER is better than that of CER. The authors may want to analyze whether CER strategy alone could properly address the sparse reward problems, and why CER strategy can improve HER. The authors have mentioned that CER is “orthogonal” to HER. I suggest authors provide more discussions on this statement. 2. The authors may want to improve the readability of this paper. For example, in Figure 1, the authors may want to clarify the meanings of the axes and the plots. The results shown in Figure 3 are confusing. How can the authors come to the conclusion that the optimal configuration requires balancing the batch sizes used for the two agents? To better illustrate the framework of CER, the authors may want to show its flow chart. 3. There are some typos. For example, in Section 2.1, the authors use T(s’|s,a) without index t; in Section 2.2, the authors use both Q(a,s,g) and Q(s,a,g). There is something wrong with the format of the reference (“Tim Salimans and Richard Chen … demonstration/, 2018.”) in the bottom of page 10. [1] Schulman J, Wolski F, Dhariwal P, et al. Proximal Policy Optimization Algorithms[J]. 2017. <doc-sep>The paper is well written and easy to read. Exploration is one of the fundamental problems in RL, and the idea of using two agents for better exploration is interesting and novel. However, an explanation of the intuition behind the method would be useful. The experimental results show that the method works well in complex tasks. 
Since states are compared to each other in L2 distance, the method might not generalize to other domains where L2 distance is not a good distance metric. Pros: - well written - a simple and novel idea tackling a hard problem - good results on hard tasks Cons: - an explanation of why the method should work is missing - plot text is too small (what is the unit of the X-axis?) Questions: - what is the intuition behind the method? - during training, two randomly sampled states are compared. Why is this a good idea? How will the replay buffer size affect it? - since it is a two-player game, is there anything you can say about its Nash equilibrium? - why is A better than B at the task? - when comparing states, are whole raw observations (including velocity etc.) used? - Section 4.2 doesn't seem to be that relevant or helpful. Is it really necessary? - Fig. 4 is missing CER-alone results. Why is that? Does it not work by itself on those tasks?
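For readers unfamiliar with the scheme these reviews describe, here is a minimal sketch of how a competitive exploration bonus could be computed from the two agents' visited states. This is an illustrative assumption about the mechanism (agent A penalized for states agent B also reaches, B rewarded for states close to A's, with L2 comparisons), not the paper's exact relabelling procedure; the threshold delta and scale alpha are hypothetical parameters.

```python
import numpy as np

def competitive_bonus(states_a, states_b, delta=0.1, alpha=1.0):
    """Sketch: A is penalized for states that B also visits (within L2 radius delta);
    B is rewarded for B-states that lie close to some state visited by A."""
    states_a = np.asarray(states_a)   # (Ta, d)
    states_b = np.asarray(states_b)   # (Tb, d)
    # pairwise L2 distances between A's and B's visited states
    dists = np.linalg.norm(states_a[:, None, :] - states_b[None, :, :], axis=-1)
    a_matched = dists.min(axis=1) < delta              # A's states that B also reached
    bonus_a = -alpha * a_matched.astype(float)         # penalty for A
    b_matched = dists.min(axis=0) < delta              # B's states close to some A state
    bonus_b = alpha * b_matched.astype(float)          # reward for B
    return bonus_a, bonus_b

# Toy usage with 2-D states
ba, bb = competitive_bonus([[0.0, 0.0], [1.0, 1.0]], [[0.05, 0.0], [5.0, 5.0]])
print(ba, bb)   # A penalized at its first state; B rewarded at its first state
```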
The paper proposes a new method to improve exploration in sparse reward problems, by having two agents competing with each other to generate shaping reward that relies on how novel a newly visited state is. The idea is nice and simple, and the results are promising. The authors implemented more baselines suggested in initial reviews, which was also helpful. On the other hand, the approach appears somewhat ad hoc. It is not always clear why (and when) the method works, although some intuitions are given. One reviewer gave a nice suggestion of obtaining further insights by running experiments in less complex environments. Overall, this work is an interesting contribution.
This paper studies the problem of real-time semantic segmentation with Transformers. The authors propose an RTFormer block with two attention modules to aggregate information on different-resolution features. The experimental results on several datasets demonstrate the effectiveness of the proposed method. [Strengths] + The proposed method achieves great performance on several datasets. + Compared to the baselines, the proposed method brings consistent improvements. [Weaknesses] Some important ablation studies are missing. - The choice of architectural design. The authors put the proposed RTFormer block only in the last two stages but do not provide results to support this design. - A baseline that does not use any attention needs to be included in Table 3 (a). - A comparison with other lightweight attention mechanisms is also needed. Yes <doc-sep>The manuscript presents an efficient model for semantic segmentation. The main contribution is a GPU-friendly attention layer, which improves efficiency by using keys and values that are learnable parameters. The dimensionality of the keys and values is a hyperparameter that is much smaller than N (HxW). Furthermore, the MLP from the standard transformer is replaced with plain convolutions. The resulting module is somewhat similar to the classic self-attention layer from the pre-transformer era. Finally, some further performance improvement is obtained through cross-resolution attention. Experiments address Cityscapes and ADE20k. Strengths - a reasonable hybrid model with convolutions on higher-resolution representations and self-attention on lower-resolution representations - state-of-the-art ratio between performance and computational complexity Weaknesses - incremental contribution; GPU-friendly attention appears quite related to previous work [13], as well as to Linformer and Nystromformer (see below) - hybrid convolutional-transformer models have been proposed before (e.g. DPT hybrid, see below) - the proposed improvements perform only slightly better than baselines in Fig. 3 - missing configuration in Fig. 3a: EA + CA - large training footprint: it appears that only 3 crops of 512x1024 can fit into a V100 - incomplete related work in the field of efficient models for semantic segmentation (e.g. HarDNet, SwiftNet, see below) Missing related work: - René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. Vision Transformers for Dense Prediction. ICCV 2021: 12159-12168 - Yunyang Xiong, Zhanpeng Zeng, Rudrasis Chakraborty, Mingxing Tan, Glenn Fung, Yin Li, Vikas Singh. Nyströmformer: A Nyström-based Algorithm for Approximating Self-Attention. AAAI 2021. - Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma. Linformer: Self-Attention with Linear Complexity. CoRR abs/2006.04768 (2020). - Marin Orsic, Sinisa Segvic. Efficient semantic segmentation with pyramidal fusion. Pattern Recognit. 110: 107611 (2021). - Ping Chao, Chao-Yang Kao, Yu-Shan Ruan, Chien-Hsiang Huang, Youn-Long Lin. HarDNet: A Low Memory Traffic Network. ICCV 2019. It appears that the large memory footprint precludes training on single-GPU systems. <doc-sep>This paper proposes RTFormer for real-time semantic segmentation. The RTFormer leverages GPU-Friendly Attention with linear complexity and discards the multi-head mechanism. The authors demonstrate the efficacy of their method on several benchmarks. Strengths: 1. The proposed method achieves good performance on the benchmarks. 2. This paper is well organized and clearly described. 3. Efficient segmentation is a valuable problem. Weaknesses: 1.
The method proposed in the paper is a hybrid of various existing methods, such as linear-complexity self-attention, HRNet, and CNN-transformer hybrid models. Therefore, the novelty is weakened by previous works. 2. This paper does not say whether TensorRT is used to accelerate the model, so I don't know if the comparison is fair. 3. In terms of performance and model size, this method has no significant advantage over the compared methods. Please refer to “Paper Weaknesses”. <doc-sep>This paper proposes RTFormer, an efficient transformer for real-time semantic segmentation, which achieves a better trade-off between performance and efficiency than CNN-based models. To achieve high inference efficiency on GPU-like devices, the RTFormer leverages GPU-Friendly Attention with linear complexity and discards the multi-head mechanism. Besides, the cross-resolution attention gathers global context information for the high-resolution branch more efficiently by spreading the high-level knowledge learned from the low-resolution branch. Extensive experiments on mainstream benchmarks demonstrate the effectiveness of the proposed RTFormer: it achieves state-of-the-art results on Cityscapes and CamVid, and shows promising results on ADE20K. Strengths of this paper are as follows: 1. A novel RTFormer block is proposed, which achieves a better trade-off between performance and efficiency on GPU-like devices for the semantic segmentation task. 2. A new network architecture, RTFormer, is proposed, which can make full use of global context for improving semantic segmentation by utilizing attention deeply without loss of efficiency. 3. RTFormer achieves state-of-the-art results on Cityscapes and CamVid, and shows promising performance on ADE20K. In addition, it provides a new perspective for practice on the real-time semantic segmentation task. Weaknesses of this paper are as follows: 1. The proposed cross-resolution attention is just a variant of self-attention, which is widely used in network design. This paper is incremental compared with the previous work DDRNet. The novelty of this paper is limited. 2. The performance improvement on the Cityscapes dataset is limited. As shown in Table 1, the mIoU and FPS improvements are both limited, not obvious enough. 3. The experiments on semantic segmentation are thorough enough and can be used to support the proposed method. But my concern is that the method is simple and not novel enough. In addition, why not apply this method to other vision tasks, like classification and object detection? yes
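As an aside on the attention design discussed in these reviews: below is a minimal sketch of an attention layer whose keys and values are learnable parameters of size M << N = HxW, which is what makes the cost linear in the number of pixels. This is my own illustrative reconstruction in the spirit of external/linear attention, not the paper's exact GPU-Friendly Attention module; the hyperparameters dim and m are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableKVAttention(nn.Module):
    """Single-head attention with learnable keys/values of size M << N (sketch)."""
    def __init__(self, dim, m=64):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.keys = nn.Parameter(torch.randn(m, dim) * dim ** -0.5)    # (M, d), learnable
        self.values = nn.Parameter(torch.randn(m, dim) * dim ** -0.5)  # (M, d), learnable

    def forward(self, x):                                 # x: (B, N, d), N = H * W flattened pixels
        q = self.query(x)                                 # (B, N, d)
        attn = q @ self.keys.t() / q.shape[-1] ** 0.5     # (B, N, M): cost linear in N
        attn = F.softmax(attn, dim=-1)
        return attn @ self.values                         # (B, N, d)

# Toy usage: a 32x64 feature map with 128 channels
x = torch.randn(2, 32 * 64, 128)
print(LearnableKVAttention(dim=128, m=64)(x).shape)       # torch.Size([2, 2048, 128])
```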
Reviewers agree that the proposed RTFormer block and overall network architecture achieves good trade-off between performance and efficiency on several datasets. The design of GPU-friendly attention and cross-resolution attention improves the computational efficiency over multi-head attention, and well captures global context information when updating high-resolution embeddings. The main concern, as mentioned by several reviewers, is the overall novelty as some ideas are related to previous work (GPU-friendly attention and hybrid convolutional-transformer architecture). Other issue includes missing baselines that are based on light-weighted attention designs, or do not use attention at all, but this have been well resolved in the author feedback. In summary, the pros outweigh the cons and therefore AC recommends acceptance.
This paper addresses the problem of modeling sequential data based on one of the deep recurrent Gaussian process (DRGP) structures proposed by Mattos et al (2016). This structure acts like a recurrent neural net where every layer is defined as a GP. One of the main limitations of the original method proposed by Mattos et al (2016) is that it is limited to a small set of covariance functions, as the variational expectations over these have to be analytically tractable. The main contributions of this paper are the use of previously proposed inference, namely (i) the sparse spectrum (SS) of Lazaro-Gredilla et al (2010); its variational improvement by Gal and Turnner (2015) (VSS); and the inducing-point (IP) framework of Titsias and Lawrence (2010) into the recurrent setting of Mattos et al (2016). Most (if not all) of the technical developments in the paper are straightforward applications of the results in the papers above. Therefore, the technical contribution of the paper is largely incremental. Furthermore, while it is sensible to use random-feature approximation approaches (such as SS and VSS) in GP models, it is very unclear why combining the IP framework with SS approaches makes any sense at all. Indeed, the original IP framework was motivated as a way to deal with the scalability issue in GP models, and the corresponding variational formulation yielded a nice property of an additional regularization term in the variational bound. However, making the prior over a (Equation 9) conditioned on the inducing variables U is rather artificial and lacks any theoretical justification. To elaborate on this, in the IP framework both the latent functions (f in the original paper) and the inducing inputs come from the same GP prior, hence having a joint distribution over these comes naturally. However, in the approach proposed in this paper, a is a simple prior over the weights in a linear-in-the-parameters model, and from my perspective, having a prior conditioned on the inducing variables lacks any theoretical motivation. The empirical results are a bit of a mixed bag, as the methods proposed beat (by a small margin) the corresponding benchmarks on 6 out of 10 problems. While one would not expect a proposed method to win on all possible problems (no free lunch), it will be good to have some insights into when the proposed methods are expected to be better than their competitors. While the proposed method is motivated from an uncertainty propagation perspective, only point-error metrics (RMSE) are reported. The paper needs to do a proper evaluation of the full predictive posterior distributions. What is the point of using GPs otherwise? Other comments: I recommend the authors use the notation p(v) = … and q(v) = … everywhere rather than v ~ … as the latter may lead to confusion on how the priors and the variational distributions are defined. It is unnecessary to cite Bishop to explain how one obtains a marginal distribution Would it be possible to use the work of Cutajar et al (2017), who use random feature expansions for deep GPs, in the sequential setting? If so, why aren’t the authors comparing to this? The analysis of Figure 1 needs expanding What are the performance values obtained with a standard recurrent neural net / LSTM? <doc-sep>This paper proposes deep recurrent GP models based on the existing DRGP framework, two works on sparse spectrum approximation as well as that of inducing points. In these models, uncertainty is propagated by marginalizing out the hidden inputs at every layer. 
The authors have combined a series of known ideas in the proposed work. There is a serious lack of discussion or technical insights from the authors for their technical formulations: in particular, what are the non-trivial technical challenges addressed in the proposed work? Furthermore, the authors are quite sloppy in referencing equations and inconsistent in the use of their defined notations and acronyms. I also find it hard to read and understand the main text due to awkward sentence structures. Have the authors revealed their identity on page 2 of the paper? I quote: "We refer to the report Foll et al. (2017) for a detailed but preliminary formulation of our models and experiments." and "DRGP-(V)SS code available from http://github.com/RomanFoell/DRGP-VSS." Detailed comments are provided below: For the first contribution stated by the authors, what are the theoretical and practical implications of the different regularization terms/properties between the lower bounds in equations 10 vs. 8? These are not described in the paper. Can the authors provide a detailed derivation of DVI for equation 13 as well as for the predictive distributions in Sectio 6.3.5? Can the authors provide a time complexity analysis of all the tested deep recurrent GPs? Would the authors' proposed approach be able to extend the framework of Hoang et al. (2017) (see below) that has generalized the SS approximation of Lazaro-Gredilla et al. (2010) and the improved VSS approximation of Gal & Turner (2015)? Hoang, Q. M.; Hoang, T. N.; and Low, K. H. 2017. A generalized stochastic variational Bayesian hyperparameter learning framework for sparse spectrum Gaussian process regression. In Proc. AAAI, 2007–2014. Minor issues: Just below equation 6, equation 9, and throughout the entire paper, the authors need to decide whether to italicize their notations in bold or not. Equations are not properly referenced in a number of instances. The authors have used their commas too sparingly, which makes some sentences very hard to parse. What is the difference between REVARB-(V)SS(-IP), DRGP-(V)SS(-IP), and DRGP-VSS-IP? Equation 7: LHS should be conditioned on U. Page 4: (V)SSGP does not have the same... Equation 8: q_a and q_Z should be placed next to the expectation. Page 4: choosen? Page 5: will makes it possible? Page 5: DRGP-SSGP, -VSSGP, -SSGP-IP, -VSSG-IP? Page 5: to simplify notation, we write h^{L+1}_{Hx+1:} = y_{Hx+1:}? Such a notation does not look simplified. Equation after equation 12: On LHS, should U^(l) be a random variable? Page 17: Should the expressions begin with >=? <doc-sep>Overall Score: 7/10. Confidence Score: 7/10. Detailed Comments: This paper introduces various Deep Recurrent Gaussian Process (DRGP) models based on the Sparse Spectrum Gaussian Process (SSGP) models and the Variational Sparse Spectrum Gaussian Process (VSSGP) models. This is a good paper and proposed models are very sound so I recommend for acceptance although as main weakness I can say that is very technical so it can be difficult to follow. Adding more intuitive ideas, motivation and maybe a figure for each step would be a solution. Apart from that it is a really good paper, congratulations. Related to: RNN models and Sparse Nystrom approximation. Strengths: Models are very sound, solutions are solid, the proposed methodology is correct and the empirical results and experiments are valid and properly done. Weaknesses: It is too difficult to follow and it is written in an extreme technical way. 
More intuition and a proper motivation, both in the abstract and in the introduction, could be added to make the paper easier to read and, hence, more used by researchers and data scientists. Does this submission add value to the ICLR community? : Yes it does; the experiments show the efficiency of the proposed methods in some scenarios, and the methodologies are valid. Quality: Is this submission technically sound?: Yes it is. Are claims well supported by theoretical analysis or experimental results?: The experimental results support the methods empirically, and the appendices show the analysis performed in a clear and elegant way. Is this a complete piece of work or work in progress?: Complete piece of work. Are the authors careful and honest about evaluating both the strengths and weaknesses of their work?: Yes, and I would emphasize that I liked that some experiments are won by other methods such as GP-LSTM; they are very honest. Clarity: Is the submission clearly written?: Yes, but it is difficult for newcomers due to the reasons that I have stated before. Is it well organized?: Yes it is. Does it adequately inform the reader?: Yes it does. Originality: Are the tasks or methods new?: Yes, they are sound. Is the work a novel combination of well-known techniques?: Yes it is. Is it clear how this work differs from previous contributions?: Yes. Is related work adequately cited?: Yes, and this is a strength of the paper. Significance: Are the results important?: I would argue that they are, and that they are a clear alternative to consider for solving these problems. Are others likely to use the ideas or build on them?: If the paper is written in a more friendly way, yes. Does the submission address a difficult task in a better way than previous work?: Yes, I think so. Does it advance the state of the art in a demonstrable way?: Yes, empirically. Arguments for acceptance: Models are very sound, solutions are solid, the proposed methodology is correct, and the empirical results and experiments are valid and properly done. Arguments against acceptance: Clarity of the paper. Minor issues and typos: -> (V)SS not defined before being used. -> The abstract should be rewritten, adding a motivation and focusing more on the problems being solved and less on the details of the solutions. -> The recurrent indexes (i) of Eq. 1 that go backwards should be explained (why they go backwards) before being used like that. Newcomers may be confused. -> The Section 2 writing style lacks a bit of cohesion; relating the paragraphs may be a solution. -> Q is not defined in Section 3.1, paragraph 1. -> A valid covariance function must produce a PSD matrix; put that in Section 3.1. -> I do not see how U marginalizes in Eq. 7, kind of confused about that; I think that it should be p(y|X,U). -> The statistics of Section 3.4 should be explained. Reading thread and authors' response, rebuttal decision: ================================================= I consider that the authors have performed a good rebuttal, and after reading the other messages and the authors' response I also consider that my issue with clarity is solved. Hence, I upgrade my score to 7 and recommend the paper for publication.
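For context on the sparse spectrum (SS) approximation these reviews keep referring to: the standard form (Lázaro-Gredilla et al., 2010) replaces a stationary kernel by a Monte Carlo estimate of its Fourier (spectral) representation. Notation below is generic and may differ from the paper's; as I understand it, the VSS variant of Gal and Turner additionally places a variational distribution over the frequencies rather than optimizing them as point estimates.

```latex
% Sparse spectrum approximation of a stationary kernel (generic form)
k(\mathbf{x},\mathbf{x}') \;\approx\; \frac{\sigma_f^2}{M}\sum_{m=1}^{M}
  \cos\!\big(\boldsymbol{\omega}_m^\top(\mathbf{x}-\mathbf{x}')\big),
\qquad \boldsymbol{\omega}_m \sim p(\boldsymbol{\omega}),
\quad\text{equivalently}\quad
\phi(\mathbf{x}) \;=\; \sqrt{\tfrac{\sigma_f^2}{M}}\,
  \big[\cos(\boldsymbol{\omega}_m^\top\mathbf{x}),\;
       \sin(\boldsymbol{\omega}_m^\top\mathbf{x})\big]_{m=1}^{M}.
```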
This paper is concerned with combining past approximation methods to obtain a variant of Deep Recurrent GPs. While this variant is new, 2/3 reviewers make very overlapping points about this extension being obtained from a straightforward combination of previous ideas. Furthermore, R3 is not convinced that the approach is well motivated, beyond “filling the gap” in the literature. All reviewers also pointed out that the paper is very hard to read. The authors have improved the manuscript during the rebuttal, but the AC believes that the paper is still written in an unnecessarily complicated way. Overall the AC believes that this paper needs some more work, specifically in (a) improving its presentation (b) providing more technical insights about the methods (as suggested by R2 and R3), which could be a means of boosting the novelty.
Summary The paper proposes Visual Transformer Network which encodes the relationship between all detected object instances in a frame and uses it for navigation. The paper uses DETR for object detection and learn an association between local descriptors (from the object detector) with global descriptors (ResNet18) using the proposed VT model. They show that using VT improves performance on the object navigation task in AI2-THOR simulator compared to existing methods. Strengths - The paper proposed a novel transformer architecture that learns an association between local object descriptors with global image region features so that actions can be grounded to visual regions in the image. - Different from prior work, the paper uses all the objects detected for a label instead of just the most confident detection. Weaknesses - The paper doesn't fully address why DETR performs better than FasterRCNN features. Appearance features from FasterRCNN have been widely used for several downstream tasks in Vision and Language Navigation[1], Vision and Language tasks[2]. From the experiments, it's not clear why DETR is doing better than Faster-RCNN especially when the detection accuracy of DETR is also better than Faster RCNN. - Additionally, I didn't fully follow how authors obtain the appearance features from Faster RCNN based method. The authors mention that object appearance features are extracted from different layers of a backbone network. How is it different from the approach taken by Bottom-Up, Top-Down[3] paper in which 2048-dim appearance features are extracted for each visual region? - The experimental setup isn't fully reflective of the object goal navigation task. The experiments are conducted in AI2 thor scenes which only contain one room. It's not clear, how this method will perform when evaluated on significantly more complicated environments like Matterport / Gibson [4]. Specifically, I am interested in how will the proposed architecture perform when the goal object is not in the same room as the agent. - The navigation task is also made simpler by discretizing into a grid. Single room environments and discrete grids simplify a lot of navigation-related challenges and the authors don't discuss how the proposed architecture will generalize to more complex object navigation tasks. - The use of spatial embeddings as well as appearance embedding isn't all that surprising. Existing work including Du et al. uses bounding box coordinates to help learn spatial associations between objects. Other questions: - Instead of pre-training without employing the navigation policy, did the authors try using shortest-path based demonstrations to help learn the navigation policy as well? In the first stage, the navigation policy learns using imitation learning and then finetuned with A3C? - What is the step size of the agent for the forward step? What are the turn angles for Turn-left, Turn-right actions? What are the tilt angles for look-up and look-down actions? - What's the reason for improvement over ORG (in absence of TPN). Is it superior visual representations (Faster RCNN vs DETR) or the fact ORG only chooses objects with the highest confidence while VT uses all the detected objects? - How does the agent learn long-term associations between objects across multiple frames. In my opinion, the proposed architecture puts all the burden of learning these long-term object relationships across multiple frames on the LSTM policy since the VT only learns association within a single frame. 
[1] Improving Vision-and-Language Navigation with Image-Text Pairs from the Web; Majumdar et al. [2] Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks; Li et al. [3] Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering; Anderson et al. <doc-sep>This paper demonstrates a model that uses the Transformer to encode the visual features that appear in the visual input image during navigation. The model is first pre-trained with an imitation learning objective on self-generated shortest-path trajectories. The empirical results show that the model used in the paper outperforms previous methods on the AI2-THOR environment. The authors also show some studies on the contributions of each component in the model. Paper strengths: + The proposed method further shows that the Transformer is a powerful model for feature extraction. + The authors demonstrate one method to make the training of the Transformer work, i.e. pre-training transformers using shortest-path trajectories. + The empirical results support the authors' claims. + A thorough ablation study and discussions are provided. Cons: - The paper adopts the Transformer and adapts it to the navigation problem. No new architecture/model is proposed. - It seems that a similar usage of the Transformer has already appeared in the vision-and-language navigation task [1]. The paper also shows that pre-training on navigation tasks using Transformers can help to boost the performance. Minor: Two missing citations [2,3] that are potentially relevant. [1] Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training [2] Evolving Graphical Planner: Contextual Global Planning for Vision-and-Language Navigation [3] Are You Looking? Grounding to Multiple Modalities in Vision-and-Language Navigation -- I've read the authors' response and would like to maintain my original score
Note that these two figures are not consistent that the "add" symbol for positional enhancement is missing is Fig.4. I also suggest that the positional embedding blob not crossing the arrow of global feature, they are just added together.Third, Sec.4.2 writes "we first reduce the channel dimension of a high-level activation map from D to a smaller dimension d", how the reduction is done exactly? From appendix it seems like a 256-dim vector is transformed into 249-dim. Fourth, $h$ and "w" are abused. In figure, they are annotated on the long side of the tensor, in eq (1) they seem to be the output of positional embedding, and in Sec.4.2 description they are the resolution of 7. Similarly, $L$ is abused as it means input of encoder in Sec.4.1 but output of encoder in Sec.4.3. Let me stop here, but these things make the approach not super clear to me. 2. In Sec.4.1, I'm not fully convinced of the statement of faster rcnn even thought the experiments empirically verified it. Faster RCNN w/o FPN only output features after "conv4+ROI-pooling" (ResNet-101-C4 variant). Why is it blamed for scale-sensitive? Actually, what does scale-sensitive mean here? Why DETR doesn't suffer from it? Honestly I don't think that's the reason why Faster RCNN performs worse. 3. Also, I'm not fully convinced of the statement of the "early stopping" in Sec.4.4. The penalties are the same for different model in RL, why this transformer based representation learner suffers from "early stopping"? Is there a plausible explanation? It's fine that you cannot conclude something for sure because transformers are always hard to train, but the statement in paper reads not super convincing to me. 4. Sec.5.1 SPL formulation seems to be wrong? The success indicator seems missing? The current equation is simply a ratio between any episode length over the optimal length regardless whether it's an success episode or not. 5. Why not also adding global features into the transformer encoder? For example, reshape and concat with the input. Is the encoder supposed to be local? ### misc 1. The best results of VTNet in Tab.1 used TPN. It might be better to introduce TPN in Appendix for completeness. 2. Variance is not reported in Tab.1, which is uncommon for RL/control paper. 3. Because transformer has attention module and the relationship can be easily visualized. I was expecting more interpretation/visualization like Fig.1 right to show the proposed methods actually attend to proper areas. The numbers are hard to tell what do each modules do exactly. ### questions 1. Just to make sure I understand correctly, the instance feature (100x249) and spatial feature (100x7) are fed into a MLP for fusion? Can you describe the archi? 2. Local spatial feature contains the normalized bounding box, confidence and top-rated semantic label. Is the semantic label the class index (1,2,...,C)? why not use a one-hot embedding or something? 3. Is AI2-THOR the most popular benchmark for object-goal nav? I have seen lots of prior paper running on habitat. What's the specific reasons of using AI2-Thor over habitat? Please address my questions. I'm looking forward to discussing with the authors and the peer reviewers. <doc-sep>**Paper summary** The paper addresses the problem of navigation towards objects (ObjectNav) in a virtual environment. The idea of the paper is to incorporate spatial information of objects using a transformer-based framework called Visual Transformer Network. 
The paper compares the results with a number of state-of-the-art ObjectNav models and provides an ablation study. The results have been reported on the AI2-THOR framework. **Paper strengths** - The idea of incorporating object information using transformers for a navigation agent is new. - The proposed method outperforms a number of strong baselines. - The ablation studies show that the introduced components are effective. **Paper weaknesses** - It is hard to understand some parts of the paper. For example, the introduction discusses details such as the difference between DETR and Faster RCNN or the difficulty of training transformers. It is difficult to understand these details without knowing the proposed method. The introduction should provide a high-level overview of the paper instead of these types of details. Also, the paper requires proofreading. There are several sentences with grammar issues. - It is a bit strange that nothing is learned without the imitation pre-training. It would be good to dig deeper and provide a better explanation for why this happens. - Equation 1 is not clear. A brief explanation would help. - I recommend running the method on some other frameworks which include slightly larger scenes to see if the method generalizes to those as well. RoboTHOR (https://github.com/allenai/robothor-challenge) is very close to the framework used in this paper so it might be a good choice for these experiments. **Justification of rating** Overall, I am leaning towards accepting this paper since it introduces a new way of incorporating object information and it outperforms strong object navigation baselines. Writing is the main issue of this paper. **Post-rebuttal** I read the rebuttal and the other reviews. The rebuttal addresses my concerns to some extent (writing has improved in the revised version, but it still has some issues). So I am going to keep my rating.
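Regarding the SPL question raised in the third review (the apparently missing success indicator): for reference, the standard definition of Success weighted by Path Length (Anderson et al., 2018) is

```latex
\mathrm{SPL} \;=\; \frac{1}{N}\sum_{i=1}^{N} S_i\,\frac{\ell_i}{\max(p_i,\ell_i)},
```

where $S_i \in \{0,1\}$ indicates whether episode $i$ succeeded, $\ell_i$ is the shortest-path length from start to goal, and $p_i$ is the length of the path the agent actually took. Without the binary indicator $S_i$, the expression reduces to the plain length ratio the reviewer objects to.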
This paper addresses the problem of visual object navigation by defining a novel visual transformer architecture, where an encoder consisting of a pretrained object detector extracts objects (i.e. their visual features, position, semantic label, confidence) that will serve as keys in an attention-based retrieval mechanism, and a decoder computes global visual features and positional descriptors as a coarse feature map. The visual transformer is first pretrained (using imitation learning) on simple tasks consisting in moving the state-less agent / camera towards the target object. Then an RL agent is defined by adding an LSTM to the VTNet and training it end-to-end on the single-room subset of the AI2-Thor environment where it achieves state-of-the-art performance. After rebuttal, all four reviewers converged on a score of 6. The reviewers praised the novelty of the method, extensive evaluation with ablation studies, and the SOTA results. Main points of criticism were about clarity of writing and some explanations (which the authors improved), using DETR vs. Faster R-CNN, and the relative simplicity of the task (single room and discrete action space). There were also minor questions, a request for more recent transformer-based VLN bibliography, and a request for a new evaluation on RoboThor. One area of discussion -- where I empathise with the authors -- was regarding the difficulty of pure RL training of transformer-based agents and the necessity to pre-train the representations. Taking all this into account, I suggest this paper gets accepted.
The paper proposes two novel methods for combinatorial black-box optimization (i.e. over an unconstrained binary domain) based on optimistic tree search, one based on a known Lipschitz constant (OLTS) and another one when it is unknown (OCTS). The general idea of the OLTS is to evaluate nodes in a tree with large upper bounds in their subtrees, where the upper bound is based on the Lipschitz constant and the diameter of the subtree. This is extended in OCTS when the Lipschitz constant is not known by searching a superset of nodes that would contain the node in OLTS. Both methods are proven to have linear convergence rates (with a dependence on the Lipschitz constant). Computational experiments show that OCTS outperform several other heuristic-based methods. The black-box methods proposed in the paper are very appealing: they are simple to implement, theoretically grounded, and appear to work well in practice. The approach appears to be original as far as I am aware. The computational section is sufficiently extensive, with six different problem classes and one experiment to illustrate the convergence rates, and the method generally outperforms the baselines. I particularly appreciate the theoretical guarantees and their computational analysis in Section 6.1. The paper could have benefited from a comparison with model-based methods, but I believe it is not too unreasonable to omit them given that they typically have more expensive iterations. The presentation is overall clear, but there are several minor issues that need to be addressed below. Most of my comments below are regarding presentation, which should be fixable. Assuming those are addressed, I recommend acceptance for this paper. No limitations besides the ones discussed above. <doc-sep>This paper presents an algorithm for solving combinatorial optimization problems where the objective function is a "black box" accessible only via an oracle. The algorithm is targeted at problems where this oracle is relatively cheap (as opposed to the standard Bayesian optimization setting), and is accompanied by finite time termination guarantees. The core algorithm relies heavily on Lipschitz constants to guide search and prune the tree; as this constant is often not known, the authors present a variant that instead only relies on the existence of a Lipschitz constant. The authors conclude with a computational analysis of the performance of the algorithms as a function of the number of function evaluations. The paper: presents a novel algorithm in an area of interest to the NeurIPS community, includes interesting theoretical results, and is clearly written. The only weakness I can identify is the lack of a computational comparison against Bayesian optimization techniques (see "Questions"). There is no explicit discussion of potential negative societal impact. <doc-sep>The paper considers the black-box optimization of combinatorial binary functions. The functions are assumed to obey a Lipschitz condition given some metric on the hypercube. For the optimization problem, the authors propose two algorithms, depending on the knowledge of the Lipschitz constant. Both algorithms rely on tree search and optimistic upper bounds. Theoretical guarantees are provided for the convergence of the algorithms. The empirical work show that the algorithm with unknown Lipschitz constant (OCTS) outperforms the considered baselines on a variety of problems. The proposed algorithm is fairly natural given the Lipschitz assumption. 
The case of unknown Lipschitz constant is treated in a similar way as the DIRECT algorithm (although it is not referenced). The theoretical results are straightforward, but nevertheless useful. The binary tree is assumed as provided, but I would assume that the ordering of the indices might have significant influence on the performance. Given the optimistic tree search approach, the problem is somewhat related to the combinatorial bandit problem. The main difference here is that the function is deterministic, which allows much stronger bounds, but I would assume some techniques from combinatorial bandits could carry over. The empirical performance is a strong argument for the paper. The baselines are difficult to evaluate, since there is little detail provided regarding their implementation and parametrization. It is not clear how meaningful the Lipschitz condition is for the practical problems considered (beyond the constant that results from the discrete nature of the problem). <doc-sep>This paper addresses the problem of combinatorial black-box optimization. The solution is built upon a tree-structure search procedure with optimistic search strategy. The contribution of the paper in my opinion is two-fold: 1) Algorithmically, it designs a new combinatorial black-box optimization solver OLTS (and its practical variant, OCTS) by adapting the optimistic strategy applied on tree-search optimizer. 2) Theoretically, it provides convergence analysis on the proposed solver (and its variant OCTS) which is shown to be superior than random search. Strengths: 1) The structure of the paper is clear and the paper is overall well-written. The clarity is in general good, except for a few points that will be discussed in the weakness part. 2) The problem of combinatorial black-box optimization is an important problem that has vast applicability of various domains, including machine learning. 3) The paper provides the first finite-time linear convergence rates for the problem. It is a significant improvement compared to the logarithmic rates of baselines (random search). 4) The empirical results are promising. The algorithm, though simple, has been shown to be outperforming the baselines on a set of benchmark black-box combinatorial optimization problems, including LABS, MIS, Ising, MaxSAT, and Contamination. Weakness: 1) The novelty of the proposed solvers, OLTS and OCTS, is limited. Both the tree-based search and the optimistic strategy have been well studied under similar contexts. The main critique from me is not that the algorithms are not novel, but that the novelty is somewhat overclaimed. For example, the tree based search has been discussed in a few previous papers (e.g., in [39] and also UCT -- UCB for trees). But this has not been acknowledged in the paper. It appears that the tree structure is first proposed in this paper. As another example, the optimistic strategy for estimating the potential of the tree nodes is also adapted from [39]. Though the paper lists three major differences of OLTS/OCTS vs [39], it still seems incremental. Also, it is not clearly explained why these differences are made to adapt to the tree structure and what are the advantages. 2) It is not clear what are the intuitions of l(n) and n_c in the propositions and theorems, so that it is hard to understand how tight the derived convergence bounds are in the respective theorems/propositions. At least from a first look, the bounds do not seem tight, and therefore the theory is not as informative. 
The paper would be stronger if these are better explained/clarified. 3) Bayesian optimization is an important category of methods for black-box combinatorial optimization problems, but it is not included in the set of baselines. Why is it? It would be good to explain. 4) The empirical results are promising in general. One question from me, though, is that, what are the reasons that certain problems are selected for evaluation. For example, reference [18] and reference [41] each provided a set of benchmark problems, but this paper selected a subset from each of these two references instead of evaluating all of the settings in either one of them. It does not seem that the proposed OCTS cannot work on the other problems, e.g., the neural architecture search benchmark which is of potential high interest to the ML community. Minor aspects: 1) The introduction well motivates the paper, but is a bit too condensed. Perhaps better to split it into multiple paragraphs. 2) Line 186: I_h seems to be a typo, it should be I_l 3) Line 270: Proposition A.3 -- is it a typo? The authors claimed that they discussed potential limitations (1.b) and negative societal impacts of the work (1.c) in the Appendix, but I cannot see any obvious discussions of this kind.
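To make the optimistic tree-search idea discussed in these reviews concrete, here is a minimal sketch under explicit assumptions: a binary tree whose nodes fix a prefix of the variables, the Hamming metric as the distance, and the optimistic bound "value at a representative point plus L times the subtree diameter". This is an illustrative reconstruction of the general known-Lipschitz idea, not the paper's exact OLTS (and it does not cover the unknown-constant OCTS case); the representative-completion rule and the budget handling are my own simplifications.

```python
import heapq

def olts_sketch(f, n_vars, lipschitz, budget=100):
    """Sketch of optimistic Lipschitz tree search over {0,1}^n_vars (maximization).

    f:         black-box objective
    lipschitz: Lipschitz constant w.r.t. the Hamming distance
    budget:    rough limit on oracle calls
    """
    def evaluate(prefix):
        x = list(prefix) + [0] * (n_vars - len(prefix))   # representative completion
        return f(x), tuple(x)

    best_val, best_x = evaluate(())
    # max-heap keyed by optimistic bound = f(representative) + L * (number of free bits)
    heap = [(-(best_val + lipschitz * n_vars), (), best_val)]
    calls = 1
    while heap and calls < budget:
        neg_bound, prefix, _ = heapq.heappop(heap)
        if -neg_bound <= best_val:         # no leaf in this subtree can strictly improve
            continue
        for bit in (0, 1):                 # expand: fix the next variable
            child = prefix + (bit,)
            val, x = evaluate(child)
            calls += 1
            if val > best_val:
                best_val, best_x = val, x
            free = n_vars - len(child)
            if free > 0:
                heapq.heappush(heap, (-(val + lipschitz * free), child, val))
    return best_x, best_val

# Toy usage: maximize the number of ones (Lipschitz constant 1 under the Hamming metric)
print(olts_sketch(lambda x: sum(x), n_vars=8, lipschitz=1, budget=60))
```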
This paper proposes two methods for black box optimization of Lipschitz combinatorial binary functions. The reviewers agree that the paper is well written, the methods are sufficiently novel, and that the results are of interest to the NeurIPS community. The main drawback with the paper is that reviewer n1bW felt that the theoretical results are straightforward (but nevertheless useful). Several reviewers also had hoped for comparisons with Bayesian optimization techniques, but during the discussion period it was decided that this comparison can be omitted due to the much higher computational cost of Bayesian methods. I tend to agree with the reviewers that this paper is above the bar for NeurIPS.
The work examines properties of Neural Processes (NPs). More precisely, it examines deterministic NPs and how they form finite-dimensional representations of infinite-dimensional function spaces. NPs learn functions f that best represent/fit discrete sets of points in space. Based on signal-theoretic aspects of discretisation, the authors derive a theoretical upper bound on the frequencies of functions f that can be used to represent the points. The bound depends on the latent dimension/representation size and the finite interval spanned by the points. Simulations are run to test the validity of the upper bound. The authors find that NPs behave like a Fourier transform and decompose the spectrum of the signal. Since the representation learns during training to represent specific frequencies, NPs can be used as band-pass/band-stop filters. The paper is well written, and the basic approach is clearly outlined. The quality of the work and the evaluation are good and support the authors' claims. However, it is not fully clear to what extent the claims translate to other data or generalise well. The finding that NPs interpret points in space as signals and implement a frequency decomposition like Fourier/wavelet transforms seems reasonable. I am not sure, however, whether an application as a filter is economical in terms of computational complexity. The paper provides a strong theoretical foundation for the method, and the authors support their claims by empirical simulation. Also, explainability and, more importantly, interpretability of how methods generate results are essential. So, the message the paper sends is relevant. However, the relevance and significance of the findings, and the consequences thereof, are not clear. <doc-sep>The paper tries to analyze the behavior of Neural Processes in the frequency domain and concludes that such processes can only represent oscillations up to a certain frequency. While drawing a parallel between Neural Processes and signal processing, I think that there is some weakness in the experiments of the paper. In particular, the authors only seem to consider the exponential quadratic kernel to generate examples, which would mostly produce smooth functions, as would sampling linear combinations of Fourier basis functions. I am also unsure how this paper could be helpful to our community in its present form, as it sheds some light on the inner workings of Neural Processes but only in a very limited practical setting.<doc-sep>This paper addresses an interesting and timely problem, which is to understand how Neural Processes work to learn a representation of a function space. Offering a closer investigation into a recently introduced framework, this work will likely be of interest to the ICLR community. The work focuses on the 1-dimensional case and tries to analyze the simplest case in a rigorous way, which I think is a good approach in general. However, I have some concerns about the main claims of this paper, as listed below: - One of the main findings of the paper is an observation that Neural Processes perform a "frequency decomposition". However, I think this is an insufficiently supported, and even misleading, over-statement. Indeed, Figure 2 shows that there are different modes dominated by varying characteristic frequencies, where a higher-rank mode shows a more slowly varying feature; but there is no further evidence that the decomposition is actually based on the frequency of the signal. One would get a similar result by simply doing a Principal Component Analysis too.
When you say "frequency decomposition" it carries a clear mathematical meaning, and it is a much stronger statement than what the paper reports empirically. - That said, I agree that the empirical observations are interesting. Perhaps the observations in the paper's experiments may be better described in a frame of global mode decomposition (CNP) vs. local feature detection (NP)? - I also think that the claim about the theoretical upper bound on the frequency is overstated, the way it is stated currently. The validity of the statement (Theorem 3.1) really depends on the assumption of uniform sampling, which is mentioned as a note after Theorem 3.1. Of course, I fully agree that it is an important starting step to get rigorous results in simplified conditions. But those conditions should be mentioned as part of the statement, especially when it is highly likely that the conditions are not met in the use case (there is no reason to expect that the x values in the context set is close to uniform). For example, it is possible to encode functions with a localized feature whose (local) frequency is higher than your derived bound, by using more samples around that high-frequency feature. This paper will get views, partly because it is actually asking an interesting question, and partly because of the boldness and attractiveness of the claims made. How exciting is it to discover a naturally emerging Fourier transform? Except... that's not exactly what one can say just yet (I think). I believe the authors should either support the paper's claims by further work, or tone down their overall framing — major changes either way. While I think this work is headed to a promising direction, given the concerns described above, I recommend a rejection at this time. **UPDATE:** I appreciate the authors' responses and the engaged discussion. However, I still think that the claims of the paper are not sufficiently supported by the presented results, and maintain my original rating.<doc-sep>This paper presents an analysis on the neural processes in the signal processing point of view and gives a bound on the highest frequency of the function that a neural process can represent. I recommend to reject this manuscript. My comments are below. The key point of this work is Theorem 3.1. However the theorem itself is just a direct outcome of the Nyquist–Shannon sampling theorem, and it is generally true to not only neural processes but also to all the other approaches. Meanwhile, the authors did not talk about the relationship quantitatively between the representability and the error tolerance in Definition 3.1. In addition, the analysis is limited to only scalar-valued function on a 1D interval. The writing could also be improved. Concerns: - The definition of neural processes in the background section is confusing. Despite the way of defining a map, P is a mathematical object defined by a set of tuples and a map, meaning that the neural processes are also defined by data. In the original paper, the neural processes were however defined as random functions. - In the background section, the words say 'some sources define ...'. Could the authors give the sources? - In Def 3.1, what do the authors mean by 'discrete measurements'? - In the experiment section, do the authors mean sampling from a Gaussian process by saying GP prior? I don't see a GP plays the role of prior in terms of Bayesian inference. - The examples given in the experiment section lack quantitative results. 
It would be better to evaluate the reconstruction by showing the posterior or predictive distribution instead of single reconstructions. - In Sec. 4.2, how did the authors sample a regular grid on the 2D plane, given that y is determined by x? - Eq. 11 is defined in the appendix. Better to use separate numbering.
The paper analyses the behaviour of Neural Processes in the frequency domain and, in particular, how they suppress high-frequency components of the input functions. While this is entirely intuitive, the paper adds some theoretical analysis via the Nyquist-Shannon theorem. But the analysis remains too generic, and it is not clear that it will be of broad interest to the community.
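To make the sampling-theoretic argument in this thread concrete, here is a minimal worked version of the kind of bound at issue. It is an illustration only, assuming $k$ uniformly spaced context points on an interval $[a, b]$ (the uniform-sampling assumption the reviewers flag); the paper's exact statement and constants may differ. The effective sampling rate is $\\nu_s = k / (b - a)$, so the Nyquist-Shannon theorem limits faithfully representable content to

$$ f_{\\max} \\leq \\frac{\\nu_s}{2} = \\frac{k}{2(b - a)}. $$

This also makes explicit why the bound is tied to uniform sampling: concentrating context points locally can resolve higher local frequencies, which is exactly the caveat raised above.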
Summary: This paper proposes a new method that adaptively merges intervals to form a discrete action space where on each interval Q_I values are learned via deep neural networks. Then it applies ready-methods designed for discrete action spaces to do off-policy evaluation. ########################################################################## Reasons for score: The paper offers a new way to apply methods designed for discrete action spaces onto continuous action spaces and it seems to perform better than the two chosen baselines as seen from the experiment results. Although the authors mentioned problems with the baseline models quickly, it would be nice to see a more in-depth analysis in the experiments to demonstrate these problems that this paper has set out to overcome. It is also not very clear to me when and why DJQE performs better than the baselines (does it always perform better than the baselines?). I gave a conservative score 4 but I'm willing to change my evaluation if convinced. ########################################################################## Pros: The paper provided theoretical support to the proposed method by proving its consistency under two reasonable assumptions. The method was tested on both synthetic data and simulated real world data. ########################################################################## Cons: Overall the paper is not very clear to me. It would be nice to see more in-depth theoretical analysis on the main advantages of DJQE compared to the baselines, the lack of which generates the following questions: - Will this method always achieve lower biases than baselines on new datasets? I'm not sure about the quality of evaluation from a simulation model on the personalized dose finding application. - What are the potential problems/limitations of DJQE if there are any? (Although these questions are commonly raised on methods that rely on experimental proofs of their superior performance, they seem particularly relevant for this paper.) ########################################################################## Questions during rebuttal period: - How does the computational cost of DJQE scale with a decreasing maximum threshold of bias? - How accurate is the simulation model trained on Warfarin? <doc-sep>This paper considers the problem of off-policy evaluation with continuous actions. The main idea is to first using multi-scale change point detection to discretize the action space and then apply traditional IPW or DR methods to estimate the value. The DJQE method is theoretically analyzed under both the cases that the Q function is either a piecewise function or a continuous function. For continuous function, it is not surprising that as the number of splits m goes to infinity as n, the estimation is consistent, while additional results in Theorem 2 also shows that for limited m, the estimator can also be shown as a uniform approximation of the Q value. Experiments consider both a toy dataset and a real problem in personalized does finding, and the results show that the DJQE method is superior than existing methods for continuous Q evaluation. The paper is clearly written and easy to follow. I only have a few comments: 1. In the experiments, since computing the optimal bandwidth is very time consuming for the baseline methods, it would good to provide a detailed computation cost comparison. 2. As mentioned in the method part, m is initially set to be proportional to n, and the final partition size is much smaller than m. 
Would the authors shows these detailed numbers in the experiments? 3. It could be great if more real-world problems can be evaluated in the current experimental section, such as the dynamic pricing example introduced previously. <doc-sep>Summary This paper proposes a new method for offline evaluation when the action space is continuous, one dimensional. This overcomes the drawbacks of the kernel based method, which cannot be applied to non-smooth Q functions and requires heavy computation to optimize the bandwidth. The proposed method can be applied to discontinuous Q functions like step functions, and achieves smaller bias. This is made possible by the adaptive jump q learning method. Pros While the kernel method requires a single bandwidth to control the bias and variance of the value estimator, the proposed method adapts to the shape of the Q function by dividing the action space in an adaptive way, so that the MLP fitted in each interval of the action space approximates well the real Q function. Hence, the intervals can have possibly different lengths according to the shape of the true Q function. A multi-scale change point detection method is used for determining the intervals, which requires only a linear computational cost. Experiment results are convincing. Cons 1) In Algorithm 1, “Collect cost function step” computes MLP regressor for every possible interval. Hence, computation will become heavy when the number of initial intervals (m) is large. Authors should add discussion about this point. 2) Some notations are confusing Minor comments 1) Gamma appears before it is defined. 2) L is both the number of subsets and the numer of layers in neural networks. Are they meant to be the same (as they increase with n), or are they different? In the latter case, they should be distinguished. <doc-sep>Summary of paper: The main contribution of this paper is a new algorithm to learn the expected reward function for a given target policy using the historical data generated by a different behavior policy in continuous action domains. All current Offline-Policy Evaluation (OPE) methods for handling continuous action domains use a kernel function to extend Inverse Probability Weighting (IPW) or Doubly Robust (DR) approaches for discrete action domains. The algorithm proposed in this work adaptively discretizes the action space by combining methods in multi-scale changepoint detection, multi-layer perceptron regression and OPE in discrete action domains. The finite sample performance of the proposed method, known as Deep-Jump Q-Evaluation (DJQE), is compared to that of two kernel-based methods, one due to Kallus and Zhou (2018) and another due to Colangelo and Lee (2020), on synthetic as well as real-world data. To generate synthetic data, four scenarios are considered, where in each case the Q-function is continuous in the action domain or is a piecewise function of the action. In almost all of these cases, DJQE outperforms the two kernel-based methods. Similarly, when applied to real-world Warfarin data (after calibration), DJQE outperforms the two kernel-based methods with respect to the bias, standard deviation and mean squared error, even when the sample size is small (n=50). The average runtime of DJQE in each scenario (for synthetic or real-world data) is about 5 minutes. Plus points: - The experimental results seem to demonstrate quite convincingly that DJQE outperforms the two kernel-based methods in almost all cases. - The methodology seems sound. 
- The theoretical results also appear correct and prove the soundness of the method for a fairly wide range of functions - those that are continuous in the feature space and action domain, as well as those that are piecewise constant. - The method can model jump discontinuities in the Q-function. Questions: - Why is it reasonable to assume that the Q-function can be well-approximated using piecewise linear combinations of MLPs? - How is the performance of DJQE affected by the choice of the regularization parameter \\gamma? Minor comments/questions: - Page 2, line -2: exists -> to exist - Page 3, line 4 of Section 2.3: segments -> segment - Page 3, line -4: Was there a reason for choosing the logarithm function here? - Page 4, lines 12 to 13: Is there a theoretical justification for such a choice of m, or is it based on empirical observations? Also, to what extent does the performance of DJQE depend on the initial choice of m? - Page 5, Equation (4): Could it be justified why the minimizer is unique? - Page 5, third line after Equation (4): Should it be Figure 3, Appendix A? (There is no Figure A.) - Page 5, line -4: How is \\hat{Q} used in the solution of Equation (5)? - Page 6, Assumption 1: "...number of nodes [in] each hidden layer..." - Page 6, last line of the statement of Theorem 1: Should D be D_0? - Page 6, lines -7 to -6: I did not fully understand what this means; do the change points of \\hat{D}^{\\ell} vary with m? - Page 7, line -7: data -> dataset - Page 7, line -5: How was the exponent 0.2 chosen? - Page 7, last line, and Page 8, line 9 of Section 5.2: "...with 10 hidden [layers]..." - Fifth reference on Page 10: Double occurrence of "Technical report". * Update after reading author(s)' response: Thank you very much for the detailed answers to my questions (as well as the other reviewers' comments/questions). I have upgraded my score; wishing you all the best.
The paper considers the OPE problem under the contextual bandit model with continuous action. They studied the model of a piecewise constant value function according to the actions. The assumption is new, though still somewhat restrictive as it requires the piecewise constant partitions to be the same for all x. The proposed algorithm estimates the partitions, and then used it to build a doubly robust estimator with stratified importance sampling (fitting an MLP for each partition separately). The reviewers have mixed views about the paper. The following is the AC's evaluation based on reading the paper and consolidating the reviewers' comments and the authors' responses. Pros: - The algorithm is new and it makes sense for the new problem setup (though computationally intractable) - The experimental results outperform the baseline and reinforces the theory. But it's a toy example at best. Cons: - The method is called "Q-learning" but it is somewhat disappointing to see that it actually applies only to the contextual bandit model (without dynamics). There is quite a bit of branding issues here. I suggest the authors to revise it to reflect the actual problem setup. - The estimator is assumed to be arg min, but the objective function is non-convex and cannot be solved efficiently in general, e.g., (3) involves searching over all partitions... and (4) involves solving neural network partitions. In other words, the result applies to a hypothetical minimizer that the practical solvers may or may not obtain (the authors cited Scikit-Learn for the optimization algorithm and claims that the optimization problem can be solved, which is not the case ... the SGD algorithm can be applied to solve it, but it does not necessarily find you the solution). - The theory is completely asymptotic and generic. There is no rate of convergence specified, and no dependence on the number of jumps |D_0| at all in Theorem 1. - Theorem 3 is obnoxiously sloppy. The assumptions are not made explicit (do you need Assumption 1 and 2, what is the choice of \\rho? ) The notion of "minimax rate" is not defined at all. Usually the minimax rate is the property of problem setting, i.e., Min over all algorithms, and Max over all problems with in a family. However, in the way the authors described the results in Theorem 3, it says the "the minimax convergence rate of kernel-based estimator is Op(n^{−1/3})." which seems to be restricting the algorithms instead. Such non-typical choices require clear definitions and justification. Based on what is stated, it really appears that the authors are just comparing upper bounds of the two methods. I looked at the appendix and while there is a "lower bound analysis", the bound is not information-theoretical, but rather a fixed example where an unspecified family of algorithms (I think it is a specific kernel smoothing method with a arbitrary choice of the bandwidth parameter h) will fail. Suggestions to the authors: - Instead of a piecewise constant (and uniformly bounded) function, why not consider the total variation class, which is strictly more general and comes with the same rate? 
- For formalizing the lower bound, I suggest the authors look into classical lower bounds for linear smoothers, e.g., Donoho, Liu, and MacGibbon (1990), which clearly illustrate that kernel smoothing-type methods do not achieve the minimax rates, and that wavelet-based approaches, locally adaptive regression splines, and the fused lasso (you can think of the Haar wavelets as basis functions for piecewise constant functions) do. The authors can improve the paper by ensuring that the theoretical parts are clearly and rigorously presented, and perhaps by working out a more useful finite-sample analysis that depends on the model parameters of interest.
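To make the pipeline discussed in this thread concrete, the sketch below shows the general recipe the reviews describe: partition a one-dimensional action space into intervals, fit a value regressor per interval, and evaluate a target policy through the induced discrete actions. It is a simplified illustration, not the paper's implementation: the partition is fixed rather than selected by multi-scale change-point detection, only a plug-in (direct-method) estimate is formed rather than the IPW/doubly robust versions, and all data, names, and the `target_policy` function are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))                          # contexts
a = rng.uniform(0.0, 1.0, size=n)                    # behaviour-policy actions on [0, 1]
r = (a > 0.5).astype(float) + 0.1 * x[:, 0] + rng.normal(scale=0.1, size=n)  # rewards

# A fixed partition of the action interval (the paper learns this adaptively).
edges = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
seg = np.clip(np.searchsorted(edges, a, side="right") - 1, 0, len(edges) - 2)

# Fit one regressor per interval: Q_I(x) approximates E[r | x, a in I].
models = []
for i in range(len(edges) - 1):
    m = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    m.fit(x[seg == i], r[seg == i])
    models.append(m)

def target_policy(xi):
    """A hypothetical deterministic target policy mapping a context to an action."""
    return 0.8 if xi[0] > 0 else 0.3

# Plug-in estimate of the target policy's value via the interval containing its action.
vals = []
for xi in x:
    i = int(np.clip(np.searchsorted(edges, target_policy(xi), side="right") - 1,
                    0, len(edges) - 2))
    vals.append(models[i].predict(xi.reshape(1, -1))[0])
print("estimated target-policy value:", float(np.mean(vals)))
```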
This paper proposes a class of model called monotone deep Boltzmann machines, where the underlying potentials are parameterized (e.g., by CNNs) such that they obey some monotonicity constraint. This constraint ensures that the inference problem has a global optimum, which can be found using some generalized variant of parallel mean field. The method is inspired from monotone DEQ, previously proposed by Winston & Kolter (2020). Experiments on a joint task of image denoising and classification show that the proposed method can effectively model complex data distributions such as images. On one hand, this paper has some significant strengths. First, the paper is fairly well written in general. Second, while this work is heavily inspired by Winston & Kolter (2020), I find that the connection between mean field and monotone DEQ is quite interesting (although relatively straightforward), and the proposed method is theoretically well founded. On the other hand, the paper also has some limitations. 1. First and foremost, I find the experiments quite limited, which is also acknowledged by the authors. A more diverse set of applications would have made the paper much more solid. At the very least, I would have expected some experimental comparison with restricted Boltzmann machines (not to mention also its variants such as extensions to multi-label). The proposed model is theoretically sound, but it is not clear why one should use it. 2. The paper also has some minor presentation issues, but before ending my review with them, I would like to have some comments on the bibliographical discussion. 2a. Since the convergence of mean field is presented as an emphasis in the paper, I would like to point out a very recent NeurIPS 2021 paper on the topic: "Regularized Frank-Wolfe for Dense CRFs: Generalizing Mean Field and Beyond" (https://arxiv.org/abs/2110.14759). In this paper they view parallel mean field as an instance of the generalized conditional gradient method and thus obtain different convergent variants of parallel mean field with different step-size rules. It seems to me that these variants do not have the same limitations as Krahenbuhl's and Baqué's as discussed in this paper (even though their resulting algorithms seem to be similar to Baqué's at first glance). Could you give some comments on this? Including such discussion would give the reader a broader and more up-to-date view of the current state of the art. (Of course no experimental comparison would be needed, that's not the focus of the paper). 2b. "Numerous works also try to combine deep neural networks with conditional random fields (CRF) (Arnab et al., 2018; Schwartz et al., 2017; Zheng et al., 2015)." Even though this is just a minor detail in the current paper, I would like to take this opportunity to raise an important issue regarding credit assignment. The first to view "CRFs as RNNs" for the dense CRFs of Krahenbuhl & Koltun (2011) was actually Krahenbuhl & Koltun (2013) and not Zheng et al. (2015). Krahenbuhl & Koltun (2013) had two major contributions in their paper: (a) convergent parallel mean field, and (b) parameter learning of dense CRFs with reverse-mode automatic differentiation (i.e., viewing "CRFs as RNNs" and backpropagating through time). Unfortunately, Krahenbuhl & Koltun (2013) have been often credited with (a) only and not (b), while (b) is to me even more significant than (a). This is not fair, and I think this happened because some previous work didn't cite them correctly or in a misleading manner. 
For example, Arnab et al., (2018) didn't even cite this paper (even though they did cite in their previous work (Zheng et al., 2015), not sure why they removed the citation from the journal version). The fact that Arnab et al., (2018) completely ignored Krahenbuhl & Koltun (2013) and credited Zheng et al., (2015) for viewing "CRFs as RNNs" made their presentation misleading (and unacceptable to me). I would like to encourage the authors to give proper credits to Krahenbuhl & Koltun (2013) whenever they have an opportunity to do so. Starting with the current submission, I would suggest to slightly change the above sentence to the following, for example: "Numerous works also try to combine conditional random fields (CRF) with pixel-wise classifiers (such as neural networks) to obtain fully end-to-end models (Krahenbuhl & Koltun, 2013; Schwartz et al., 2017; Zheng et al., 2015)." But of course it is up to the authors to decide. 3. Some comments on the presentation: Major: In the abstract: "In addition, we show that our procedure outperforms existing mean-field approximation methods while avoiding any issue of local optima." I guess the authors are referring to the comparison with Krahenbuhl's and Baqué's that is presented in the appendix. If something is mentioned in the abstract, then it's important enough to be included in the main content instead of being left in the appendix. I would suggest to either remove the above sentence from the abstract, or to move such comparison from the appendix to the main content (the former seems more appropriate to me, since this is not the focus of the paper). Minor: - Eq. (1) should end with a comma instead of a dot. - Page 3, 1st paragraph: "proposed a deep parameterization of MRF. However, their..." --> - Page 3, 1st paragraph: "proposed a deep parameterization of MRF, but their..." - Section 3.1, 1st paragraph: lines 3-4 are not clear to me. - Page 6, before Eq. (13): "similarly-factored A matrix" --> "similarly-factored matrix A" Interesting and theoretically sound model. The set of experiments is quite modest, in addition to some minor presentation issues. <doc-sep>In this paper the authors propose a restricted parameterization of the Boltzmann machine that guarantees that for any set of observations, the mean field objective has a single global optimum. Furthermore, that global optimum can be provably achieved using damped parallel mean-field updates, which make inference efficient. To turn inference into learning, the model is treated as a supervised learning model: some of its variables are considered to be observed inputs and some of its variables are considered to be target outputs (known at test time). The usual, marginal cross-entropy loss is the optimization target for learning. The paper is well written and easy to follow. Most of its contents come from existing literature, but this work nicely puts those existing pieces (single fixed point, parallel updates) together, providing a probabilistic interpretation as a Boltzmann machine that is new. While this paper emphasizes how the proposed approach enables the use of general Boltzmann machines (BM) and not just stacked restricted BMs, the resulting model might actually be more restricted than the stacked RBMs that it intends to improve upon. It is true that the proposed model can contain intra-layer and skip-layer connections that a DBN lacks, but all the parameters are restricted so as to produce a monomodal posterior approximation for _any_ partial evidence. 
The true posterior, even for a single-layer RBM, can be multimodal if the parameters are not restricted. This means that, as a modeling tool, the proposed BM with restricted weights might be less flexible than a DBN. Many densities of interest are multimodal, particularly as we reduce the available evidence. In fact, in the absence of evidence, any useful BM will have to be multimodal (for instance, to be able to sample different MNIST digits from it). The proposed mechanism for training is also lacking in that it only allows for marginal supervised learning: it cannot be used for unsupervised learning, which is the typical mode of operation for RBMs and DBNs. Basically, the tasks that it can solve need to be crafted in such a way that the evidence provided is enough to disambiguate a single mode of the posterior. For instance, if we want to perform MNIST digits inpainting and we provide only the top 25% of the image showing a semicircle ⌒, possible completions could be 0, 2, 3, 6 8, 9. This method would fail at this task since it would consistently default to a single digit, or even worse, a single combination of the possible completions. Thus, the presented paper does provide an efficient mechanism for conditional training of parameter-restricted BMs (and does a good job at it), but the use cases in which it can be applied are severely limited, both due to the type of training and parameters it can use. The experimental section does not contain meaningful comparisons with other methods/baselines: - Baseline 1: Use your loss function with damped parallel mean field inference (i.e. consider damping a hyperparameter and do not impose any restriction on the parameters of the BM). - Baseline 2: Use a DBN (less raw expressive power, but unrestricted in parameters and with a more proper loss function). So it is difficult to gauge the practical advantage in the provided examples. Minor comments and questions: - You show that the mean field inference problem has a single global optimum. But is the true posterior monomodal under this parameterization? That would be a stronger result and convenient to know. - Is the query (the split between observed variables and variables one wants to predict) fixed throughout training? Although this is not explicitly pointed out in the theoretical part of your paper (it seems to be fixed, citing Domke 2013), this split could be different for each training sample, which seems to be the case based on your experiments. Using different splits is called "query training" in this AAAI 2021 paper "Query Training: Learning a Worse Model to Infer Better Marginals in Undirected Graphical Models with Hidden Variables", which seems to propose a very similar approach, although using a different type of inference. It'd be good to clarify which approach you are using in the description of training. - The definition of the function in Eq. (11) is a bit confusing because of how the domain is included. Could you define it by parts, or define I(.)? - Which is the value of alpha that you use for your experiments? - The figure "92.95% test accuracy" corresponds to the 10-way labels of each digit? Or to the 4-way categories of the pixels? - Typo: "that owning to the restricted" -> owing Pros: This paper does a good job at providing a mechanism for inference in (parameter restricted) BMs with convergence guarantees, as well as an efficient method to learn the parameters of these BMs. 
Cons: The proposed method cannot be applied in many settings in which BMs can (unsupervised learning, sampling, use of multimodal posteriors). Little experimental validation of the usefulness of the convergent inference. So the settings in which this approach can be used is very limited, but within that setting, it provides the required details for efficient training and robust guarantees for inference. <doc-sep>This paper theoretically shows that the mean-field equation for a certain family of Boltzmann machines with hidden variables, called the monotone DBMs, can be modeled as the recently proposed monotone Deep Equilibrium (DEQ) model. This paper further characterizes properties of such Boltzmann machines and its training, and shows its behavior in experiments on MNIST and CIFAR-10. ## Strength The strength of this paper is the technical contribution of finding the connection between the DBMs and the DEQ model by characterizing the monotone DBMs. I think this is a good contribution as it can be essential for further development of BMs, considering that the current progress of BMs is not so rapid, in my understanding. The quality of presentation is also good, and this paper is clearly written overall. I have just a minor comment: - Since the current explanation of a block hollow matrix is vague, please mathematically define it for the self-completeness. ## Weaknesses The significance of this paper is not high, and evaluation is weak. In particular, the practical advantage of the proposed monotone DBMs is not clear. Although it is true that the family of BMs to which the contrastive divergence algorithm can be applied is limited, any BMs with any connection patterns of hidden variables can be trained by directly applying Gibbs sampling, which of course includes the monotone DBMs. Therefore, the monotone DBMs have no merit with respect to the effectiveness of inference, and I guess the only practical advantage of the monotone DBMs can be the efficiency. However, there is neither analysis of computational complexity nor empirical runtime comparison to such a straightforward approach. In addition, it has been already proven that RBMs can represent any distribution, therefore, from the viewpoint of the representation power, there is no difference between RBMs and monotone DBMs. Of course, I agree that monotone DBMs can be more effective than RBMs and existing DBMs in practice, for example, monotone DBMs can achieve more accurate inference with less parameters than RBMs. However, there are no such comparisons in this paper. I am happy to increase my score if the above my concerns are properly addressed by the authors' response. This paper potentially includes an interesting technical contribution, while the significance is not convincing and the evaluation is weak. <doc-sep>This paper proposes a new family of monotone deep Boltzmann machines where the pairwise potentials satisfy a monotonicity condition, giving rise to efficient mean-field iteration with provable convergence guarantees. The convergence is obtained by drawing connections with monotone deep equilibrium models. Small-scale experiments are done as proof of concept. The paper is very well-written and easy to read. However, I found the novelty aspect of the work to be a bit lacking: - Aside from the new parameterization Eq (3),(4) introduced to satisfy the monotonicity condition, the method of this paper seems like a straightforward combination of [Krahenbuhl & Kolten 2013], [Baque et al. 2016], and [Winston & Kolter 2020]. 
- It is also unclear how restirctive this parameterization is (which itself is quite simple) within all possible pairwise potentials that satisfy the monotonicity condition. - The parallel updates and the convergence proof are almost exactly the same as [Winston & Kolter 2020], except for the extension to softmax operation. I would be happy with the novelty aspect if convincing experiments results are shown. Sadly this does not seem to be the case: - The practical benefit of deep Boltzmann machine compared to more traditional neural architectures (e.g. CNN for image classification) is not clear to me and has not been highlighted in the paper. When would someone use deep BM instead of the alternatives? The experimental results do not seem to answer this question. - Although deep Boltzmann machine can be more flexible for modeling different conditional distributions without retraining, it seems to come at the cost of being much harder to train while relying on mean-field approximation. I'm wondering how crude the mean-field approximation of the posterior distribution is in the current paper's setting, which has not been discussed. - The experiments are very small-scaled. The images are all with very low-resolution. This seems to suggest the impracticality of deep Boltzmann machine. In CIFAR-10 experiment, the test accurarcy is only 58%, which is a lot lower than using conventional neural architectures. - In Eq (20) a very arbitrary scaling is used after convergence to the mean-field solution. This seems like an ad-hoc fix for a method that doesn't really work due to the monotonicity constraint. I'd be interested in seeing experimental comparison between the scaled version and the original version. - For the patch case, the model works better without the monotonicity constraint. This seems to be against the whole point of the paper. - The proposed method does not seem to have significant improvement compared to past works in this line of work (e.g. diagonal entries in Table 3, 4). Additional comments: - Page 3, "We remark the readers upon ..." this doesn't sound grammatically correct. - Sec 3.5 mentions the model is trained directly to output correct marginals, instead of the usual likelihood maximization, which can be intractable. What is lost in this simplification (in addition to mean-field approximation)? Matching only marginals seems very coarse to me. - How to train the proposed model on batches of images? If I understand correctly, the current training procedure would sample a single image, split it into $x_h$ and $x_o$, run mean-field inference given $x_o$ in a differentiable manner, then backpropagate through loss $\\ell(q_h^*, x_h)$. Are multiple mean-field inferences run in parallel? If so do they use the same number of iterations? If not, I would imagine the training to have very high variance. - At the end of Sec 3.5, at the top of page 8, why is $g(q_h^*)$ not the damped version? This paper is well-written but its contributions are incremental with somewhat weak experimental results.
This is an interesting contribution to the Boltzmann machine (BM) literature that makes a nice connection to DEQ models. On a positive note, reviewers found that it was well-written, clear, and interesting. Unfortunately, there were significant concerns with the manuscript that were not fully addressed in the revision: inappropriate or incomplete baselines, insufficient credit given to previous works, and the fact that this model is limited as compared to its BM relatives. I would recommend that the authors take into account the reviewers' feedback in a revision of the work.
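To ground the discussion in this thread, the following is a minimal sketch of the damped parallel mean-field iteration the reviews refer to, written for a pairwise binary Boltzmann machine. It is not the paper's code: the weight matrix below is arbitrary (so the monotonicity condition that guarantees a unique fixed point is not enforced), the softmax/multi-class case is omitted, and the damping factor `alpha` is a hypothetical choice.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def damped_mean_field(W, b, alpha=0.5, iters=200, tol=1e-8):
    """Parallel mean-field updates q <- (1 - alpha) * q + alpha * sigmoid(W q + b)."""
    q = np.full(b.shape, 0.5)
    for _ in range(iters):
        q_new = (1 - alpha) * q + alpha * sigmoid(W @ q + b)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q

rng = np.random.default_rng(0)
n = 20
A = rng.normal(scale=0.2, size=(n, n))
W = (A + A.T) / 2          # symmetric pairwise weights
np.fill_diagonal(W, 0.0)   # no self-connections
b = rng.normal(size=n)
q_star = damped_mean_field(W, b)
print(q_star[:5])          # approximate marginals of the first five units
```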
This paper provides a theoretical analysis for batch normalization with gradient descent (GDBN) under a simplified scenario, i.e., solving an ordinary least squares problem. The analysis shows that GDBN converges to a stationary point when the learning rate is less than or equal to 1, regardless of the condition number of the problem. Some practical experiments are carried out to justify their theoretical insights. The paper is in general easy to follow. Pros: This paper provides some insights for BN using the simplified model. 1. It shows that the optimal convergence rate of BN can be faster than vanilla GD. 2. It shows that GDBN doesn't diverge even if the learning rate for trainable parameters is very large. Cons: 1. In the main theorem, when the learning rate for the rescaling parameter is less than or equal to 1, the algorithm is only proved to converge to a stationary point for the OLS problem rather than to a global optimum. 2. To show convergence to the global optimum, the learning rate needs to be sufficiently small. But it is not specified how small it is. Overall, I think this paper provides some preliminary analysis for BN, which should shed some light on BN. However, the model under analysis is very simplified and the theoretical results are still preliminary.<doc-sep>The paper presents an analysis of the batch normalization idea on a simple OLS problem. The analysis is interesting as presented but several key questions remain, as described below. It is unclear that these questions are answered to the point where the insight gained can be considered transferable to BN in large Neural Network models. - The reason why the auxiliary variable 'a' is included in the formulation (7) is unclear. The whole reason for using BN is to rescale intermediate outputs to have an expectation of zero and variance of one. The authors claim that BN produces "order 1" output and so 'a' is needed. Can you please explain this better? - The scaling proposition 3.2 is claimed to be important, but the authors don't provide a clear explanation of why that is so. Two different settings of algorithms are presented where the iterates should roughly be in the same order if input parameters of the formulation or the algorithm are scaled in a specific way. It is unclear how this leads to the claimed insight that the BN algorithm is insensitive to input parameters such as the step length due to this proposition. Also, where is the proof of this proposition? I couldn't find it in the appendix, and I apologize in advance if that's an oversight on my part. - The 'u' referred to in eqn (14) is the optimal solution to the original OLS problem, so it has the form H^{-1} g for some g that depends on input parameters. Doesn't this simplify the expression in (14)? Does this lead to some intuition on how the condition number of H^* relates to H? Does this operation knock off the highest or lowest eigenvalue of H to impact the condition number? - Additionally, it is bad notation to use two-letter function names in a mathematical description, such as BN(z). This gets confusing very fast in theorems and proofs, though the CS community seems to be comfortable with this convention. <doc-sep>The authors analyze the convergence properties of batch normalization for the ordinary least squares (OLS) objective. They also provide experimental results on the OLS objective as well as small-scale neural networks.
First of all, understanding the properties of batch normalization is an important topic in the machine learning community so in that sense, contributions that tackle this problem are of interest for the community. However, this paper has a significant number of problems that need to be addressed before publication, perhaps the most important one being the overlap with prior work. Please address this point clearly in your rebuttal. 1) Overlap with Kolher et al. 2018: The authors erroneously state that Kolher et al. considered the convergence properties of BNGD on linear networks while after taking a close look at their analysis, they first derive an analysis for least-squares and then also provide an extension of their analysis to perceptrons. The major problem is that this paper does not correctly state the difference between their analysis and Kolher et al who already derived similar results for OLS. I will come back to this aspect multiple times below. 2) Properties of the minimizer The authors should clearly state that Kolher et al. first proved that a^* and w^* have similar properties to Eq. 8. If I understand correctly, the difference seem to be that the algorithm analyzed in Kohler relies on the optimal a^* while the analysis presented here alternates between optimizing a and w. Is this correct? Is there any advantage in not using a^*? I think this would be worth clarifying. 3) Scaling property I find this section confusing. Specifically, a) The authors say they rely on this property in the proof but it is not very clear why this is beneficial. Can you please elaborate? b) It seems to me this scaling property is also similar to the analysis of Kolher et al. who showed that the reparametrized OLS objective yields a Rayleigh quotient objective. Can you comment on this? c) The idea of “restarting” is not clear to me, are you saying that one the magnitude of the vector w goes above a certain threshold, then one can rescale the vector therefore going back to what you called an equivalent representation? I don’t see why the text has to make this part so unclear. Looking at the proof of Theorem 3.3, this “property” seem to be used to simply rescale the a and w parameters. d) The authors claim that “the scaling law (Proposition 3.2) should play a significant role” to extend the analysis to more general models. This requires further explanation, why would this help for say neural networks or other more complex models? 4) Convergence rate It seems to me that the results obtained in this paper are weaker than previous known results, I would have liked to see a discussion of these results. Specifically, a) Theorem 3.3 is an asymptotic convergence result so it is much weaker than the linear rate of convergence derived in Kolher et al. The authors require a sufficiently small step size. Looking at the analysis of Kolher et al., they show that the reparametrized OLS objective yields a Rayleigh quotient objective. Wouldn’t a constant step size also yield convergence in that case? b) Proposition 3.4 also only provides a local convergence rate. The authors argue BNGD could have a faster convergence. This does seem to again be a weaker result. So again, I think it would be very beneficial if the authors could clearly state the differences with previous work. 5) Saddles for neural nets The authors claim they “have not encountered convergence to saddles” for the experiments with neural networks. How did you check whether the limit point reached by BNGD was not a saddle point? 
This requires computing all the eigenvalues of the Hessian which is typically expensive. How was this done exactly? 6) Extension of the analysis to deep neural networks The analysis provided in this paper only applies to OLS while Kolher et al. also derived an analysis for neural networks. Can the authors comment on extending their own analysis to neural nets and how this would differ from the one derived in Kolher et al.? 7) Experiments How would you estimate the range of suitable step sizes (for both a and w) for BNGD for a neural network?
The reviewers agree that providing more insight into why batch normalization works is an important topic of investigation, but they all raised several problems with the current submission which need to be addressed before publication. The AC thus proposes "revise and resubmit".
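As a concrete reference point for the Rayleigh-quotient remark in the discussion above, here is a short reconstruction of the BN-reparametrized least-squares objective. It is assembled from the reviews' description and standard notation (zero-mean inputs $x$ with $H = \\mathbb{E}[xx^\\top]$, targets $y$, $g = \\mathbb{E}[xy]$), so it may not match the paper's exact formulation. With the BN predictor $a\\, x^\\top w / \\sqrt{w^\\top H w}$,

$$ L(a, w) = \\tfrac{1}{2}\\mathbb{E}[y^2] - a\\,\\frac{g^\\top w}{\\sqrt{w^\\top H w}} + \\tfrac{a^2}{2}, $$

so minimizing over $a$ gives $a^*(w) = g^\\top w / \\sqrt{w^\\top H w}$ and the reduced objective

$$ L(w) = \\tfrac{1}{2}\\mathbb{E}[y^2] - \\tfrac{1}{2}\\,\\frac{(g^\\top w)^2}{w^\\top H w}, $$

a generalized Rayleigh quotient in $w$, which is presumably the structure that the comparison with Kohler et al. (2018) turns on.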
In this paper, the authors propose a method for dimensionality reduction of image data. They provide a structured and deterministic function G that maps a set of parameters C to an image X = G(C). The number of parameters C is smaller than the number of free parameters in the image X, so this results in a predictive model that can be used for compression, denoising, inpainting, superresolution and other inverse problems. The structure of G is as follows: starting with a small fixed, multichannel white noise image, linearly mix the channels, truncate the negative values to zero and upsample. This process is repeated multiple times and finally the output is squashed through a sigmoid function for the output to remain in the 0..1 range. This approach makes sense and the model is indeed more principled than the one taken by Ulyanov et al. In fact, the DIP of Ulyanov et al. can hardly be considered "a model" (or a prior, for that matter), and instead should be considered "an algorithm", since it relies on the early stopping of a specific optimization algorithm. This means that we are not interested in the minimum of the cost function associated with the model, which contradicts the very concept of "cost function". If only global optimizers were available, DIP wouldn't work, showing its value is in the interplay of the "cost" function and a specific optimization algorithm. None of these problems exist with the presented approach. The exposition is clear and the presented inverse problems as well as demonstrated performance are sufficient. One thing that I missed while reading the paper is more comments on negative results. Did the authors try any version of their model with convolutions or pooling and find it not to perform as well? Measuring the number of parameters when including pooling or convolutions can become tricky, was that part of the reason? Minor: "Regularizing by stopping early for regularization," In this paper "large compression ratios" means little compression, which I found confusing. <doc-sep>Brief summary: This paper presents a deep decoder model which, given a target natural image and a random noise tensor, learns to decode the noise tensor into the target image by a series of 1x1 convolutions, ReLUs, layer-wise normalizations and upsampling. The parameters of the convolutions are fitted to each target image, where the source noise tensor is fixed. The method is shown to serve as a good model for natural images for a variety of image processing tasks such as denoising and compression. Pros: * an interesting model which is quite intriguing in its simplicity. * good results and good analysis of the model * mostly clear writing and presentation (few typos etc., nothing too serious). Cons and comments: * The authors say explicitly that this is not a convolutional model because of the use of 1x1 convolutions. I disagree and I actually think this is important for two reasons. First, though these are 1x1 convolutions, because of the up-sampling operation and the layer-wise normalizations the influence of each operation goes beyond the 1x1 support. Furthermore, and more importantly, there is the weight-sharing scheme induced by this - using convolutions is a very natural choice for natural images (no pun intended) due to the translation-invariant statistics of natural images. I doubt this would have worked so well had it not been modeled this way (not to mention this allows a small number of parameters).
* The upsampling analysis is interesting but it is only done on synthetic data - will the result hold for natural images as well? Should be easy to try and will allow a better understanding of this choice. Natural images are only approximately piece-wise smooth after all. * The use of the name "batch-norm" for the layer-wise normalization is both wrong and misleading. This is just channel-wise normalization with some extra parameters - no need to call it this way (even if it's implemented with the same function) as there is no "batch". * I would have loved to see actual analysis of the method's performance as a function of the noise standard deviation. Specifically, for a fixed k, how would performance increase or decrease, and vice versa - for a given noise level, how would k affect performance? * The actual standard deviation of the noise is not mentioned in any of the experiments (as far as I could tell). * What does the decoder produce when taking a trained C on a given image and changing the source noise tensor? I think that would shed light on what structures are learned and how they propagate in the image, possibly more than Figure 6 (which should really have something to compare to because it's not very informative out of context).<doc-sep>The paper builds upon Deep Image Prior (DIP) - work which shows that one can optimize a neural generator to fit a single image without learning on any dataset, and the output of the generator (which approximates the image) can be used for denoising / super resolution / etc. The paper proposes a new architecture for the DIP method which has much fewer parameters, but works on par with DIP. Another contribution of the paper is a theoretical treatment of (a simplified version of) the proposed architecture showing that it can’t fit random noise (and thus may be better suited for denoising). The paper is clearly written, and the proposed architecture has two cool properties: it’s compact enough to be used for image compression; and it doesn’t overfit, thus making early stopping unnecessary (which was crucial for the original DIP model). I have two main concerns about this paper. First, it is somewhat misleading about its contributions: it's not obvious from the abstract/introduction that the whole model is the same as DIP except for the proposed architecture. Specifically, the first contribution listed in the introduction makes it look like this paper introduces the idea of not learning the decoder on the dataset (the one that starts with “The network is not learned and itself incorporates all assumptions on the data.”). My second concern is about the theoretical contribution. On the one hand, I enjoyed the angle the authors took in proving that the network architecture is underparameterized enough to be a good model for denoising. On the other hand, the obtained results are very weak: only a one-layer version of the model is analysed and the theorem applies only to networks with fewer than some threshold number of parameters. Roughly, the theorem states that if for example we fix any matrix B of size e.g. 256 x k and matrix U of size 512 x 256 and then compute U relu(B C) where C is the vector of parameters of size k x 1, AND if k < 2.5 (i.e. if we use at most 2 parameters), then it would be very hard to fit 512 iid gaussian values (i.e. min_C ||U relu(B C) - eta|| where eta ~ N(0, 1)). This restriction of the number of parameters to be small is only mentioned in the theorem itself, not in the discussion of its implications.
Also, the theorem only applies to the iid noise, while most natural noise patterns have structure (e.g. JPEG artifacts, broken pixels, etc) and thus can probably be better approximated with deep models. Since the paper manages to use very few parameters (BTW, how many parameters in total do you have? Can you please add this number to the text?), it would be cool to see if second order methods like LBFGS can be applied here. Some less important points: Fig 4 is very confusing. First, it doesn’t label the X axis. Second, the caption mentions that early stopping is beneficial for the proposed method, but I can’t see it from the figure. Third, I don’t get what is plotted on different subplots. The text mentions that (a) is fitting the noisy image, (b) is fitting the noiseless image, and (c) is fitting noise. Is it all done independently with three different models? Then why does the figure says test and train loss? And why DIP loss goes up, it should be able to fit anything, right? If not and it’s a single model that gets fitted on the noisy image and tested on the noiseless image, then how can you estimate the level of noise fitting? ||G(C) - eta|| should be high if G(C) ~= x. Also, in this quote “In Fig. 4(a) we plot the Mean Squared Error (MSE) over the number of iterations of the optimizer for fitting the noisy astronaut image x + η (i.e., FORMULA ...” the formula doesn’t correspond to the text. And finally, the discussion of this figure makes claims about the behaviour of the model that seems to be too strong to be based on a single image experiment. I don’t get the details of the batch normalization used: with respect to which axis the mean and variance are computed? The authors claim that the model is not convolutional. But first, it’s not obvious why this would be a good thing (or a bad thing for that matter). Second, it’s not exactly correct (as noted in the paper itself): the architecture uses 1x1 convolutions and upsampling, which combined give a weak and underparametrized analog of convolutions. > The deep decoder is a deep image model G: R N → R n, where N is the number of parameters of the model, and n is the output dimension, which is typically much larger than the number of parameters (N << n). I think it should be vice versa, N >> n The following footnote > Specifically, we took a deep decoder G with d = 6 layers and output dimension 512×512×3, and choose k = 64 and k = 128 for the respective compression ratios. Uses unintroduced (at that point) notation and is very confusing. It would be nice to have a version of Figure 6 with k = 6, so that one can see all feature maps (in contrast to a subset of them). I’m also wondering, is it harder to optimize the proposed architecture compared to DIP? The literature on distillation indicates that overparameterization can be beneficial for convergence and final performance.
In this work, the authors propose a simple, underparameterized network architecture which can fit natural images well when fed with a fixed random input signal. This allows the model to be used for a number of tasks without requiring that the model be trained on a dataset. Further, unlike a recently proposed related method (DIP; [Ulyanov et al., 18]), the method does not require regularization such as early stopping. The reviewers noted the simplicity and experimental validation, and were unanimous in recommending acceptance.
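To make the architecture described in this thread concrete, here is a rough numpy sketch of the decoder's forward pass as the reviews characterize it (1x1 channel mixing, ReLU, channel-wise normalization, upsampling, and a final sigmoid). It is an illustration under simplifying assumptions, not the authors' code: the learnable scale/shift of the normalization is dropped, nearest-neighbour upsampling stands in for whatever interpolation the paper uses, and the actual use case (fitting the mixing matrices to a target image by gradient descent) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample2x(z):
    """Nearest-neighbour 2x upsampling of a (channels, h, w) tensor."""
    return np.repeat(np.repeat(z, 2, axis=1), 2, axis=2)

def channel_norm(z, eps=1e-5):
    """Normalize each channel to zero mean and unit variance over its spatial extent."""
    mu = z.mean(axis=(1, 2), keepdims=True)
    sd = z.std(axis=(1, 2), keepdims=True)
    return (z - mu) / (sd + eps)

def deep_decoder(mix_mats, out_mat, z0):
    z = z0
    for C in mix_mats:                        # one 1x1 "convolution" per layer
        z = np.einsum("oc,chw->ohw", C, z)    # mix channels only (no spatial support)
        z = np.maximum(z, 0.0)                # ReLU
        z = channel_norm(z)
        z = upsample2x(z)
    x = np.einsum("oc,chw->ohw", out_mat, z)  # map k channels to 3 (RGB)
    return 1.0 / (1.0 + np.exp(-x))           # sigmoid keeps the output in [0, 1]

k, d = 32, 4
z0 = rng.normal(size=(k, 16, 16))             # fixed random input tensor
mix_mats = [rng.normal(scale=0.1, size=(k, k)) for _ in range(d)]
out_mat = rng.normal(scale=0.1, size=(3, k))
img = deep_decoder(mix_mats, out_mat, z0)
print(img.shape)                              # (3, 256, 256) after four 2x upsamplings
```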
This paper presents an approach for only partially grounding classical planning tasks which are too large to be fully grounded by common grounding algorithms. The approach uses machine learning techniques to estimate the probability of operators belonging to a plan of the task, using information from small instances of the same domain. These operators are then considered first for grounding, which can be stopped early if it would otherwise risk running out of resources. The resulting partially grounded task is not guaranteed to be solvable even if the original task was. An experimental evaluation shows that the approach works well in several IPC domains where very large tasks can be solved with partial grounding. Since this paper is already published at AAAI and the submitted version is identical (and not a longer technical-report style variant), I will refrain from writing a full review. While the paper does not directly fall into the category of search or heuristics for planning, it addresses the problem of solving domains that are challenging (because of their size) and as such also fits the scope of the workshop. The paper is very well written and easy to follow, and hence my recommendation is a clear accept for the workshop. If I had to point out anything that could be improved (in case the authors would actually like to turn this into a one-page-longer extended paper), then I would suggest including a description of the machine learning techniques used, because many researchers at ICAPS are probably not very familiar with these. More importantly, I think it would be interesting to discuss why particular ML methods worked or didn't work for the purpose of the paper, or, if that cannot be explained easily, at least state what differences were observed, what parameters ended up being used and how this affected results.<doc-sep>Short summary of the paper: This work introduces partial grounding techniques for planning tasks, shows how machine learning techniques can be used to prioritize the operator order of the grounding process, and presents an empirical evaluation of the techniques on multiple planning domains. Detailed review: In planning, most planners nowadays perform grounding as a preprocessing step before search. Therefore even the strongest search algorithms won't be of use if the planner is not even able to complete the grounding step. This was indeed the case in some problems of the latest international planning competition (IPC 19); therefore this work investigates a very relevant field for planning and fits in the scope of the workshop. The paper reads very well and does a good job in presenting the problem and the idea of partial grounding and operator ordering. Related work is cited, although it could relate itself to the recent introduction of action schema networks (Toyer et al., 2018), which also apply machine learning techniques to the lifted task representation, although these are used to guide search. Nevertheless, the presented techniques are novel and for the most part clearly defined. An extensive empirical evaluation shows how the different techniques compare to each other and shows that the partial grounding techniques can significantly increase the coverage of a planner, although there is not one dominating technique. All in all, this paper presents highly relevant work to the field of classical planning and I do not see a reason to not accept this work at the workshop.
My only real criticism is that the presentation of the ILP and the classification approach is somewhat informal and a bit convoluted. While the remainder of the paper is very well and fluently written I had to reread this section several times before fully understanding the underlying concepts. I think either a more formal setting or a more detailed example on both approaches could be helpful for the reader. Sam Toyer, Felipe W. Trevizan, Sylvie Thiébaux, Lexing Xie: Action Schema Networks: Generalised Policies With Deep Learning. AAAI 2018: 6294-6301 Minor comments: - I would argue that a stopping condition is a condition on when the algorithm stops, but in this work it is a condition on when the algorithm continues - 'Let N^op be a constant, [...]: require the algorithm to continue while...': the colon does not really make sense here. Maybe just end the sentence and start with 'We require'. - The text of Figure 1 is hard to read on printed paper, consider the use of bold font - Fast Downward is cited twice - 'Ridder and Fox (2014) (2014)' => duplicate year
Dear Authors,

Thank you very much for your submission. We are happy to inform you that we have decided to accept it, and we look forward to your talk at the workshop. Please go over the feedback in the reviews and correct or update your paper in time for the camera-ready date (May 24).

Best regards,
The HSDIP organizers
This paper focuses on the network load balancing problem in data centers using a multi-agent RL paradigm. The main goal in load balancing problems is to minimize the makespan. The authors prove various properties of the setting, with the main result being that such a setting is a Markov potential game. They show this result by properly defining a workload distribution fairness potential function. Moreover, using facts established by Leonardos et al., they design a distributed algorithm to approximate Nash equilibrium policies. The authors provide an extensive experimental section that suggests that the proposed algorithm is effective.

Pros:
The paper is interesting, with both theoretical and applied merits, and an interesting modeling of network load balancing problems in data centers as a MARL system.

Cons:
The result that the proposed framework is a Markov potential game is not very surprising, as load balancing games are known to be potential games (see [Koutsoupias, Papadimitriou 99]).

This work has no negative societal impact as far as the reviewer can foresee.

<doc-sep>This paper proposes an MPG-based MARL solution for the load-balancing problem. Applying RL directly to load balancing is not favorable, as the load balancers (i.e., the multiple agents) need to synchronize observations, and the action space grows with the number of agents (requiring re-training, etc.).

**Strengths**
1. Does not require re-training with an increasing number of agents, as the proposed approach decomposes the joint state and action space.
2. Does not require synchronization between the load balancers.

**Weaknesses**
1. Poor evaluation.
* No scaling experiments (i.e., increasing the number of LBs or servers); only 2 LBs in the evaluation. If the paper had scaled the experiments further, the authors would likely find that their approach may not be practical for real DCs (discussed later).
* No real traffic/workload. The supposedly real benchmark is a mocked-up small testbed that does not mimic the real distribution or scale of traffic.
* Experimentally weak: no variation in traffic and limited variation of IO/CPU.
* What about QoS (99th-percentile behavior, an important metric to evaluate for LBs)?
2. Invalid and inconsistent assumptions with respect to the problem statement.
* In the intro, the paper claims that "existing algorithms are not adaptive to dynamic environments", yet the assumption made in this paper is that each server is capable of processing a certain amount of workload v_j, which is a number dependent only on server capabilities and not on the request (or traffic type) itself. For example, a GET request of type X can take 2s whereas another GET request of type Y can take 20s. The paper mentions collided elephants and yet does not provide any experiments showing that the proposed technique can handle such situations. In other words, v_j should be stochastic, depending not just on the server but also on the request characteristics.
* The previous assumption invalidates most of the derivation presented later in the paper.
* Another assumption is that active probing is impractical. However, it is apparently fine for LBs to communicate with servers to observe the server state. Why? There is no citation or experiment showing that this is a reasonable assumption, yet all of the work is based on this key assumption.
3. Limited insights.
* Why Markov potential games for the stated problem? Why not use mean-field theory to approximate the behavior of all the other agents using their mean or median behavior?
Overall, the paper reads as an application of MPGs rather than a solution tailored to the load-balancing problem. Insights and analysis of different approaches are missing.
* No evaluation of the RL vs. MARL solutions in terms of performance and overhead (to justify that MARL is needed over RL). There are several proposed solutions, such as RLB-SAC (NeurIPS 2021), which reports similarly high performance, and Park: An Open Platform for Learning-Augmented Computer Systems.
* What happens if RL makes bad decisions (safety of RL; see "Towards safe online reinforcement learning in computer systems", NeurIPS 2021)?
4. The writing needs significant improvement, especially the introduction and related work sections.
* Key assumptions/claims need citations or experiments to back them up.
* Abbreviations are introduced without spelling out what they stand for, e.g., NE for MPG.
* Limited evaluation.
* Strong assumptions for a practical system.
* No comparison with other ML-based approaches.

<doc-sep>This paper considers the load balancing problem in a network of multiple heterogeneous servers and multiple load balancers. The authors formulate the problem as a multi-agent reinforcement learning problem, and specifically consider a Markov potential game. The setting is that of multiple load balancers, each responsible for sending jobs to a set of servers. There might be overlaps in the sets of servers the various load balancers serve, and the load balancers thus have partial observability of the system state. Using the cumulative total fairness as the potential function, where fairness is defined as either variance fairness or product fairness, the authors show that the job allocation game, where the objective is to minimize the makespan while maximizing the variance or product fairness, is a Markov potential game. A network with multiple load balancers managing load to multiple and overlapping servers is a complex problem. The interactions between the load balancers are such that a closed-form solution to the balancing problem is not evident. This approach of setting a potential game within an RL environment is interesting and seems novel. The authors propose a distributed load balancing method where each agent independently learns a policy through policy gradient methods. The reward function is set to be the per-LB variance or product fairness. The authors show that maximizing these local fairness metrics is sufficient for minimizing the makespan, a global metric. The exact model, with respect to the overlap of servers among the load balancers, is not clear. Are all servers allocated jobs by all LBs? What do the results look like with partial overlaps? This seems to be a harder problem. The experimentation doesn't include a comparison with classical methods such as LSQ. It would be interesting to see how a distributed, blind, greedy LSQ compares to the distributed MARL method proposed here, especially since the computation costs are so vastly different.

<doc-sep>This paper proposes a distributed multi-agent reinforcement learning based approach for load balancing at the network layer, formulated as a Markov potential game. Current network load balancers have limited observability over the workloads and server performance and are prone to misconfiguration due to heterogeneity and elasticity. Centralized approaches (CTDE) incur an additional overhead from centralized communication.
This work addresses these issues by using a local variance-based fairness function in each load balancer which, when maximized, can minimize the potential function of the Markov potential game. This approximates the Nash equilibrium of the game.

## Strengths
* Significant gains from using the proposed design over current in-production load balancing algorithms.
* Strong theoretical foundation of formulating load balancing as a multi-agent RL-based Markov potential game.
* Well-written paper that puts the pieces of the design in an easy-to-understand order.

## Weaknesses
* DCs typically have high bandwidth for internal communication. The paper states that centralized communication leads to heavy overhead, which is not convincingly shown in the main paper. The evaluation section mentions in passing that this is evaluated in the appendix, but I feel it would be helpful to show it in the main paper.
* Not sure if I agree with "large-scale" DC networks having only 20 servers. The largest data centers have thousands of servers and load balancers. This makes the real-world setup slightly less impressive.
* Fault tolerance is not evaluated in the paper (in terms of failed requests leading to incorrect job-completion estimates for the next time period, network partitions, etc.).
* The paper doesn't seem to address elastic setups, even though the motivation includes both heterogeneous and elastic infrastructures.
* The simulator is not as complex as the real world (acknowledged in the paper). It still allows testing parts of the system without stochastic network parameters; these could be synthetically injected, though.
* The need for low communication overhead in DCs is not strongly motivated. Centralized methods (QMix, for example) still show comparable performance in some application setups.

<doc-sep>The paper explores the task of multi-agent network load balancing via a formulation as a Markov potential game, using workload distribution fairness as a potential function. A MARL algorithm is proposed based on this formulation and provides fully decentralized learning. The paper further develops an event-based simulator which, along with a real-world network setup, is used to evaluate the proposed algorithm against several MARL baselines.

Strengths:
+ Rigorous formulation of network load balancing as an MPG, with proofs that appear sound.
+ Generally interesting and well-motivated application for MARL with promising potential.

Weaknesses:
- Concern regarding the representativeness of the baselines used for evaluation.
- Practical benefits in terms of communication overhead and training time could be more strongly motivated.

Detailed Comments: Overall, the paper was interesting to read and the problem itself is well motivated. The formulation of the problem as an MPG appears sound and offers a variety of important insights with promising applications. There are, however, some concerns regarding evaluation fairness and practical benefits. The baselines used for evaluation do not seem to accurately represent the state of the art in CTDE. In particular, there have been a variety of recent works that explore more efficient strategies (e.g., [1-3]) and consistently outperform QMix with relatively low inter-agent communication. Although the proposed work appears effective as a fully decentralized approach, it is unclear how well it would perform against more competitive CTDE baselines. Comparison against these more recent works would greatly improve the strength of the evaluation. Benefits in terms of reduced communication overhead could also be more strongly motivated.
Presumably, communication between agents could be done over purpose-built inter-LB links, thus avoiding QoS degradation due to contention on links between LBs and servers. Even without inter-LB links, the increase in latency demonstrated in Appendix E.2.2 appears relatively low. Robustness against dynamic changes in the network setup is discussed to some degree, but it's unclear how significant this issue is in a real-world environment. Even in a large-scale setup, the number of LBs/servers is likely to remain fairly constant at the timescales considered in this work (i.e., minutes). Given this, it seems that the paper should at least discuss trade-offs with a longer training time, which could impact the relative benefits of the various approaches.

Some confusion in notation:
- Algorithm 2, L8 should be t = 1,…,H (for horizon)?
- L100, [M] denotes the set of LBs?

Minor notes:
- Some abbreviations are not defined, e.g., “NE” on L73.
- Superscript notation in Eq. 6 is not defined until much later (L166), which hindered understanding on an initial read.

[1] S. Zhang et al., “Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control”, NeurIPS 2019.
[2] Z. Ding et al., “Learning Individually Inferred Communication for Multi-Agent Cooperation”, NeurIPS 2020.
[3] T. Wang et al., “Learning Nearly Decomposable Value Functions Via Communication Minimization”, ICLR 2020.
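As a side note for readers, the kind of fairness objective discussed across these reviews is easy to illustrate. The sketch below is hypothetical (my own normalization and epsilon choices, not the authors' exact definitions); it only shows why a variance-based or product-based fairness reward prefers evenly spread load.

```python
import numpy as np

def variance_fairness(server_loads):
    """Negative variance of the normalized per-server load: higher is 'fairer'."""
    loads = np.asarray(server_loads, dtype=float)
    shares = loads / max(loads.sum(), 1e-9)
    return -np.var(shares)

def product_fairness(server_loads):
    """Log-product (Nash-welfare style) fairness over normalized loads."""
    loads = np.asarray(server_loads, dtype=float)
    shares = loads / max(loads.sum(), 1e-9)
    return float(np.sum(np.log(shares + 1e-9)))

balanced, skewed = [5, 5, 5, 5], [17, 1, 1, 1]
assert variance_fairness(balanced) > variance_fairness(skewed)
assert product_fairness(balanced) > product_fairness(skewed)
```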
The paper received a uniformly positive evaluation, although all the scores are in the "borderline / weak accept" range. The authors included a long and comprehensive rebuttal and actively participated in the discussion, which led some of the reviewers to update their scores. I recommend that the paper be accepted, but I understand the decision could be reversed when comparing the paper with the other candidates.
The authors present an imaginary coordinator agent that executes graph selection as its action, based on possible combinations of pairwise edges and the underlying individual and pairwise utilities. The results show some effectiveness of the presented method, SOP-CG, in simple examples. The main strength of the paper lies in its algorithm being of a polynomial-time nature, and that characteristic needs much more emphasis in both text and figures. "Does SOP-CG reach peak performance faster?" and "Is its peak performance higher?" could be guiding questions when structuring the paper's contents around the polynomial-time algorithm.

Q1. I would appreciate it if the authors would discuss the relevance of a WWW 2020 paper: How Much and When Do We Need Higher-order Information in Hypergraphs? A Case Study on Hyperedge Prediction by Yoon et al. I think it could also provide some useful insights as to how high of an order the hyperedges need to be, to save some function expressiveness (i.e., representational capacity) at the cost of time complexity.

Q2. How did the authors go about designing the didactic examples? How simple of a task is the "sweet spot" for SOP-CG? As task complexity is altered, when does SOP-CG begin losing its simplicity advantage? When do other models take over? Which measures would you say most greatly determine the task complexity, which, in turn, compromises SOP-CG's performance?

Q3. How would you position SOP-CG in MARL research? Since the graph selector agent is present at both training and execution time, would SOP-CG be a centralized-training, centralized-execution work? If it is, comparison against CTDE works such as VDN and QMIX would not be fair. It is as though a "free" centralized coordinator is helping out the SOP-CG agents at execution time as well. It may be a good idea to compare against some communication-enabled MARL works.

Q4. Despite starting out strong, the paper seems to fall off rather dramatically, especially when it comes to the didactic nature of the examples, tied in with the page 6 remark that SOP-CG would perform best (given its tradeoff) in tasks where the restricted graph classes are enough to express the coordination dependencies. This remark really sounds like going back on the VDN-QMIX-QTRAN line of research, whose focus was about covering a richer class of joint action-value functions. Going back on that trend now only to pursue the polynomial-time nature of the running algorithm would in my opinion require far more diverse evaluation examples, backed by a stronger motivation highlighting real-world threats of all the other MARL algorithms taking longer than polynomial time. As is, SOP-CG does not contend amazingly against other MARL algorithms that chose the "NP-hard? Curse of dimensionality? Fine. We'll approximate, approximate, approximate." path rather than the "Polynomial time is our topmost priority; function expressiveness can wait." path. That leads me back to the question of why pursue polynomial time at the cost of losing both the function expressiveness and the peak performance in the apparent trilemma.

My biggest concern is the "imaginary" agent freely collecting information, making decisions, and delivering those decisions to all the agents at both training and execution time. Comparison against the chosen baselines is not fair, even when the chosen evaluation task is a simple enough one, in which SOP-CG's limited representational capacity would not appear that pronounced.
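To make this concern concrete, here is roughly how I read the coordinator's selection step, written as a brute-force sketch (all names are mine, not the authors'). The point of the paper is precisely that restricting the candidate graphs to acyclic classes lets the inner maximization be done in polynomial time instead of the exhaustive enumeration shown here.

```python
from itertools import product

def joint_value(actions, q_i, q_ij, graph):
    """Second-order decomposition: individual utilities plus pairwise utilities
    on the edges of the chosen coordination graph."""
    value = sum(q_i[i][a] for i, a in enumerate(actions))
    value += sum(q_ij[(i, j)][actions[i]][actions[j]] for (i, j) in graph)
    return value

def coordinator_step(q_i, q_ij, candidate_graphs, n_actions):
    """Jointly pick the graph and the greedy joint action with the highest
    decomposed value. Brute force over joint actions, i.e. O(A^n)."""
    n = len(q_i)
    candidates = (
        (graph, actions)
        for graph in candidate_graphs
        for actions in product(range(n_actions), repeat=n)
    )
    return max(candidates, key=lambda ga: joint_value(ga[1], q_i, q_ij, ga[0]))

# Toy usage: 3 agents, 2 actions each, two candidate tree-structured graphs.
q_i = [[0.0, 0.1], [0.0, 0.0], [0.2, 0.0]]
q_ij = {(0, 1): [[1.0, 0.0], [0.0, 1.0]],
        (1, 2): [[0.0, 0.5], [0.5, 0.0]],
        (0, 2): [[0.0, 0.0], [0.0, 0.3]]}
graphs = [[(0, 1), (1, 2)], [(0, 1), (0, 2)]]
print(coordinator_step(q_i, q_ij, graphs, n_actions=2))  # ([(0, 1), (1, 2)], (1, 1, 0))
```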
<doc-sep>This paper proposes an extension of deep coordination graphs (DCG), called Self-Organized Polynomial-time Coordination Graphs (SOP-CG). Instead of the pre-specified graph topology used in DCG, their method allows the graph topology to be state-dependent, which is achieved by a coordinator agent, and the optimization of this agent is incorporated into a modified temporal-difference learning paradigm. Two pre-specified undirected acyclic graph classes are used to ensure polynomial-time graph selection and accurate greedy action selection. The results on sensor network, grid world, and MPE tasks show that such a trade-off between the representational capacity of the graph topology and computational accuracy can improve the performance of MARL and learn meaningful graph topologies.

**Strengths**
1. Determining a state-dependent coordination graph is interesting.
2. Incorporating graph selection into TD learning is desirable.

**Concerns/Questions**
1. The state-dependent coordination graph needs to be determined at each time step in both training and execution, which means SOP-CG is a centralized method. Thus, a single-agent RL baseline is desired for comparison in the experiments.
2. It is not clear how $q_i$ and $q_{ij}$ are learned. Are their parameters shared across agents?
3. The $\\arg\\max$ operation in equation 4 takes $O(n^3)$ for $\\mathcal{G}_P$. It may be too costly for both training and execution. One piece of evidence is that each run of SOP-CG takes up to 2.5 days on these simple experimental tasks.
4. Since both DCG and CASEC include SMAC experiments, it would be better to also include SMAC here to show the performance of SOP-CG in complex environments.

In summary, it is currently hard to see the benefit of determining the coordination graph in a centralized way. Moreover, the proposed method is not verified in complex environments.

**Minor Comments**
1. As claimed in Appendix C, a graph relabeling technique is used to address the extra overestimation error introduced by the additional max operator over graphs. However, the paper is currently missing an ablation to validate this point.

A state-dependent coordination graph is important. However, the paper currently has several weaknesses, as mentioned above. It seems clearly below the bar of ICLR.

<doc-sep>This paper introduces a novel method called Self-Organized Polynomial-time Coordination Graphs (SOP-CG), aiming to handle the decentralized constraint optimization problem (DCOP). The paper is well organized and the experiments are clearly presented. Therefore, I think the work of this paper is very interesting and the contributions are sufficient. The detailed comments regarding the quality of this paper are listed as follows:
1. In the Introduction section, one of the biggest concerns is that the paper does not clearly convey the novelty of the proposed SOP-CG compared with methods from related research. Besides, the illustration of the experimental results in this section is not convincing enough to demonstrate the advantages of the proposed SOP-CG.
2. In places, the authors' presentation is unclear and should be improved. For example, in the Background section, the meanings of some symbols in the model are not clear. Furthermore, the authors should give the purpose of introducing formula (2).
3. What is the intuition behind the process of investigating polynomial-time coordination graphs in Section 4? Or, is there any intuition at all? How did you come up with that idea? Have you borrowed this idea from somewhere else?
It would be better to give a detailed explanation of polynomial-time coordination graphs.
4. On page 6, the authors propose Self-Organized Polynomial-Time Coordination Graphs. Whatever techniques are used in the manuscript, the computational cost of the proposed algorithm should be tabulated in the paper.
5. In the Experiments section, the authors claim that the graph structures learned by SOP-CG definitely match the ground-truth demands for effective collaboration. However, it is not clear to us why this demonstrates the ability of the proposed approach to organize coordination relations. Therefore, the authors should provide a detailed explanation of this issue.

The work presented is indeed interesting and relevant to real scenarios. I recommend that this paper be accepted.
Description of paper content: The paper studies the problem of achieving coordination among a group of agents in a cooperative, multi-agent task. Coordination graphs reduce the computational complexity of this problem by reducing the joint value function to a sum of local value functions depending on only subsets of agents. In particular, the Q-function of the entire system is "expanded" up to second order in agent interactions, $Q = \\sum_{i \\in [n]} q_i + \\sum_{(i,j) \\in G} q_{ij}$, where $q_i$ is a function of the $i$-th agent's history and current action, and $q_{ij}$ is a function of two agents' histories and current actions. As $G$ does not include higher-order (third and above) terms, the algorithm does not have an exponential dependence on the number of agents. If $G$ includes only a subset of the pairs of agents, then the computational complexity is reduced to less than quadratic. Since the coordination problem is cooperative, the authors propose a meta-agent ("coordinator") that selects the graph $G$ in a dynamic (state-by-state) fashion in order to maximize return. The optimization problems of the meta-agent and the sub-agents are solved by deep Q-learning.

Summary of paper discussion: The critical comment made by one reviewer was: “Going back on that trend now only to pursue the polynomial-time nature of the running algorithm would in my opinion require far more diverse evaluation examples, backed by a stronger motivation highlighting real-world threats of all the other MARL algorithms taking longer than polynomial time. As is, SOP-CG does not contend amazingly against other MARL algorithms that chose the "NP-hard? Curse of dimensionality? Fine. We'll approximate, approximate, approximate." path rather than the "Polynomial time is our topmost priority; function expressiveness can wait." path. That leads me back to the question of why pursue polynomial time at the cost of losing both the function expressiveness and the peak performance….”

Comments from Area Chair: Looking at the experiments, the number of agents in the empirical problems is not large. For example, there are 15 agents in "Sensor." Any focus on computational complexity at this scale is hard to justify, especially with algorithms that are approximate. It seems favorable at this small scale to use function approximators that can take in all the agents' histories and actions. This obvious baseline is not included in the comparisons. It is hard to justify the inclusion of this paper in the conference.
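For readers unfamiliar with why restricting $G$ to acyclic (tree-structured) graphs matters, the sketch below shows a generic max-sum dynamic program for the second-order decomposition above. It is not the authors' implementation; it only illustrates that, on a tree, the greedy joint action can be recovered exactly in $O(n A^2)$ time rather than $O(A^n)$, assuming all agents share the same action set of size $A$.

```python
import numpy as np

def greedy_joint_action_on_tree(q_i, q_ij, edges, root=0):
    """Exact argmax of sum_i q_i[i][a_i] + sum_{(i,j) in edges} q_ij[(i,j)][a_i][a_j]
    via max-sum dynamic programming over a tree. Assumes `edges` forms a
    connected acyclic graph and every agent has the same number of actions."""
    n = len(q_i)
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    # Root the tree and record a parent-before-child visiting order.
    parent, order, stack, seen = {root: None}, [], [root], {root}
    while stack:
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                parent[v] = u
                stack.append(v)
    # Bottom-up pass: msg[v][a_v] = best value of v's subtree when v plays a_v.
    msg = {i: np.asarray(q_i[i], dtype=float) for i in range(n)}
    best_child_action = {}
    for v in reversed(order):
        u = parent[v]
        if u is None:
            continue
        if (u, v) in q_ij:
            pair = np.asarray(q_ij[(u, v)], dtype=float)      # indexed [a_u][a_v]
        else:
            pair = np.asarray(q_ij[(v, u)], dtype=float).T
        table = pair + msg[v][None, :]
        best_child_action[v] = table.argmax(axis=1)           # best a_v for each a_u
        msg[u] = msg[u] + table.max(axis=1)
    # Top-down pass: commit the root's action, then each child's best response.
    actions = {root: int(msg[root].argmax())}
    for v in order:
        if parent[v] is not None:
            actions[v] = int(best_child_action[v][actions[parent[v]]])
    return [actions[i] for i in range(n)]

# Toy check on a 3-agent chain 0 - 1 - 2 with 2 actions each.
q_i = [[0.0, 0.1], [0.0, 0.0], [0.2, 0.0]]
q_ij = {(0, 1): [[1.0, 0.0], [0.0, 1.0]], (1, 2): [[0.0, 0.5], [0.5, 0.0]]}
print(greedy_joint_action_on_tree(q_i, q_ij, edges=[(0, 1), (1, 2)]))  # [1, 1, 0]
```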