Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper presents a novel algorithm for identifying provably robust adversarial examples large regions in the input space that provably contain only adversarial examples each region corresponds to a single adversarial example tildex found in the center of the region along with all the points that can be generated by applying a sequence of transformations to tildex the transformations considered in the paper are semantically meaningful changes to the original image critically we can be guaranteed that tildex will be misclassified even if it is perturbed the paper demonstrates that the algorithm can generate regions of nontrivial size on networks of nontrivial size for example for a cifar10 classifier with 6 layers and 62k neurons it finds axisaligned regions containing a median of 10573 adversarial examples in addition the paper shows that provably robust adversarial examples can be used to create adversarial examples to l2smoothed classifiers that are more robust to l2 noise as compared to adversarial examples generated directly via pgd attacks strengths originality the paper presents a novel algorithm to generating provably robust adversarial examples corresponding to semantically meaningful regions quality the paper demonstrates that the algorithm can scale to generate provably robust adversarial examples of nontrivial size on networks of nontrivial size the experiments in the paper have clearly been carried out with an attention to detail clarity the paper describes the algorithm in sufficient detail to enable reproducibility in particular the appendix explains important details that would be required to reimplement the approach significance the approach presented is modular using existing certification algorithms as a subroutine this has two key benefits improvements to existing certification algorithms can be used to improve the search efficiency for provably robust adversarial examples certifiers which handle new classes of transformations could be used to generate provably robust adversarial examples for these classes of transformations while this paper focuses on adversarial examples the approach can be used in any setting where we are interested to find large regions of input space with a constant classification or more generally where a linear function of some neuron activations exceeds a threshold i can imagine this being applied to better understandingvisualizing how the output of a neural network varies as the inputs change areas for improvement originality in the introduction the paper states that our regions are guaranteed to be adversarial while prior approaches are empirical and offer no such guarantees section b2 mentions liu et al which is also capable of generating provably robust hyperbox regions is the statement in the introduction wrong quality the baseline used seems to be a straw man since it is simply our method but with uniform rather than adaptive shrinking i would always expect our method to outperform the baseline i would prefer to see the comparison to liu et al and any other methods that produce provably adversarial regions if they exist in the main body of the paper instead in table 2 the transforms selected appear quite arbitrary in particular they appear like they could have been cherrypicked to flatter the presented approach some detail on how the transformations were selected would alleviate this concern clarity experimental setup for 
section 53 i struggled to understand what experiment was run in this section and what the results in table 3 show i understand that the goal of this section is to show that robust adversarial examples are significantly more effective against defenses based on randomized smoothing but the setup for the experiment is still not clear to me id be happy to discuss this with the authors but some preliminary questions what are the units of robustness is the result for ours normalized to 1 table 3 are the results for baseline and ours mean or some other summary statistic what result exactly is shown for individual attacks were multiple attacks generated for each image or was the individual attack the attack that was used to determine the value of r reporting of concrete examples in table 1 the sizeo column reports an upper bound on the number of concrete examples in the polyhedral region this is not immediately clear from the description a reasonable reader might expect that this is just the number of concrete examples i would request that the authors either estimate the actual number of concrete examples clearly indicate in the table description that this is an overestimate remove the sizeo column i have a similar concern with the over column in table 2 i dont see how an overestimate of the number of concrete examples in the region is relevant additional comments clarity here are some issues with clarity that do not constitute major issues but would still improve the paper significantly if addressed at the high level he paper appears to be trying to squeeze too many results in leading to important details being omitted missing details section 53 we exclude images whose radius is too big what constitutes too big for these images what is the robustness to l2 smoothing defenses of adversarial examples generated by your method table 1 table 2 is the time reported here a median or average and is it only for instances where the methods succeed table 2 the value of splits is listed but no guidance is provided to the reader as to how to interpret the result i would recommend moving this information to the appendix or adding an interpretation definition 2 whose l2 ball is certified as adversarial i didnt find a definition in the paper of what it means for the l2 ball to be adversarial i would have assumed that this means that every sample in the ball has a different classification as compared to x and not that every sample has to have the same classification as tildex but the rest of the paper seems to suggest the latter definition miscellaneous points section 3 overview assumes an algorithm alpha the variable alpha is already used above to indicate the probability that certify fails id recommend using a different variable here section 32 computing an underapproximating region sacrificing a few pixels where the network is not very robust did you mean where the adversarial example is not very robust here if the network is not robust for a certain pixel it doesnt make sense to me to sacrifice those pixels section 51 column regions is referenced but it is reg in both tables spelling grammar section 22 geometric certification creates overapproximation of the region creates an approximation of the region figure 1 repred crosses red crosses questions this is out of the scope of this paper but the result in section 54 suggests that it might be possible to find perturbations to empirically robust adversarial examples empirically verified by an eot approach that result in a correctly classified image do you have 
any sense whether it would be possible to consistently find such disadversarial attacks on empirically robust adversarial examples overall i recommend accepting the paper the paper presents a novel approach to finding large regions of adversarial examples with strong experimental evidence that it scales well the details provided would enable other researchers to reproduce the presented approach most importantly this approach is likely to be something that other researchers can use and build upon having said that the paper has some issues with clarity details are provided in the main review but id like to highlight in particular section 53 which i found particularly hard to parse nb my current recommendation for this paper asis is 6 but id be quite happy to upgrade the recommendation to 8 if the bulk of my concerns around clarity are addressed after paper discussion period during the paper discussion the authors addressed the bulk of my concerns around clarity and ive upgraded my recommendation to 8 as a result docsepthe manuscript introduces a definition of provablelyrobust adversarial examples a set of examples that are verified to be classified as different labels compared with the input of interest the main idea of the technique is to shrink a boxlike region from an over approximation to a verifiable smaller subregion such that a robustness verifier will return robust for all points in that particular subregion in the evaluation part the author demonstrates the effectiveness of the approach with several experiments ie robustness against intensity transformation and randomized smoothing defense strength this paper has clearly stated its objective the approach and the corresponding evaluations the proposed technique is designed for a concrete objective generating provablyrobust adversarial examples the approaches are welldocumented in the paper and evaluations are conducted over several datasets and models the paper has spent a lot of space explaining the technique from a highlevel perspective down to its implementation details which helps the reader to better understand the algorithm weakness my concerns of the paper mainly focus on the following three aspects 1 motivation of the provable part of the adversarial examples is missing this paper relates to the prior work in generating robust adversarial examples 1 2 where they can serve as a good motivation to generate robust adversarial examples these papers discuss several physical distortions in applying the adversarial examples into the realworld cases for images and audio in the other word these distortions are realworld adversaries for artificial adversarial examples however the motivation for the provable part is missing to me i understand the provable part can be related to a counter problem probablyrobust networks the provablyrobust network is motivated to build networks where the robustness can be guaranteed for all possible attacks and the evaluations are free from the choice of adversaries to that end can the authors explain more about the motivation for provablyrobust adversarial examples a followup question is does the robust region proposed in this paper actually contain the physical distortions that may be encountered in the realword cases 1 2 and how often it seems that the more important part we need to prove is that these regions are guaranteed to contain all or part of the distortions you can possibly encounter so that an adversary does not need to worry about that an adversarial example fails in practice 2 important 
experimental setups and discussions are missing unfortunately with some amount of time during my review i can not locate the concrete definitions and actual implementations of the intensity changes and geometric changes as mentioned in table 1 and 2 this information should be helpful to understand the importance of the results how provable is evaluated table 1 and 2 seem to only evaluate how big the region is and table 3 seems to use randomized smoothing as an attack to adversarial examples generated by the proposed approach however the motivation of the paper mostly relies on 1 2 where the robustness of an adversarial example is actually not designed against a prediction defense ie randomized smoothing but transformations and distortions unless i misunderstand the results table 1 and 2 seem to only aggregate over less than 100 examples per dataset and it may take up to 5000s for one example in cifar i understand that the bottleneck is that the verifier is usually resourceconsuming if that is the case the authors may need to convince the readers under what circumstances this tradeoff between resource and probablyrobustness is worthwhile compared to the fast empirical approaches 3 writing i find the writing of the method part is wellorganized and polished which makes me enjoy the reading of the approach however the experiment part is relatively dense and sometimes even difficult to read when a lot of notations and symbols appear in the paragraph without sufficient explanations to remind the reader what they refer to also it would be best to add explanations to notations in the captions of figures and tables so the reader does not have to search for what is measured in the table 1 athalye a engstrom l ilyas a kwok k 2018 july synthesizing robust adversarial examples in international conference on machine learning pp 284293 pmlr 2 qin y carlini n cottrell g goodfellow i raffel c 2019 may imperceptible robust and targeted adversarial examples for automatic speech recognition in international conference on machine learning pp 52315240 pmlr overall i incline to a weak rejection at this stage of the reviewing process but i am open to any discussions the reasons that prevent me from giving higher scores are the insufficient descriptions of the motivations and the current way the experiment sections are written with which i have mentioned in my main review docsepthe main contribution of the paper is a framework that will output a large batch of adversarial examples assuming access to a few blackbox mechanisms the term provably robust appears misleading there is no theory showing that the examples must be adversarial while authors highlight that there are massive amount of adversarial examples say 10573 produced by the algorithms such number seems really dependent on particular problems while lacking a theoretical justification on the novelty of the algorithms i feel it relies many blackbox components and their properties which lowers the technical contribution if the work updates after discussion i agree that the paper brings out interesting ideas and the experimental results are convincing however i also feel authors need to tune down the contributions on the theoretical part because many of the guarantees hinge on blackbox components that are leveraged from prior works see above ### Summary:
In this paper, the authors introduce and study provably robust adversarial examples. Reviewers had mixed thoughts on the work. One reviewer mentioned that the provable robustness is somewhat overstated: looking at the title and abstract, it sounds like the paper develops a new algorithm that is guaranteed to be robust, but in reality the robustness hinges on the black-box verifiers, which the authors acknowledged during discussion. I agree with this; it should be stated more clearly in the work, and I strongly suggest the authors calibrate exaggerated statements of contributions in the revised draft. Having said this, the reviewers liked the experimental study of the paper and found it comprehensive and convincing.
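As an editorial aside on the mechanism the first review describes (starting from an over-approximating box around an adversarial example and shrinking it until a robustness certifier proves the whole box adversarial), a minimal sketch follows. The `certify` callable and all parameter names are hypothetical stand-ins rather than an API from the paper, and the paper's adaptive per-dimension shrinking is reduced here to uniform shrinking.

```python
# Minimal sketch of the region-shrinking idea described in the review above.
# `certify(lower, upper, label)` is a hypothetical stand-in for an off-the-shelf
# robustness certifier (e.g., a bound-propagation verifier) assumed to return True
# only if every input in the box is classified as `label`.
import numpy as np

def shrink_to_certified_region(x_adv, init_radius, certify, adv_label,
                               shrink_factor=0.8, max_iters=50):
    """Shrink the axis-aligned box [x_adv - r, x_adv + r] until the certifier
    guarantees the adversarial label over the entire box."""
    r = np.full_like(x_adv, init_radius, dtype=float)
    for _ in range(max_iters):
        lower, upper = x_adv - r, x_adv + r
        if certify(lower, upper, adv_label):
            return lower, upper          # provably adversarial region found
        r *= shrink_factor               # uniform shrinking; the paper instead shrinks
                                         # adaptively, sacrificing the dimensions that
                                         # block certification
    return None                          # certification failed within the budget
```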
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: 1 in the introduction the author separately pointed out the issues of duq and dkl however these issues are not convincing as no citations or theoretical proof is provided in this paper the notations in the intro are also not welldefined x x x are used without difference which however should be clearly defined as vectors or matrices 2 the technical contribution is very incremental the proposed vduq is simply applying the inducing point gp in the duq to mitigate the uncertainty collapse in dkl the inducing point variational approximation of the gp predictive distribution referred as inducing point gp is not clear for me what exactly does inducing point gp refer to why the socalled inducing point gp can speed up inference in gp model what does decouple it from dataset size mean all these important points are not clarified in the introduction 3 the theoretical contributions are also not wellorganized the author fails to prove that the spectral normalization as a regularization scheme can be uses to mitigate uncertainty collapse moreover how the spectral normalization guarantees the effectiveness of inducing point gp in vduq 4 i also have some concerns on the experimental results of causal inference why the treatment effect estimation has uncertainty settings the authors should fully explain the uncertainty settings in causal inference as most of the causal baselines are not proposed for uncertainty settings docsepthis paper proposes a single deep deterministic model harnessed with spectral normalization to improve uncertainty in regression and classification tasks the method relies on deep kernel learning inducing point approximation for gp and spectral regularization of the deep model to avoid uncertainty collapse in ood the main contribution of this paper is methodological the paper has extensive simulations and demonstrates the utility of the proposed approach in a widerange of applications for both regression and classification having said that the proposed approach can be seen as a modification to liu etal 2020 with different approximation and different regularization in that sense the novelty of the paper could be seen as modest and a comparison with liu etal 2020 highlighting the differences in practice is missing more comments what is the main advantage of this approach wrt liu etal 2020 how do these compare in terms of uncertainty estimation and computational speed this should be included the authors discuss liu etal 2020 in related work highlighting the important difference that liu etal 2020 is a parametric model but it is similar in that both formulate it as a gp and regularize for distanceawareness a comparison with liu etal 2020 would strengthen this paper considerably figure 1 could be improved it only shows deep ensembles as baseline how about the other approaches discussed in the paper moreover it is unclear whether vduq provides better inbetween uncertainty compared to deep ensembles similar width but deep ensembles interpolation is more smooth a deeper focus on the normalization ie theoretical or empirical properties of spectral normalization and comparison with other normalization schemes would make the paper more interesting the paper has some strongcategorical sentences with which we do not agree eg first intro paragraph there is no single method that works on large datasets liu etal 2020 would be such method for example simulation results are convincing in 
terms of utility as the authors demonstrate that the proposed approach works in highdimensional big datasets and meaningful applications such as causal inference for healthcare yet the experiments miss the point of elucidating how much spectral normalization compared to other normalization schemes could the authors include an ablation study showing the impact of the number of inducing points in the approximation the authors mention as a strength that a low number of inducing points is good enough so showing evidence for that would strengthen the paperdocsep summary this paper proposes variational deterministic uncertainty quantification vduq which adopts the stochastic sparse variational deep kernel learning dkl method to enable uncertainty estimations for deep models to avoid uncertainty collapse the deep neural network in the gp kernel is regularized with spectral normalization which ensures a bilipschitz constraint experiments show that vduq is effective in uncertainty quantification tasks reasons for score the idea is clear and the paper is easy to follow however my major concern is about the significance of the contribution and the experimental results see my detailed comments below pros the idea of obtaining uncertainty estimations with dkl is interesting moreover by using sparse variational inference in dkl the entire model is just like a dnn with an extra layer of inducing points requiring only a few extra parameters and computational cost which is also desired overall the paper is wellwritten the figures are instructive and helpful for understanding concerns my main concern is about the significance of the contributions sparse variational inference methods for dkl was previously proposed in wilson et al 2016a the main contribution of this paper seems to be the idea of introducing the spectral normalization regularization to stabilize the training of dkl and avoid uncertainty collapse although this is an interesting idea i think the authors did not provide clear enough explanations and rigorous analyses in page 3 the authors mentioned without the spectral regularization on the deep model the deep network is free to map data points that are far away from the training distribution to a feature representation thats similar to in distribution data this perhaps explains how uncertainty collapse happens in outofdistribution data to some extent but is this the primary cause of undesired uncertainties since the parameters of nn become the parameters of the gp prior as mentioned in section 2 optimizing the marginal likelihood or the elbo wrt the variational parameters and the nn parameters is actually fitting the prior to data which could also cause biased uncertainties 1 although the bilipschitz property can intuitively alleviate the biases it is not clearly explained how it works it would be better to provide more theoretical analysis in section 31 the authors raised a question about how informative the distance preserving explanation is about the motivation of using bilipschitz regularization however a more informative explanation is not provided also the last paragraph of section 31 is misleading the author mentioned a complete rigorous theory is not remains an open question if it is addressed and has theoretical insights into the use of spectral normalization the authors should add necessary references and explanations 1 salimbeni hugh and marc deisenroth doubly stochastic variational inference for deep gaussian processes advances in neural information processing systems 2017 minors 1 
in table 1 the results of vduq and duq is outperformed by the ensemble method in terms of both accuracy and auroc this seems a different conclusion form the results in amersfoort et al 2020 it would be better to provide more discussion about it which seems not to be expected 2 the authors claim the vduq can be trained in an endtoend fashion in section 3 however since the inducing points are initialized with kmeans algorithms that need to look at the training data which i think is still a lightweight pretraining docsepvariational deterministic uncertainty quantification summary the paper proposes a method for outofdistribution detection by combining deep kernel learning and gaussian processes using neural networks as a kernel for the gp as well as inducing point approximation alleviates the scalability issues of gp the idea itself has merits however the presentation and experiments are not convincing strengths the idea of using deep kernels within gp is a good solution that allows benefiting from both the expressiveness of the kernels and uncertainty estimates for gp additionally using the uncertainty estimates for causal inference is a nice application weaknesses although the approach is interesting it needs to be further developed and evaluated in multiple setups i find it limiting that it relies on the residual connection making it unsuitable for other nn architectures which means it will apply to only a limited number of tasks the presentation of the method should be better structured i appreciate the background on deep kernels and how it helps to overcome the limits of gp however there is a lack of presentation of the method itself a description algorithmic listing or even an equation for the uncertainty score proposed is missing in the current version of the text in the introduction vuqd is presented as favorable wrt uqd due to its rigorous probabilistic interpretation however this was never further analyzed in the text also seems that the method is concerned only with the epistemic uncertainty in the data in general the whole presentation of related work and positioning of this paper in the uncertainty literature is not clear what source of uncertainty does the method address there is much to be elaborated on this topic and i believe the discussion on this will significantly improve the paper the discussion on spectralnormalization and bilipschitz in 31 please clarify it or explain it better in the current writing it is contradicting the proposed method a complete rigorous theory of why the spectral normalization as used in this and previous work is a useful regularization scheme is not remains an open question experiments toy examples figure 1 on regression i do not find this example motivating first why choosing noiseless data second why is the vuqd increasing in reasons where there is data such as the peaks why does it compare only to deep ensembles figure 2 why choosing a toy example where a linear classifier works in the original space what is the sensitivity to the number of inducing points for the gp an ablation study at least for the toy data sets can help why were standard datasets such as mnist and fashion mnist not included the empirical evaluation should be extended with more baselines and datasets minor the manuscript needs proofreading language errors increase increasingly towards the conclusion update after reading authors response i thank the authors for their detailed responses they have answered most of my concerns and i raise my score to 5 i am still not convinced 
about the method covering both the aleatoric and epistemic uncertainties without any theoretical or intuitive justification and without any discussionclarification on that part if indeed this is the case then additional experiments should be included for example for a regression task the standard uci datasets 1 1 hernandezlobato j m and adams r p probabilistic backpropagation for scalable learning of bayesian neural networks in icml15 2015 ### Summary:
The reviewers all agreed that the paper represents thorough work but is also closely related to existing literature. All referees point to other, non-overlapping literature, so it is a crowded field the authors have entered. The amount of novelty needed can always be discussed, but given the referees' unanimous opinion and knowledgeable input, it is better for this work to be rejected for this conference. Using this input can make this work a good paper for submission elsewhere.
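Since the reviews above repeatedly discuss the two ingredients of vDUQ, a spectrally normalized feature extractor and a sparse variational (inducing-point) GP head, a rough PyTorch/GPyTorch sketch of how such pieces are commonly wired together may help. It is an editorial illustration, not the authors' implementation; the layer sizes, kernel choice, and the residual-free MLP are assumptions.

```python
# Editorial sketch of a deep-kernel-learning model with inducing points and spectral
# normalization, as discussed in the reviews above. Not the authors' code.
import torch.nn as nn
import gpytorch

class SpectralNormFeatures(nn.Module):
    """Feature extractor with spectrally normalized linear layers (this bounds the
    upper Lipschitz constant; the cited papers additionally rely on residual
    connections, not shown here, for the lower half of the bi-Lipschitz constraint)."""
    def __init__(self, in_dim, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(in_dim, 128)), nn.ReLU(),
            nn.utils.spectral_norm(nn.Linear(128, feat_dim)),
        )
    def forward(self, x):
        return self.net(x)

class InducingPointGP(gpytorch.models.ApproximateGP):
    """Sparse variational GP on the learned features; the number of inducing points,
    not the dataset size, controls the cost of inference."""
    def __init__(self, inducing_points):               # shape: (num_inducing, feat_dim)
        var_dist = gpytorch.variational.CholeskyVariationalDistribution(inducing_points.size(0))
        strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, var_dist, learn_inducing_locations=True)
        super().__init__(strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
    def forward(self, z):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z))

# Conceptual usage: z = SpectralNormFeatures(d)(x); posterior = InducingPointGP(z_init)(z)
```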
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the authors proposed a new method for retrosynthesis which does not require the mapping numbers and extracting templates from the literature basically the model consists of a graphbased encoder and a sequence based encoder the encoder consists of local aggregation from neighbors and global attention using a new positional method the decoder is a transformer model with relative positional encoding the method achieved promising results on several retrosynthesis datasets 1 from eqn1 to eqn5 you choose to use a complex gating mechanism to aggregate information is every component necessary what if using a simple gcn or gat 2 i think the model contains more parameters than conventional retrosynthesis models like conventional transformer gln could you please show compare the number of parameters of different methods 3 the authors should provide some real cases to show how the method outperforms previous baselines and why the method can obtain good results without templates missing references 1 dualview molecule pretraining httpsarxivorgabs210610234 the authors also work on retrosynthesis using transformer and gnn models a comparison is necessary the results in this paper are good although the method itself is not quite novel docsepthis paper proposes a graphtosequence architecture called graph2smiles for the retrosynthesis and the reaction outcome prediction graph2smiles uses an attentionaugmented dmpnn encoder to capture the local information and a global attention encoder with graphaware positional embeddings to capture the global information experiments show that graph2smiles is competitive with transformer baselines but does not outperform stateoftheart methods on tasks of the onestep retrosynthesis and the reaction outcome prediction the main strengths of this paper are as follows 1 this paper proposes graph2smiles which is a graphtosequence architecture without using sequence representations of input smiles therefore graph2smiles is permutation invariant to the input and does not need the data augmentation 2 graph2smiles has a wide range of potential applications because it can serve as a dropin replacement for transformer in many tasks involving the moleculestomolecules transformation my major concerns are as follows 1 this paper states that graph2smiles achieves stateoftheart top1 accuracy on common benchmarks among methods that do not use reaction templates atom mapping pretraining or data augmentation strategies the authors claim that integrating the above features or techniques with graph2smiles could improve the performance however they do not conduct experiments to demonstrate their claim besides as the aforementioned techniques are commonly seen in predictive chemistry tasks the authors may want to explain why they do not equip graph2smiles with these techniques 2 dgat is a variant of dgcn with attentionbased message updates however dgat does not outperform dgcn in terms of the top1 accuracy which is the basis for comparison throughout the discussion in this paper according to table 1 dgat has a small advantage over dgcn only in terms of the top5 and top10 accuracies in the reaction outcome prediction 3 graph2smiles involves calculating pairwise shortest path lengths between atoms which can be computationally prohibitive the authors may want to compare graph2smiles against baselines in terms of the computational complexity this paper studies two important problems in the 
computeraided organic chemistry and proposes a graphtosequence architecture called graph2smiles however the empirical results do not show a superior performance of graph2smiles to existing methods and the technical contribution is incremental docsepthis paper proposes a graphtosmiles framework which incorporates several recently developed engineering techniques from the community for synthesis planning and reaction outcome prediction tasks the proposed method leverages graph neural networks and transformer attention model to encode the graph inputs and then utilizes a transformer decoder to generate the smiles string as outputs experiments on benchmark retrosynthesis and reaction prediction tasks show that the proposed approach outperformed the vanilla smilestosmiles transformer baseline but obtained inferior results than some other advanced methods the paper is interesting but both the technical novelty and the experimental studies are weak to me the proposed framework integrates several recently developed engineering techniques and empirically shows its superior performance over vanilla smilestosmiles transformer baseline this paper provides another comparison baseline for research on retrosynthesis and reaction prediction nevertheless i have the following concerns regarding the paper 1 the proposed framework is similar to the nerf approach bi et al icml 2021 as cited by the authors nerf formulates the reaction prediction problem as a graphtograph translation problem also nerf first leverages graph neural networks to capture the local information in individual molecules and then utilizes a transformer encoder to further models the intermolecular interactions between nodes from multiple molecules furthermore nerf uses a transformer decoder to decode the output as graph these are almost the same as that in the method proposed in the paper the only different to me is that the nerf uses a transformer to decode the output into graph directly in a nonautoregressive fashion while the proposed method here uses the transformer to decode the output into smiles strings in an autoregressive fashion in this sense the novelty of this paper is limited to me note i think nerf can naturally apply to two or more molecules since the transformer encoder is used by considering all node embeddings from multiple molecules as a node set 2 experimentally the proposed method is not directly compared with nerf i think such comparison is necessary since as shown in the paper nerf outperforms the smilestosmiles transformer baseline and even the augmented version of it to which the proposed method here obtained inferior performance the two methods are similar and closely related i would expect the paper to include nerf into the main results in table 1 also for the usptostereomixed task i wonder what the reason was for only comparing with the vanilla transformer and why not comparing with the augmented transformer or the stateoftheart method chemformer 3 results in table 1 show that the proposed method is inferior to the transformer baseline with simple augmentation also 1 less than the tested method chemformer which makes the papers contribution less significant to me 4 the claim in the abstract molecular graph encoders that mitigates the need for input data augmentation is a strong claim to me nevertheless there is no evidence to support that claim the slightly better performance over the smilestosmiles transformer baseline is not a convincing evidence to me input data augmentation may play a significant role on 
regularizing the deep neural networks i think better justification to support the claim is necessary 5 the statement in the last sentence of the first paragraph on page2 smiles augmentationbe interpreted as evidence of the ineffectiveness of the smiles representation itself i think this hypothesis may need better support and analysis to me the augmentation of smiles strings can act as a model regularization method which helps the trained model to generalize well to unseen data and may not directly infer the ineffectiveness of the smiles representation itself 6 i am not fully understand the claim in the second paragraph of page2 we guarantee the permutation invariance of graph2smiles to the input eliminating the need for inputside augmentation altogether i think it would be useful to specify how and why so 7 the proposed method integrates several performance engineering techniques such as attention weights and multiheaded attention in the graph encoder integration of shortest path length in the positional embeddings etc so where the improvement is really coming from is not clear to me in the ablation study in table4 both the positional embedding and global attention are key to the transformers performance so the performance degradation is expected when remove them transformer expects a positional embedding to work and without a global attention the encoder will not be able to capture information from multiple molecules their graphs are disconnected the proposed method is similar to nerf as proposed by bi et al so the technical novelty is limited also the experiment study missed important comparison baselines furthermore a more comprehensive ablation study is needed since several engineering techniques are employed and it is difficult to tell where the performance improvement is really coming from when compared to the transformer vanilla model docsepthe paper proposes a gnnbased extension of transformers which have been shown to be effective for reaction prediction etc before in particular the gnnbased embedding of molecules in the reaction embeddings overcomes the artificial bias inherent in the often applied sequence embeddings the experiments show that there are sometimes increases of performance in reaction and retrosynthesis prediction and the approach could be applied to similar problems using a sequenceindependent encoding in the easytouse transformer makes sense and is a research question which is interesting for the i think mostly ai in chemistry community as far as i can see the technical novelty is limited the proposal is a combination of rather wellknown methods dmpnn where attention is added and reparameterized relative positional encodings using 0 to represent atoms in different molecules in transformers it is unclear if the proposed attention in the gnn is useful since we only see for reaction outcome prediction there is a small advantage of using dgat over dgcn and the ablation results are missing the results in table 1 are only convincing for usptostereomixed the experimental comparison for retrosynthesis compares to methods which use different forms of pretraining or augmentation which makes sense it shows that the augmentation still provides advantages and as far as i understand the graphbased molecule embedding entails that smiles augmentation would not improve your results further also it would make sense to include the ablation results for graph2smiles just using the transformer into table 2 directly other comments it is unclear to me what our hyperparameters for dgat 
and dgcn are adapted from graphretro means in 8 suv should be attnsum how exactly is mathcalbuv used in the learnable tilderuv table 4 since the paper proposes the attentionbased gnn the ablation should be provided for that model table 4 what is no global attention encoder just the combination of gnn embeddings wo transformer since transformer is the baseline i would not consider this as an ablation setting table 4 how do the results look on the other tasks for retrosynthesis all open existing systems that yield full retrosynthesis trees which i know use top50 or similar so the fact that the base model is better at top10 already renders the analysis questionable we include part of our code to reproduce some specic results why not all overall i think the authors proposed model for reaction prediction makes sense however as mentioned above the papers writing could be improved the technical contribution is limited and the experiments also show only limited improvements altogether i therefore suggest to reject the paper at the current moment i am happy to adjust my score in case i missed critical parts ### Summary:
While the reviewers appreciated the method's ability to replace Transformer models and SMILES data augmentation, their main concerns were with (a) the experimental section and (b) the technical innovation over prior work, which updated drafts of the paper did not fully resolve. Specifically, for (a): this work performs very similarly to prior work. For reaction outcome prediction, the proposed method improves top-1/3/5 accuracy on USPTO-STEREO (mixed) but is outperformed by prior work on top-1/5/10 for USPTO-460k (mixed). For retrosynthesis, the model is outperformed on USPTO-full and only outperforms prior work that does not use templates, atom-mapping, or augmentation on top-1 for USPTO-50k. The authors argue that their method should be preferred because it does not require templates, atom-mapping, or data augmentation. The reviewers agree that template-free and atom-mapping-free methods are more widely applicable; however, the benefit of being augmentation-free is not convincingly stated by the authors, who only say that their approach simplifies data preprocessing and potentially saves training time. The authors should have empirically verified these claims by reporting training time, because it is not obvious that their model, which requires pairwise shortest path lengths, is actually faster to train. For (b): the reviewers believed that the paper lacked technical novelty given recent work (e.g., NERF); the authors should more clearly distinguish this work from past work (e.g., graphical depictions and a finer categorization of past work may help with this). Given the similar performance to prior work, the lack of evidence to support the training-time claims, and the limited technical novelty, I believe this work should be rejected at this time. Once these things are clarified, this paper will be improved.
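The reviews and summary above center on Graph2SMILES's use of pairwise shortest-path lengths between atoms as graph-aware positional information for the global attention encoder. A generic reconstruction of that idea is sketched below (editorial, not the authors' code); treating the distances as a learned per-head attention bias, the clipping threshold, and the handling of atoms in different molecules are all assumptions.

```python
# Editorial sketch: graph-aware positional information from pairwise shortest-path
# lengths, used here as a learned additive attention bias. Assumes nodes are
# labelled 0..n-1; the clip value and bias formulation are illustrative choices.
import networkx as nx
import torch
import torch.nn as nn

def shortest_path_matrix(graph: nx.Graph, max_dist: int = 10) -> torch.Tensor:
    """Clipped pairwise shortest-path lengths; unreachable pairs (e.g., atoms in
    different molecules of the same input) keep the reserved index 0."""
    n = graph.number_of_nodes()
    dist = torch.zeros(n, n, dtype=torch.long)
    for i, row in dict(nx.all_pairs_shortest_path_length(graph, cutoff=max_dist)).items():
        for j, d in row.items():
            dist[i, j] = min(d, max_dist) + 1      # shift so index 0 stays "unreachable"
    return dist

class DistanceAttentionBias(nn.Module):
    """Embeds each (clipped) distance into one scalar per attention head."""
    def __init__(self, num_heads: int, max_dist: int = 10):
        super().__init__()
        self.embed = nn.Embedding(max_dist + 2, num_heads)
    def forward(self, dist: torch.Tensor) -> torch.Tensor:
        return self.embed(dist).permute(2, 0, 1)   # (heads, n, n), added to attention logits
```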
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper proposes to incorporate the proprioceptive and visual information together for quadrupedal locomotion the authors introduce a new model architecture named locotransformer that consists of separate modality encoders for proprioceptive and visual inputs the output of which is fed through a shared transformer encoder to predict actions and values through experiments the authors demonstrate that the robot with the help of both the proprioceptive and visual inputs can walk through different sizes of obstacles and even moving obstacles they have also transferred the learned policy from simulation to a real robot by running it indoors and in the wild with unseen obstacles and terrain strength this paper tackles an important question of how to incorporate visual information in learning policies for quadrupedal locomotion where most existing learningbased control of quadruped robots in the published works only considered proprioceptive information and the robots are essentially blind the use of visual information can allow the robots to be less conservative and plan their actions for a longer time horizon as has been evident from the authors comparison with a stateonly baseline that only considers proprioceptive inputs this paper has extensive experiments and indepth analysis in simulation which provides a good reference for the readers to understand the benefits and limitations of different design choices the realworld demo from simtoreal transfer also provides concrete empirical evidence on the practical use of the proposed method weakness while i like the direction this paper is going im not entirely convinced that the realworld experiments in the paper fully demonstrate the necessity of visual information for example in 1 2 the authors have shown working demos on terrains seemingly much more challenging than this paper 1 2 also showed examples of stair climbing a task where vision is supposed to be extremely helpful a blind robot may have to make a few failed trials before it knows the height of a stair you could also imagine the benefit of vision in cases that require more precise footstep planning eg httpsyoutubek7s1sr4jdlit176 the paper will be much stronger by including some more concrete comparisons with the current stateoftheart learning approaches on what can be made possible via vision while previous blinds robots struggle while i agree that vision is important for robots to make longterm plans and autonomously traverse around obstacles im not sure whether this papers approach is better than more classic robotic pipelines for example instead of treating the depth image as a 2d grid and processing it using cnn one could use the depth camera to build a 3d map of the surrounding environment and blend in the explicit notion of whats traversable and whats not you can then plan the trajectory based on the perception results this seems to be how boston dynamics spot uses the visual information httpswwwyoutubecomwatchvve9kwxkxus and has shown great generalization ability in realworld scenarios showing examples of how this papers way of using visual information is better than classic pipelines may be essential to claim improvements continuing my previous point it would be better if the authors could include more discussion on the current state of quadrupedal locomotion both in academia and industry where boston dynamics spot is seemingly better in terms of generalization 
and robustness than any of the reinforcement learningbased approaches the authors may also shed light on under which scenarios we should choose rltrained robots over spot how does the method work in the real world if there are moving obstacles eg humans and other animals how well does the method work compared with the builtin controller of the robot the paper may need a few passes of proofreading where the current manuscript includes a lot of typos just to name a few section 1 contribution bulletin points we the propose locotransformer section 3 a mdp an mdp section 6 the visual inputs also inputs the locomotion 1 joonho lee jemin hwangbo lorenz wellhausen vladlen koltun marco hutter learning quadrupedal locomotion over challenging terrain 2 ashish kumar zipeng fu deepak pathak jitendra malik rma rapid motor adaptation for legged robots post rebuttal i thank the authors for the detailed feedback and additional experiments which addressed most of my concerns great job i have also read the reviews from other reviewers and decided to raise my score to 8 accept good paper i like the direction this paper is going combining visual and proprioceptive information to train rl agents for quadrupedal locomotion i also love that the authors include realworld demos of the learned policy on a physical robot however my main concern is that although there are extensive evaluations in the simulation the current set of realworld examples may not be sufficient to show the benefit of visual inputs examples like climbing stairs or scenarios that require more precise footstep planning would make the paper much stronger im generally excited about the progress in this direction thus im currently leaning towards the acceptance side but i hope the authors can address the issues mentioned above docsepin this paper the authors proposed a transformer based architecture that combines both visual depth and proprioceptive inputs ie imu and joint angles to solve visual locomotion tasks the authors demonstrated that their approach can solve challenging visual navigation tasks and locomotions task on uneven terrains the proposed method out perform proprioceptive only visual only and hrl baselines the sim trained policy has been demonstrated on the real a1 hardware the main strengths of the paper 1 proposed a novel transformer based architecture that can train visuallocomotion policies endtoend and demonstrated good navigationobstacle avoidanceuneven terrain walking results in the simulation 2 zeroshot real world transfer to a a1 robot and demonstrates walking navigation behavior in various environments the main weakness of the paper not enough baselines to compare with as the authors cited there are many approaches to tackle visual locomotion navigation problem besides end to end training for example in the hierarchical approach one can combine learnedoptimization based navigation pretrained or hand tune walking ie mpc motions so in total even the hierarchical approach can have four different combinations to compare with yet i saw non of them here i would say the authors should include at least one or two such baselines to compare with and document the performances and cons and pros the authors proposed to use transformer architecture to solve visual locomotion navigation tasks the proposed approach is compared with a few endtoend trained baselines including hrl and has demonstrated advantages the authors also deployed the trained policy successfully to the real robot docsepthis work proposes a novel architecture for 
quadrupedal locomotion that fuses proprioceptive and visual information with a transformerbased model to enable an agent to proactively maneuver environments with obstacles and uneven terrain by anticipating changes in the environment many steps ahead the method is extensively evaluated in simulation and on a sim to real transfer tasks the method is shown to both achieve higher reward but also better capacity to generalise in the context of sim to real overall the paper is well written and the provided evaluation is conducted fairly and well strengths and weaknesses things i liked about this paper a powerful framework fusing visual and proprioceptive data for quadrupedal locomotion using transformer architectures is an interesting and also valuable approach that works well and sets an excellent opportunity for future work a well written paper overall the paper is well written and all sections are broadly very clearly described useful insight i like the provided key insight that proprioceptive states offer contact measurements for immediate reaction while visual sensory observations can help with longerterm planning things that can be improved number of seeds is not great the adopted modelfree approach is known to have very unstable learning process of the dynamics function which ideally requires 10 or more seeds to provide a solid results using only 5 seeds is not great more details below prose is not perfect there are some minor details and clarifications that may help further improve clarity using 5 random seeds for a model free approach is rather small as a number ideally the evaluation should be done on 10 or more seeds in fact i suspect that some of the results such as the moving obstacles from table 3 would change if the approach was evaluated on more runs nevertheless the provided training curves seem to have fairly small variance as illustrated in figure 4 which makes me more inclined to agree that 5 seeds are sufficient to report on accurate results in addition i would expect that the variation in the learnt dynamics to primarily affect the performance of the learnt agent on the physical quadruped system however this does not seem to be the case in the reported results which is great as long as all 5 seeds were used to extract those results it is great that the paper considers 15 runs per seed but i wonder if the results were acquired through cherry picking best n seeds this is a detail that is not currently mentioned in the paper but would certainly improve clarity if it did there are a few additional minor comments currently the distance measurement reported in meters is mentioned only in the text and not in the tables stating this there too would make it much clearer similarly what exactly does the collision happened represent are these total number of collisions over 1000 steps the number of time steps where collision happens between the robot and obstacles over the course of an episode states the explanation seems a bit overly complicated why not just the number of collisions with obstacles per 1k step long episode or something along those lines there is a typo in the contribution we the propose should be we propose another typo is whereas it for our method either plateaus should be whereas for our method it either plateaus overall this paper is written well and has a sound idea that is supported well by an extensive evaluation there are some minor details that may further improve the quality of the paper but i see this as a strong submission which i can recommend for 
acceptance docsepthis paper proposes an approach to legged locomotion which leverages a transformerbased model and is trained via endtoend reinforcement learning it provides extensive experimental evaluation of the approach in terms of performance and safety metrics both in simulation and using realworld experiments the code is expected to be opensourced this paper is very clear in its exposition providing a detailed diagram of the method clearly labeled inputs and outputs and relevant implementation details the experiments appear sound with a number of baselines and ablations provided i want to emphasize the value of realworld experiments in this space as the simtoreal gap can be significant and invalidate otherwise good looking results scientifically the paper proposes an architecture that is novel and can serve as a broader proof point that visionbased locomotion can be competitive and robust when trained with a sufficiently expressive model strengths the paper tackles an important research problem visionguided locomotion and has made significant progress along this direction novel network architectures such as transformers are underexplored in the legged robots community this paper demonstrates that incorporating such architecture indeed makes a difference in performance the proposed method is validated on a real robot the evaluations are comprehensive and the conclusions are convincing weakness the technical novelty is lean neither of the key components of the paper are novel rl for locomotion and transformers although this can be considered as a weakness it is not a deal breaker given that the combination of these two and the application to legged robots are novel and potentially influential the results are mostly obstacle avoidance on the flat ground where legs are not essential most of the experiments can be done with a wheeled robot to show the true value of this paper more challenging terrain needs to be considered and tested on such as stairs stepstones tall grasses rocks etc in these terrains both vision and legs are critical from the accompanying video it is a bit disappointing that the robot only learns steering for obstacle avoidance but does not learn foot clearance or foothold location on different types of terrains the robot always drags the foot even on pebbles 525s in the video where higher foot clearance is clearly a preferred choice since this paper trains endtoend i would expect that these behaviors would emerge automatically if trained in relevant environments showing these behaviors foot clearance foothold location change of gait pattern in addition to obstacle avoidance would significantly strengthen the paper additional questions 1 in addition to domain randomization does the paper apply other techniques for simtoreal transfer for example i would imagine that there will be a large simtoreal gap in vision the depth images from intel realsense can be noisy and with holes especially in outdoor environments do these simtoreal gaps in vision cause any problems when deploying the policy on the robot 2 how much tuning is needed to learn natural deployable locomotion gaits in the video the learned locomotion gait is quite reasonable and deployable on the robot the paper explicitly mentioned that it did not use trajectory generators does it purely rely on reward shaping if so how much tuning is needed and what are the most important terms in the reward function that encourage the emergence of natural gaits good paper on relevance and experimental evidence on a topic that 
is very much of interest to the robot learning community today novelty limited due to combination of known techniques edit bumped confidence to a 5 based on comments and rebuttal this is a solid contribution ### Summary:
The paper addresses vision-based and proprioception-based policies for learning quadrupedal locomotion, using simulation and real-robot experiments with the A1 robot dog. The reviewers agree on the significance of the algorithmic, simulation, and real-world results. Given that there are also real-robot evaluations and an interesting sim-to-real transfer, the paper appears to be an important acceptance to ICLR.
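The reviews above describe LocoTransformer's core mechanism: separate encoders for proprioceptive and depth inputs whose tokens are fused by a shared transformer before the policy and value heads. The sketch below is an editorial reconstruction of that data flow; the proprioceptive dimension, token counts, layer sizes, and the mean-pooling readout are assumptions, not values from the paper.

```python
# Editorial sketch of proprioceptive/visual token fusion with a shared transformer,
# as described in the reviews above. All sizes are assumptions (e.g., proprio_dim,
# a 64x64 depth image, 16 visual tokens, 12 joint targets for a quadruped).
import torch
import torch.nn as nn

class ProprioVisualFusion(nn.Module):
    def __init__(self, proprio_dim=93, embed_dim=128):
        super().__init__()
        self.proprio_enc = nn.Sequential(
            nn.Linear(proprio_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim))
        self.visual_enc = nn.Sequential(               # depth image -> 4x4 grid of tokens
            nn.Conv2d(1, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, embed_dim, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(start_dim=2))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.policy_head = nn.Linear(embed_dim, 12)     # 12 actuated joints on the quadruped
        self.value_head = nn.Linear(embed_dim, 1)

    def forward(self, proprio, depth):                  # proprio: (B, D_p); depth: (B, 1, H, W)
        p_tok = self.proprio_enc(proprio).unsqueeze(1)            # (B, 1, D)
        v_tok = self.visual_enc(depth).transpose(1, 2)            # (B, 16, D)
        fused = self.fusion(torch.cat([p_tok, v_tok], dim=1))     # shared transformer over all tokens
        pooled = fused.mean(dim=1)
        return self.policy_head(pooled), self.value_head(pooled)
```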
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary this paper proposes an unsupervised graphlevel representation learning method considering globallocal disentanglement specifically the authors propose a gldisen model based on graph vae architecture to jointly learn global and local representations for a graph the global information is shared across the whole graph while the local information varies from patch to patch corresponding to common and local factors respectively empirical experimental results show that the learned representation achieves superior performance in the downstream graph classification task and analyses demonstrate the learned representations exhibit some disentangle property pros 1 unsupervised graph representation learning considering global and local disentanglement seems to be a novel problem 2 the proposed method generalizes disentangled vae into graph data to disentangle common factors from the local ones the formulations and model descriptions are clear 3 experiments including both qualitative analysis and quantitative results demonstrate the effectiveness of the learned global factors in downstream tasks cons and questions my major concern lies in the insufficiency of experiments specifically 1 the disentanglement part is modified from betavae since normal vae is adopted in graphs eg variational graph autoencoders by kipf and welling the authors need to compare these methods to demonstrate the improvement is actually from the disentanglement part rather than the vae structure 2 although the authors demonstrate the effectiveness of disentanglement in downstream tasks ie graph classification it is unclear whether these global factors have intuitive explanations on some of the datasets eg the showcases of molecular graphs in duvenaud et al 2015 or the authors may adopt some synthetic datasets 3 since both the global and local node representations are disentangled i am curious whether the local node representations can also be validated in some downstream nodelevel tasks 4 figure 2 in section 421 is not entirely convincing since there is no reference line of how much correlation a nondisentangled method will have eg in ma et al 2019 the authors compare the disentangled method with gcn other questions 5 how the proposed method can handle the mode collapse problem ie only a few latent factors learn useful information 6 as shown in table 1 though the proposed method outperforms other gnns it does not always compare favorably to kernelbased methods such as gckn the authors may want to further elaborate on the pros and cons of using gnn vs kernelbased methods 7 there lacks a discussion on the complexity of the proposed method 8 the technical contribution is somewhat limited since both betavae and graph vae are known in the literature it would be more interesting if the authors can integrate localglobal disentanglement with local neighborhood disentanglement in ma et al 2019 to derive a more novel architecture i will be happy to improve my scores if authors can address the above questions i have updated my score considering the paper has improved its quality after the revision adding more experimentsbaselines comparison with the literature etc new updates following the new comments of reviewer 4 i also briefly check the code in the supplementary material and find it indeed seems to have the consistency problem ie not reconstructing graph edges as mentioned in the paper thus i am also wondering how 
the authors implement graphvae in the rebuttal phase and whether the improvement of their proposed method over graphvae is really from disentanglement or the differences in the autoencoder based on this potentially serious problem i reinstate my original score and think the paper should be clarified before acceptancedocsepthe authors propose a vaetype generative model approach to characterize the hidden factors with a divided focus on the global and local reconstructions the claim is that the learnt hidden representations are disentangled which is not defined clearly using two reconstruction terms the setting of the problem adopts the graph vae setting in 12 which i think the authors should mention in the related work and the elbo local aggregation convolution approaches used in this paper are relatively standard in the generative modelling and graph representation learning domain apart from the limited novelty which would not have affected my evaluation if it solves the problem as claimed i have several major concerns about this paper 1 the notion of disentanglement is not welldefined in the first place in the vae setting where the hidden factors are stochastic does disentanglement refer to independence or they are orthogonal under a specific measure induced by the graph itself the claims made by the authors can never be examined rigorously the visual results do not constitute supportive evidence as i shall discuss later 2 there is no guarantee that the socalled global and local factors are not confounded both the global and local reconstruction terms involve the two types of factors given the high expressivity of deep learning models the local factors can easily manage both tasks or the global factors are merely enhancing the signals of the local factors there no mechanism to prevent the crossterms during the optimization so the learning process of the global and local factors confounded as a result of how the authors design the objective function 3 unclear interpretation of the visual results it seems that the visual results showcase a similar pattern among the local and global factors despite the difference that the signal is stronger for the local factors which is evident as they play a more critical role in the objective in the absence of a clear definition of disentanglement more persuasive numerical results and interpretations are needed 1 kipf t n welling m variational graph autoencodersj arxiv preprint arxiv161107308 2016 2 xu da et al generative graph convolutional network for growing graphs icassp 20192019 ieee international conference on acoustics speech and signal processing icassp ieee 2019 docsepi think the idea of the paper is interesting the writing is well and easy to read however it does not meet the condition of acceptance from my point of view i have some concerns with its characterization of the literature some important related work is missing it seems authors ignore talking about some literature of unsupervised graph representation learning such as 1 2 etc also they do not make a performance comparison with the methods above in experiments 1 contrastive multiview representation learning on graphs icml 2020 2 selfsupervised training of graph convolutional networks arxiv 2020 disentangling the global and local generative factors graph representation learning is important however the authors didnt explain the definition of global and local factors clearly it would also be better if they can show an example of globallocal factors when generating graph the experiments 
are missing i have some concerns as follows what is the best number of generative factors which is important for this method can this method occur mode collapse and how to valid or prevent it how can this method prove that each factor is necessary for the generative process what is the real meaning of each factor how about the timespace complexity of this method more experiments or discussions should be conducted to answer these questions based on the above reasons this paper can have much more improvement docsepin this paper the authors proposed to disentangle the global level information from the local level one to reduce the effect of the irrelevant information the proposed method outperforms several stateofthearts on multiple datasets for graph classification overall i like the idea of applying unsupervised disentangled learning to graph level representation learning some concerns are on the experimental study and missing references strong points 1 disentanglement learning is a cuttingedge field and has gained much attention in recent years it is true that global and local features often entangle together when we learn graph representations the problem is real and important 2 the architecture of the model is easy to understand and reasonable 3 the experimental study is comprehensive including both qualitative analysis and quantitative analysis the experimental setup instructions and pseudocodes are very clear making the algorithm easy to be reproduced weak points 1 performing experiments only on graph classification tasks weakens the significance of the paper it is common for graph representation learning methods to be tested on other tasks such as graph similaritydistance computation and graphlevel clustering in order to draw a general and convincing conclusion 2 some important references are missing the authors should discuss and compare with them on graphlevel representation learning bai et al unsupervised inductive graphlevel representation learning via graphgraph proximity ijcai 2019 on disentangled representation learning yang et al factorizable graph convolutional networks nips 2020 guo et al interpretable deep graph generation with nodeedge codisentanglement kdd 2020 3 the paper mentioned that the global and local latent generative factors are sampled from their respective posterior distributions more details are expected docsepin this paper the authors proposed a disentanglement learning based approach for unsupervised graph level representation learning they assume that disentangled representations which capture these global and local generative factors into independent latent units can be highly beneficial for graph level tasks the extensive experiments and analysis show that our method achieves the stateoftheart performance on the task of unsupervised graph representation learning strengths 1 the paper is well written and the disentangling factors can benefit the unsupervised graph representation learning 2 the performance of this work is good compared with the stateoftheart baselines the source code is also available 3 the related work is sufficient to understand the motivation of this work the weakness 1 the idea is not very novel for example two important assumptions 1 a global and local factor for graph analysis 2 local latent factors are independent those two assumptions actually have been explored in unsupervised learning tasks for example the follow paper1 exactly disentangle local and global information into two separate sets of latent variables within the vae 
framework it seems that migrating this idea under graph is straightforward the paper is more like a mixture of 1 and higgins et al 2017 and gcn 1 charakorn rujikorn et al an explicit local and global representation disentanglement framework with applications in deep clustering and unsupervised object detection arxiv preprint arxiv200108957 2020 2 in figure4 it seems that the gldisen global has very good accuracy the gldisen globallocal combines only outperform gldisen global within a very small range of lambda but with large fluctuation does that mean the local factor contribution little to the overall performance in concludethe authors propose a vae based learning algorithm to disentangle the global graphlevel information the overall presentation is good the similar ideas have been explored in unsupervised learning the novelty of this work is thus not very impressive ### Summary:
in this paper the authors designed a disentanglement mechanism for global and local information of graphs and proposed a graph representation method based on it i agree with the authors that 1 considering the global and local information of graphs jointly is reasonable and helpful as shown in the experiments and 2 disentanglement is different from independence however the concerns of the reviewers are reasonable eq 2 and the paragraph before it indeed show that the authors treat the global and the local information independently moreover the disentanglement of the global information the whole graph and the local information the patchsubgraph is not welldefined in my opinion for the mnist digits the angle and the thickness or something else of strokes can be disentangled not independent factors that have influences on different properties of the data in this work if my understanding is correct the global and the local factors just provide different views to analyze the same graphs and the proposed method actually designs a new way to leverage multiview information it is not sure whether the views are disentangled and whether the improvements are from disentanglement if the authors can provide an example to explain their disentanglement simply as the mnist case does this work will be more convincing otherwise this work suffers from the risk of overclaiming
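to make the objective under debate above concrete, here is a generic globallocal graph vae elbo of the kind the reviews describe; this is an illustrative sketch only, and the factorized posterior and the separate beta weights are assumptions on my part rather than the papers exact equation 2

```latex
\mathcal{L}(\theta,\phi)
  = \mathbb{E}_{q_\phi(z_g,\{z_i\}\mid G)}\!\left[\log p_\theta(G \mid z_g,\{z_i\})\right]
  - \beta_g\,\mathrm{KL}\!\left(q_\phi(z_g\mid G)\,\|\,p(z_g)\right)
  - \beta_l \sum_{i\in V}\mathrm{KL}\!\left(q_\phi(z_i\mid G)\,\|\,p(z_i)\right),
\qquad
q_\phi(z_g,\{z_i\}\mid G)=q_\phi(z_g\mid G)\prod_{i\in V}q_\phi(z_i\mid G).
```

note that the factorized posterior on the right is exactly the independence assumption several reviewers point to when arguing that disentanglement of the global and local factors is asserted rather than guaranteed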
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper proposed a joint approach to learn embeddings for kg entities from both descriptions and kg structure for the task of openworld knowledge graph completion the main idea is separate modules for obtaining structural embeddings of entities and relations in a kg and description embeddings of descriptions of possibly new entities and a description projection module to project the description to a space where the structure and description embeddings are aligned kg embeddings the approach was evaluated on three different datasets showcasing the robustness of the approach comprehensive analysis of the proposed approach compared with several prior work on the topic the differences with considered baselines are well explained some design choices of the approach need justification see q1 questions 1 what is the disadvantage of using transformerbased model to represent an entity description when the description is short ten of fewer words why is there a need to use two different encoders depending on the description length 2 in section 62 four tasks were introduced but the last one was without any elaboration unlike the other threedocsepthe authors propose a new method for computing knowledge graph embeddings on kgs where textual descriptions of entities are available they evaluate the effectiveness of their method on a number of downstream tasks including openworld kg completion closedworld kg completion and entity classification they show that their method folk outperforms baselines across all tasks in key metrics the authors also perform several ablations to show effectiveness of different parts of their model they also highlight areas in which their model is better than baselines such as owe and provide intuition for it however there are several shortcomings in the way the paper is written and organized some of these are 1 cross domain alignment sec 43 is hard to understand authors say they treat an entitys immediate graph neighbourhood as its context and use skipgram objective to maximize the probability of the entity given the neighbourhood entities however this is nowhere reflected in the loss functions used loss function mathscrl as well which is used in eqn 2 and 3 hasnt been defined in the main text which makes things even more confusing mathscrlproj is not described anywhere training algorithm should be part of main text since it is a vital differentiator from owe 2 dsa score is hard to understand given description is not enough to be able to reproduce it 3 folk is the first framework to use a transformerbased encoder to embed openworld entities kgbert 2019 uses bert for encoding closedworld entities extending it to openworld entities is a straight forward solution and not a novelty 4 the score for any triple must be of the order of the embedding dimensions section 1 point 3efficient ranking what does this mean 5 whats the difference between folks and folkl its not specified 6 a lot of the content that is being referred to in the main text lies in the supplementary material the authors should move at least the most relevant stuff such as training algorithm loss functions to main text it seems that the paper has been written in a hurry and is lacking sufficient description for the method given that the main contribution is the proposed new method folk a better method section and reorganizing is needed docsepthis paper presents an approach to learn representations of 
openworld entities in a kg with given textual descriptions of openworld entities this paper describes a technique that jointly learns embeddings for kg entities from descriptions and kg structure for openworld knowledge graph completion the technique is experimentally validated on the yago310open and wn18rropen datasets and it beats previous open world entity representation learning methods i think this paper is a decent focused contribution however i think it can be significantly improved in its presentation quality 1 the drawbacks of existing approaches is not clearly described and thus it is hard to understand the contributions of this work 2 the method description is not complete for example lproj is not described it is hard to understand the approach 3 the descriptions of the dsa score is also fuzzy it is not clear how i would implement this with just this description the method is taken from biskip but it is hard to understand how the application is different here statistical significance numbers should help in table 4 i also have the following questions 1 why do we need a 2 phase learning approach how does it compare to a one phase learning model 2 how do you find the optimal value of alpha ### Summary:
this paper presents a new approach called folk to learn entity embeddings that exploits not just the kg structure but also the textual entity descriptions evaluated on the task of openworld knowledge graph completion extensive experiments show the value of the approach the authors were able to address most reviewer feedback in the revised version that was uploaded to the system
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the present paper analyzes the learning dynamics of node perturbation np a biologically plausible learning algorithm used to train feedforward networks overall the paper states a negative result about the unstability of np due to weights diverging through learning grounded in analytically tractable results obtained on linear models in the studentteacher setting carefully checked against numerical experiments this analysis leads the authors to prescribe weight normalization to prevent this phenomenon the predicted behavior in the linear regime is empirically observed in nonlinear models on two training tasks which validates the soundness of the aforementioned analysis more precisely section 3 introduces the np algorithm recalls that the resulting weight update provides an unbiased estimate of the sgd weight update and shows that the cosine similarity between these two weight updates for a given output layer scales as the inverse of the squareroot of the size of this output layer eq 8 suggesting that np and sgd updates become nearly orthogonal for wide networks also by comparing the covariance matrix of the weight updates for np and sgd eq 10 they want to highlight that np is much noisier than sgd beginning of section 4 recalls the minimum number of training steps needed to reach an error level for np in the linear regression setting and introduces a deep linear model consisting of two linear layers namely yw2 cdot w1 cdot x the rationale behind this choice is to analyze separately the impact of the number of perturbed units and that of the size of the output layer this deep linear model is studied in the studentteacher setting whereby the target is given by a teacher network consisting of a linear transformation with some additive gaussian noise if this noise is nonzero then the authors say there is a mismatch between the student and teacher networks first paragraph of section 4 shows the analytical np learning dynamics in terms of the error epsilon defined as the squared distance between student and teacher model and the input weight norm aw1 through time where time unit is a batch iteration obtained in the large hidden layer noiseless ie no mismatch limit all the calculations leading to these results are provided in the appendix this analysis unveils two main results summarized by eq 14 first the weight norm grows monotonically increases through time second there are two working regimes for np depending on the value of the learning rate used if the learning rate is smaller than a critical threshold the error converges to zero if not the error decreases until the input weight norm a reaches a threshold corresponding to the sign inversion of dotepsilon wherefrom the error rises again hence the instability which is caused by the input weight norm crossing a threshold this theory is successfully checked against numerical simulations fig 1 also the minimal number of training steps required to reach an error level is analytically computed eq 15 and numerically verified fig 2 and highlights its weak dependency with the hidden layer size second paragraph extends the previous study to the case of mismatch eg the labels given by the teacher network are noisy sigmat neq 0 it shows that the instability previously always happens regardless of the learning rate value the frontier at which it occurs is analytically derived eq 16 and these results are checked numerically fig 3a there is still 
a critical learning rate threshold delimiting different scalings of the number of updates required to reach minimal error as a function of the learning rate fig 3b a behavior which is in stark contrast with that of sgd fig 3c fig 4d shows that by normalizing the np weight update np learning is stabilized at the cost of a bias in the weight update which grows with the hidden layer size in section 5 the learning dynamics of np are numerically analyzed on nonlinear models on the mnist and sarcos training tasks again the instability is observed and vanishes when normalizing the np weight updates suggesting that the previous analysis also holds in the nonlinear setting it is also shown that np is up to 1000 times slower than sgd which is partially explained by the fact that np relies on a scalar rather than vectorvalued error signal strengths the structure and the writing of the paper are clear the figures are neat the meanfield model derived in the linear studentteacher setting matches very well the experiments the theoretical analysis seems to lead to a simple trick to unlock np scalability weaknesses the derivations in the appendix are not easy to follow especially because multiple approximations are sometimes used without detailed steps i will come back to this in the questions the title raises big expectations that are not met the term scalability calls for complex tasks beyond mnist this is a major limitation of the paper my main concern is that this work titled stability and scalability of np learning does not go beyond the mnist task i think the most important contribution of this work is theoretical there is an excellent match between theory and experiments as per the figures shown in the main on linear models which extend to nonlinear models and gives a very simple insight as to how to use np properly that is by normalizing the np weight updates i think this simple result would have been even more compelling if for instance demonstrated on a 45 layers convolutional architecture trained by np on cifar10 the ideal situation being without weight update normalization the model doesnt train with weight update normalization it trains even very slowly but it does then it could be concluded that the theoretical model proposed totally accounts for the unscalability of np when normalization is not applied as it stands with mnist being the most difficult task tested its harder to conclude the same in the bioplausible deep learning literature the mnist task alone as a benchmark to claim scalability is unsufficient to my eyes however it might be that the sole theoretical contribution of this work and mnist benchmark abide by the standards of another community eg statistical physics x ml i may not be aware of so my judgement might be biased by the fact i come from the bioplausible deep learning literature also i think that the theoretical contributions should be better highlighted in the main i would recommend accept if 1 the authors ran a cifar10 experiment with a 5 layersdeep convolutional architecture observed the same kind of behavior as on mnist and sarcos 2 computations in the appendix are more detailed than they are now see the questions section above docsepthe paper considers the dynamics of training deep networks with node perturbation the authors provide a detailed theoretical analysis of learning dynamics of node perturbation vs sgd in linear networks with one hidden layer in the limit of infinite width the analysis reveals that the input weight norm determines whether the loss increases 
or decreases during training with node perturbation with large weight norm corresponding to unstable training this motivates using weight normalization for node perturbation empircally on linear and nonlinear networks weight normalization stabilizes training with node perturbation originality the work is original the analysis on deep linear networks is novel and the insights from the experimental and theoretical results are not previously explored to my knowledge this paper is the first to provide a theoretical explanation for why weight normalization is useful for node perturbation quality the contributions are high quality the theoretical analysis appears sound and the experiments are comprehensive one potential drawback is that the experiments mainly consider simple datasets sarcos and mnist and mostly consider only one hidden layer networks although a few experiments consider multilayer nonlinear networks investigating the performance of node perturbation in more challenging settings and more complex architectures could help justify the usefulness of the linear network analysis however this is not strictly necessary given that many of the main contributions of the paper are theoretical clarity the paper is well written the mathematical notation is clear and the figures are well illustrated however many of the key and interesting theoretical results of the paper are in the supplementary material particularly the meanfield dynamics of np it may help to provide a brief sketch of these results in the main paper significance the paper may have significance to researchers specifically studying node perturbation as a learning rule however its applicability to the field of biologically plausible learning appears more limited especially given that node perturbation does not appear to be empirically as effective as other biologicallyplausible alternative learning rules moreover the experimental results are limited to simple datasets and architectures the significance of the paper could be significantly enhanced by exploring more complex settings and showing for example significantly improved performance of node perturbation when it is combined with weight normalization the authors address the limitations of their work in the discussion section as they note the utility of node perturbation is limited in the supervised setting although its utility is more clear in a reinforcement learning setting the authors may also want to comment on the potential applicability of their linear analysis to other settings as they empirically find the qualitative observations of the linear network extend to certain nonlinear networks the authors may wish to specify in which settings these observations may not apply docsepthe authors present a neat analysis of the scalability and stability of the node perturbation algorithm which is one of the popular bioplausible credit assignment algorithms for deep networks based on their analysis and inferences they introduce a weight normalization trick that seems to alleviate the issues with the algorithm albeit at the cost of adding a bias to the gradient estimates in the first part of the paper the authors use analytical tools to dissociate the effect of number of output nodes of the network from the number of perturbed nodes of the network to claim that node perturbation is scalable for deep linear networks however they demonstrate that the dynamics entails an instability in the weight norm they validate these analytical results empirically in deep nonlinear networks and 
therefore establish a key result in training networks using node perturbation they also show that the instablity is worse when there is noise in the labels ie teacher noise finally they demonstrate that the weight normalization trick can help stabilize the algorithm albeit hurt performance by introducing a bias in the algorithm as evidenced by lower performance on the sarcos and mnist tasks strength 1 the paper is well motivated and tackles an important problem in the field the analytical methods used in the paper are an important contribution towards understanding how node perturbation algorithms can be used to develop bioplausible learning rules in deep networks 2 the paper is theoretically strong and demonstrates via simulations how the analytical results hold in practice as well as when nonlinearity is introduced in the network 3 the weight normalization solution is a smart solution and a good integration of the inferences from the analytical framework furthermore it also fits well with the homeostasis viewpoint wherein weights of a neuron are thought to conserve some quantity over time weaknesses 1 the presentation of the paper can be improved particularly how the analytical results are presented i felt a lot of the details of how the results were derived were buried in the appendix with limited reference to these considerationsassumptions in the main text the authors could consider rewriting certain sections of the paper to better reflect the derivations of their theorems and in doing so allow the reader to appreciate the contribution that this paper makes 2 the weight normalization strategy could be better introduced particularly how in adding a weight normalization step introduces bias in the weight update it would be great if the authors could add elaborate on this bias and how a bigger hidden layer size contributes to a higher bias 3 the metrics used in the paper make sense while analyzing the instablity of the node perturbation algorithm however it would be nice to report the accuracyperformance metrics on sarcos and mnist for sgd and weightnormalized np i feel this would be more complete and lay the foundation for further researchers who could use weightnormalized np as a baseline method 4 the discussion about representation similarity is a bit sudden and lack compact presentation given that the merits of the paper lie in the stability analysis of the np algorithm and proposing a possible workaround to that i would suggest either cutting down on analyzing learned representations although it is very important and an interesting direction or fleshing it out more to convey the inferences better overall i feel that the paper presents a very thorough analysis of the np algorithm and demonstrates a key feature in the dynamics if the authors could improve the writing a bit this work would appeal to a larger audience and could make a significant impact in the field i think the authors do a commendable job in stating the limitations of their proposal but i feel it could be elaborated in the discussions specifically how does the weight normalization introduce bias and its effect on the performance docsepthe authors study a stochastic gradientfree learning rule for deep feed forward neural networks node perturbation the learning rule in brief is to perturb the activity of each neuron by a gaussian and then update the corresponding weights in proportion to the sampled difference in the loss function the authors develop a mean field theory for a twolayered linear network whose target 
function is given by a linear transformation with additive gaussian noise this allows them to solve for a critical learning rate in which the learning process is as fast as possible they find that even with the critical learning rate node perturbation is slower than stochastic gradient descent in particular it is asymptotically slower by approximately a factor of the output size in addition their theoretical results reveal an instability in the error dynamics once a certain threshold is crossed and the authors introduce weight regularization to remedy this finally the authors numerically test the performance of the regularized node perturbation algorithm on the supervised learning tasks mnist and sarcos and show that the algorithm is indeed capable of learning these tasks the authors provide original theoretical and numerical results about a particular learning algorithm which are clear and straightforward to understand from a machine learning perspective the main drawback of the paper is that they are mostly showing weaknesses of an existing algorithm instead of introducing something new and performant that said the weight regularization does successfully stabilize the learning process and it is somewhat interesting that the algorithm is able to learn mnist and sarcos with the regularization in place one problem i have with the sarcos figures is that there is no benchmark shown everyone knows that an accuracy of 98 on mnist is acceptable but since im not intimately familiar with the sarcos task i would like to see either a sota mse taken from the literature or at least the mse of the sgdtrained models alongside the nptrained models for comparison it is explained in the text that the target error was 50 but it is still not clear to me whether or not this is a good value in the grand scheme of algorithms which solve this task another way to make the error values easier to interpret would be to report r2 instead of mse but either way some kind of benchmark should be shown the central justification for the lacking performance of the algorithm is its biological plausibility this makes sense but i did not get a very clear message from the authors as to whether or not they really believe this algorithm is used somewhere in the brain i agree with the authors statement an algorithm is biologically implausible if it takes an inordinately long time to reach good performance however i did not understand the citation of figures 4e and 5b when they said its nps performance deficit is smaller when compared to a reinforcement learning rule using the error backpropagation from what i understood these figures show that it takes longer to train np than reinforcement learning or sgd especially as depth increases if i understood correctly it seems this would be in contradiction to what is said in the discussion and then it is difficult to rationalize the use of np in the brain there should be some clarification either in the figures or in the discussion i also think its worth noting that inverse problems including sarcos are often solved innately by the brain when a calf or any quadruped is born they dont spend any number of epochs learning what torques are required to move their hooves from one place to the other walking occurs immediately suggesting that the capability was genetically encoded into the architecture not learned the authors could of course defend the claim that np is used for learning more complex motor tasks such as those performed by humans but then it would also have to be clear whether or 
not the algorithm is really fast enough for this i understand the motivation of using sarcos to test the capability of the algorithm to solve a motor task however since the papers story has biology at the center i think it would be much more convincing if the authors also took one of the tasks from one of their biological citations and replicated it with their algorithms even including the purely modelbased task from fiete et al 2007 would make the argument that np aids in songbird learning more than a passing remark overall i think a more clear story would be greatly beneficial to the paper and better determine its significance if a central argument is that the brain probably does use regularized np for motor learning i would be skeptical and would be interested in further discussion of the biological literature the citations kornfeld et al 2020 and possibly bouvier et al 2018 might support this line of argument if the results were better reflected upon in the paper on the other hand the story could be that the slowness of the algorithm even in an idealized setting demonstrates that the brain cannot be using it except as an auxiliary tool to a more powerful learning method societal impact is not very relevant the results of the paper are far too theoretical for negative societal impact to be speculated upon performance limitations were adequately reflected upon ### Summary:
the authors theoretically analyze and numerically verify statistical properties of node perturbation which is one of the more biologically plausible but slower learning rules the authors show both the benefits and limitations of naive node perturbation in terms of learning trajectory in particular node perturbation is unstable under practical regimes they propose a biologically plausible weight normalization scheme which overcomes some of the limitations of the naive version but introduces some bias this work advances the theoretical understanding with a significant contribution to the neuroscience of learning the expert reviewers agree that the work is original clear and of high quality i suggest revising the title to indicate both the negative and positive sides of the analysis perhaps on the stability and scalability of node perturbation learning would be better
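as a companion to the node perturbation discussion above, here is a minimal numpy sketch of the learning rule the reviews describe (perturb each units activity with gaussian noise and update weights in proportion to the resulting change in loss), together with an optional renormalization step; the learning rate, noise scale, and the particular normalization scheme are placeholder assumptions, not the papers exact algorithm

```python
import numpy as np

def node_perturbation_step(W1, W2, x, y, lr=1e-3, sigma=0.1, target_norm=None):
    """One node-perturbation update for a two-layer linear network y_hat = W2 @ W1 @ x.

    Sketch only: a clean pass and a noise-perturbed pass are compared, and the
    scalar loss difference drives every weight update.
    """
    # clean forward pass
    h = W1 @ x
    y_hat = W2 @ h
    loss = 0.5 * np.sum((y_hat - y) ** 2)

    # perturbed forward pass: gaussian noise injected at every unit
    xi1 = sigma * np.random.randn(*h.shape)
    xi2 = sigma * np.random.randn(*y_hat.shape)
    h_p = h + xi1
    y_p = W2 @ h_p + xi2
    loss_p = 0.5 * np.sum((y_p - y) ** 2)

    # the scalar loss difference is the only learning signal
    delta = (loss_p - loss) / sigma ** 2
    W1 -= lr * delta * np.outer(xi1, x)
    W2 -= lr * delta * np.outer(xi2, h_p)

    if target_norm is not None:
        # rescale weights back to a fixed frobenius norm to counter the
        # weight-norm growth the analysis predicts (one simple scheme of many)
        W1 *= target_norm / (np.linalg.norm(W1) + 1e-12)
        W2 *= target_norm / (np.linalg.norm(W2) + 1e-12)
    return W1, W2, loss
```

the point of the sketch is only to make the update rule and the normalization hook concrete for readers of the reviews above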
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary the paper presents a graph neural network gnn architecture with learnable lowrank filters that unifies various recentlyproposed gnnbased methods the local filters substitute the graph shift operator gso by a learnable set of parameters that capture the local connectivity of each node in the graph moreover a regularization penalty is proposed to increase the robustness of the model and prevent these local structures to overfit the paper provides proofs to justify the generality of the approach and how different methods can be seen as a particularization of the proposed scheme two theorems are also proved to claim the stability of the gnn architecture against dilation perturbations in the input signal several numerical experiments are conducted to empirically test the usefulness of the model strong points the paper introduces a new gnnbased approach with larger discriminability power the proposed approach generalizes various previously existent architectures this is proved in the appendices a regularization technique is proposed to avoid overfitting to local particularities of the data two theorems are introduced to proof the stability of the network against dilation in the input signal the numerical experiments are extensive and convincing this is one of the strongest points of the paper the paper is wellstructured and the style is appropriate it is easytofollow and the points are clearly stated weak points replacing the gso with a set of learnable parameters increases the discriminability power of the network at the cost of sacrificing various properties of common gnn architectures for example this technique is no longer permutation equivariant and transferability to larger networks will no longer be an option as long as the learnt parameters are graphdependant scalability problems appear when the network grows in size this and some possible ways of tackling it are discussed in the conclusion although the theorems offer insights on the robustness of the network against perturbations on the input signal they are restricted to dilation perturbations which are proportional to the input signal this is not commonly the case perturbations often follow distributions that has nothing to do with the input signals overall assesment and recommendation the paper introduces a new architecture with larger discriminative power that generalizes various stateoftheart methods although the theoretical results are not particularly strong they are undoubtedly insightful then the empirical performance of this technique is exhaustively validated through several experiments thus in my opinion this paper should be accepted recommendations to the authors using matrix notation would be helpful to clarify various equations and capture the attention of a broader range of researchers equally important it will also contribute to establish links between the schemes proposed in the paper and wellestablished techniques in the field of graph signal processing gsp describing the connection between the operator bk and classical graph operators adjacency laplacian normalized laplacian would be clarifying consider adding a couple of lines pointing out this relation docsepthis paper proposed l3net which is a new graph convolution with decomposing the learnable local filters into lowrank it can contain both spatial and spectral graph convolution including chebnet gat edgenet and so on as subsets it is also 
robust to graph noise experiments are conducted on mesh data facial recognition and action recognition indicating outperformed performance over baselines its robustness to graph noise is also tested in general the motivation novelty and validation are good however i have the following concerns although the authors demonstrate and explain the advantage and disadvantages of l3net the authors do not explain the applicability of downsampling and upsampling in l3net for example following maxpooling and upsampling in dcnn chebnet can be integrated into muftiscales for enhancing the performance see httpsarxivorgpdf180710267pdf how about formulating l3net into multiscales in the experiments i saw that three convolutional layers are used in section 41 42 43 and 44 seems do not offer the depth information of l3net i feel that three is a really shallow network why deeper networks are not used can the authors kindly commentvalidate on this please in proposition 1 it seems that a control of k and l can offer different options between chebnet gat and edgenet in practical training are these settings of k and l easy to trainconverge with always good performance and computationally light are validation offered to prove this proposition and to compare the different options offered by l3net to chebnet gat and edgenet if not can the authors give reasonable comments on this please it seems the regularization in section 22 is a very important contribution of this paper however no ablation study is offered comparison between with or without regularization can the authors kindly validate this please in figure 1 the definition of variable mu c k m and k are not defined which makes the figure less easy to understand could the authors kindly add these on please in figure 1 the authors use left and right to distinguish the four figures this is kind of confusing could the authors use a b c and d for the figure label please the same for figure 2 where the authors use plots and table to distinguish different subplots i think this can also be improved in similar way also other figures in section 21 vn i feel that n would be more appropriate here as it represents the number of nodes docsep pros bullet the proposed graph convolution method is tested on different problems object recognition on the spherical mesh facial expression recognition action recognition on the face and body landmarks and realworld datasets bullet the proposed method performs well both under the missing node and feature and graph noise particularly the effect of local graph laplacian regularization is notable bullet the relationship between the proposed l3net and previous spatial and spectral graph convolutions is theoretically explained bullet the complexity of the l3net is significantly lower than locally connected gnn and it is more suitable for smallsize graphs such as the face or body keypoints cons bullet in face experiments pixel values around each node were taken as node features however in a similar setting in action recognition they were not used considering the temporal data as in ck applying a similar setting by changing stgcn blocks with l3net would make it comparable minor in many parts typo error in mnist datasets name mnsit 23 first paragraph graph convolutoins convolutions 11 second paragraph chebyshev polynomial in chenbet chebnet or chebconv docsepthe authors present a new definition of graph convolution that is also shown to generalize wellknown existing ones pros interesting new definition of graph convolution good theoretical 
contribution cons experiments could have used more widely adopted benchmarks the novel contribution of the paper is sound and the theoretical explanation allows one to understand the connections with existing graph convolution definitions the experiments are well conducted and i appreciate the indication of the standard deviation in the results they also show significant gains with respect to the other techniques while the currently reported experiments are adequate it would have been more interesting to test the method on emerging benchmarking frameworks such as 1 to get better insights on the performance in a standardized setting 1 benchmarking graph neural networks vijay prakash dwivedi chaitanya k joshi thomas laurent yoshua bengio xavier bresson httpsarxivorgabs200300982 ### Summary:
all reviewers expressed consistent enthusiasm for this submission during the review process and no reviewers expressed concerns or objections to accepting this submission during discussion it is quite clear this is a strong submission and deserves acceptance
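for readers who want a concrete picture of the learnable local filters discussed in the l3net reviews above, the following pytorch caricature shows one way to parameterize per-node filters as a low-rank mixture of learned, locally masked basis supports; the class name, shapes, and decomposition are illustrative assumptions and are not claimed to match the papers exact construction

```python
import torch
import torch.nn as nn

class LowRankLocalGraphConv(nn.Module):
    """Caricature of a graph convolution with learnable low-rank local filters.

    Instead of powers of a fixed graph shift operator, a small set of learned
    n-by-n supports (masked to the graph) is mixed per channel.
    """
    def __init__(self, n_nodes, in_ch, out_ch, adj_mask, rank=3):
        super().__init__()
        self.register_buffer("mask", adj_mask.float())                 # (n, n) local support
        self.bases = nn.Parameter(0.01 * torch.randn(rank, n_nodes, n_nodes))
        self.mix = nn.Parameter(0.01 * torch.randn(rank, in_ch, out_ch))

    def forward(self, x):                                              # x: (batch, n, in_ch)
        out = 0.0
        for k in range(self.bases.shape[0]):
            b_k = self.bases[k] * self.mask                            # keep each filter local
            out = out + (b_k @ x) @ self.mix[k]                        # aggregate, then mix channels
        return out
```

as the reviews note, tying filters to individual nodes in this way keeps the parameter count below a fully locally connected layer while remaining best suited to small graphs such as face or body keypoints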
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: authors correctly point out the tradeoff between adaptation costs and performance on novel datasets motivated by this tradeoff authors introduce a novel adaptation layer based on squeezeandexcitation that performs taskbased feature modulation and is metalearned in combination with a pertask learned linear head the proposed method performs well on a variety of largescale and difficult fewshot adaptation benchmarks at a reasonable computation cost strengths a wellmotivated wellreasoned and widely applicable approach results are convincing and the model clearly accomplishes what it sets out to accomplish in the way that it claims to accomplish it paper is clearly written if not entirely wellorganized or focused see below weaknesses the paper suffers from a misplaced focus in its presentation many of the named concepts presented as novel are not and the truly novel contribution is somewhat limited and discussed only briefly 1 case is introduced as a novel layer but is identical to se but for the context pooling discussed below and the final activation layer an implementation detail discussed in supplementary 2 related to above the adaptive mode inference mode described as fundamental on pg4 line 142 of case is identical to tadam tadam task dependent adaptive metric for improved fewshot learning neurips2018 and also just a commonsense approach to handling a support vs query set 3 i hesitate to call the uppercase metatraining scheme particularly novel or coordinatedescentbased while optimization does switch back and forth between case and head parameters the head parameters reset with every new batch in actuality the case parameters are being metalearned while head parameters are being set in the inner loop and uppercase is just a straightforward and commonsense approach to metatraining this procedure train linear head to convergence propagate gradient into metalearned layers is also the exact same procedure already introduced by metaoptnet metalearning with differentiable convex optimization cvpr2019 in my eyes the true novelty of case is in taking the filmtadam approach to taskconditioning and replacing the redundant task encoder layers with the appropriate intermediate network activations this crucial difference is discussed only briefly in related work pg6 line 237238 i also consider this a somewhat limited conceptual contribution empirical results aside in my eyes this calls for a fairly substantial text revision where the contribution is mainly a novel approach to efficient adaptation rather than a new adaptive block lines 78 and the conceptual linkages to tadamfilm are explored and discussed rather than the similaritiesdifferences relative to se which are much less relevant in this context i recognize this could be a pretty idiosyncratic and overly specific take though ill be curious to see what other reviewers think on the noveltycontribution less importantly case is conceptually quite similar to tadam and above proposed revisions aside tadam is at least worth a mention in related work as a filmderivative approach to fewshot learning limitations are discussed clearly and fairly societal impacts are addressed very briefly though since the impacts match those of fewshot learning more broadly this is sensible docsepthe work proposes a novel method for fewshot image classification the method is named uppercase and is based on adaptation of the squeezeandexcitation block to learn 
task contextual parameters their approach is hybrid in terms of a combination between metalearning and finetuning of the network the proposed hybrid approach aims to bridge the gap between finetuning approaches which are more accurate and the metalearning approaches which have lower adaptation cost optimization protocol is based on coordinatedescent between the case blocks in the network body cross task parameters and the task specific parameters of the head last linear layer the approach achieves new sota results for metalearners on the vtabmd and orbit strengths the paper is written in a clear and easy to follow manner the method requires only a single forward pass over the context the methods is simple and novel yet achieves very good results compared to other metalearners experimentation study is comprehensive and also shows the downside on structured tasks weaknesses i would have liked to see some ablation study and discussion on the case block architecture choice i believe that the authors adequately addressed the limitations and potential negative societal impact of their work i appreciate for being honest about the lower results on structure datasets and the hypothesis that this case might require finetuning of the networks body docsepthis paper mainly focuses on fewshot learning the authors advocate the importance of an efficient finetuning algorithm for fsl to the end they propose a new module based on se block which generates scaling parameters using taskspecific information furthermore they propose to leverage coordinatedescent in metatraining to solve the problems of instability vanishing gradients and high memory consumption 1 the authors conduct extensive experiments to show the effectiveness of their method 2 the proposed coordinatedescent for metalearning is interesting and can be a potential plugin solver for other metalearning methods detailed comments are in the questions in summary i find the proposed module similar to the existing method in fsl which is the main problem of the method ### Summary:
the reviewers consider the work technically solid but were concerned about the contextualization of this work in the literature post rebuttal some of the reviewers concerns were resolved
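to ground the case discussion above, here is a small pytorch sketch of a squeeze-and-excitation-style block whose channel scales are computed once from a task context (support set) and reused for every query image; the layer sizes, pooling, and final sigmoid are my guesses rather than the papers exact block

```python
import torch
import torch.nn as nn

class TaskConditionedSE(nn.Module):
    """SE-style block conditioned on a task context rather than the current input."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )
        self.scale = None                              # cached task-specific channel scales

    def adapt(self, context_feats):                    # context_feats: (n_ctx, c, h, w)
        pooled = context_feats.mean(dim=(0, 2, 3))     # squeeze over images and space
        self.scale = self.mlp(pooled)                  # (c,)

    def forward(self, x):                              # x: (b, c, h, w)
        if self.scale is None:                         # fall back to per-batch conditioning
            self.adapt(x)
        return x * self.scale.view(1, -1, 1, 1)
```

calling adapt once on the support set and then forward on queries mirrors the adaptive versus inference modes the reviews describe, which is also why only a single forward pass over the context is needed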
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper proposes fast lifelong adaptive irl flair a learning from demonstration framework that aims to maintain a collection of learned strategies that may be mixed in order to model subsequent demonstrations the method is similar to multistrategy reward distillation msrd but does not assume access to a strategy label overall the paper is wellwritten and provides a thorough experimental evaluation strengths the paper proposes a new method for constructing a policy mixture for a new demonstration as well as a new objective betweenclass discrimination bcd that seems to be significantly more effective than existing work at adapting to a sequence of demonstrations the paper conducts a thorough experimental evaluation which includes ablations on flair and even a real robot experiment with a table tennisplaying robot the paper also includes a very thorough appendix weaknesses it seems like the effectiveness of the method may depend on the range of demonstrations availableif they are not diverse then a lack of strategies is modeled and if they are too diverse the method may learn too many strategies im not sure about the importance of the problem setting and i think the assumptions made in the problem setting could be motivated better its not clear to me how many real world scenarios are actually modeled by this problem statement where individual demonstrations must arrive in sequence for example a robot learning to play table tennis will likely have access to a library of demonstrations to start instead of requiring each demonstration to arrive in sequence questions as a baseline what is the performance when not given demos in a sequential manner eg given demos all at once with airl or gail it would be helpful to add titles to the plots in figure 2 and 3 docsepthis paper proposes an lfd framework to allow adapting to different user preferences over how a task is carried out it uses an initial set of demonstrations to build an initial set of policies that correspond to different strategies then users specify how they would like the task to be carried out by adding a demonstration this demonstration is used to infer a policy mixture of the base set of strategies or learn a new strategy if the demonstration is sufficiently different from the current set of strategies in this way the proposed method can learn continually the proposed method is demonstrated on 3 simulation environments and a real world table tennis environment strengths the proposed method is an interesting idea that in principle allows for more strategies to be learned and modeled as more demonstrations are added into the system results on the realworld table tennis domain are impressive especially those shown in the supplementary video weaknesses one of the main weaknesses is the experimental evaluation in simulation the most important aspects of this work with respect to prior work appear to be the ability to continually adapt to new strategies specified by new demonstrations however the evaluation of the adherence to strategies is mostly qualitative it would be very useful to show additional experiments in simulation where there are a collection of groundtruth strategies that are specified by an initial set of humans similar to the realworld table tennis experiment then quantitative evaluations on how well learned policies adhere to the desired strategy would be possible instead of the current experiments that rely 
on unsupervised strategy discovery through diayn reference 37 from this perspective it could also be valuable to show results on other domains in simulation where groundtruth strategies can more easily be specified for example robotic manipulation with different grasps possible for objects or different speeds for trajectories that are executed on the arm related to this it would be nice to have more quantitative metrics for evaluating adherence to desired strategy in sec 51 eg returns under the groundtruth reward function for the strategy not just the general task reward function more comments follow what if it takes more than one demonstration to show a userspecific strategy would the method be able to handle this case sec 42 how many trajectories are needed to estimate the objective and find the policy mixture weights do you then need to collect additional trajectories to estimate the kl divergence for line 5 of algorithm 1 once the mixture weights have been found sec 6 how is rl run on the real robot in a sample efficient way does the human need to do manual resets more details on this process would be helpful its a pretty strong claim to put crowdsourced demonstrations in the title without having extensive evaluation with several humans there are a few missing references for crowdsourced demonstrations in the paper httpsarxivorgpdf181102790pdf httpsarxivorgabs191104052 httpsarxivorgabs220202005 docsepthis paper presents flair a new algorithm for lifelong personalized inverse rl that can rapidly adapt to new heterogeneous demonstrations they maintain a set of learned policies corresponding to unique skills in demonstrations encountered so far new demonstrations are modeled as a mixture of existing policies if the behavior is captured sufficiently well or a new policy if the behavior is not in three simulated continuous control environments flair outperforms baselines in adaptability efficiency and scalability a real robot experiment is also performed to evaluate the utility of flairs policy mixture this is a strong paper that tackles practical problems in learning from demonstration lifelong deployment and heterogeneous demonstrations due to varying human preferences the proposed approach is novel and technically sound with an intuitive procedure algorithm 1 and a novel betweenclass discrimination loss function experimental results are organized and presented well demonstrating a win for flair in adaptability efficiency scalability and policy performance over baseline approaches adversarial irl airl and multistrategy reward distillation msrd experiments suggest flair is a new stateoftheart approach for lifelong irl from heterogeneous demonstrations they also evaluate the quality of flairs policy mixture in a real robot experiment and the video supplement gives good visual intuition for the experiment while strong the paper has some weaknesses firstly the authors claim that the code and data will be opensourced but it currently is not and is not available in the supplement it would be ideal to have these available in order to reproduce simulation results for example secondly the clarity of the writing and presentation can be improved in particular figure 1a is very difficult to parse and the text could benefit from proofreading thirdly the results seem to be weaker for lunar lander and bipedal walker than the very simple inverted pendulum environment as the respective figures are relocated to the appendix figure 2 in the appendix shows weak correlation compared to figure 2 in the main 
text or absent from both the text and appendix eg the counterpart for figure 4 fourthly it is unclear why the real robot experiment is a different experiment than the simulations ie the flair vs airl vs msrd comparison some clarification from the authors here would be appreciated more minor notes requests for clarification i am not sure why crowdsourced is emphasized in the title and democratize access to robotics is emphasized in the introduction as there is no largescale data collection from crowdsourced humans in this paper have the authors considered a smoothing as opposed to filtering approach in algorithm 1 which perhaps could recompute mixture weights for old demos with newer policies that were not available at the time in section 52 under q6 why does flair recover more than 5 strategies if there are only 5 ground truth policies is it approximation error which environment is the data in figure 3 from it would be nice to include visualizations of the different heterogeneous demonstrations in simulation for better intuition in figure 4 and other parts of the paper ### Summary:
phase 1 strengths the submission provides a new and technically relevant method it is well structured and intuitive in its argument it provides a thorough experimental evaluation including real world robot experiments in a challenging domain weaknesses some weaknesses are found in the evaluation and multiple reviewers asked for clarifications in particular these include the different choices for comparisons and baselines between simulation and real experiments weaker results on some of the toy domains as well as quantitative metrics for adapting to different strategies phase 2 the feedback was originally borderline with slightly more positive than negative assessments many points could be addressed during the review period and the final reviews are generally positive 3 weak accepts the reviews point out that it is well structured and intuitive in its argument and provides a thorough experimental evaluation including real world robot experiments in a challenging domain but also required clarifications around the evaluations and confusion about some of the baselines i agree with the reviewers and recommend acceptance please take the remaining points from the review process seriously and follow up with improvements on open points and promised changes
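as a rough companion to the flair reviews above, the following python pseudocode sketches the lifelong loop they describe: fit a mixture of existing strategy policies to each new demonstration and only learn a new policy when the mixture fails to explain it; fit_mixture, train_new_policy, divergence, and threshold are placeholders, and the exact acceptance test (the reviews mention a kl term) is an assumption

```python
def process_demonstration(demo, policies, fit_mixture, train_new_policy,
                          divergence, threshold):
    """Sketch of a lifelong learning-from-demonstration loop with a policy mixture."""
    weights = fit_mixture(demo, policies)               # best mixture of known strategies
    if policies and divergence(demo, policies, weights) <= threshold:
        return policies, ("mixture", weights)           # demo explained by existing strategies
    new_policy = train_new_policy(demo)                 # otherwise grow the strategy library
    policies = policies + [new_policy]
    return policies, ("new_policy", len(policies) - 1)
```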
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper shows that even when the posterior is as private as targeted in the beginning sampling from posterior with sgls might not be as private as targeted the authors prove the theorem on bayesian linear regression problem they prove that for n big enough sampling from the posterior is epsilon delta differentially private dp but there is a step in which releasing a sample will not be epsilonprime deltadp for epsilonprimeomegan epsilon this work is quite interesting and important in the sense that sgld is used in many works in literature and it is before proved that sgld with specific parameter choices provides epsilon deltadp this paper finds a counter example to the previous finding with correct analysis however the structure of the paper can be improved in the title and introduction it is claimed that sgld might not provide epsilon deltadp for deep learning but the analysis are made for bayesian linear regression it is not clear to me that whether it is generalizable to bayesian deep neural networks or not one weakness of this paper is the literature review there are papers that uses sgld for differentially private deep learning it will be very useful to cite these works to understand whether these methods provide epsilon deltadp eventually or not it is confusing what is proposed in section 5 it is mentioned that the bound scale poorly with dimension but can still be useful for bayesian sampling in lowdimensional problems is the method still proposed for epsilon deltadp for deep networks as an alternative it is important to show sgld might not always give epsilon delta differential privacy guarantee but the text should be improved to clarify the points that i mentioned above maybe the title and the introduction should be revised or some analysis could be added to show this is also applicable to deep learning docsepthis paper provides one concrete example showing that revealing one posterior sample generated by sgld has the risk of a privacy breach when the slgd sampling iterations number is moderate while the exact posterior sampling has little risk of a privacy breach 1 the counterexample constructed is fairly restrictive it is for a particular model for a particular data set for a particular stochastic scheme ie cyclicsgld and for a particular learning rate does the same result hold if we use the common sample in each step of the sgld so i doubt that this example provides general insights 2 according to the proofs in the appendix k in lemma 45 is fairly small in other words the privacy breach can occur when the sgld has only scanned the full data set a very small number of times less than 10 epochs as shown in figure 1 thus even for this particular counterexample i dont think the result is practically meaningful 3 section 5 is really an incomplete analysis overall i consider the contribution of the paper is quite restrictive by definition it is sufficient to find a pair of neighboring data sets to counterprove the loss of privacy but the results also depend on the specific setup of the sgld algorithm which i believe is not very proper in the common privacypreservation algorithm one typically injects laplace or gaussian noise to show it works we always need to have some lower bound of noise variance similarly if sgld preserves privacy there are potentially some requirements on the algorithm implementation to counterprove that i suppose one needs to show that no matter 
how one tunes the sgld algorithm the privacy breath is inevitable docsepthis paper studies the privacy guarantee of bayesian learning using stochastic gradient langevin dynamics sgld since the sgld updates are stochastic it is often thought the solution can be suitable for privacypreserving of the data that is used to train the algorithm using a counterexample this paper shows that it is not necessarily correct to assume so overall this paper presents a rigorous analysis of differential privacy of bayesian learning using sgld it uses bayesian linear regression as a simple example to demonstrate that while differential privacy holds at the beginning of the sgld updates and similarly at the convergence but it may not hold during the intermediate steps of sgld updates both the theoretical analysis and the empirical graph in figure 1 backs their claim the paper is mainly a theoretical paper and seems to appropriately analyse the differential privacy of sgld the claims seems accurate although i could not verify the details of all the proof as it is fairly long having said that the paper can be significantly improved in its writing at many places it assumes a lot of background from the reader and uses terms without providing required explanations for example on page 2 when discussing ma et al 2019 it mentions about epsilonmixing time bound without providing any clear context or explanation also as per my understanding there are a couple of statements which seem incorrect on page 2 when starting to discuss differential privacy it says a differentially private algorithm promises the data owners that their utility will not change with high probability by adding their data to the algorithms database i do not think differential privacy makes any claim on utility theorem 1 which is a key result of this paper uses three notations epsilon epsilon epsilon the role of epsilon and epsilon does not seem clear it appears there is an error are epsilon and epsilon same on page 4 the parameters n c xl xh gamma1 etc are not explained properly the figure 1 is referred as figure 4 should be corrected on page 7 the sentence it then estimates the average slope and throws away the outliers that deviate too much from the average slope it is not clear what authors mean by slope here spelling errors on page 4 known should be know on page 4 a well known results should be well known results on page 6 peeked should be peaked at many places in the text ith jth etc use the math symbols without latex mode a lot of places full stop is missing both in text and in lemma statements this paper analyses the differential privacy of the sgld algorithm it uses bayesian linear regression as an example to demonstrate that while differential privacy holds at the beginning of the sgld updates and similarly at the convergence but it may not hold during the intermediate steps of sgld updates the results seem convincing docsepthe paper studies the differential privacy of stochastic gradient langevin dynamicssgld as mcmc method the paper shows that approximate sampling using sgld may result in an unbounded privacy loss in the middle regime via bayesian linear regression strengths 1 it is interesting to know that sgld can result in unbounded privacy loss during the middle of the sampling procedure 2 figure 1 clearly illustrates the main idea of the claiming point weaknesses 1 theoretical results in this paper are based on bayesian simple linear regression problems as shown in eq4 page 3 however the paper mentions bayesian neural networks in 
several misleading places for example in the abstract this interim region is essential especially for bayesian neural networks as it is hard to guarantee convergence to the posterior or in introduction neither of these cases is suitable for deep learning and many other problems as one would limit the models accuracy and the other is unattainable in a reasonable time in page 1 the authors should make it clear about their contributions so readers can position this paper appropriately 2 subsections 41 42 are proof sketches to show that approximate sampling of the posterior with bayesian linear regression by sgld is not differential private in some steps theorem 1 it is better to put them into a new section and explicitly state their relation with theorem 1 3 section 43 tries to remove unknown c in eq 5 however ptheta w is not the original posterior ptheta d based on dataset d anymore whats the relationship between ptheta w and ptheta d 4 the manuscript is not ready and needs to be further proofread for example comma and period are missing in many places in this section we will consider the differential privacy guarantees provided by taking one sample from the posterior for the bayesian linear regression problem on domain d similarly theorem 1 lemma 41 44 and 45 it also should be steps in but there will be some step in which sgld will result in unbounded loss of privacy the paper explores the privacypreserving performance of sgld and shows its privacy loss can be unbounded in the middle regime of sampling the finding is interesting and useful however the authors should make their contributions clearly and some sentences are misleading the paper should also be reorganized and proofread before it can be accepted ### Summary:
this paper shows that sgld can be nonprivate in the sense of differential privacy even when a single step satisfies dp and also when sampling from the true posterior distribution is dp i believe that it is useful to understand the behavior of sgld in the intermediate regime at the same time the primary question is whether sgld is dp when the parameters are chosen so as to achieve some meaningful approximation guarantees after some fixed number of steps t and the algorithm achieves them while satisfying dp but at the same time does not satisfy dp for some intermediate number of steps t' < t otherwise the setting is somewhat artificial and i find the result to be less interesting and surprising so while i think the overall direction of this work is interesting i believe it needs to be strengthened to be sufficiently compelling
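for readers less familiar with the algorithm under discussion, here is a minimal self-contained sketch of sgld on a toy 1-d bayesian linear regression problem, the setting the counterexample above is built on. the step size, batch size, prior and noise variances are arbitrary illustrative choices rather than the parameters analysed in the paper; the point is only to show what the intermediate iterates (the regime where the reviews note privacy can break) actually are.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy 1-d bayesian linear regression: y = theta * x + noise, prior theta ~ N(0, prior_var)
n = 200
x = rng.uniform(-1.0, 1.0, size=n)
y = 0.7 * x + 0.1 * rng.normal(size=n)
prior_var, noise_var = 1.0, 0.1 ** 2

def grad_log_posterior_estimate(theta, idx):
    """Unbiased minibatch estimate of the gradient of the log posterior."""
    xb, yb = x[idx], y[idx]
    grad_log_lik = np.sum((yb - theta * xb) * xb) / noise_var
    return -theta / prior_var + (n / len(idx)) * grad_log_lik

def sgld(theta0=0.0, step=1e-4, batch=32, iters=2000):
    """Standard SGLD update: half-step along the stochastic gradient plus
    Gaussian noise with variance equal to the step size."""
    theta, traj = theta0, []
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        g = grad_log_posterior_estimate(theta, idx)
        theta += 0.5 * step * g + np.sqrt(step) * rng.normal()
        traj.append(theta)
    return np.array(traj)

samples = sgld()
print("posterior mean estimate from late iterates:", samples[-500:].mean())
```

releasing any single intermediate value of theta from this trajectory is the kind of output whose privacy the paper analyses.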
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper proposes the use of an evolutionary algorithm to construct decisionbased blackbox adversarial examples with l0 or sparsity constraints against image classifiers such as cnns and image transformers the algorithm uses an l2 distance constraint to check the fitness of a solution and employs several tricks such as differential recombination and mutation to improve the quality of the solution the experimental results demonstrate that the attack is more effective than the current sota sparse attacks and is almost as effective as whitebox attacks given enough queries strengths the paper is largely clear and wellwritten the experimental results are solid and experiments are carried out on vision datasets and models of interest the attack is both sparse and effective weaknesses the main issue with this paper is the lack of an intuitive explanation as to why this attack is better at finding sparse and effective adversarial examples than previous work i would have liked to see a more detailed algorithmic comparison with previous work overall this is a solid paper that makes a reasonable contribution to a problem of some interest to the community having read the rebuttal i retain my score docsepthis work proposes a novel sparse attack method called sparseevo based on evolution algorithm the sparseevo searchs a sparse adversarial perturbation in limited query budget it can significantly reduce the queries compared with the sota method ie pointwise the paper also conduct the first vulnerability evaluation of a vit on imagenet in a decisionbased and l0 norm constrained setting the proposed methods are wellmotivated and novel the paper is easy to follow for an adequately prepared reader prior work is sufficiently discussed the experiments are convincing and the experiment results show the effectiveness of the proposed attack the amount of detail is good it seems sufficient to reproduce results overall i think this paper is a good one docsepthis paper proposes a blackbox decisionbased spare attack based on the evolution algorithm called sparseevo the authors test their method on two types of classification models and two popular vision datasets resnet cifar10 and imagenet and vision transformer imagenet through the comparison with pointwise attack for efficiency and sparsity and pgd0 for success rate sparseevo achieves good performance in both success rate and efficiency pros 1 the experimental performance is really good in terms of it being a decisionbased sparse attack 2 using the l2l1 distance as the fitness function and using an evolution algorithm instead of some estimated gradients to generate adversarial examples is novel cons 1 the comparison with the pointwise attack in the targeted attack experiments is somehow unfair sparseevo relies on a random target image to generate the adversary while the pointwise attack doesnt it would be better to find a way to let the pointwise attack leverage the target image or adapt another blackbox attack for doing the sparse attacks 2 i am wondering whats the image size used in the imagenet experiments since you only reduce the search space by a factor of the channels typically 3 so i am wondering how the scalability of sparseevo is against the big images this paper proposes a novel blackbox decisionbased space adversarial attack method based on the evolution algorithm the basic idea is to use the l2l1 distance with the original image as the 
fitness function to adjust the current images towards the target images the experimental results are good i am only concerned a little bit about the comparison in the targeted attack since it is somehow unfair see in the main review docsepthe paper proposes an evolutionbased algorithm to conduct a sparse attack against convolutional deep neural networks and vision transformers the evaluation results show that the proposed method requires fewer model queries than the stateoftheart sparse attack pointwise for both untargeted and targeted attacks strengths the paper shows promising experimental results for the method as a paper proposing a blackbox method it also shows comparison with a whitebox attack to showcase its superiority my concerns are described in the weakness part below weaknesses 1 the experiment section can be more comprehensive the submission only compares with one paper on decisionbased sparse attack and that work only shows experiments on mnist dataset not imagenet a the comparison on imagenet shown in this submission is not fair the pointwise method sparsity is always 1 which means it basically fails to create any sparsity if the comparison were to be made the submission can instead make some minimal adjustments to the baseline method to make it not completely useless b there are many other decisionbased attacks on imagenet although most of them are showing results in l2 metrics eg ba hsja qeba nlba psba signopt etc and some of them show linfty metrics eg rays many of them can be easily adapted to l0 case with projections based on my experience the submission can try to compare with these stronger baselines on imagenet to showcase its method performance 2 the paper can discuss its relationshipdifference with the existing literature more clearly for example using evolutionary methods for decisionbased attacks is not an invention by this submission the paper efficient decisionbased blackbox adversarial attacks on face recognition has proposed one in 2019 also as mentioned above though many existing decisionbased attack papers did not show results on l0 metrics they can be adapted easily thus very related with this paper the paper should consider a more detailed discussion on its related works and justify its novelty in terms of the proposed method questions on the second plot in figure 6 for the two solid curves the red curve spaevo vit even goes lower than the black curve pgdl0 vit both at the beginning and the end is there an explanation for this observation are you using different images for different curves so that the whitebox attack pgd is not the upper bound of the attack performance in this plot or is the pgd not optimized properly the untargetedimagenet plot in figure 5b is also weird in a similar sense the paper shows good experimental results but there are some concerns about the experimental part whether its fair and valid also the novelty of the method and the relationship with the literature are not discussed in detail ### Summary:
this paper introduces a technique to generate l0 adversarial examples in a blackbox manner the reviews are largely positive with the reviewers especially commenting on the paper being well written and clearly explaining the method the main drawback raised by the reviewers is that the method is not clearly compared to some prior work but in the rebuttal the authors provide many of these numbers on the whole this is a useful and interesting attack that would be worth accepting
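as a rough illustration of the decision-based fitness computation the reviews above refer to (only the predicted label is observable, so a candidate is either adversarial or rejected outright), here is a hedged sketch: starting from an already-adversarial image, the search keeps candidates that remain misclassified while moving pixels back toward the original image, which both reduces the l2 distance used as fitness and increases sparsity. the mutation operator below is a generic placeholder, not the paper's differential recombination scheme, and all names are hypothetical.

```python
import numpy as np

def fitness(candidate, original, predict_label, true_label):
    """Decision-based fitness: infeasible (infinite cost) unless the model's
    top-1 label differs from the true label; otherwise the l2 distance."""
    if predict_label(candidate) == true_label:
        return np.inf
    return float(np.linalg.norm(candidate - original))

def mutate(candidate, original, rng, n_pixels=5):
    """Reset a few randomly chosen pixels to their original values,
    encouraging sparsity of the final perturbation."""
    out = candidate.copy()
    flat_idx = rng.choice(out.size, size=n_pixels, replace=False)
    out.reshape(-1)[flat_idx] = original.reshape(-1)[flat_idx]
    return out

def evolve(original, start_adv, predict_label, true_label, budget=1000, seed=0):
    """Greedy (1+1)-style evolutionary loop; each iteration costs one model query."""
    rng = np.random.default_rng(seed)
    best = start_adv
    best_fit = fitness(best, original, predict_label, true_label)
    for _ in range(budget):
        cand = mutate(best, original, rng)
        f = fitness(cand, original, predict_label, true_label)
        if f <= best_fit:
            best, best_fit = cand, f
    return best, best_fit
```

the query-budget argument makes the efficiency comparisons in the reviews concrete: every fitness evaluation corresponds to one query against the blackbox model.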
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: gradient stochasticity is used to analyse the learning dynamics of sgd it consists of two aspects norm stochasticity and directional stochasticity although the norm stochasticity is easy to compute it vanishes when the batch size increases therefore it can be hard to measure the learning dynamics of sgd the paper is motivated by measuring the learning dynamics by the directional stochasticity directly measuring the directional stochasticity with the ange distribution is hard so the paper uses vmf distribution to approximate the uniformity measurement the paper theoretically studies the proposed directional uniformity measurement in addition the experiments empirically show the directional uniformity measurement is more coherent with the gradient stochasticity 1 as im not a theory person im not very familiar with the related work on this line but the analysis on the directional uniformity is interesting and original so is the vmf approximation 2 the theoretical analysis looks comprehensive and intuitive and the authors did a reasonably good job on the experiments 3 this paper provides some insights that warn people to pay attention to the directions of sgd but the paper didnt provide an answer on how this study can inform people to improve sgd its true that the directional uniformity increases over training and it is correlated to the gradient but what could this bring us remains unstudied 4 can the authors provide any theoretical or empirical analysis on why the directional uniformity didnt increase in deep models like cnn and why it increases when bn and res are applied docsep quality and clarity good originality and significance this paper studies the stochasticity of the norms and directions of the minibatch gradients to understand sgd dynamics the contributions of this paper can be summarized as a this paper defines gradient norm stochasticity as the ratio of the variance of the stochastic norm to the expectation of the stochastic norm it theoretically and empirically shows that this value is reduced as the batch size increases b this paper empirically finds that the distribution of angles between minibatch gradient and a given uniformly sampled unit vector converges to an asymptotic distribution with mean 90 degrees which implies a uniform distribution of the minibatch gradients c this paper uses von misesfisher distribution to approximate the distribution of the minibatch gradients by theoretically and empirically observing that the estimated parameter hat kappa decreases during training they claim that the directional uniformity of minibatch gradients increases over sgd training the idea of measuring the uniformity of minibatch gradients through vmf distribution seems interesting but it is unclear how the study of this stochasticity dynamics of sgd can be related to the convergence behavior of sgd for nonconvex problems andor the generalization performance of sgd there are additional concernsquestions regarding both theoretical part and empirical part 1 section33 assumption that piw00 piw10 pi is not reasonable when theoretically comparing hat kappaw10 and hat kappaw00 the concentration parameter hat kappaw should be estimated by the sum of the normalized minibatch gradients hat giwhat giw instead of using minibatch gradient this paper uses the sum of piw by assuming that piw00 w is parallel to hat giw which is ok however when comparing hat kappaw00 and hat kappaw10 we 
say hat kappaw00 hsum piw00 w00 and hat kappaw10 hsum piw10 w10 it is not reasonable to use the same pi for piw00 and piw10 because piw00 w10 is definitely not parallel to hat giw10 2 section 33 assumption hat giwti1 hat giwt0 is not convincing with this assumption the paper writes w10 w00 etasumi hat giw0i1 w00 etasumi hat giw00 w00 eta sumi piw00 these equalities are not persuasive because sumi hat giw00 is the full gradient gw00 at w00 in other words these equalities imply that from w00 to w10 one epoch sgd is doing a full gradient descent w10 w00 eta gw00 which is not the case in reality 3 experiment batch size should be consistent with the given assumption in the theoretical part in theoretical part hat kappaw10 hat kappaw00 is based on the assumption that hat giwti1 tat for all i with large minibatch size but in the experiment they prove hat kappaw10 hat kappaw00 by using smallbatch size which is 64 the authors should either provide experiments with large batch size or try to avoid the assumption of large batch size in theoretical part 4 the cnn experiment it is better to add a discussion why the kappa increases in the early phase of training 5 the experiment results show by the end of training all models fnn denn and cnn have very large value of kappa which is around 104 this value implies that the minibatch gradients distribution is pretty concentrated and it is contradictory to the statement in the introduction which is sgd converges or terminates when either the norm of the minibatch gradient vanishes to zeros or when the angles of the minibatch gradients are uniformly distributed and their nonzero norms are close to each other it is also contradictory to the experiment in 32 which implies the minibatch gradient are uniformly distributed after training 6 the notations in this paper can be improved some notations are using i for batch index some notations are using i for one data sample some notations in section 33 and 31 can be moved to section 2 preliminaries it will be clearer to define all the notations in one place typos section 31 first paragraph ehat gw ehat gw paragraph before lemma2 hat kappa increases hat kappa decreases paragraph after theorem2 double the directions in if sgd iterations indeed drive the directions the directions of minibatch gradients to be uniformdocsepsummary this work provides an analysis of the directional distribution of of stochastic gradients in sgd the basic claim is that the distribution when modeled as a von misesfisher distribution becomes more uniform as training progresses there is experimental verification of this claim and some results suggesting that the snr is more correlated with their measure of uniformity than with the norm of the gradients quality the proofs appear correct to me clarity the paper is generally easy to read originality significance i dont know of this specific analysis existing in the literature so in that sense it may be original nonetheless i think there are serious issues with the significance the idea that there are two phases of optimization is not particularly new see for example bertsekas 2015 and the papers claim that uniformity of direction increases as sgd convergence is easy to see in a simple example consider fix xbi2 quadratics with different centers clearly the minimum will be the centroid outside of a ball of certain radius from the centroid all of the gradients grad fi point in the same direction closer to the minimum they will point towards their respective centers it is pretty clear then that 
uniformity goes up as convergence proceeds depending on the arrangement of the centers the analysis in the paper is clearly more general and meaningful than the toy example but i am not seeing what the takehome is other than the insight generated by the toy example the paper would be improved by clarifying how this analysis provides additional insight providing more analysis on the norm snr vs uniformity experiment at the end pros sgd is a central algorithm and further analysis laying out its properties is important thorough experiments cons it is not entirely clear what the contribution is specific comments the comment at the top of page 4 about the convergence of the minibatch gradients is a bit strange this could also be seen as the reason that analysis of the convergence of sgd rely on annealed step sizes without annealing stepsizes its fairly clear that sgd will converge to a kind of stochastic process the paper would be stronger if the authors try to turn this insight into something actionable either by providing a theoretical result that gives guidance or some practical algorithmic suggestions that exploit it dimitri p bertsekas incremental gradient subgradient and proximal methods for convex optimization a survey arxiv 2015 ### Summary:
the paper presents a careful analysis of sgd by characterizing the stochastic gradient via von misesfisher distributions while the paper has good quality and clarity and the authors detailed response has further clarified several raised issues some important concerns remain reviewer 1 would like to see careful discussions on related observations by other work in the literature such as low rank hessians in the overparameterized regime reviewer 2 is concerned about the significance of the presented analysis and observations and reviewers 2 and 4 both would like to see how the presented theoretical analysis could be used to design improved algorithms in the acs opinion while solid theoretical analysis of sgd is definitely valuable it is highly desirable to demonstrate its practical value considering that it does not provide clearly new insights about the learning dynamics of sgd
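for concreteness, below is a small sketch of the standard moment-based estimate of the vmf concentration parameter kappa from a set of minibatch gradients, using the banerjee et al. (2005) approximation; large kappa indicates concentrated gradient directions and kappa near zero indicates near-uniform directions on the sphere. this is generic code to illustrate the quantity tracked in the paper, not the authors' implementation.

```python
import numpy as np

def estimate_vmf_kappa(grads):
    """grads: (num_batches, dim) array of minibatch gradients.
    Normalizes each gradient to a unit vector, then applies the Banerjee et al.
    (2005) approximation kappa ~ rbar * (d - rbar^2) / (1 - rbar^2),
    where rbar is the norm of the mean unit vector."""
    d = grads.shape[1]
    units = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    rbar = np.linalg.norm(units.mean(axis=0))
    return rbar * (d - rbar ** 2) / (1.0 - rbar ** 2 + 1e-12)

# concentrated directions -> large kappa; isotropic directions -> kappa near 0
rng = np.random.default_rng(0)
dim = 100
concentrated = rng.normal(size=(64, dim)) + 10.0   # shared mean direction
isotropic = rng.normal(size=(64, dim))
print(estimate_vmf_kappa(concentrated), estimate_vmf_kappa(isotropic))
```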
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the authors combined existing ophthalmology datasets and introduced additional biomarkers labels the authors identified ml tasks that are relevant to patient care and benchmarked classification performance on image data biomarkers and multimodal inputs the authors identified good ml tasks relevant to ophthalmology patient care and how they can be incorporated into patient care the authors showed good summaries of the labels biomarkers created for the data it would be nice if the authors can elaborate more on the clinical significance of the biomarkers why were these markers chosen and what are the value ranges of the biomarkers and their implications it is unclear about the graders qualification and how the biomarkers are acquired is there only one grader for each scan or are there multiple graders for each scan there are standard deviations for the balanced accuracy why arent they included for other metrics it is unclear if there are any domain shift between the data collected from two different studies they study different conditions which are also the labels of the classification task any domain shift between the datasets would compromise the classification result it is also unclear why these two datasets are selected and what are the clinical impact of such a combined dataset table 1 is confusing it is unclear if there is any relationships between the merged rows in image modalities and label modalities time series data in the dataset correspond to visits which have an average of 16 data points and may have different frequencies and intervals it us unclear how these data contribute to the classification of patients and there are no benchmarks included in the paper based on the time series data docsepauthors presented a longitudinal multimodal dataset comprising of 2d fundus image 3d oct scans clinical labels and biomarkers collected from patients undergoing treatment for diabetic retinopathy or diabetic macular edema authors have also presented baseline results for tasks such as drdme detection disease classification biomarker detection and clinical outcome prediction longitudinal data covering multiple modalities facilities research in several different directions disease detection and classification treatment progression as well as understanding the relationship between multiple modalities authors explicitly state this data is collected from one geographical location and could be biased sample size in terms of the number of patients docsepthis paper presents olives an oct and nearir fundus dataset that includes clinical labels biomarker labels disease labels and timeseries patient treatment information from associated clinical trials the dataset contains the information of 96 eyes averaged over a period of at least two years with each eye treated for an average of 66 weeks and 7 injections benchmark experiments benchmark models and baseline results are presented this dataset contains 96 eyes and an average of 16 visits per patient and 1268 fundas eye images figure 1 clearly illustrates the clinical practice described in section 1 this dataset is collected over a long period of time the long spanning of time series data allows future researchers to perform experiments on predictive models good level of details on data collection and hyperparameters used the authors have discussed related work in different aspects all mentioned research was properly referenced compared with 
existing datasets olives contains a comprehensive set of modalities and is large enough in volume to be leveraged by ml algorithms according to the paper this is currently the largest and most diverse dataset of its kind the entire paper is hard to follow for reviewers who are not experts in biology because of the extensive use of abbreviations of biological terminologies i understand that this paper is targeted toward an audience in the domain of biologymedicine still to facilitate interdisciplinary research it would be great if the authors could include in their appendix the corresponding full names of the abbreviations used in the paper i would suggest the authors reorganize section 41 table 2 presents experiments with increasing balanced accuracy but section 41 presents different tasks in different orders which makes readers hard to follow would be great if the authors could indicate which ml model is used for which task in the tables table 3 is unclear at first glance it would be clearer if the authors could discuss the first three models in detail in the corresponding section also it would be better to mention that table six is in the appendix at first glance i thought the authors forgot to present table six in the paper also in the last section of section 4 there is no figure c3 in appendix c3 the overall paper needs more careful review in the discussion section the authors should consider elaborating more upon the ethical implications of this study docsepthe paper provides medical data from different modalities with potentially positive impact on medical research and treatments the authors train different ml models to analyze the ability of the presented data to detect the relevant diseases drdme as well as predicting the effects of the successive treatment and the final occular state they explain the technical details of their experiments and their outcomes the paper provides medical data from different modalities with potentially positive impact on medical research and treatments the authors train different ml models to analyze the ability of the presented data to detect the relevant diseases drdme as well as predicting the effects of the successive treatment and the final occular state they explain the technical details of their experiments and their outcomes the paper the pesented dataset seem good grounded and valuable to me but it is hard for me as someone without any medical background to evaluate the medical analysis and justifications made in this paper it is unclear to me why they trained the vision models used to test the abilities of the dataset with a resnet 18 backbone which is pretty small and old compared to 2022 ml standards docsepthe authors provided an ophthalmic dataset with oct and nearir fundus images including clinical labels biomarker labels disease labels and timeseries patient treatment information from associated clinical trials the authors introduced the olives dataset to bridge the gap between existing ophthalmic datasets and the clinical diagnosis and treatment process the paper is well written and correctly addresses the problem statement the paper has introduced a dataset with three modalities and shown its scope in the field of ml 1 the size of the dataset introduced is small 2 the data is collected from two trials prime and trex the authors have not mentioned the differences which maymay not affect the model evaluation with collected samples 3 the significance of the clinical features such as bcva etc should have been better explained to draw the 
comparison across the modalities 4 clinical labels and biomarkers are associated with each eye how is the relation across the two modalities developed for the datasets per my understanding there should be a patient id with left and right eyes and corresponding clinical labels and biomarkers associated with each sample eye 5 are the mentioned three modalities correspond to the same patient this means there are three samples across three modalities for each patient 6 results in terms of sensitivity and specificity are missing which are important for evaluating the ml model for disease diagnosis 7 in table 3 many inputs used to train the model have shown a random accuracy for binary classification this proves the insignificance of these features and contradicts the authors claims similar results are found in table 7 in the supplementary 8 data collected from a single centre might encourage data bias ### Summary:
the reviewers struggled to find a consensus for this paper concerns were raised about the applicability of the dataset due to domain shift issues with the data collection the size of the dataset and the clarity of the paper at the same time i believe that despite its size the dataset offers longitudinal data for diagnostics that is extremely valuable to the community and the authors have made efforts to improve the readability therefore i recommend acceptance
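since several of the reviews above ask for sensitivity and specificity in addition to balanced accuracy, here is a small self-contained sketch of how these metrics are computed for a binary detection task such as dr/dme presence; this is standard bookkeeping, not code released with the dataset.

```python
import numpy as np

def binary_detection_metrics(y_true, y_pred):
    """y_true, y_pred: arrays of 0/1 labels. Returns sensitivity (recall on
    positives), specificity (recall on negatives), and balanced accuracy."""
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    sensitivity = tp / max(tp + fn, 1)
    specificity = tn / max(tn + fp, 1)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "balanced_accuracy": 0.5 * (sensitivity + specificity),
    }

print(binary_detection_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```

balanced accuracy is simply the mean of the two class-wise recalls, which is why the reviewers treat sensitivity and specificity as the more informative breakdown for imbalanced clinical labels.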
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper extends the wasserstein autoencoder wae work by splitting the divergence on the variational marginal into 2 terms akin to what was done in tcvae this enables directly controlling the explicit contribution of the total correlation term which is likely to contribute to disentanglement more directly they explore 2 variations of their model based on different estimators of the tc term tcwaemws using minibatchweighted sampling tcwaegan using a density ratio trick overall i found this work to be a nicely complete exploration of a simple extension of an existing framework they mostly rederive existing methods in the wae framework but results are promising and the paper addresses several datasets compares to baselines well and seems well executed it seems to lack comparison and discussion to a paper which seems directly related xiao et al 2019 but i feel this is still a worthy piece of research to showcase at iclr questionscomments 1 a cursory search indicated the following paper which also addresses disentanglement with the wassertein total correlation xiao et al 2019 they use another estimator of the tc instead opting for the kantorovichrubinstein formulation 1 can you comment on how their work relates to this current paper 2 a direct comparison would be rather interesting but might be out of scope for a rebuttal 2 reconstructions for the tcwaemws appear rather bad figures 1319 in the appendix but figure 1c doesnt seem to reflect that which is slightly surprising 1 could you comment on this discrepancy 3 relatedly the tcwaegan disentanglement doesnt seem particularly exciting metricwise would you still recommend using it instead of tcwaemws 1 it is still a clear improvement over vanilla wae so theres value to the work in this current state but id wonder when one would prefer choosing this versus tcvae 4 it might be appropriate to discuss 2stage vae dai et al 2019 and associated family of models which currently obtain really good results on more complex datasets references xiao et al 2019 httpsarxivorgabs191212818 dai et al 2019 httpsarxivorgabs190305789 docsepthis submission proposes to add a kl term to the wasserstein autoencoder objective in order to improve its disentanglement capabilities this combines the idea of hoffman jonhson 2016 of using a marginal kl term with the wasserstein autoencoder framework challenges regarding the estimation of the kl term are also addressed with two previous works this results in a two regularization parameter objective whose superiority to existing approaches using a single parameter is not clear strengths wae with a disentanglement term was as far as i now not attempted before the authors offer two well justified techniques to do it weaknesses 1 the work is very iterative existing approaches are only combined 2 superiority to wae without this term is not surprising and i failed to see a clear superiority to competing unsupervised disentanglement approaches 3 given the emphasis on the wasserstein distance of the original approach it is also a bit disappointing to resort to a kl term for disentanglement 4 most importantly comparison to simpler alternative kl nonmarginal losses is absent as far as i can tell that was for me the most interesting appeal of the paper overall i tend to think the paper would require a more exhaustive investigation of disentanglement approaches contextualized to the wasserstein distance and issues raised 
regarding marginal versus nonmarginal divergences i recommend rejection on this last point it remains unclear to me whether the original hypothesis of the paper page 3 that the indexcode mi term of the kl divergence may be detrimental to disentanglement is supported by the current study and thus whether the extra technicalities required to eliminate it are worth the effort perhaps the authors could elaborate on that with an alternative objective close to the classical kl term and thus easier to optimize docsepthis paper addresses disentanglement in the latent space of autoencoders to this end it combines ideas from four existing papers namely the reconstruction loss of the wasserstein autoencoder the regularization term decomposition from the total correlation autoencoder and entropy estimation using minibatchweighted sampling or the densityratio trick this combination certainly makes sense as it brings together methods that have previously been shown to work well in isolation the main part of the paper is devoted to an empirical evaluation of the new autoencoder training procedure the new method is compared against various baselines in terms of l2 reconstruction error and three disentanglement scores on four toy datasets in addition latent space traversals on 3dchairs and celeba are shown to qualitatively demonstrate the disentanglement capabilities of the proposed methods unfortunately the description of the experiments is not very precise the role of the hyperparameter gamma remains unclear in the ablation study the authors simply set gammabeta without further explanation and in the comparison they just state we first tune gamma and for gamma 1 better disentanglement is obtained again without further explanation in the comparison experiment they report results for the values of beta that achieve an overall best ranking on the four different metrics without explaining what an overall best ranking is choices like this must not be taken lightly as the analysis in why rankings of biomedical image analysis competitions should be interpreted with care nature communications 9 5217 2018 impressively demonstrates the experiment in figure 2 seems to have three degrees of freedom the data instance x the latent index i and the size of the modification in direction zi however only two degrees of freedom are shown and it remains unclear from the caption and associated main text which ones moreover i cannot deduce justification for the statement all methods learn to disentangle capturing four different factors from the figure i do not see any obvious disentanglement the bigger problem with the paper however is the question what have we learned from these experiments the rankings in table 1 are pretty inconsistent between different metrics and the corresponding figure 3 appears to be cherry picked as the screamdsprites is the dataset where the proposed methods perform best i also do not agree with the claim that tcwaes achieve good disentanglement on realworld datasets figure 4 shows severe entanglement between unrelated factors for example the size feature for the chairs also changes the type of chair all features in the celeba examples have a tendency also to change the background appearance the gender feature dramatically influences person identity in the mws results whereas it does not change the gender at all in the gan variant substantial variations in person identity are also visible in most other examples in summary while the paper provides numbers it lacks new insight in light of mathematical 
proofs indicating that the true generative factors are generally unidentifiable in nonlinear unsupervised settings cf the work of aapo hyvrinen and others i am skeptical that heuristic trialanderror investigations of disentanglement like the present one will yield interesting results in a sense this is also acknowledged by the authors who merely state in the conclusion that our methods achieve competitive disentanglement on toy data sets thats not much given the effort that went into the experimentsdocsepsummary the paper is motivated by the need for a better tradeoff between the reconstruction and disentanglement performance of an autoencoder the proposed solution is to use kl as a latent regularizer in the framework of wassestain autoencoders which allows for a natural interpretation of total correlation the paper reads well all related work and relevant background concepts are nicely integrated throughout the text the experiments are exhaustive and the results show competitive performance wrt disentanglement while improving reconstructionmodeling of the aes if a dataset is of dynamical nature how difficult would it be to extend the current version of tcwae to dynamical systems do the authors have any intuitionhint on what should change to make their method applicable to dynamical setups significantly changing the probabilistic model or modifying only the and encoderdecoder architecture could suffice minor consider changing the naming of the baselines either in tables or figures to make them consistent chen et al 2018 tcvae kim mnih 2018 factorvae ### Summary:
there were both positive and negative assessments of this paper by the reviewers it was deemed a well written paper that explores cleanly rederiving the tcvae in the wasserstein autoencoder framework and that has experiments comparing to competing approaches however there are two strong concerns with this paper first novelty appears to be strongly limited as the method is essentially a rederivation using known approaches second two reviewers were not convinced by the experimental results and do not agree with the claim that the proposed approach is better than competing methods in providing disentangled representations i agree with this concern in particular as assessing unsupervised disentanglement models is known to be very hard and easily leads to noninformative results see eg the paper cited by the authors from locatello et al 2019 overall i recommend rejecting this paper
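for reference, the divergence decomposition the reviews above keep returning to can be written out explicitly. assuming a factorized prior p(z) = prod_j p(z_j), the kl between the aggregate posterior q(z) and the prior splits into a total-correlation term and dimension-wise kl terms; this is the standard identity from the tc-vae literature (the index-code mutual information term is absent here because the divergence is taken on the marginal q(z) rather than on q(z|x)), not anything specific to this submission.

```latex
\[
\begin{aligned}
\mathrm{KL}\big(q(z)\,\|\,p(z)\big)
 &= \mathbb{E}_{q(z)}\!\left[\log \frac{q(z)}{\prod_j q(z_j)}\right]
  + \mathbb{E}_{q(z)}\!\left[\log \frac{\prod_j q(z_j)}{\prod_j p(z_j)}\right] \\
 &= \underbrace{\mathrm{KL}\Big(q(z)\,\Big\|\,\textstyle\prod_j q(z_j)\Big)}_{\text{total correlation }\mathrm{TC}(q(z))}
  \;+\; \sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big).
\end{aligned}
\]
```

weighting the two terms separately is what gives the two regularization parameters that the reviews debate.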
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper proves a theoretical limitation of narrowanddeep neural networks it shows that for any function that can be approximated by such networks its level set or decision boundary for binary classification must be unbounded the conclusion means that if some problems decision boundary is a closed set then it cannot be represented by such narrow networks the intuition is relatively simple under the assumptions of the paper the neural network can always be approximated by a onetoone mapping followed by a linear projection the image of the onetoone mapping is homeomorphic to rn so that it must be an open topological ball the intersection of this open ball with a linear hyperplane must include the boundary of the ball thus it extends to infinity in the original input space the critical assumptions here which guarantees the onetoone property of the network are 1 the network is narrow and 2 the activation function can be approximated by a onetoone function the authors claim that 2 captures a large family of activation functions however it does exclude some popular activation families such as the polynomial activation which were proven effective in multiple areas as a concrete example the simple function fx1x2 x12 x22 has bounded level sets but it can be represented by a narrow 2layer neural network with the quadratic activation overall i feel that the result is interesting but it depends on a strong assumption and doesnt capture all interesting cases it is also not clear how this theoretical result can shed insight on the empirical study of neural networks docsepthis is a very nice paper contributing to what i consider a relatively underexplored but potentially very promising research direction the title of the paper in my opinion undersells the result which is not only that deep skinny neural networks are not universal approximators but that the class of functions which cannot be approximated includes a set of practically relevant classifiers as illustrated by the figure on page 8 the presentation is extremely clear with helpful illustrations and toy but insightful experiments my current rating of this paper is based on assuming that the following concerns will be addressed i will adjust the score accordingly after authors reply main a very similar result can be found in theorem 7 of beise et als on decision regions of narrow deep neural networks from july 2018 httpsarxivorgabs180701194 some differences the other paper considers connected whereas this paper considers pathconnected components the former is more general the other paper only considers multilabel classification this paper is relevant to all classification and regression problems the latter is more general the other paper requires that the activation function is strictly monotonic or relu whereas this paper allows uniformly approximable with onetoone functions activations the latter is more general the result in this paper seems slightly more general but largely similar can you please comment on the differencesrelation to the other paper proof of lemma 4 thus the composition hatf is also onetoone and therefore a homeomorphism from rn onto its image ihatf is it not necessary that hatf has a continuous inverse in order to be a homeomorphism i do not immediately see whether the class of activation functions considered in this paper implies that this condition is satisfied please clarify minor proof of lemma 5 it seems g 
is assumed to be continuous at several places eg level sets of are closed as subsets of rn seems to assume that preimage of a closed set under g is closed or later this implies gf is a compact subset of r perhaps you are assuming that m is a set of continuous functions and using the fact that uniform limit of continuous functions is continuous please clarify on p4 this is fairly immediate from the assumptions on varphi and the fact that singular transition matrices can be approximated by nonsingular ones is the second part of the sentence using the assumption that the input space is compact please clarify second line in section 5 i k should probably be i kappadocsepthis paper shows that deep narrow neural networks ie all hidden layers have maximum width at most the input dimension with a variety of activation functions including relu and sigmoid can only learn functions with unbounded level set components and thus cannot be a universal approximator this complements previous work such as nguyen et al 2018 which study connectivity of decision regions and lu et al 2017 on relu networks in different ways overall the paper is clearly written and technically sound the result itself may not be super novel as noted in the related work but its still a strict improvement over previous results which is often constrained to relu activation function moreover the proofs of this paper are really nice and elegant compared to other work on approximation capability of neural networks it can tell us in a more intuitive way and explicitly which class of functionsproblems cannot be learned by neural networks if none of their layers have more neurons than the input dimension which might be helpful in practice given the fact that there are not many previous work that take a similar approach in this direction im happy to vote for accepting this paper minor comments the proof of lemma 3 should be given for completeness i guess this can be done more easily by setting deltaepsilon a0a and ai1epsilonneighborhood of fiai page7 the square brackets in gxyepsilonyepsilon should be open brackets page7by lemma 4 every function in nn has bounded level components unbounded ### Summary:
the paper shows limitations on the types of functions that can be represented by deep skinny networks for certain classes of activation functions independently of the number of layers with many other works discussing capabilities but not limitations the paper contributes to a relatively underexplored topic the settings capture a large family of activation functions but exclude others such as polynomial activations for which the considered type of obstructions would not apply also a concern is raised about it not being clear how this theoretical result can shed insight on the empirical study of neural networks the authors have responded to some of the comments of the reviewers but not to all comments in particular comments of reviewer 1 whose positive review is conditional on the authors addressing some points the reviewers are all confident and are moderately positive, positive, or very positive about this paper
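to spell out the quadratic-activation counterexample mentioned in the first review above: a one-hidden-layer network whose width equals the input dimension, with activation sigma(t) = t^2, already represents a function with bounded level sets, which is why the paper's assumption that the activation can be uniformly approximated by one-to-one functions is essential. a minimal worked instance:

```latex
\[
  f(x_1, x_2) \;=\; \mathbf{1}^{\top}\,\sigma(W x), \qquad
  W = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad \sigma(t) = t^{2}
  \;\;\Longrightarrow\;\; f(x_1, x_2) = x_1^{2} + x_2^{2},
\]
\[
  \{x \in \mathbb{R}^{2} : f(x) = c\} = \{x : \|x\|_{2} = \sqrt{c}\}
  \quad\text{is bounded (a circle) for every } c \ge 0 .
\]
```

since sigma(t) = t^2 is not one-to-one on any neighbourhood of zero, this example falls outside the assumed activation family, consistent with the theorem rather than contradicting it.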
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper describes an approach to finetuning large language models which can improve zeroshot accuracy on unseen tasks overall wellwritten with compelling results this paper describes a new language model flan and shows how it improves upon the zeroshot task performance of previous language models such as gpt3 while the paper is lacking some additional analysis i am hesitant to recommend extremely computeintensive ablations due the large size of the model 137b parameters strengths considers a reasonably wide set of 62 datasets although the inherent arbitrariness in dataset clustering was listed as a limitation the clusters look quite reasonable to me and the removal of overlapping datasets eg reading comprehension w commonsense seems appropriate results are better than a strong base lm baseline as well as existing stateoftheart models gpt3 overall the approach is intuitive and conceptually compelling highly relevant to ongoing work on language modeling prompt tuning and zeroshot learning weaknesses from these experiments it is unclear whether models are actually learning to follow instructions or just learning a very large space of tasks from the finetuning procedure in other words even though prompt variance is reported at inference time the models could potentially perform just as well with nonsense or missing prompts during finetuning as far as i can tell no experiments that rule out this possibility exist although qualitatively useful the analysis in 41 does not conclusively show that the number of instruction tuning clusters aids performance or that this trend is likely to continue with more clusters most of the gain could be acquired by tasks which are most difficult or most similar to the heldout task and this analysis cannot disprove such an interpretation a proper analysis would consider more heldout tasks and permutations of training data but presumably this is prohibitively expensive the paper is missing important details about hardware usage and training time some possible issues which might be resolved by the additional questions below additional questions for each dataset we manually compose ten unique templates that use natural language instructions to describe the task for that dataset do you have unique prompts for each dataset or only for each dataset cluster based on a cursory look at the supplementary material i would assume the latter i didnt fully understand the justification for the options token are the finetuned models successfully putting almost all of their probability mass on the corresponding options how is the base lm evaluated if its not finetuned presumably it doesnt learn how to handle these options figure 6a why does the untuned model see worse performance with more parameters nits figure 1 bottom is possibly misleading since afaict zeroshot flan underperforms fewshot gpt3 on the majority of tasks not clear what turning the task around means for some tasks or why this is a useful type of prompt diversity i give this paper a strong recommendation in spite of some missing ablations docsepthe paper explores a simple and effective method to improve zeroshot performance of pretrained language models authors take a 137b parameter pretrained model and finetune it on multiple tasks verbalized via natural language instruction templates as the result the instructiontuned model performs well on unseen tasks with the zeroshot setting pros 1 the problem 
addressed has high practical value it tries to make large pretrained language model more accessible to a range of nlp tasks the instruction tuning idea will significantly reduce the cost for taskspecific fine tuning labeled data and prompt engineering compared to other approaches 2 the method is simple and easy to implement authors carefully design the experiment to minimize the leakage between the finetuning and inference data given that it still shows superior performance on different types of nlp tasks the result on specific task can be further improved when adapting with prompt tuning on labeled data which shows that the instructiontuning process does not drop much taskspecific knowledge from the original pretrained model 3 the analysis presented in the main paper and the appendix is thorough enough authors also discussed about the limitation of model when downstream tasks are more similar to language modeling tasks cons there are still a few questions that can be addressed to make the analysis comprehensive 1 have authors try to use the flan prompts on gpt3 or baselm and how does the performance look like 2 since instruction tuning will adjust all the parameters in the original pretrained language model there is a question what about what is the potential impact of this tuning process will it drops any knowledge of any tasks which will be a disadvantage when the tasks labeled data is available in the analysis c in the appendix it will be good to have results for tasks other than classification such as summarization or question answering and also to have a baseline where the baselm model is finetuned directly with the task labeled data without promptsoftprompt overall the paper proposed an interesting idea and showed strong empirical results hence i vote for accepting docsepthe paper creates a dataset of over 60 nlp tasks described via instructions using templates for each task and finds this boosts zeroshot performance on unseen tasks detailed comments for each dataset we manually compose ten unique templates why not have templates per task cluster instead of per dataset it is likely a relatively minor effect given the results from appendix b but it seems like it could slightly prevent overfitting the ablation in 41 was great number of clusters nit i would have tried to move the datasets per clustertemplates per dataset ablation to the main body as well and shortened section 3 the 42 scaling laws ablation is perhaps the most interesting of all in figure 6a why was performance not increasing for untuned models wrt model size this seems to contradict findings from brown et al where larger models did better on essentially all tasks were there perhaps some poor datasets that happened to be in the heldin split since the heldout tasks dont seem to have the same trend appendix i liked the section b ablations as implied above that more templates per dataset didnt help is particularly interesting and suggests some questions you hypothesize that more templates doesnt help because models at such scale do not easily overfit to a finetuning single task but my intuition is for an opposite explanation that the models at such scale easily memorize a small number of templates one may even wonder if the instruction nature of the templates is helping at all from what i can tell appendix c on prompt tuning which is very interesting is maybe the primary evidence the instructions are important i think more could be done here some ideas probably there are better ways to test have templates that leave out 
instructions i would guess it wouldnt affect heldin task performance much but would affect heldout tasks consider hellaswagpiqaetc where flan underperformed fewshot and even zeroshot one might hypothesize that if using a subotimal template that is less natural for language modeling that zeroshot performance would suffer but that flan performance wouldnt one might hypothesize that the turn the task around templates help more than the other more straightforward templates that dont swap information between the prompt and response easy but probably not great thing to try heldout tasks with wronguseless templates a final thought its not obvious that using as many training examples per dataset as possible is optimal given that the model could overfit to datasetspecific spurious correlations this could be another area to investigate misc unifiedqa seems potentially worth citing as prior work overall the papers idea is powerful but of somewhat limited novelty and the results are good but not great its greatest strength imo was the ablations my biggest complaint is that its not completely clear the instructions themselves are important at all i suggest a few more experiments though they dont seem crucial docsepthe paper proposes a simple method instructiontuning to improve the zeroshot learning capability of large language model which 1 annotates prompts for a wide range of tasks and then 2 finetunes the model to answerrespond to those prompt the empirical results are impressive after instructiontuning the 0shot performance is better than gpt3 0shot sometimes fewshot on a wide range of datasets nevertheless on datasets with formats already similar to language modeling the performance gain is negligible or even negative the paper also made a few other observations 1 performance benefits from the number of task clusters 2 instructiontuning is only beneficial when the model size is larger enough and 3 fewshot learning still helps while the method is a simple and straightforward scaling up of concepts and ideas from prior works eg zhong et al adapting mishra et al crosstask generalization the empirical results are thorough and impressive outperforming gpt3 with a slightly smaller model the analyses also helps us understand when this method would work and inform us about future research directions below are my concrete questions and comments additional tasks results 34 in appendix a1 the paper mainly draws conclusions based on comparisons between gpt3 and flan which i do not think are fair gpt3 and flan differ in model size and pretraining data distribution instead i think base lm vs flan might be a better comparison between off the shelf lm and instructiontuned model though it wont change the conclusion it is also worth pointing out in the main paper that for most of the additional tasks even though it does not lead to higher accuracy the performance of flan is still at least comparable eg 1 worse and difference generally negligible to baselm 0shot the only outlier seems to be record where the performance drops significantly after instructiontuning and this probably deserves some discussion also i might have missed it for the base lm 137b zeroshot result is it on the average template or the best template number of task clusters section 41 for figure 5 can you add the untuned model to the curve with the xaxis0 0task cluster this can help us understand how much even 1 cluster eg summarization may help explanation for scaling section 42 it is an insightful empirical result that instruction tuning only 
works when model size reaches 68b however i am not entirely sure about the potential explanation of model capacity there might be two potential explanations to this phenomena 1 model capacity as the paper has mentioned smaller pretrained models do not have enough model capacity and underfit the instruction tuning data and 2 better ood generalization better quality pretrained models have higher ood generalization ability and ood accuracy and they are less likely to overfit to indistribution data i personally find the second explanation more convincing for example sahn et al httpsarxivorgabs211008207 finds that even models with only 11b parameters can generalize to unseen tasks using t5 mlm and a larger set of prompts the use of mlm objectives might improve the pretraining quality while more prompts reduce the overfitting to indomain data issue i appreciate the fact that the author explicitly states the model capacity hypothesis more as a conjecture rather than a solid explanation itd be great if the authors can support the explanation further with more empirical evidence on the other hand however since the results from sanh et al came out only 2 weeks ago i would not change the score based on the response to this question incontext fewshot vs finetuned fewshot section 43 can the authors compare finetuningprefixtuning an instruction tuned model with 16 examples appendix c but with only 16 examples with incontext prompting in 43 of the main paper similar to chen et al httpsarxivorgabs211007814 this would further inform us how we should use the fewshot learning examples for larger language models put it incontext or finetune again since the comparison of chen et al came out only 2 weeks ago and the paper limit is 9 pages i would not change the score based on the response to this question others results of appendix c are interesting and potentially impactful this might imply that instructiontuned models will become the new base model for the pretrainingfinetuning paradigm is it possible to briefly mention it in the main paper as well and redirect the readers to the appendix to see the full results it might be too late to change the name but finetuned language net flan is uninformative since it does not capture any unique aspect of this method what does language mean here natural language instruction or language model if it is the former then directly including the word instruction might be better and hopefully its not the latter since even finetuned bert on sst2 counts as a finetuned language model typo intro instruction tuning is a simple method that as conclusion moreover our work supercedes recent work such as while the method is not new the empirical results are strong and comprehensive though i disagree on the interpretation of some empirical results overall the additional analyses bring us further insights on what method works for very large language models ie 100b dense model i highly recommend the paper to be accepted to iclr 2022 ### Summary:
this paper examines the extent to which a large language model lm can generalize to unseen tasks via instruction tuning a process that finetunes the lm on a large number of tasks with natural language instructions at test time the model is evaluated zeroshot on heldout tasks the empirical results are good and the 137b flan model generally outperforms the 175b untuned gpt3 model all reviewers voted to accept with uniformly high scores despite two commenting on the relative lack of novelty the discussion period focused on questions raised by two reviewers regarding the usefulness of finetuning with instructions vs multitask finetuning without instructions the authors responded with an ablation study demonstrating that providing instructions during tuning led to large gains overall the papers approach and detailed experiments will be useful for other researchers working in this fastmoving area of nlp
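As a concrete illustration of the instruction-tuning recipe discussed in the reviews and summary above, the sketch below builds (prompt, target) pairs from per-dataset templates and holds out one task cluster for zero-shot evaluation. This is a minimal sketch, not the paper's actual pipeline: the template strings, the OPTIONS suffix format, and the record layout (`fields`, `label`) are illustrative assumptions.

```python
import random

# hypothetical per-dataset instruction templates (the paper uses several per dataset)
TEMPLATES = {
    "rte": [
        "Premise: {premise}\nHypothesis: {hypothesis}\nDoes the premise entail the hypothesis?",
        "{premise}\nBased on the paragraph above, can we conclude that \"{hypothesis}\"?",
    ],
    "sst2": [
        "Review: {text}\nIs this movie review positive or negative?",
        "What is the sentiment of the following sentence?\n{text}",
    ],
}

def verbalize(records, dataset, options=None):
    """Turn raw labeled records into (prompt, target) pairs via a random template."""
    pairs = []
    for rec in records:
        prompt = random.choice(TEMPLATES[dataset]).format(**rec["fields"])
        if options:  # an OPTIONS suffix tells the model the admissible answers
            prompt += "\nOPTIONS:\n- " + "\n- ".join(options)
        pairs.append({"prompt": prompt, "target": rec["label"]})
    return pairs

def instruction_tuning_split(clusters, held_out_cluster):
    """Finetune on every task cluster except one; evaluate zero-shot on the held-out cluster."""
    train, zero_shot_eval = [], []
    for cluster, datasets in clusters.items():
        for dataset, (records, options) in datasets.items():
            bucket = zero_shot_eval if cluster == held_out_cluster else train
            bucket.extend(verbalize(records, dataset, options))
    return train, zero_shot_eval
```

The held-out-cluster split is what lets the zero-shot evaluation claim generalization to genuinely unseen task types rather than to unseen datasets of a seen type.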
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper focuses on the attack that tries to recover text data from the gradients this type of attack is a threat particularly for federated learning where the central server may recover the private client data through gradients based on previous work that optimises the input to minimise the distance between gradients this paper further proposes to alternate continuous optimization and discrete optimization that is guided through a language model prior the discrete optimization is claimed to help obtain text data that is more like fluent language the resulting approach lamp greatly outperforms previous approaches on three binaryclassification benchmarks strengths 1 the proposed approach is novel and technically sound the motivation and contribution are clear from the examples in table 2 and quantitative improvements it does seem like previous approaches fail to yield grammatical text while the proposed discrete optimization step seems to help it a lot 2 the empirical results are strong especially with batch size 1 the ablation study is appreciated as well 3 the paper is wellwritten weaknesses 1 minor the proposed approach adds several additional hyperparameters to tune for example alphalm alphareg nc nd ninit this surely adds complexity of the approach and may make the proposed attack difficult to be practically applied while i appreciate the detailed hyperparameter paragraph in line 268274 i think it would be better to report the hyperparameter selection range as well so that the readers could have an idea how much effort is imposed to tune these hyperparameters 2 major i am a bit worried that the model comparison is only conducted on a randomly selected set of 100 sentences 100 sentences sounds too few for me and i am not sure how robust the model rank is based on only 100 random examples i feel this point should be justified properly either through constructing a larger test set or using different random seeds to generate different sets of 100 test examples and testing on each of them 3 minor while line 239 mentions to illustrate the generality of the proposed approach with respect to model size i dont think this paper really made that goal by only using tiny and base sizes of bert at least bertlarge should be included to have a relatively complete coverage i understand that the authors may be limited by resources to use larger models which is fine i just wanted to point out the current experiments are not sufficient to indicate generality with respect to model sizes this is rather a minor point anyway 4 major line 257259 mentions the baselines use 2500 iterations while the proposed approach uses 2000 is it in algorithm 1 2000 and what are the values of nc and nd how did you choose these numbers do the baselines fully converge i would like to see more justifications to show that the comparison is fair because lamp employs a nested for loop while the baselines do not in optimization and i feel the number of total optimization steps and the costtime in lamp is actually larger than the baselines right 5 minor line 272 mentions that lamp additionally adopts a twostep initialization procedure that seems important to me i would like to see the ablation results on this twostep init in table 3 to know how much of the improvement over baselines is from this initialization after author response the author response addressed most of my concerns and i would like to increase my score 
to 7 given that the authors will update the paper accordingly as promised yes docsepthe authors propose a model for recovering user data from gradient updates in a federated learning system for text classification they achieve this by alternating continuous gradient based optimization with discrete heuristic based tokenreshuffling the authors show that the proposed model outperforms methods that use only gradient based updates to the tokens the primary contribution of the paper is the alternation between gradientbased updates and token reshuffling for reconstructing user text i am unaware if such an approach has been attempted for adversarial attacks in text hence this can be assumed to be novel the paper is wellwritten and easy to follow the ablation studies confirm the importance of tokenreshuffling for learning a good attack the ablation studies also show the importance of l1l2 loss although this was proposed in a different paper from a novelty perspective this alternation between token reshuffling and continuous updates for attacking text classifier appears to be novel however i am not an expert in this field some parts of the paper are unclear for instance the authors mention that they use adam for learning the embeddings for each input during the continuous optimization phase however they do not mention how they compute the gradient of the loss with respect to the input embeddings x the gradient computation requires second order derivatives which hasnt been discussed in the paper at all it is unclear how tokenreshuffling with continuous gradientbased updates influences the descent direction specifically i am wondering if tokenreshuffling is only performed at the end of all gradientbased updates how will the performance be affected the limitations havent been discussed docsepthis paper proposes a novel strategy to attack pretrained models for text models which aims to reconstruct the private user data used to finetune the model through federated learning the algorithm takes the finetune gradient as well as the target label as the input then search for the private user input sentence the general idea is a iterative twostep process where the first step is a continuous optimization to search for a embedding that leads to similar gradient and the second step is a discrete optimization that uses gpt models to find a output sentence with the lowest perplexity thanks for the authors response i think it might be worth it to clarify the main novelty of the paper in the main method section it would also be nice to clearly discuss the limitation at the end of the paper as researchers not familiar with this line of research might benefit from some background information about the social impactlimitation of the method strengths 1 the method is very interesting and novel it also addresses an interesting problem in the pretrainfinetune paradigm that is very popular now 2 the writing is mostly well structured and easy to understand 3 the final performance also seems good compared to previous methods the ablation study and examples are pretty nice addition to the results weakness 1 the paper didnt really clarify the main difference between the proposed method and the prior work such as tag therefore it is hard to tell what are the exact novelty the paper adds and the effect of the novelty my understanding is that the random transformation and the use of gpt models allow the output more natural however it is really hard to tell from the current structure of the paper 2 the method can only be applied to 
classification where the label is known which seems pretty limited it might be more natural to have unlabeled data in the federated learning setting as one might not be able to annotate the user text the authors did not address limitations and potential negative impact there are several points worth mentioning 1 work on adversarial attacks could be exploited by hackers 2 the method is limited to classification data ### Summary:
this paper describes a novel method to recover the input text based on the computed gradient this is important in the context of federated learning which promises to enable learning through gradient sharing while keeping the input text secret the findings of the paper demonstrate that gradients are sufficient to recover significant parts of the input text questioning the federated learning premise at least in the context of large language models the approach is novel and technically sound the empirical results are convincing and the paper is wellwritten and clear given the current trend toward growing model sizes it would be great if the paper could further scale the experimental results to larger models
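The reviews above describe the attack as alternating a continuous gradient-matching step on input embeddings with a discrete, language-model-guided search over token orderings. The sketch below shows that alternation under several assumptions: `model.loss_from_embeddings` and `lm_penalty` are hypothetical interfaces (a classifier that accepts embeddings directly, and a scorer returning a fluency penalty), the discrete step is reduced to random position swaps, and the loss weights and iteration counts are placeholders rather than the paper's hyperparameters. Note `create_graph=True`: matching gradients means differentiating through a gradient, i.e. second-order derivatives, which is the point one reviewer asks about.

```python
import torch

def grad_match_loss(model, emb, label, target_grads):
    # distance between gradients induced by the candidate embeddings and the observed ones
    loss = model.loss_from_embeddings(emb, label)  # assumed interface
    grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
    return sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))

def objective(model, lm_penalty, emb, label, target_grads, alpha_lm):
    with torch.enable_grad():
        rec = grad_match_loss(model, emb, label, target_grads)
    return rec.item() + alpha_lm * lm_penalty(emb)  # lm_penalty: assumed fluency scorer

def random_swap(emb):
    cand = emb.detach().clone()
    i, j = torch.randint(0, cand.size(1), (2,)).tolist()
    cand[:, [i, j]] = cand[:, [j, i]]  # swap two token positions
    return cand

def reconstruct(model, lm_penalty, target_grads, label, seq_len, d_model,
                n_outer=20, n_cont=50, n_swaps=10, lr=0.01, alpha_lm=0.1):
    emb = torch.randn(1, seq_len, d_model, requires_grad=True)
    opt = torch.optim.Adam([emb], lr=lr)
    for _ in range(n_outer):
        for _ in range(n_cont):  # continuous phase: descend on the embeddings
            opt.zero_grad()
            grad_match_loss(model, emb, label, target_grads).backward()
            opt.step()
        best = emb.detach().clone()  # discrete phase: keep the best reordering
        best_score = objective(model, lm_penalty, best, label, target_grads, alpha_lm)
        for _ in range(n_swaps):
            cand = random_swap(best)
            score = objective(model, lm_penalty, cand, label, target_grads, alpha_lm)
            if score < best_score:
                best, best_score = cand, score
        with torch.no_grad():
            emb.copy_(best)
    return emb.detach()
```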
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper investigates a semisupervised continual learning sscl setting and proposes a new method called distillmatch for this setting the major contributions are 1 the authors carefully design a realistic sscl setting where objectobject correlations between labeled and unlabeled sets are maintained through a label superclass structure and then they develop the distillmatch method combining knowledge distillation pseudolabels out of distribution detection and consistency regularization 2 they show that distillmatch outperforms other existing methods on cifar100 dataset and ablation study results are shown also however there are some downsides that should be considered before its publication 1 in abstract the authors claim that they can significantly reduce the memory budget of labeled training data by leveraging unlabeled data perhaps with large volume this motivation seems to be contradictive 2 from a methodological viewpoint the proposed distillmatch method is just a combination of existing methods listed as in above so where is the novelty of this new method 3 in experiments the chosen baseline algorithm is very weak there are some strong baseline methods such as gem agem and er so i wonder to know the real improvements over stateoftheart methods for continual learning 4 the label superclass structure existed in cifar100 has been used in their experiments but this is not very common for other more realistic datasets such as miniimagenet if there is no superclass structure we dont know how to apply the proposed distillmatch method in summary i think this semisupervised continual learning setting is interesting but the proposed distillmatch method can not persuade me that this method is a novel significant contribution to this problem so at present time i believe there is much room for the authors to improve their method before publication docsep summary this paper proposes classincremental learning with unlabeled data correlated to labeled data and a method to tackle it the task can be considered as a variant of lee et al which has no assumption on the unlabeled dataset while this paper assumes the correlation between labeled and unlabeled dataset explicitly the proposed method is inspired by stateoftheart classincremental learning semisupervised learning and outofdistribution ood detection methods local distillation li and hoiem ood detection hsu et al consistency regularization and pseudo labeling or hard distillation sohn et al and loss balancing based on class statistics lee et al experimental results support that the proposed method outperforms prior works in the proposed task reasons for score 1 extending continual learning to the semisupervised setting is natural given that the extension to selftaught learning has already been considered in lee et al however i cannot agree that semisupervised learning is more realistic than selftaught learning which is emphasized throughout the paper 18 times in an early work of raina et al selftaught learning is proposed to make the scenario of learning with unlabeled data widely applicable to many practical learning problems oliver et al also argued that unlabeled data from outofdistribution violates the strict definition of semisupervised learning but it nevertheless represents a common usecase for semisupervised learning for example augmenting a face recognition dataset with unlabeled images of people not in the labeled set i am 
not saying that semisupervised learning is unrealistic but the argument in this paper sounds overclaimed i believe both semisupervised and selftaught learning are realistic in some cases i also recommend to provide real world scenarios that the proposed task correlation between labeled and unlabeled data exists and no memory for coreset is available is useful in practice 2 the proposed method is not novel which is essentially the combination of stateoftheart methods in relevant tasks but i do not discount this much because this work would be valuable as the proposed task is interesting but not investigated before however the name of task might need to be changed because a similar name semisupervised incremental learning is already taken by a kind of semisupervised learning which incrementally incorporates unlabeled data to training 3 though the improvement over prior classincremental learning methods is impressive the overall performance is still too low in fact the scale of the experimental setting is too small so i doubt it is scalable all experiments are bounded on cifar100 and even only 20 of training data are used as labeled one frankly in this smallscale setting in both number of data and image resolution keeping all data is just fine as the coreset size is negligible compared to the model size i recommend to experiment in largescale settings eg on imagenet also i recommend to compare the oracle setting as well which keeps all previous training data 4 in addition to smallscale experimental setting the architecture is larger than the prior work lee et al wrn282 vs wrn162 in the worst case scenario it is possible that the best performance of the proposed method is simply from the complexity of their learning objective ie all methods overfit to training data but the proposed method did not have enough updates to overfit to them 5 in figure 3 why do gd and dm not have a coreset i think there is no reason to give an unfair constraint to them i recommend to draw curves with respect to increasing number of coreset for those methods as well 6 could you provide results on the selftaught learning setting like lee et al it would also be interesting to see the performance of the proposed method in the setting 7 hyperparameter sweep results provided in table 4 are either minimum or maximum of the range so you could improve the performance by enlarging the range minor comments 8 subscripts of theta often are dropped is theta equal to thetan1n 9 the parameters of no more than three models i believe it is four because you need to temporarily store gradients during training 10 hatq is not a probability vector which makes eq 2 mathematically do not make sense 11 citation format issue you can use citet for noun and citep for adverb 12 typo on page 5 statoe state 13 table 4 what is tpr here threshold for consistency regularization raina et al selftaught learning transfer learning from unlabeled data in icml 2007 li and hoiem learning without forgetting in tpami 2017 oliver et al realistic evaluation of deep semisupervised learning algorithms in neurips 2018 lee et al overcoming catastrophic forgetting with unlabeled data in the wild in iccv 2019 hsu et al generalized odin detecting outofdistribution image without learning from outofdistribution data in cvpr 2020 sohn et al fixmatch simplifying semisupervised learning with consistency and confidence in neurips 2020 after rebuttal id like to thank authors for their efforts to address my concerns they have addressed most of them so i increased my score 
from 5 to 6 however there are two concerns that couldnt be resolved during the rebuttal period 1 i am still not sure if the proposed task is practical at glance it looks realistic but i couldnt find a detailed scenario that can only be solved by the proposed task any real world scenario i can think of is closer to lee et al which is a prior work of this paper authors provided an exploring robot example in the thread of responses but i think lee et al fits better for the provided one i recommend authors to find a concrete usecase in realworld applications which can only be solved by the proposed setting or at least lee et al is not applicable in the revised intro you may emphasize that there are some realworld problems that lee et al is not applicable but yours is r1 and r4 seem to have a similar concern 2 the scale of experiment is too small as cifar10100 have a limited number of data for your purpose you can borrow some data from tinyimages fyi cifar10100 are a subset of 80m tinyimages or focus on imagenet i am okay with the lack of novelty on the proposed method for a newly proposed task i think proposing a simple and effective baseline is good enough however because of the two concerns above i cannot strongly agree with its acceptance docsepthe paper presents a novel semisupervised continual learning sscl setting where labeled data is scarce and unlabeled data is plentiful the proposed framework is built on pseudolabeling consistency regularization outofdistribution ood detection and knowledge distillation in order to reduce the catastrophic forgetting in the proposed setting the paper is in general clear and wellwritten the contributions are clearly highlighted and the proposed approach is conveniently compared with other state of the art methods demonstrating its superiority positive aspects the definition of a realistic semisupervised setting for continual learning a novel approach for continual learning in order to cope with catastrophic forgetting the proposed approach is memory efficient since it does not need exemplars to replay past tasks negative aspects the ood implemented in this paper rejects the unknown samples in other words all unknown samples are considered a single class it would have been a plus to distinguish between several unknown classes and somehow introduce them in the framework the lack of recabilibration step after a number of tasks in the case of pseudolabeled samples could lead to an undesired error propagation which is not quantified in the paper however i have some questions 1 what is the relationship between the fi and theta models section 4 are they completely separate or there is a relationship between them for instance when theta is extended with a new task is fi extended accordingly or is fi trained offline from the beginning with all tasks 2 there are some different source of errors distilation pseudolabels do you perform any kind of system recalibration after how many tasks i mean do you make a study of error propagation of pseudolabeled data or at some point do you have a humanintheloop to correct misclassification what is the misclassification error of pseudolabeled samples 3 do you assume that labeled and unlabeled data come from different distributions or you have a single distribution which is divided in labeled and unlabeled data at the beginning of the process 4 does your scenario foresee that when learning a new task t all the previous tasks are represented 1t1 in the unlabebeld data or only a subpart ie kind of selective replay 5 when the 
number of tasks increases the number of unlabeled data per task remains constant or is scaled accordingly ie reduced 6 would be interesting to test your approach in a realworld scenario ie robot navigationdocsepthis paper comes up with a novel scenario where the unlabled data are available as well as labeled data in the continual learning scenario overall based on my understanding the major contribution is the proposal of a task scenario aka experimental setting the novelty of distillmatch is an incremental modification of previous work the task setting sidesteps the learning with nonstationarity problem than solving it further this setting potentially makes the task easier for the proposed method to verify whether this is true more information are needed the presentation of the paper needs polishing i listed a few points below pros the novel scenario of semisupervised continual learning is proposed the argument is that in several realistic scenarios old data are often reobserved without label the funiture labeling example therefore instead of storing a coreset one may make use of the unlabeled data for pseudorehearsaldistillation it is reasonable to make use of it when this assumption is true with the setting the author proposed the distillmatch method is able to perform better than previous methods cons 1 the novelty mostly comes from the task scenario the distillmatch method is incremental 2 although sscl is a new scenario and the author argues it is more realistic imo taking this assumption sidesteps the problem of continual learning rather than solving it the central problem of continual learning imo is to learn under nonstationary distribution the assumption made in this submission makes the distribution more stationary 3 it is true that this assumption should be utilized when available however the only dataset used is manually constructed from cifar100 contradicting the initial motivation to move towards a more realistic scenario 4 theres a lack of information on how the compared methods are adapted to the new scenario i searched the supplementary but failed to find a detailed documentation with the given information it is hard to tell whether the comparison is fair my concerns are following increasing from 3 4 as this point is resolved in the rebuttal in the randomclasses setting it is stated that no coreset is used if the compared methods depends on coreset to replay it would be unfair if thats the case the only conclusion we can draw is that replay is better than no replay which seems trivial to me gd depends on internet crawled data is it replaced with the unlabeled data since it is available in the experiment setting if not then i think it is just the setting that favors distillmatch with the above said i suggest the author to list clearly the objectives replay buffer sizes or even pseudo code for each of the compared method and their own method in a table which will help the reader identify what major component in the proposed method is making the contribution regarding the quality and clarity i found myself confused and making guesses sometimes while reading it to list a few introduction paragraph 2 to determine which unlabeled data is relevant to the incremental task i guess the incremental task means learning the newly observed data but then for rehearsal well pick the unlabeled data which is from the distribution of past tasks section 1 save up to 023 stored images per processed image over naive rehearsal compared to lee here seems lee et al is the naive rehearsal but then 
which only saved 008 confuses me seems to be saying lee saves 008 compared to naive rehearsal section 3 where data distributions reflect object class correlations between and among the labeled and unlabeled data distributions not enough information to infer what reflect and object class correlation means here section 4 let sn1 denote the score of our ood detector for valid classes of our pseudolabel model what is the valid classes needs to be clarified as i understand it sn1 measures how likely the unlabeled data is in the distribution of past tasks super class parent class are not defined clear enough ### Summary:
this paper proposes a semisupervised setting to reduce the memory budget in replaybased continual learning it uses unlabeled data in the environment for replay which requires no storage and generates pseudolabels that connect the unlabeled data to the labeled data the method was validated on the proposed tasks pros the semisupervised continual learning setting is novel and interesting the proposed approach is memory efficient since it does not need exemplars to replay past tasks cons the scale of the experiments is small it lacks evaluation in a realworld environment the novelty is limited because it is a combination of existing technologies pseudolabeling consistency regularization outofdistribution ood detection and knowledge distillation the comparison might not be fair due to different settings the authors addressed the fairness and scalability concerns with additional experiments and left some of the reviewers suggestions for future work r3 had a concern about the error propagation of pseudolabels which i also share the authors agreed that this is a challenge for all cl methods in summary the reviews are mixed all reviewers agree that the semisupervised continual learning setting is novel and interesting and some have concerns about the scalability and novelty of the method which i also share so at the present time i believe there is much room for the authors to improve their method and experiments before publication
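The method these reviews describe combines a frozen copy of the previous-task model (for distillation), pseudo-labels on unlabeled data, and an OOD filter that rejects unlabeled samples unlikely to belong to past tasks. The fragment below is an illustrative combination of those ingredients, not the paper's exact objective: the max-softmax OOD proxy, the thresholds, and the equal loss weighting are simplifying assumptions, and consistency regularization is omitted.

```python
import torch
import torch.nn.functional as F

def ssl_continual_loss(student, teacher, x_lab, y_lab, x_unlab,
                       ood_thresh=0.5, conf_thresh=0.95, temp=2.0):
    # supervised loss on the current task's labeled data
    sup = F.cross_entropy(student(x_lab), y_lab)

    with torch.no_grad():
        t_logits = teacher(x_unlab)                  # frozen previous-task model
        t_soft = F.softmax(t_logits / temp, dim=-1)
        conf, pseudo = F.softmax(t_logits, dim=-1).max(dim=-1)
        keep = conf > ood_thresh                     # crude max-softmax OOD filter

    s_logits = student(x_unlab)
    zero = s_logits.sum() * 0.0                      # keeps the graph valid when a mask is empty

    # soft distillation from the teacher on unlabeled data accepted by the filter
    distill = zero
    if keep.any():
        distill = F.kl_div(F.log_softmax(s_logits[keep] / temp, dim=-1),
                           t_soft[keep], reduction="batchmean") * temp ** 2

    # hard pseudo-labels only where the teacher is confident
    mask = keep & (conf > conf_thresh)
    pseudo_loss = F.cross_entropy(s_logits[mask], pseudo[mask]) if mask.any() else zero

    return sup + distill + pseudo_loss
```

The OOD filter is what replaces a stored coreset here: replay only happens through unlabeled samples that look like past-task data, which is exactly the point the reviewers probe when they ask about error propagation of wrong pseudo-labels.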
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: docsepthis paper aims to construct a gan that can be applied to noniid federated data to achieve this aim the authors propose an extension of bayesian gan called expectation propagation prior gan epgan which obtains a partitioninvariant prior using expectation propagation in particular the authors introduce a closedform solution for efficiency the effectiveness of the proposed method is demonstrated using noniid data including the toy data image data and speech data strengths 1 as far as i know applying a gan to noniid federated data is not actively studied in the previous studies and is an interesting research topic 2 the proposed method is solid and mathematically grounded the detailed derivatives are also provided in the supplementary materials 3 the effectiveness of the proposed method is demonstrated using various data including toy data image data and speech data weaknesses 1 improving the calculation speed of ep is included in the main contribution however its validity is not empirically demonstrated at the bottom of section 41 the difference in the calculation order is discussed however i cannot understand how significant the difference is in practice in particular i am curious whether the calculation in ep is dominant in the total framework including the gan training 2 it seems that in the baseline models simple parameter averaging is applied to all the layers when fedavg is used however it can be easily improved by introducing clientspecific parameters using a conditional module eg conditional batch normalization which is used in a typical conditional gan the comparison with such a baseline is interesting 3 in practice it is assumed that the number of clients is considerably large however in the experiments the number of clients is relatively small the order of 10 therefore in the current manuscript i consider that the effectiveness in a practical setting is not sufficiently demonstrated although some benchmark performance is provided 4 in appendix f34 a statistical significance test on the inception score is provided however a statistical significance test on the fid is not presented in the current manuscript if the authors intend to emphasize the utility of mixtureepprobgan this test also should be conducted this paper addresses an interesting problem and proposes a reliable method that is mathematically grounded however i still have some questions regarding the experimental evaluation i expect that the authors clarify them in the rebuttal docsepthis paper proposes a method for learning generated adversarial networks gans for noniid data in a federated learning setting this is accomplished with the use of a partitionaware prior via an expectation propagation ep algorithm embedded into a bayesian gan setting additionally it proposes a closedform solution for ep updates aiding efficient federated learning the claims are substantiated with experiments on both synthetic and real data the paper is well written and the use of ep prior to this purpose is a fine idea however i would like the authors to address the following questions 1 while figure 1 is a good way to motivate the problem it would be good to supplement all figures with some quantifiable metrics at least for a few ones it is difficult otherwise to ascertain the gains 2 in the related work section the two large paragraphs on federated learning and bayesian gans seem to be disconnected it would be good to have 
a connecting paragraph would be good 3 the small paragraph on federated learning and deferring it to the appendix is not a good idea some details on fedlearning here will be apt 4 i am not an expert on ep but i am curious why cant the approximated factor come from a nonexponential family 5 related to the previous question why are gaussian distributions used for approximate factors what bearing does this have on the entire method 6 i didnt quite follow the need for a sigmoid in eq 5 7 is this method generic enough to be applied for other bayesian gan settings beyond the ones considered here 8 the other related question is can this epprior help in the case of iid data as well 9 the other important question i have is it is not clear from the paper why should the proposed method aid in handing iid data in the federated learning setting there is empirical evidence but can there be a more principled way of describing the same 10 it would be good to include an iid case too in table 1 if possible 11 more baselines could be added for the speech experiment in addition the task is not well defined in section 53 this paper addresses a very useful problem of federated gan learning for noniid data it is moderately novel to be considered for publication in iclr docsepthe goal of this paper is to train a bayesian gan on noniid federated data specifically the authors propose to adopt the newlyintroduced expectation propagation ep prior being partitioninvariant to address the noniid federated challenge experiments on synthetic and real datasets are conducted the writing should be improved significantly the current manuscript is quite challenging to understand whats really going on in the introduction how and where do you identify the mode collapse problem of bayesian gans under noniid crosssilo unsupervised learning scenarios whats the novelty over existing methods for training gan under federated learning settings eg 12 in the paragraph before eq 1 qboldsymbol theta is not a distribution ie its not normalized right similar questions for the following qeptboldsymbol thetag in eq 5 how will you train the auxiliary neural network f in federated learning settings also in eq 5 it seems one should specify an ffunction for each theta parameter of the gan generator how expensive is the proposed method both in space and time the notations starting from eq 5 are quite confusing eqs 67 are not easy to follow in the paragraph following eq 14 i cannot see clearly why theorem 41 shows that we are able to analytically approximate the prior of the global data distribution with the datasets stored on different clients while following the crosssilo federated learning settings the notation jg is not defined in eq 16 1 fedgan federated generative adversarial networks for distributed data 2 training federated gans with theoretical guarantees a universal aggregation approach the writing should be improved significantly the current manuscript is quite challenging to understand the contents in section 41 ie the main novelties are challenging to follow important comparisons with existing federatedgan training methods are believed missing docsepthe authors targeted federated generative modelling in an unsupervised setting specifically the work is built on top of bayesian gans in order to aggregate the information from different clients the authors proposed to use expectation propagation ep it makes sense despite being a well established bayesian inference algorithm ep operating on the neural network parameters can suffer from 
intractability the authors presented a low complexity solution the experiment results showed improved fid and is over multiple baseline methods however the overall performance is quite poor on the rather simple dataset cifar10 owing to the scalability issue of bayesian models federated learning with noniid data partitions is particular challenging as naive averaging does not work well ep offers an information aggregation framework which can deal with different data partition styles in a unified way in order to apply ep it requires to have bayesian gan as the basis model at each client while bayesian inference is a powerful framework the scalability issue of bayesian gans can potentially hinder the use of epgan for instance the oracle baseline model still has fid on cifar10 above 25 while the best performing gan on cifar10 is below 6 moreover the gap can potentially become even larger when the resolution is higher on the algorithm side eq 5 lacks a justification on why it suffices the quality of likelihood modelling besides being simple and thus permitting the closedform ep update for epprobgan how does the newly introduced ep prior affect the guarantee claimed by probgan in the experiment part despite being introduced in the text the performance of the baseline model bayesgan 2 was actually not reported in table 1 furthermore as reported by the authors of probgan the ns loss outperformed wasserstein distance and ls for both probgan and bayesian gan however the table 1 did not consider the top performing case also only having one natural image dataset is probably not enough eg probgan also considered stl10 and imagenet furthermore epgan variants on iid n2 outperform oracle what is the potential reason behind overall the problem is trending challenging and highly relevant within the bayesian framework the use of ep for federated learning is reasonable of course complexity remains as a critical issue the authors proposed some lowcomplexity solution and empirically showed the benefits of using ep over existing schemes which however are not really developed for noniid scenarios furthermore the baseline models do not take the top performance configuration which lead to my general concern on how strong the baselines are on the algorithm side bayesian models need to deal with priors regardless of ep therefore having or adding an ep prior in bayesian models seems to be straightforward the closedform update is definitely interesting but the authors shall empirically analyse its fidelity ### Summary:
this paper presents a bayesian gan approach designed for a federated learning setting in contrast to recent bayesian gan approaches that use gaussian priors or iterativelyupdated priors on gan parameters this paper proposes a more complex prior motivated by expectation propagation dubbed epgan and uses this formulation to construct a federated gan the paper claims that this prior better captures the multimodal distribution structure of the noniid heterogeneous data across the different clients the paper looks at an interesting problem ie federated training of gans which is indeed a problem that has received a lot of interest lately the paper received mixed reviews the reviewers raised several concerns some of which included 1 weak baselines 2 not considering what happens when we switch to more advanced gan models 3 performance of the approach when the number of clients is large and 4 lack of clarity in the presentation the authors responded to some of these concerns and it is commendable that they reported some additional results during the discussion phase however after an extensive discussion among the reviewers and between reviewers and authors and after my own reading of the manuscript concerns still linger over many of the abovementioned points another concern is the overly complex nature of the approach as compared to other recent federated gan approaches which raises the question as to whether the actual improvements warrant the complexity of the proposed approach from the reported experiments the improvements appear to be rather slim considering these aspects unfortunately the paper in its current shape does not seem ready for acceptance the authors are advised to consider the feedback from the reviewers which will strengthen the paper for a future submission
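Since the discussion above centers on the expectation-propagation prior, the sketch below shows the generic EP bookkeeping that makes such an aggregation attractive for federated settings: with diagonal Gaussian site factors, combining clients amounts to adding natural parameters, and each client refines its own site against the cavity distribution. This is a textbook EP skeleton, not the paper's closed-form update for GAN parameters; `local_fit` is an assumed routine that approximates the tilted distribution, e.g. by moment matching on the client's local data.

```python
import numpy as np

class GaussianSite:
    """Per-client Gaussian factor in natural parameters (diagonal precision, precision*mean)."""
    def __init__(self, dim):
        self.lam = np.zeros(dim)   # site precision
        self.eta = np.zeros(dim)   # site precision-weighted mean

def global_approx(sites, prior_lam=1.0):
    # product of a N(0, prior_lam^-1 I) base prior and all client sites: natural parameters add
    lam = prior_lam + sum(s.lam for s in sites)
    eta = sum(s.eta for s in sites)
    return lam, eta / lam          # global precision and mean

def ep_client_sweep(site, sites, local_fit, prior_lam=1.0):
    lam, mean = global_approx(sites, prior_lam)
    cav_lam = lam - site.lam                 # cavity: remove this client's contribution
    cav_eta = lam * mean - site.eta
    # local_fit (assumed) returns a Gaussian (precision, mean) approximating
    # cavity * local likelihood for this client's data
    tilt_lam, tilt_mean = local_fit(cav_lam, cav_eta / cav_lam)
    site.lam = tilt_lam - cav_lam            # new site = tilted fit divided by the cavity
    site.eta = tilt_lam * tilt_mean - cav_eta
```

Because the combination step is just an addition of per-client natural parameters, it does not depend on how the data is partitioned across clients, which is the partition-invariance property the reviews mention.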
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the present paper proposes a fast approximation to the softmax computation when the number of classes is very large this is typically a bottleneck in deep learning architectures the approximation is a sparse twolayer mixture of experts the paper lacks rigor and the writing is of low quality both in its clarity and its grammar see a list of typos below an example of lack of mathematical rigor is equation 4 in which the same variable name is used to describe the weights before and after pruning as if it was computer code instead of an equation also pervasive is the use of the asterisk to denote multiplication again as if it was code and not math algorithm 1 does not include mitosis which may have an effect on the resulting approximation how are the lambda and threshold parameters tuned the authors mention a validation set are they just exhaustively explored on a 3d grid on the validation set the results only compare with shim et al why only this method why would it be expected to be faster than all the other alternatives wouldnt similar alternatives like the sparsely gated moe dsoftmax and adaptivesoftmax have chances of being faster the column flops in the result seems to measure the speedup whereas the actual flops should be less when the speed increases also a 1x label seems to be missing in for the full softmax so that the reference is clearly specified all in all the results show that the proposed method provides a significant speedup with respect to shim et al but it lacks comparison with other methods in the literature a brief list of typos sparse mixture of sparse of sparse experts if we only search right answer it might also like appear which is to design to choose the right sparsly will only consists partial with is a lasso threshold an arbitrarily distance function each 10 sub classes are belonged to one is also needed to tune to achievedocsepthe paper proposes doubly sparse which is a sparse mixture of sparse experts and learns a twolevel class hierarchy for efficient softmax inference it reduces computational cost compared to full softmax ablation study is done for group lasso expert lasso and load balancing which help understand the effect of different components of the proposed it seems to me the motivation is similar to that of sparselygated moe shazeer et al 2017 but it is not clear how the proposed twohierarchy method is superior to the sparselygated moe it would be helpful the paper discuss more about this besides in evaluation the paper only compares doubly sparse with full softmax why not compare with sparselygated moe overall i think this paper is below the borderline of acceptance due to insufficient comparison with sparselygated moe docsepin this paper the authors introduce a new technique for softmax inference in a multiclass setting the idea is to take the output of a nn and turn it into a gating function to choose one expert then given the expert output a particular category the first level of sparsity comes from the first expert the second level of sparsity comes from every expert only outputting a limited set of output categories the paper is easy to understand but several sections starting from section 2 could use an english language review eg search right search for the right predict next word predict the next word in section 3 can you be more specific about the gains in training versus inference time i believe the results all relate to inference 
but it would be good to get an overview of the impact of training time as well you motivate some of the work by the fact that the experts have overlapping outputs maybe in section 37 you can address how often that occurs as well nits it wasnt clear how the sparsity percentage on page 3 was defined can you motivate why you are not using perplexity in section 32 ### Summary:
this work proposes a new approximation method for softmax layers with a large number of classes the idea is to use a sparse twolayer mixture of experts this approach successfully reduces the computation required on the ptb and wiki2 datasets which have up to 32k classes however the reviewers argue that the work lacks relevant baselines such as dsoftmax and adaptivesoftmax the authors argue that they focus on training and not inference and should do worse but this should be substantiated in the paper by actual experimental results
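To make the two-level structure discussed above concrete, the sketch below shows inference with a sparse mixture of sparse experts: a gate picks a single expert, and that expert scores only the small subset of classes assigned to it, so the softmax is computed over far fewer logits than the full vocabulary. The gate weights, expert weights, and class assignments are assumed to come from training (e.g. with the group-lasso and load-balancing terms the reviews mention); this fragment only illustrates why inference becomes cheap.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def doubly_sparse_predict(h, gate_W, expert_Ws, expert_classes):
    """h: hidden state (d,); gate_W: (n_experts, d); expert_Ws[k]: (n_classes_k, d);
    expert_classes[k]: global class ids owned by expert k (hypothetical layout)."""
    k = int(np.argmax(gate_W @ h))            # level 1: pick a single expert
    probs = softmax(expert_Ws[k] @ h)         # level 2: softmax over that expert's class subset only
    local = int(np.argmax(probs))
    return expert_classes[k][local], probs    # map back to a global class id

# illustrative usage with made-up shapes: 8 experts, 1000 classes split evenly, d = 64
d, n_experts, n_classes = 64, 8, 1000
gate_W = np.random.randn(n_experts, d)
expert_classes = np.split(np.arange(n_classes), n_experts)
expert_Ws = [np.random.randn(len(c), d) for c in expert_classes]
pred, probs = doubly_sparse_predict(np.random.randn(d), gate_W, expert_Ws, expert_classes)
```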
Below is given a review of a research paper from a conference journal. Please write a summary of the review. ### Review: this submission considers a good problem but contributes little the critical aspect of establishing the convergence property for overparameterized implicit networks is to show the nonsingularity of the feature matrix $z$ which is the fixed point of the nonlinear equation $z = \sigma(Az + \phi(x))$ since we treat the final output as $y = Wz$ this is a challenging and open problem for the community of theoretical implicit models however the submission considers a different output $\hat{y} = Uz + V\phi(x)$ hence there is no difficulty and it is meaningless to get the smallest singular values as $\theta = \lambda_0/2$ which is the same as for previous overparameterized explicit networks built on $\phi(x)$ and cannot show any difference between implicit and explicit dnns unfortunately the submission just obtains the results in this way 1 the only difference between this submission and the previous works on explicit dnn convergence in the sense of the proof roadmap is the additional proof for the existence of a fixed point at initialization however constructing a contraction operator which guarantees the existence of the fixed point is not a complex task we can even guarantee the wellposedness by setting $A_0 = 0$ in fact as the authors discussed in the submission we need to prove that the operator $\sigma(A\,\cdot\, + \phi(x))$ is a contraction during training rather than only at initialization for guaranteeing this the scaling factor may depend on other terms such as the step size rather than only $m$ since we need to bound the difference $A_t - A_0$ the authors need to deal with the existence more carefully 2 the convergence speed is the same as in the previous works hence it further verifies that the convergence guarantee comes from the explicit additional term $V\phi(x)$ a twolayer overparameterized relu dnn instead of the implicit feature $z$ a straightforward guess is that all the results still hold when we set $A = 0$ or set $U = 0$ or even drop $z$ ie $\hat{y} = V\phi(x)$ 3 when proving the nonsingularity of $H$ the submission says that it utilizes a different data assumption that no two data points are parallel to each other however the same setting and almost the same linear convergence results are given in [1] 4 more importantly the current convergence guarantees for overparameterized dnns can be divided into two categories in the sense of activation settings relu and sufficiently smooth activation functions for proving the pl inequality one relies on the smoothness of the activation to provide the lower bound while the other proves that the fraction of flipped relu activation patterns is small so that the overall bound can hold during training confusingly this submission mixes these two roadmaps and utilizes the routine for a smooth activation function in the relu setting which may cause problems for the conclusions of some auxiliary lemmas [1] gradient descent provably optimizes overparameterized neural networks the submission considers an important problem but heavily relies on the results of previous work docsepthis paper theoretically analyzes the optimization of deep relu implicit networks it first shows the wellposedness of the problem ie the existence and uniqueness of the equilibrium point then proves that under overparameterization both continuous and discrete gd have global convergence at a linear rate and the approach is similar to the standard proof for dnns to be honest i am not familiar with either the theory or the applications of implicit networks it seems that they are empirically successful but lack theoretical understanding then i think this paper provides a good starting point under the
form 1 and 2 the paper first shows the existence of the equilibrium point given a is bounded then the proof of the convergence is similar to dnn 1 write down the dynamics 15 and show that one of the terms in h has lowerbounded eigenvalues the following calculation heavily relies on the form 2 of phi is this commonly used in applications 2 for sufficiently large m overparameterization the random initialization g0 is close to the infinitewide ginfty 3 the lower bound of gt gives the linear convergence rate then the fast convergence indeed guarantees that gt is not far from g0 during the trajectory despite that the approach is sort of standard and the dynamics seems to be simpler than dnn since all the layers share the same weights the proof is not trivial and the theorem is good as it gives the first theoretical optimization result for implicit networks the paper also implements numerical experiments on several standard image dataset to show the effectiveness of the implicit networks ps thank the authors for the detailed response the paper proves the convergence of the optimizing nonlinear implicit networks the proof techniques follow the standard approach for dnn and i think it is a good starting point for the theoretical analysis of implicit networks docsepthe paper presents a proof of exponential convergence to global optimality in the overparametrization settings for an implicit model with scaled weights parameters although existing work has established similar proofs for feedforward explicit neural networks such methods dont work with nonlinearly activated implicit models where the wellposedness issue poses challenges to the training process the authors shows that by scaling the weights wellposedness can be ensured the convergence result is obtained first on continuous settings and is then extended to discrete settings numerical experiments on real datasets confirms the finding strength the paper studies the very important problem of convergence of training for implicit models the problem is nontrivial even given recent advances in relavent proofs for explicit forwardfeeding because of the wellposedness issue in implicit models which presents because implicit models can be seen as infinitely deep neural networks the authors show that by puting a proper simple scaling factor to the weights the wellposedness property can be maintained throughout the training process with no extra regularization or projection steps this enables the proof of training convergence for implicit models thorough mathematical proofs for both the continuous setting and the practical discrete setting are given in the paper to support the results which are then varified by numerical experiments weekness there is a typo in the notations section i suppose it is lambdamaxa a since a is not assumed to be positive semidefinite the paper sets the foundation for the training theories for implicit models though some common techniques are employed to in the derivations the authors successfully tackle the key issue of wellposedness to make the convergence result possible the reviewer believes this result is significant for implicit models which have become increasingly popular in the community docsep in this paper the authors theoretically analyze the convergence of gradient descent for an implicit neural network with infinite layers with relu activation the authors show the unique fixed point of the infinitelayered mapping when the weight matrix boldsymbola has a properly bounded spectral norm using implicit differentiation 
the authors show the partial gradient at the fixed point furthermore the authors show the linear convergence rate by proving the strict positive definiteness of the gram matrices $\boldsymbol{G}(t)$ and $\boldsymbol{H}(t)$ pros 1 this paper makes a clear contribution to proving the convergence of gradient descent for an implicit neural network with relu activation with infinite layers and finite width i think that using implicit differentiation for the partial gradient at the fixed point is interesting enabling the proof of convergence by showing the strict positive definiteness of the gram matrix $\boldsymbol{G}(t)$ 2 to ensure the strict positive definiteness of the gram matrix $\boldsymbol{G}(t)$ the required number $m = \tilde\Omega(n^2)$ is competitive with or better than recent results for finitelayered neural networks in addition the results in this paper hold for infinite layers 3 the paper is well organized and clearly written cons 1 the gradients $\nabla_{\boldsymbol{A}} L$ and $\nabla_{\boldsymbol{U}} L$ involve the equilibrium point $\boldsymbol{z}$ however it is not easy to obtain the equilibrium point explicitly how do we compute the gradient for training does it need an approximation or a solver for the equilibrium point it seems to me that a solver demands a high time cost does it scale to a largescale problem it would be interesting to discuss the relationship advantages and disadvantages compared with neural networks with an explicit proximal mapping architecture eg lyu et al 2021 has a similar nn architecture $\boldsymbol{y}_{t+1} = h(\boldsymbol{D}_t^\top \boldsymbol{x} + (\boldsymbol{I} - \boldsymbol{D}_t^\top \boldsymbol{D}_t)\boldsymbol{y}_t)$ with $\boldsymbol{y}_0 = \boldsymbol{0}$ when sharing the weight $\boldsymbol{D}_t = \boldsymbol{D}$ and setting $\tilde\gamma \boldsymbol{A} = \boldsymbol{I} - \boldsymbol{D}^\top \boldsymbol{D}$ it seems to be a finitestep updated nn instead of the fixed point $\boldsymbol{z}$ in eq 3 2 the relu function $f(x) = \max(0, x)$ is not differentiable at the point $x = 0$ how does this influence the continuoustime ode analysis for the linear convergence minor typos in the proof of lemma 22 in appendix a1 it should be $\sigma(\tilde\gamma \boldsymbol{A} \boldsymbol{z}^{l-1} + \phi)$ instead of $\sigma(\tilde\gamma \boldsymbol{A} \boldsymbol{z}^{l-1} + \phi)$ lyu et al neural optimization kernel towards robust deep learning overall i think this paper makes a clear contribution to proving the convergence of gradient descent for an implicit neural network with relu activation with infinite layers so i recommend acceptance ### Summary:
this paper shows gradient flow of relu activated implicit networks converges to a global minimum at a linear rate for the square loss when the implicit neural network is overparameterized while the analyses follow the existing ntktype analyses and there are disagreements among reviewers on the novelty of this paper the meta reviewer values new theoretical results on newly emerging settings such as implicit neural networks and thus decides to recommend acceptance
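as a rough illustration of the scaledweight implicit layer discussed in the example above, a minimal numpy sketch is given below. it assumes a relu activation and rescales the weight matrix so that the fixedpoint map is a contraction, which is what guarantees a unique equilibrium. the variable names, sizes and the contraction factor gamma are placeholders chosen for illustration, not values taken from the paper.

```python
# minimal sketch (not the paper's code): a well-posed implicit layer
# z* = relu(gamma_tilde * A @ z* + B @ x), with A rescaled so the map is a contraction.
import numpy as np

rng = np.random.default_rng(0)

def relu(v):
    return np.maximum(v, 0.0)

def implicit_layer(A, B, x, gamma=0.9, tol=1e-8, max_iter=1000):
    # rescale A so the linear part has spectral norm gamma < 1; since relu is
    # 1-Lipschitz, the composed map is a contraction and the fixed point is unique
    scale = gamma / max(np.linalg.norm(A, 2), 1e-12)
    z = np.zeros(A.shape[0])
    for _ in range(max_iter):
        z_next = relu(scale * A @ z + B @ x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

A = rng.normal(size=(16, 16))
B = rng.normal(size=(16, 8))
x = rng.normal(size=8)
print(implicit_layer(A, B, x).shape)  # (16,)
```

in the setting discussed above the gradient with respect to the weights at the equilibrium would then be obtained through implicit differentiation rather than by backpropagating through the fixedpoint iterations.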
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: overall comments this paper propose a principled method to generate hard positive and negative samples based on conditional likelihood for contrastive learning of seq2seq models and it shows significant improvements in training conditional text generation tasks compared to nave approach with random negative samples overall the idea is interesting and the experiments are wellconducted however i still have some detailed questions regarding to the method and experiment as follows methods 1 i am a bit confused with eq2 what is bfm do you mean xi is the source sentence yi is the corresponding target sentence is it meaningful to match the hidden representation between source and target sentence especially for tasks such as summarization also training with eq2 did not involve any decoding process nor supervising how to decode a sentence some form of mle training also noted in eq 9 seems to be unavoidable which in some sense still relies on teacher forcing 2 the proposed method to create positivenegative examples is related to virtual adversarial training vat in nlp miyato takeru andrew m dai and ian goodfellow adversarial training methods for semisupervised text classification arxiv preprint arxiv160507725 2016 it would be nice to include for discussion or comparison 3 for sec 33 34 a how do we know the perturbed hidden states bfh still lay in the manifold of valid sentences it is possible the hidden states may not be corresponded to any sentences b using the conditional likelihood over the original target sentence to measure the negative samples may also be misleading for example it is also possible to get a very different sentence with the same semantic meanings with the target sentence c what is haty and bary in eq 6 and 7 are they different target sentence where are they from as the proposed methods did not seem to include decoding experiments 1 it seems that all experiments are initialized with t5 does it mean that the proposed method only works with large scale pretraining it would be more important to show results with training from scratch 2 the results on wmt16 roen do not seem to be too low especially with t5 pretraining which makes the improvement difficult to tell 3 for many tasks the improvements of the proposed method are actually marginal it may improve the paper by include discussion of statistical significance 4 there are also methods such as reinforcement learning which also aims to overcome the problem of teacher forcing it should be also discussed in experiments docsepthis paper presents a method for conditional text generation tasks that aims to over the exposure bias problem through contrastive learning where negative examples are generated by adding small perturbations to the input sequence to minimize its conditional likelihood and positive examples are generated by adding large perturbations while enforcing it to have a high conditional likelihood experimental results on machine translation text summarization and question generation show the effectiveness of the proposed approach my only concern is that compare to mle the improvements either on table 1 or on table 2 are relative small the study in the paper by massimo caccia lucas caccia william fedus hugo larochelle joelle pineau laurent charlin language gans falling short iclr 2020 shows that the exposure bias problem for text generation by mle appears to be less of an issue and simple temperature sweep in 
the softmax significantly boosts the performance and gives pretty good results that beat all language gans so i think in the experiments all results should be compared using the trick of temperature sweep moreover if diversity is an issue the results should be compared in the qualitydiversity space as did in language gans falling short paper hopefully the authors can address my concern in the rebuttal period docsep paper summary this paper proposes to add contrastive learning to the sequencetosequence generation problem more specifically the authors apply a contrastive loss on the globally pooled hidden representation of the generated hidden states the key novelty is to apply adversarial gradients to obtain both hard negative and hard positive examples the proposed method can improve a stateofart pretrained transformer model t5 on 3 tasks machine translation wmt16 enro abstractive summarization xsum and question generation squad overall review although the proposed method seems to be effective and new the concerns outweighs the contributions in my opinion i am leaning towards rejection for now please try to address my concerns during the rebuttal period pros the idea of using adversarial gradients to generate hard negativepositive is novel at least for contrastive learning and sequence generation problems improvement is demonstrated on a strong pretrained transformer model t5 this method is experimented on 3 tasks and could possibly be extended to any seq2seq generation cons the sdfgclaim of solving the exposure bias is somewhat exaggerated the proposed method is somewhat straightforward and lacking theoretical insightsguarantees the method is only applied on a small version of t5 which is limited how about other pretrained potentially larger models how about nonpretrained models such as randomly initialized transformerslstms detailed review the authors claimed to mitigate the exposure bias problem for sequence generation however the original exposure bias problem refers to not seeing incorrectly generated text tokens as training input which leads to traintest mismatch in this work the model does not see any selfgenerated negative tokens as input but only pooled adversarial hidden states it does not mitigate traintest mismatch at all therefore the current presentation may be misleading it might benefit the paper to also compare and contrast to adversarial training for nlu such as smartfreelb moreover this work does not provide new theoretical insights the hard negativespositives do not have theoretical guarantee it is not clear to me why eqn 6 7 will be a distant positive if g f then h hbar moreover in mt and sum adversarial step size eta and epsilon are set the same this is inconsistent with the intuition of near negative and distant positive claimed in the paper other questions suggestions in eqn 2 why not use 2layer mlp as in simclr in experiments maybe add a claps wo negative so that readers would know which is more important why not train as how you generate table 3 example this will then better solve the traintest mismatch exposure bias although maybe at a cost of slow training some human evaluation on a larger set of generated examples would help for example how many hard negatives are actually being recognized as negative by human docsep summary proposes contrastive learning method for conditional textgeneration here we maximize similarity of representations between source and target sequences positive while minimizing similarity with false targets negative additional positives and 
negatives are created in the sequence representation space by adding perturbations to decoder output hidden states to minimizemaximize conditional likelihood pyx it is shown this works a lot better than the naive contrastive approach of sampling random nontarget sequences the full model is based on t5small raffel et al and combines contrastive objective with regular mle objective by simple addition modest improvements over t5small are observed on translation summarization and questiongeneration seq2seq tasks pros 1 diversity of seq2seq tasks with consistent improvements over baseline t5mle small 2 possibly improves the exposure bias issue of regular mle seq2seq training 3 complementary to seq2seq mle training and can be used to improve it in general not just text generation cons 1 the improvements are consistent but appear to be modest it is unclear whether the improvements would persist on the larger t5 model sizes would it be possible to study this eg medium size 2 please add sota results in the tables for the various tasks for reference 3 please discuss effect on traininginference speed 4 since this is generation more noncherrypicked example decodes would be informative to have in the appendix 5 even better would be some basic human evaluation of generated outputs to verify whether meaningful quality improvements are made 6 scheduled sampling bengio et al should be discussed and perhaps compared as it is a wellknown method for addressing exposure bias 7 should discuss relationship to virtual adversarial training miyato et al clarifications 1 are all the models initialized with t5mle or are they trained from scratch on c4 for the same number of steps as t5mle ### Summary:
this paper proposes a new method for conditional text generation that uses contrastive learning to mitigate the exposure bias problem in order to improve the performance specifically negative examples are generated by adding small perturbations to the input sequence to minimize its conditional likelihood while positive examples are generated by adding large perturbations while enforcing it to have a high conditional likelihood this paper receives 2 reject and 2 accept recommendations which is a borderline case the reviewers have raised many useful questions during the review process while the authors have also done a good job during the rebuttal to address the concerns after checking the paper and all the discussions the ac feels that all the major concerns have been solved such as more clarification in the paper more results on nonpretrained models and smallscale human evaluation on one hand reviewers found that the proposed method is interesting and novel to a certain extent the paper is also well written on the other hand even after adding all the additional results the reviewers still feel it is not superclear that results would extend to better models as most of the experiments are conducted on t5small and the final reported numbers in the paper are far from sota as shown in tables 1 and 2 the ac agrees that the final results are far from sota and the authors should probably also study the incorporation of claps into stronger backbones on the other hand the ac also thinks that t5 is already a relatively strong baseline to start with though it is t5small and it may not be necessary to chase sota under a fair comparison the ac thinks that the authors have done a good job at demonstrating its improvements over t5mle baselines as a summary the ac thinks that the authors have done a good job during the rebuttal on balance the ac is happy to recommend acceptance of the paper the authors should add more careful discussions to reflect the reviewers comments when preparing the camera ready
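to make the contrastive mechanism discussed in the example above more concrete, here is a minimal pytorch sketch of an inbatch contrastive loss over pooled sourcetarget representations together with a gradientbased perturbation that produces a harder negative. for simplicity the perturbation objective here is the contrastive loss itself, whereas the paper ties the perturbation to the conditional likelihood; all names, shapes and the step size eps are illustrative assumptions.

```python
# minimal sketch (not the paper's code): contrastive loss on pooled sequence
# representations with in-batch negatives, plus a gradient-crafted hard negative.
import torch
import torch.nn.functional as F

def contrastive_loss(src_repr, tgt_repr, temperature=0.1):
    # src_repr, tgt_repr: (batch, dim) pooled hidden states; row i of each is a pair
    src = F.normalize(src_repr, dim=-1)
    tgt = F.normalize(tgt_repr, dim=-1)
    logits = src @ tgt.t() / temperature              # (batch, batch) similarities
    labels = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, labels)

def hard_negative(src_repr, tgt_repr, eps=1.0):
    # perturb the target representation in the direction that increases the loss,
    # i.e. away from its matching source (step size eps is an illustrative choice)
    tgt = tgt_repr.detach().clone().requires_grad_(True)
    loss = contrastive_loss(src_repr.detach(), tgt)
    (grad,) = torch.autograd.grad(loss, tgt)
    step = grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    return (tgt + eps * step).detach()

src = torch.randn(8, 64)
tgt = torch.randn(8, 64)
neg = hard_negative(src, tgt)
print(contrastive_loss(src, tgt).item(), contrastive_loss(src, neg).item())
```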
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the proposed method can be a very useful tool to identify in an unsupervised fashion similar characteristics in groups of nuclei and can serve as an important tool in potentially defining morphologydriven ground truths 1 the evaluation of the presented methodology is for the most part qualitative in nature various similarity evaluation metrics have been proposed in computer vision that can be leveraged here 2 how do the authors propose to evaluate an unseen patch and fit it to one of the n identified clusters 3 though the task is mostly motivated by lack of enough training samples it is not clear if the identified groups are actually driven by the underlying nuclei morphology or some other characteristics additionally how important are these features in the actual diagnostic process docsepthe proposed method is well described and easy to follow in the experiments the authors used various types of tumor which present that method can be applied for many organscancer types this method can be used for future research on cell biology cancer biology the main weakness is a limited novelty but the paper is interesting and can be useful for the research community other comments 1 the images presented on figures are significantly too small nothing is visible 2 captions used on figures should be readable now they are too small docsep the proposed approach is novel and can be very helpful for the research community in the field the paper is wellwritten and easy to follow in all parts stateoftheart methods were used in different parts of the proposed method for segmentation embedding and clustering the results were only qualitatively analysed it would be interesting to investigate the performance of the presented method on the publicly available datasets that provide nuclei segmentation and classification masks examples of such datasets set can be found below monusac dataset httpsmonusac2020grandchallengeorg consep dataset httpswarwickacukfaccrossfactiadatahovernet ### Summary:
three knowledgeable reviewers recommend accept and maintained their rating after the rebuttal and discussion all of them agreed that the paper will be very interesting for the research community in the field the authors addressed the points raised by the reviewers during the discussion and updated their manuscript moreover the authors said that they will share code and trained model for their submission i think that this paper will be a good contribution to midl 2021 authors should address the main points in the reviews when preparing a final version
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper argues that the main reason or a good reason for the meaningfulness of a gradient with data manifold the authors perform a set of controlled experiments with different feature attribution methods finally they theoretically show that alignment of the gradient with data manifold has nothing to do with generalizability the main question in the paper is interesting does explainability has something to do with alignment of the explanation map with data manifold the papers central claim about meaningfulness is not quantifiable at least with experiments in this paper and as a result not falsifiable the figures shown in the paper can only show that the gradientbased explanation is more aligned with data manifold than random but the main argument about the meaningfulness of a method vs another method is very subjective also there is no clear trend between different gradientbased explanation methods perhaps the only trend is that explanation methods are more aligned with data manifold than the gradient which is interesting but hardly conclusive about the papers central claim the authors observe the alignment increases and decreases with epochs this phenomenon can happen for many reasons and i am not sure how it has anything to do with explainability perhaps the most exciting part of the paper is the negative theoretical results at the end alignment with data manifold has nothing to do with generalizability however this theory has not much to do with the central claim the paper reads a set of subjective observations about the meaningfulness of explanation and relationship with data manifold tangential theory the paper does not have a coherent story however the central question is interesting docsepthis paper studies the following hypothesis gradientbased explanations are more meaningful the more they are aligned with the tangent space of the data manifold the work has three contributions i an autoencoderbased approach to estimate data manifolds of five datasets in order to evaluate the alignment between explanations and tangent space of the data ii analysis of the alignment between explanations and tangent space during training iii theoretical analysis to show that generalization does not imply alignment with data manifold strengths the paper is wellwritten the empirical and theoretical results are easytofollow the evaluation metric and data manifold construction are clearly explained the hypothesis is wellposed and relevant to the highlevel problem of evaluation metrics for instancespecific explanation methods the generative approach to create datasets with a completely known manifold structure is interesting by training a new model on this dataset with known manifold this approach sidesteps the possible mismatch between the true manifold and the estimated manifold it would be great to have some discussion on algorithm 1 tangent space computation with this approach the discussion of why a reconstructive approach is needed for highdimensional datasets kd argument is insightful as well the experiments on realworld datasets are quite thorough section 3 evaluates four different gradientbased methods and the random vector baseline on five realworld datasets using multiple architectures the results consistently show that raw gradients are worse than gradientbased methods such as smoothgrad also figure 2 clearly shows how the outofmanifold component of explanations looks less 
meaningful than the onmanifold component of explanations the experiment on the fraction of gradient in tangent over the course of training is novel and interesting the observation that the fraction of gradient in tangent space increases rapidly and then slightly deteriorates is quite surprising however it would be good to sanitycheck whether this phenomenon holds on larger nonmnist datasets weaknesses in order of significance insufficient evaluation of explanation meaningfulnesscorrectnessquality to test the proposed hypothesis it is necessary to test whether explanations that are better aligned with the data manifold are more meaningful while there are multiple experiments to test data manifold alignment the paper uses qualitative visual inspection to evaluate explanation meaningfulnesscorrectnessquality it is now well known that qualitative visual inspection is subjective and misleading explanation methods that are known to output visually sharp saliency maps often fail basic sanity checks r1 instead of visual assessment evaluation metrics such as roar r2 not cited and diffroar r3 can be used to quantitatively test and compare the correctness quality of explanation methods the current evaluation visavis explanation quality is insufficient to reliably test the hypothesis limited novelty first r4 not cited and r5 cited but not in this context use generative models such as vaegans to obtain a learned data manifold in order to evaluate whether gradientbased adversarial perturbations and raw gradients resp are close to the data manifold tangent space second similar to results in section 5 r5 show that adversarially robust models raw gradients are better aligned with the data manifold r3 show that robust models raw gradients have better explanation quality section 4 when and why are gradients aligned with the data manifold shows that i adversarial training improves alignment between explanations and data manifold and ii evaluates the effect of training with random labels however the section title is misleading because it does not study the why aspect for example there is no discussion on why adversarial training improves alignment section 5 generalization does not imply alignment does not justify the choice of the dataset or 1dimensional manifold design that is used in the theoretical analysis what is the design principle behind this synthetic dataset is it representative to some extent of natural data distributions real datasets considered in previous sections clarifications and questions why is the hypothesis restricted to gradientbased explanations can explanations not based on gradients eg occlusionbased saliency maps be meaningful if they are orthogonal to the data manifold the results in section 4 suggest that integrated gradients and input times gradient are better than raw gradients as they are better aligned with the tangent space this seems to possibly contradict previous findings r1r6 that show that unlike raw gradients integrated gradients and input x gradients fail basic sanity checks if a gradientbased explanation approximately lies in tangent spacecontribute to prediction section 1 this statement is a bit unclear based on how i understood it i am not sure that it is fully correct if an explanation approximately lies in the tangent space it may still lack fidelity wrt the prediction rules learned by the model for example it is possible that an explanation that lies in the tangent space can highlight some component eg texture of object in image that is different from the 
components eg shape and location of object of the image that the model employs for its predictions r1 adebayo j gilmer j muelly m goodfellow i hardt m and kim b 2018 sanity checks for saliency maps arxiv preprint arxiv181003292 r2 hooker s erhan d kindermans pj and kim b 2018 a benchmark for interpretability methods in deep neural networks arxiv preprint arxiv180610758 r3 shah h jain p and netrapalli p 2021 do input gradients highlight discriminative features arxiv preprint arxiv210212781 r4 stutz d hein m and schiele b 2019 disentangling adversarial robustness and generalization in proceedings of the ieeecvf conference on computer vision and pattern recognition pp 69766987 r5 kim b seo j and jeon t 2019 bridging adversarial robustness and gradient interpretability arxiv preprint arxiv190311626 r6 yang m and kim b 2019 benchmarking attribution methods with relative feature importance arxiv preprint arxiv190709701 overall the weaknesses of the paper outweigh its strengths while the hypothesis is wellposed and the experiments are thorough some major weaknesses are i insufficientmisleading evaluation of explanation correctnessquality ii limited novelty visavis methodology and results on robust models iii missing discussion on why robustness improves alignment justification of synthetic dataset and theory and connection to previous work on sanity checks docsepthe paper constructs a synthetic classification task with a known manifold structure by training the classifier with data from a variational autoencoder with a lowdimensional latent space the paper argues that the components of image gradients that lie in the tangent space of the data manifold are semantically meaningful whereas the part orthogonal to the image manifold is nonsensical the experiments in the paper support this hypothesis to an extent this is an interesting although not unexpected conclusion the paper is well written and the experimental design is explained in detail much of the evaluation relies on informal observation of gradients and the examples appear to be carefully picked in figure 2 many of the digits are slightly ambiguous and exhibit artifacts that highlight the explanation the qualitative difference between onmanifold and orthogonal components appears consistent and convincing in figure 3 it is difficult to say if the measured fraction of onmanifold component correlates with quality of explanation except maybe for the top rows with cifar10 judging if the lower rows make sense would require expertise in diagnosing pneumonia or retinopathy which i believe to be rare in the reviewer pool i am somewhat concerned by how the relative ordering of various explanation methods changes between different variants of mnist how certain can we be that this ordering is not due to random chance i would expect that training the same vae multiple times with different random initializations could result in quite different latent spaces and these might favor different explanation methods in other words how robust are the measured distributions such as those shown in figure 2 and 3 and the related explanation figures to the inherent randomness in training this should be measured in order to assure the reader that the differences are real and consistent formula 1 calculates the cosine of the angle between the vectors v1 and grad i dont think it is appropriate to call this a fraction because a similar computation between v2 and grad corresponding to the sine of the angle between v1 and grad and this fraction do not sum to 1 squaring the 
formula would correspond to the length of projection of v1 onto grad relative to length of grad and would seem like a perhaps more appropriate choice the adversarial training test in section 41 is very interesting and convincing i have no opinion of the usefulness of the theoretical result in section 5 section 31 heading has a typo graysacle the paper constructs a clever setup to test its central hypothesis and provides some convincing results that the hypothesis holds true however there is no analysis of stochastic variation in the quantitative results so they may not hold water as well as the central tenet that the gradients in line with the tangent space of the data manifold are qualitatively different from gradients orthogonal to it docsepthis paper makes the hypothesis that gradientbased explanations are meaningful if they are aligned with the tangent space of the underlying data manifold experiments in the paper compute this alignment for various explanation methods and adversarially trained models and conclude that smoothgrad integrated gradients and adversarially trained models generally produce gradients that are more aligned with the tangent space 1 no definition of meaningful explanation makes core hypothesis unverifiable the main drawback of this paper is that it fails to quantify or define what it means for an explanation to be meaningful which is central to the hypothesis presented in the paper without such a definition it is impossible to verify the correctness of the hypothesis for example is an explanation more meaningful if is more structured if so then it is plausible that for highly structured domains such as images the tangent space is also similarly structured and hence meaningful however when the underlying domain itself is unstructured what constitutes a meaningful explanation note that it is perfectly fine if meaningful explanations exist only for highly structured domains and it is important nonetheless to define these terms precisely to be able to verify their correctness it is also unclear if this meaningfullness is distinct from the socalled faithfullness condition where the explanation method must accurately reflect model behaviour for instance an explanation can be highly structured but be unrelated to the underlying model how do we guard against these issues in a coherent definition of meaningful explanations 2 no quantitative metrics to measure saliency map quality similar to point 1 the paper does not compute any quantitative metrics regarding the quality of saliency maps besides the alignment of saliency map with the tangent space the hypothesis in the paper is that tangentspacealigned saliency maps are higher in quality and the experiments in the paper demonstrate simply that some saliency maps are more aligned to the tangent space than others but the question remains are they of higher quality according to some welldefined metric unfortunately the experiments do not answer this question and this is yet another major drawback of the paper 3 clarification regarding gradient direction within the subspace in the setting proposed in the paper the tangent space is a kdimensional subspace of rd however the gradient corresponds to a single direction within this subspace it is unclear to me whether any specific direction within this subspace must be preferred or any direction is equally good some discussion on this would be illuminating 4 missing highly related reference the paper misses reference to a highly related work srinivas fleuret rethinking the role of 
gradientbased attribution methods for model interpretability iclr 2021 both papers are similar in that they hypothesize that discriminative models seem to have a generative modelling manifold learning component which ensures that the gradients are related to the underlying data distribution however they also present different hypotheses in the sense that srinivasfleuret state that model gradients are interpretable if they are aligned to the gradients of the data distribution whereas this paper posits that gradients are aligned to the tangent space of the data manifold the above paper also shows that presoftmax gradients can be arbitrarily structured which seems related to section 5 of the current paper generalization does not imply alignment with the manifold overall i think this is an important point to discuss and compare the two hypotheses presented in both papers on a related note it would be nice to present visualizations like figure2 where the gradient components are presented in the normal space and tangent space but for image datasets such as cifar10100 5 nice experimental approach on a more positive note i like the approach taken by this paper to verify its hypothesis by explicitly generating data that lies on a manifold i also like figure 4 which shows how such alignment changes during training this seems to point to some form of approximate manifold learning being performed by the model implicitly figure 5 is also very interesting as it shows the dramatic shifts that adversarial training can produce im wondering whether similar observation holds true for simple gradient norm regularization which is also shown to boost robustness overall while i certainly think that this paper makes a hypothesis that is interesting and is at least partly true it does not make its definitions see point 1 and experiments point 2 precise which makes it impossible to prove or disprove the hypothesis i would be willing to accept this work only when the paper makes a more clearly stated hypothesis and designs similarly clear experiments ### Summary:
this paper studies the following hypothesis that gradientbased explanations are more meaningful the more they are aligned with the tangent space of the data manifold the reviews are negative overall the general feeling is that the paper reads like a set of subjective observations about the meaningfulness of explanation and relationship with data manifold tangential theory there isnt a coherent story
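as a small illustration of the kind of alignment measure discussed in the example above, the sketch below computes the fraction of an input gradient that lies in the tangent space spanned by the columns of a decoder jacobian. note that the papers own measure is described as a cosine between the gradient and a tangent basis vector, so this projectionnorm ratio is only one natural variant; the jacobian and gradient here are random placeholders.

```python
# minimal sketch (not the paper's code): how much of a gradient lies in the
# tangent space of a learned manifold, given the decoder Jacobian at the point.
import numpy as np

def tangent_fraction(jacobian, grad):
    # jacobian: (d, k) decoder Jacobian, whose column space estimates the tangent
    # space of the data manifold; grad: (d,) input gradient of the model output
    q, _ = np.linalg.qr(jacobian)              # orthonormal basis of the tangent space
    grad_tangent = q @ (q.T @ grad)            # orthogonal projection onto that space
    return np.linalg.norm(grad_tangent) / (np.linalg.norm(grad) + 1e-12)

rng = np.random.default_rng(0)
J = rng.normal(size=(784, 10))                 # e.g. a 10-dimensional latent space
g = rng.normal(size=784)
print(tangent_fraction(J, g))                  # roughly sqrt(10/784) for a random g
```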
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: it is a well written paper that flows well i think the community will find it interesting as its application has not been well explored yet with deep generative models the paper also demonstrates a theoretical framework for doing so that many readers probably will find interesting too my main concern with this paper is the low precision from the latent space classification experiment in the conclusion section the authors state however the latent representation that the mopoe learns when learning to reproduce the data is still meaningful in the sense that it can be separated into the different classes the data belong to however a precision below 05 effectively means the classification predicts more false positives than true positives therefore i am not sure their claim is justified meaning how useful is the latent representation their model encodes also i am not sure that picking frontal and lateral slices from the cxr images qualifies as different modalities if i understood the meaning of the f and l images it is not explained in the paper why not simply use two modalities 3d cxr and text reports ### Summary:
the paper addresses a challenging yet important issue of multimodal learning images and reports with deep generative models the reviewer is fairly convinced by the proposed method based on an iclr paper and the potential for the application however the results currently show somewhat limited performance the work will likely stimulate fruitful discussions and can therefore be accepted as a short paper
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper presents various upper bounds rademacher complexity based and lower bounds fat shattering dimension based on single hidden layer neural networks these bounds shed light on the question whether the bounded spectral norm of the weight matrix is sufficient for having width independent uniform convergence guarantees moreover there are similar discussions for frobenius norms and input dimension dependence the paper considers both generic and convolutional neural networks contributions of the paper are summarized in 8 theorems theorem 1 shows that the fat shattering dimension is scaled with the network width for nonsmooth activation function if only the spectral norm is bounded theorem 2 generalizes golowich et al 2018 and neyshabur et al 2015 to show that frobenius norm bound is sufficient to bound the sample complexity for lipschitz activation functions width independent bound using rademachar complexity analysis theorem 3 shows that the fat shattering dimension for frobenius norm bounded networks is input dimension dependent for smaller input dimensions theorem 4 shows that the spectral norm is sufficient to bound the sample complexity for polynomial activation functions theorem 5 extends the result to the multilayer case for polynomial activation functions of type zk lemma 4 and 5 are crucial for the proof of theorem 4 and 5 theorem 6 shows that the spectral norm is sufficient for convolutional networks with linear last layer theorem 7 shows that rademacher complexity bound for convolutional networks with pooling has logarithmic dependence on width theorem 8 shows using fat shattering dimension that the logarithmic width dependence is unavoidable strengths from technical perspective the paper provides a valuable contribution by clear exposition of the role of spectral and frobenius norm in uniform convergence results the paper is well written given the sheer volume of works on explaining generalization error for neural networks the authors do a good job connecting the results to the existing works for instance in remark 2 on bubeck et al 2021 or remark 3 on implicit regularization the paper contains many theoretical results of interest combines many ideas from previous works and provide new proof techniques notably theorem 4 and 5 to the best of my knowledge the proofs are well presented and are sound overall this is a great paper and i enjoyed reading it weaknesses although this might seem like a gratuitous comment and request i think having numerical support even a toy example can be good to support the theory i have some concerns about ophi term in theorem 4 see below i feel that the authors do not adequately comment about implications of the current result for a general theory of generalization for deep learning covering all existing works are of course not possible however works like nagarajan et al 2019 or zhang et al iclr 2017 are widely discussed it is at least expected that the authors clarify in which regime their bounds are applicable and do not suffer from examples of those papers as mentioned by the authors themselves there is a debate whether uniform convergence can explain generalization in neural networks mainly supported by some carefully designed experiments it is not clear how one should understand the bounds of this paper in light of papers like nagarajan et al 2019 docsepthis paper studies the normbased uniform convergence bounds for twolayer neural 
networks their results give a tight understanding of spectralnormbased uniform convergence in particular they proved that spectral norm is not sufficient for general settings to get a uniform convergence result however for nns with certain smoothness conditions or some convolution structures the spectral norm is sufficient overall this paper is wellwritten and clear the authors show that in general bounding the spectral norm cannot lead to sizeindependent guarantees this negative result is quite interesting and insightful however i have a concern about the significance of the results since the size of the parameters is known through the training it seems unnecessary to get a sizeindependent guarantee moreover the spectralnormbased convergence result seems hard to apply both empirically and theoretically which may limit the application of the convergence result the authors have addressed their works limitations and potential negative social impact docsepbased on bounded norms of the weight matrices the authors investigated the upper and lower bound sample complexity for neural networks with one hidden layer in this research they demonstrated that in contrast to bounding the frobenius norm however bounding the spectral norm generally cannot result in sizeindependent guarantees although it is not surprising that the spectral norm is insufficient to get widthindependent sample complexity bound the paper and its theoretical analysis are very important the constructions did however highlight two situations in which the lower bounds can be avoided and a spectral norm control is sufficient to provide widthindependent guarantees the first scenario is when the activations are sufficiently smooth and the second is in certain situations involving convolutional networks in general this is a very important paper as deep neural networks still lack a very basic understanding of their behavior strengths 1 a very solid theoretical paper the theorems and proofs are exciting 2 the paper deals with very important and required questions in the field of deep learning not many papers focus on the theoretical side of deep learning i find this paper a very important step towards better understanding dnns weaknesses although it is clear that the paper is a theoretical paper however i have the following minor comment the writing could still be improved giving some intuitive explanation and details as to what and why each theorem holds and what each theorem mean it took some time to understand the theorems and the details some simple numerical experimentss would help justify the theory docsepthis paper studies the sample complexity of simple neural network model classes the focus is on upper and lower bounds under various norm constraints on the network coefficients it is shown that a spectral norm constraint is not sufficient to obtain bounds that are independent of the network width to prove this the authors estimate the fat shattering dimension of the considered hypothesis class the surprising aspect of this result is that it does not hold if the activation is the identity but crucially depends on the nonlinearity as a second contribution the authors show that a frobenius norm constraint yields hypothesis classes whose sample complexity is independent of the network width finally the authors identify two interesting settings where a spectral norm constraint is sufficient for getting bounds independent of the width i think that this is an interesting paper well written and adding to the current body of knowledge on 
understanding the generalization behaviour of deep learning algorithms in terms of originality i am not sure if the paper introduces new proof techniques maybe they can elaborate the novelty of their technical and mathematical contributions a bit more this has been adequately addressed ### Summary:
the paper proves a novel tighter normbased bound for the generalization error of twolayer networks all the reviewers agree that this is an important theoretical result and should be accepted
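for readers less familiar with this literature, the display below shows the rough shape of the widthindependent frobeniusnormbased guarantee that the example above refers to. constants and logarithmic factors are omitted, and the statement is an illustration of the style of result (in the spirit of golowich et al 2018 and neyshabur et al 2015), not a quote from the paper.

```latex
% Rough shape of a width-independent, Frobenius-norm-based Rademacher bound for a
% two-layer class; constants and log factors omitted, illustrative only.
\[
\mathcal{F} \;=\; \bigl\{\, x \mapsto u^{\top}\sigma(Wx) \;:\; \|u\|_2 \le b,\ \|W\|_F \le B \,\bigr\},
\qquad \sigma \text{ 1-Lipschitz},\ \sigma(0)=0,
\]
\[
\widehat{\mathfrak{R}}_n(\mathcal{F}) \;\lesssim\; \frac{b\, B\, \max_i \|x_i\|_2}{\sqrt{n}} .
\]
```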
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper proposes a hypergraph neural network model exploiting a double attention mechanism in the message passing scheme the overall architecture is designed to process subhypergraphs once representations are computed for the nodes and edges the learning objective includes a regularization term based on the hypergraph laplacian the proposed model is evaluated on disease classification based on genegenetic pathways data showing higher f1 values with respect to a set of competitors finally due to the attention mechanism intepretions in terms of gene pathways hyperedges can be derived from the model outputs strengths the model incorporates a dual attention mechanism applied to nodes and hyperedges respectively exploiting the same attention context vector this desing choice is claimed to prevent overfitting reducing the number of parameters the architecture includes an attention module to derive subgraph representations this scheme allows the application of the mode in a inductive setting the model compares favourably with respect to the considered competitors on two benchmarks in genetic medicine weaknesses the presentation of the model is tightly interconnected with the proposed application in genetic medicine making it appear less general given the focus on bioinformatics the paper is hard to follow for readers not completely familar with this topic the effect of the number k of layers is not investigated maybe i missed it but the number of layers used in the experiments is not reported it is known that convolutional graph neural network suffer from oversmoothing when the depth of the network increases this may hinder the results in some applications since uniform representation of nodes are developed attention may perhaps limit this phenomenon but on the other side nodeedge regularization may produce add a related effect the ablation study show a positive effect of regularization but some discussionanalysis should be provided the authors discuss some limitations that need futher work in section of the paper both for the model architecture and the specific application considered for the evaluation as listed in the weaknesses a more general description not tailored for the considered task in genetic medicine would have improved the presentation of the proposed hypergraph neural network architecture docsepthe authors propose a gnn approach to learn embeddings of subhypergraphs the approach has an explicit treatment of hyper edges eg it does not resort to clique expansion and makes use of a regulariser based on the hyper graph laplacian the application chosen is that of disease prediction for patients modelled as sub hyper graphs given a pathway network modelled as a hyper graph with genes as nodes and setspathways as hyper edges pos explicitly treating hyper edges as first class citizens in the gnn modelling is of interest since in this was hyper edges can be the subjects of notions of regularisation or attention neg the relative importance of the various ideas introduced is not clear ie a better experimental design with clearer baselines and an ablation study is warranted more specifically 1 is the regularisation effective or of importance ablation case 2 is the proposed architecture much different from using a standard graph neural network with attention on a preprocessed hyper graph in particular the preprocessing could consist in representing an hyper graph as a bipartite 
graph where hyper edges are materialised as the nodes of one part and the genes are the nodes of the other part experiment note that the whole discussion regarding strong duality would follow automatically in this case 3 is the introduction of wsa the weighted subgraph attention needed what happens if we replace the whole subgraph treatment by mji directly ie what if we consider a subgraph simply the sum of the nodes genes that are of interest with mutations for each patient subgraph that is we could learn directly the embedding of the nodeshyperedges for the classification task when they are simply summed up for each patient experimentablation yes docsepthis paper suggests subhypergraph representation learning a niche problem related to subgraph and hypergraph learning to tackle this problem the authors propose the shine subhypergraph inductive neural network model which consists of three modules strongly dual attention message passing hypergraph regularization and weighted subgraph attention experiments on two realworld datasets demonstrate the superiority of shine on performance against baselines including gnns for hypergraphs and interpretation using attention strengths the authors present a novel and niche problem of subhypergraph representation learning which has not been explored in the gnn community a specific example cancer patients as subgraphs of genes in hypergraphs can be a practical application for this task the performance improvement by the authors approach is significant weaknesses however i think this paper is not ready for publication for the following reasons first the technical novelty of shine is limited this model consists of several parts and each of them is a slightly modified version of existing approaches using the attention to both nodes and hyper edges is presented in hypergat and the authors are aware of it nevertheless the idea of strongly dual attention is aligned with the dual form of hypergraphs and i can see the novelty of this paper here however explicit regularization by laplacian hypergraph regularization 1 and pooling by attention weights weighted subgraph attention 2 are wellknown methods in using gnns in this case shines novelty is limited to strongly dual attention and a more detailed analysis of this part is required second related to the first paragraph there are no rigorous ablation studies on the architecture as many submodules make up the model it is necessary to study where the performance gain comes from in the supplementary material only the ablation study on hypergraph regularization is presented and the study on dual attention message passing is presented by comparison with hypergat however there are other differences in attention forms between hypergat and shine and comparing these two does not provide a fully controlled ablation study of dual attention message passing i recommend authors retain all parts except parametersharing in the attention in addition the performance comparison between shine withwithout wsa and other gnns withwithout wsa also should be presented third it is skeptical that model interpretation by attention is an exclusive strength of shine there are learned attentions in other attentional models like hypergat can these models provide interpretation at the same level as shine can you compare interpretations between models does shine give more precise explanations than other models lastly there are missing baselines for subgraph classification in particular the subgnn can be a strong baseline of course subgnn is not 
designed for hypergraphs but it is straightforward to create graphs from hypergraphs such as clique expansion the transformation from hypergraphs to graphs is done only once before training thus it has a low overhead comparing shine and gnnsforsubgraphs can justify that these specific problems in this work should be represented as a hypergraph references 1 learning with hypergraphs clustering classification and embedding 2 gated graph sequence neural networks the authors do not address the potential negative societal impact of their work this paper targets a highlevel machine learning problem called subhypergraph representation learning however all datasets are related to a particular area genes pathways and diseases there could be a potential societal impact that should be considered in realworld applications in this area eg privacy it would be nice if the authors addressed this point docsephypergraph neural networks can exploit multiway connections in relational datasets but they are underexplored in domains such as genetic medicine in this paper a hypergraph attentionbased message passing neural network is proposed for subhypergraphlevel tasks eg genes nodes pathways hyperedges patients subgraphs predict cancer type of patient task experiments on genetic medicine datasets demonstrate the effectiveness of the proposed method shine subhypergraph inductive neural network originality even though the paper explores an underexplored research topic in an interesting domain subgraph representation learning for hypergraphs in genetic medicine the methods proposed are incremental extensions of existing methods and not novel combinations of existing techniques specifically in section 33 the ideas of hyperedge attention over nodes and node attention over hyperedges with parameter sharing are incremental extensions of wellknown hypergraph attention networks by viewing the nodes and hyperedges as two types of vertices of a bipartite heterogeneous graph the ideas of strongly dual attention mechanisms would be incremental extensions of existing attentionbased methods for heterogeneous graphs eg see a survey on heterogeneous graph embedding methods techniques applications and sources quality the authors have discussed interesting weaknesses of their work in addition to highlighting the strengths moreover baseline comparison table 3 interpretability analysis table 4 and ablation study table 3 in supplementary support the claims made in the paper empirically to an extent however formalising the key differences with existing similar methods eg hypergat in lines 159169 and confirming the differences with convincing syntheticrealworld experiments eg on a dataset chosen cleverly to show clear failure of hypergat but success of shine would improve the papers quality clarity the paper is well organised details on datasets and hyperparameter tuning could help an expert to reproduce the results of the paper and build effective models those with the best hyperparameters from scratch a discussion on computational complexity and an algorithmpseudo code would further enhance the clarity of the paper significance it is unclear from the paper why modelling genetic medicine datasets with hypergraphs despite being a natural choice is the best choice compared to straightforward alternatives more specifically it is unclear why a bipartite heterogenous graph with genes nodes of type 1 pathways nodes of type 2 patients sub heterogeneous graph would not be a reasonable choice the paper can be improved by positioning and 
comparing with setbased methods for exploiting hyperedges in hypergraphs eg you are allset a multiset function framework for hypergraph neural networks in iclr22 the authors have addressed the limitations and potential negative societal impacts adequately ### Summary:
the paper proposed a gnn that explicitly treats hyperedges and makes use of strongly dual attention hypergraph regularization and weighted subgraph attention the proposed method shows better performance than existing baselines on two genetic medicine datasets explainability is also demonstrated reviewers originally raised many concerns on presentation too specialized for the target application lack of ablation effectiveness of each proposed component is not clearly shown novelty combination of small modifications of existing methods and explainability existing methods can do the same the authors made an amazing job to address most of the concerns they reported additional ablation results and baseline results and showed that the proposed method still performs better and each proposed component plays a significant role two reviewers have been convinced by the authors response while the other two have not insisting that the novelty issue remains and with the limited novelty more careful investigation is required for publication this is a borderline paper and i recommend acceptance because i think adjusting existing methods to target applications is important research even if the modifications are small the proposed method significantly outperforms existing baselines including the ones reviewers suggested and the additional ablation study shows each of the proposed components is effective on the other hand i also sympathize with the reviewers with negative evaluations on the following comments formalising the key differences with existing similar methods eg hypergat in lines 159169 and confirming the differences with convincing syntheticrealworld experiments eg on a dataset chosen cleverly to show clear failure of hypergat but success of shine would improve the papers quality the paper can be strengthened by positioning strongly dual attention in shine with different attention mechanisms in heterogeneous graph neural network literature some are listed below heterogeneous graph attention network in www19 hetgnn heterogeneous graph neural network in kdd19 metapath enhanced graph attention encoder for hins representation learning in bigdata19 magnn metapath aggregated graph neural network for heterogeneous graph embedding in www20 heterogeneous graph transformer in www20 there is no need to empirically compare and run them as baselines but explaining the key differences conceptually to make hypergraphs a more compelling choice for genetic medicine than heterogeneous graphs can strengthen the paper i hope the authors would make a bit more effort to incorporate these suggestions in the final version
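to make the strongly dual attention idea from the example above concrete, here is a minimal pytorch sketch of one round of node to hyperedge and hyperedge to node attention that reuses a single shared context vector. it uses a dense incidence matrix, omits linear projections, multiple heads and nonlinearities, and assumes every node and every hyperedge has at least one incidence; all names and shapes are illustrative, not taken from the paper.

```python
# minimal sketch (not the paper's code): one round of node <-> hyperedge attention
# message passing on a hypergraph, with a single shared attention context vector.
import torch

def dual_attention_step(node_h, incidence, context):
    # node_h: (n, d) node embeddings, incidence: (n, m) binary matrix
    # (node i belongs to hyperedge j), context: (d,) shared attention context vector
    mask = incidence > 0
    n, m = incidence.shape
    # hyperedge update: each hyperedge attends over its member nodes
    node_scores = (node_h @ context).unsqueeze(1).expand(n, m)                # (n, m)
    alpha = torch.softmax(node_scores.masked_fill(~mask, float("-inf")), dim=0)
    edge_h = alpha.t() @ node_h                                               # (m, d)
    # node update: each node attends over its incident hyperedges, same context
    edge_scores = (edge_h @ context).unsqueeze(0).expand(n, m)                # (n, m)
    beta = torch.softmax(edge_scores.masked_fill(~mask, float("-inf")), dim=1)
    new_node_h = beta @ edge_h                                                # (n, d)
    return new_node_h, edge_h

genes = torch.randn(5, 8)                       # 5 genes (nodes)
inc = torch.tensor([[1., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 1.],
                    [1., 0., 0.],
                    [0., 0., 1.]])              # 3 pathways (hyperedges), no empty rows/columns
ctx = torch.randn(8)
new_genes, pathways = dual_attention_step(genes, inc, ctx)
print(new_genes.shape, pathways.shape)          # torch.Size([5, 8]) torch.Size([3, 8])
```

a patientlevel subhypergraph embedding could then, for instance, be an attentionweighted pooling over the updated embeddings of that patients mutated genes, which is the rough role of the weighted subgraph attention module discussed above.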
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: to tackle the unsupervised skill discovery problem the authors attempt to maximize the mutual information between latent skills and states itau z by using the nonparametric particlebased entropy estimation for mathcalhtau and noisecontrastive estimation for mathcalhtauz they evaluate their method and the baselines on the unsupervised reinforcement learning benchmark urlb where the agents are pretrained without extrinsic rewards and finetuned for downstream tasks with extrinsic rewards and fewer environment steps strengths the experiments are done on urlb which provides a good evaluation scheme for unsupervised rl wellestablished and common evaluation schemes are important for assessing methods empirically the empirical performance which is provided with relevant statistics is good compared to multiple baseline methods the analysis on the effect of different choices of multiple hyperparameters fig6 weaknesses this works originality is somewhat limited particlebased entropy maximization with state representations trained using a contrastive learning scheme has been explored in apt liu abbeel 2021a using the same entropy maximization form for skill discovery was done by aps liu abbeel 2021b the motivation for using noisecontrastive estimation is not entirely clear while the authors make a comparison with cpc as a lower bound of mutual information cpc is not very commonly used in skill discovery and the usual variational lower bound can already be tight if the variational approximation q approximates the true distribution p perfectly theorem 1 is not technically correct because of the particlebased entropy term in fcic which is also supported by the authors in that they introduced the weighting hyperparameter alpha due to that it doesnt consider the proportionality constant sec42 the authors employ itau z mathcalhtau mathcalhtauz as the decomposition of the mutual information the 2nd line in sec41 but they use qztau instead of qtauz for the rest of sec41 i think the derivation of the noisecontrastive estimator in sec41 needs more details for instance if there are noise samples where do the correct samples come from using the notation tau for something other than trajectories can be misleading lack of empirical analysis with different values of alpha while the empirical results basically show that cic can outperform multiple baselines on urlb with an appropriately tuned value of alpha i am mainly concerned about the correctness of the claims and the novelty of the work docsepthe paper builds upon the diayn idea eysenback 2018 that an agent could develop skills in an unsupervised environment by finding a set of skills that collectively visits the whole state space but encourages each skill to cover a different subspace and later use one of these skills to simplify the learning of a downstream task in this paper skills are learned in an unsupervised way using a mutual information based objective the mutual information between latent states t and skill vector z itz is decomposed as itzhthtz which the paper argues leads to explicit maximization of diversity of latent states t as well as distinct skills with focused effects by penalizing with htz more diversity equals better exploration of more distant states leading to more interesting behaviors the paper explains this decomposition also allows them to user higherdimensional skill vectors that improve representational capacity and 
downstream performance the paper develops a method to calculate the terms of this decomposition and formally shows it is a lower bound for the true mutual information the latent state entropy ht is estimated using an unnormalized knearest neighbor method requiring an ad hoc scaling factor alpha and the conditional entropy htz is calculated using an nce supervised neural net the paper illustrates the method by using their loss function to pretrain a ddpg architecture on unsupervised scenarios and then showing a benefit on downstream tasks the proposed method cic shows a larger iqm stabilized expert normalized score compared to state of the art methods experiments also show that highdimensio the empirical evaluation is rigorous in comparison to common practice 120 runs and iqm stabilization with strong soa and baselines liked the structured review of prior art that organizes work in an interesting thematic way i liked the empirical decomposition of reward into entropy and discriminator terms it is interesting that latent state entropy is important early in learning and task discrimination is more important later on useful to know that higher dimensional skills are important 64d performed best technically itz ht htz hz hzt so theoretically it should not matter which way it is decomposed the way the terms are calculated however could have a significant effect on the practical performance it isnt clear what a simple grid sweep of skills over the interval 01 means early stopping does not leak information so much as stop wasted exploration in irrelevant parts of the state space typo we use particule particle the text refers to optimality gap but the figures see to use expert normalized score which i assume is algorithm performance ddpg baseline performance did not fully understand the implications of the noise and projection argument but it seems like an important and worth while design decision to investigate the paper proposes a new algorithm for unsupervised behavior learning that is rigorously shown to be more effective and clearly argues for its design choices through supplementary experiments docsepthe paper proposes an algorithm contrastive intrinsic control for unsupervised skill discovery by maximizing mutual information between skill latents and state transitions the proposed algorithm is a refinement over existing methods 2 it uses a contrastive method to estimate conditional entropy and measures entropy on statetransitions as opposed to simply states as done in previous methods 2 the proposed algorithm shows good performance gains compared to existing competencebased skill discovery algorithms further the paper also contains a rather extensive empirical evaluation of various skill discovery algorithms on the recently proposed urlb benchmark strengths the empirical evaluation carried out in the paper is extensive with many baseline algorithms for skilldiscovery from each class knowledgebased databased competencebased on the recently proposed urlb benchmark further evaluation metrics iqm optimality gap etc used to measure performance are adopted from the recommendations in 1 the discussion and analysis on the reasons for failure of current competencebased skill discovery algorithms ablation experiments in figure 6 help justify the design choices made by the cic algorithm weaknesses the proposed algorithm seems like a variation on an existing algorithm 2 it refines some of the practical design choices used in the general framework of competencebased algorithms for unsupervised skill 
discovery however the authors have shown the differences between the various algorithms in table 1 and discussed their pros cons which is useful to contextualize their contributions a few questions i have for the authors on the highlevel motivations behind their algorithm design and would like some clarification on 1 if we intuitively think of the notion of a skill as a form of abstraction of longterm behavior for example motion primitives like walking flipping etc as shown in the paper which occurs over maybe a few tens or hundreds of steps so why do several skill discovery algorithms use only highly localized information in the state space various tau instantiations such as single states or in this case single statetransitions s s to infer its corresponding skill latents isnt it more intuitive to infer these latents from more global quantities like entire episodes of policy rollouts could the authors comment on this 2 to maximize the mutual information between state transitions and skills as defined in the cic we need to maximize the first term htau and minimize the second term htau z this conditional entropy would be minimal when the corresponding distribution ptau z is sharpnarrow ideally like a deltalike density function over the statetransition space this seems rather counterintuitive to me if we think of a single skill latent say walking shouldnt the density p tau zzwalk have a high value for all the possible statetransitions of the walking primitive which would be a rather wide distribution wouldnt the pressure to keep this distribution as narrow as possible over the statetransition space given a single skill latent lead to several latents codes which essentially cover the same underlying behavior redundant copies of different walking styles wouldnt this be undesirable for generalization on downstream tasks which would require composing these unsupervised skills 3 the authors argue for the need for increasing the dimensionality of the skill latents to ensure skill can be decoded back to a diverse set of behaviors couldnt this be achieved by largely retaining the small latent spaces used for skills in prior work and using a more expressive policy decoder this could allow for greater representation flexibility when the skill latents are decoded back to the actionspace and ensure that skill latents give rise to a diverse set of behaviors writingpresentation the paper on the whole is wellwritten and easy to follow i found the phrasing in this sentence rather confusing and ambiguous if the set of behaviors outnumbers the set of skills this will result in degenerate skills when one skill maps to multiple different behaviors what does behaviors refer to in this context action trajectories if so isnt it expected that the set of skills would be much smaller essentially a compressed representation than the total number of unique action sequences unless the authors a few minor typos i found are below why most competencebase algorithms competencebased algorithms both cic and cic use do you mean cic and aps in algorithm 1 contrastive intrinisc control intrinsic 1 agrawal etal deep reinforcement learning at the edge of the statistical precipice neurips 2021 2 aps active pretraining with successor features icml 2021 although the proposed algorithm cic is a variation on an existing algorithm 2 it shows impressive performance gains over several existing algorithms on a large suite of continuous control tasks from the urlb benchmark these largescale empirical evaluations and analysis of several 
algorithms for unsupervised skill discovery methods on a standard benchmark such as urlb would benefit the community docsepthis paper tackles the problem of unsupervised pretraining of a codeconditioned policy to improve the performance of downstream rl tasks in line with previous works in unsupervised skills discovery it proposes a method called cic to maximize a variational lower bound to the mutual information between the code and the visited states the lower bound is obtained through the combination of nonparametric state entropy estimation and a contrastive predictive coding loss for the conditional entropy the paper provides an empirical analysis of cic over a set of continuous control domains i report below some detailed comments and concerns that the authors might address in their author response mutual information objective 1 there is now a bunch of works targeting the mutual information between a code and the visited states to the purpose of unsupervised rl some of them are summarized in table 1 of the paper every work is proposing a lower bound to the mutual information i was wondering if there is a formal way to compare the lower bound of cic with previous works which one is the tightest do the authors believe that getting the tightest bound of the mi is really the goal in this setting or even maximizing the exact mi would not be necessarily better than other methods ie a good inductive bias matters more than the mi approximation 2 since the work motivates the approach as an approximation of the mi as it is common in previous works as well i was wondering if the authors also considered a direct nonparametric estimation of the mi eg httpsjournalsapsorgprepdf101103physreve69066138 instead of independent estimates of the state entropy and the conditional entropy especially such a direct estimation would not require the additional hyperparameter alpha methodology 3 can the authors clarify what is the entropy term htau denoting is it the entropy of statetostate transitions or the entropy of the joint probability of two states within a trajectory or something else especially what is the intuition behind using tau instead of s 4 the method is built upon a nonspecific base algorithm ddpg that was originally developed for standard rl ie rl problems where the reward does not change over time but it comes from a consistent reward function do the authors experienced any instability working with the nonmarkovian intrinsic rewards do they believe that the methodology could benefit from an objectivespecific algorithm ie an algorithm carefully designed to work with this kind of intrinsic reward experiments 5 how can we rule out the possibility that cic is just a better way to pretrain a ddpg agent in these settings wrt other baselines this would be significant anyway but do the authors believe that the same results would generalize to different base algorithms say trpo sac a2c 6 moreover ddpg is known to be quite strong on continuous control tasks do the authors believe that the combination of ddpg and cic would be successful in different settings eg visual discrete domains such as atari games as well or perhaps the base algorithm should be selected to accomodate the specific domain 7 cic is quite similar to aps liu and abbeel 2021 as they both employ nonparametric entropy estimation and a discriminator loss based on contrastive predictive coding and successor representations respectively however in the reported results cic is way better than aps do the authors think it is the different 
discriminator loss the main cause for this performance gap or there is some other factor at play can they confront the discriminator rewards of aps and cic as in figure 5 8 from my understanding the empirical results are not directly comparable with the urlb laskin et al 2021 despite a very similar setting i see that the benchmark is very recent and thus should be considered concurrent to this work but i believe that reporting a direct comparison with their results would further strengthen the empirical analysis 9 the results section seems to imply that a key factor under the improvement over previous competencebased methods is the ability of cic to cope with a larger skill space can the authors clarify why previous methods are prevented to work with a comparable skill space can they also provide a comparison with previous methods when working with the same potentially lowerdimensional skill space minor adaptation efficiency paragraph fig 3 is reported instead of fig 4 the normalized score of fig 6 does not seem to match the one of fig 4 it is not easy to track the different baselines in fig 4 top what is this plot representing to my understanding the main selling points of this paper are it tackles the very relevant problem of unsupervised pretraining for reinforcement learning the methodology is clear and a quite natural extension of previous works in unsupervised skills discovery literature strong empirical results cic seems to advance significantly the stateoftheart performance of unsupervised pretraining in continuous control domains instead potential shortcomings are the novelty seems limited as cic is essentially similar to aps liu and abbeel 2021 with a different discriminator loss which has been employed for unsupervised skills discovery before it is not completely clear from the paper what are the specific factors that lead to such a performance improvement over previous works whereas the reported empirical progress might be a sufficient reason for acceptance my current evaluation is just slightly positive in consideration of the mentioned concerns i do not think the limited novelty is a crucial problem here if the authors could better clarify in their response how the cic methodology is so successful i will consider raising my score to a clear accept ### Summary:
the paper addresses the question of skill discovery in reinforcement learning can we without supervision discover behaviors so that later when supervision is available via a reward signal we can learn faster the paper proposes a new contrastive loss that an agent can optimize for this purpose based on a decomposition of mutual information between skills and transitions the reviewers praised the extensive experimental evaluation and good empirical results as well as the analysis of failure modes of related algorithms unfortunately there appeared to be errors in the derivation and implementation these include typos in derivations that made them difficult to follow as well as uploaded code that didnt match the experimental results while the authors claim to have fixed all of them the reviewers were not all completely convinced by the end of the discussion period in any case these errors caused confusion during review so whether the errors are fixed or not it seems clear that there hasnt been time for a full evaluation of the corrected derivations and code for this reason it seems wise to ask that this paper be reviewed again from scratch before being published
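as a point of reference for the cic discussion above, the mutual information decomposition that the reviews keep returning to is the standard identity written out below (the inline notation itau z mathcalhtau mathcalhtauz got garbled in extraction); the k-nearest-neighbor entropy estimate is sketched only in the generic form used by particle-based methods, with constants and scaling left unspecified since they vary by estimator

```latex
% mutual information between transitions \tau and skill latents z,
% with the two equivalent decompositions the reviewers debate
I(\tau; z) = \mathcal{H}(\tau) - \mathcal{H}(\tau \mid z)
           = \mathcal{H}(z)    - \mathcal{H}(z \mid \tau)

% generic particle-based (k-NN) estimate of the transition entropy,
% up to additive constants and dimension-dependent scaling
\widehat{\mathcal{H}}(\tau) \;\propto\; \frac{1}{N}\sum_{i=1}^{N}
    \log \bigl\| \tau_i - \tau_i^{(k\text{-NN})} \bigr\| + \text{const}
```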
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper is well written and clearly explained the previous literature is adequately discussed and the experimental results are clear i think that the subject of interclass relationships is a relevant and important one since there are many conditions with very subtle differences in their appearance on imaging and few authors make use of this information the problem being solved does not seem to be particularly common or clinically relevant see detailed comments below and the authors do not address this or speculate about how the method could be generalized to more commonrelevant tasks the dataset description is a bit unclear see detailed comments it is always better to include a data table for clarity docsepin general the idea of incorporating interclass relationships is quite novel in the medical domain and might be very interesting in some areas as an alternative to standard classification additionally designing a framework to learn the sord relationships directly from data is challenging and i am impressed that the authors could make the network converge with such a large optimization within the loss function the collection of such a large dataset is definitely worth mentioning following the open science spirit of midl i would highly recommend making the data publicly available and to include a link in the document unfortunately i have several concerns about the manuscript and the main one brings me to the very basic assumption of the presented work regarding iv ct phase classification it is not clear to me why some phases are more similar to each other than others are if the contrast bolus is in the arteries they have a high contrast and the veins have a high contrast when the bolus arrives there i do not see why misclassifications between these images should be counted less harsh and why this should help the network to gain performance or to reframe it i doubt that there is some causal relationship between the different phase images that guides the network towards better generalization or is the bolus spreader that much over the vascular system that there are some inaccurate states and why could that not be addressed via data curation to be fair your results show the superior performance of the sord approach but this brings me to my second concern that is closely related to the first one i am missing a discussion of the results as stated above i can imagine that the performance increase might not follow the class relationships but may be induced by adding additional gradient information to each class as each datapoint now spans multiple classes the sord approach works like some form of oversampling or data augmentation and hence provides higher robustness on small datasets i am sure you have a great argument against this hypothesis but this is exactly what i am missing in the discussion especially for such unintuitive results an adequate discussion is a must furthermore i have several major concerns the aim of the presented paper is a bit ambiguous and partially unclear the title claims a very general methodological development which should incorporate indepth analysis of the performance and hyperparameter choices on several standard datasets however all results are computed on a dataset called proprietary leaving the impression that the goal of the study is to apply a known computer vision method to iv ct phase classification please clarify the storyline in 51 you state 
that the ordinal permutation which contributes the lowest training loss would be assigned the largest weight during training i do not see why the model should increase any of the lambdas as the loss function is basically a weighted sum with the lambdas as weights in fact the best strategy for the model to decrease the loss is to set all lambdas to zero how do you make sure that this will never happen did you assess the lambdas after training please comment on that unfortunately the structure of the dataset remains unclear the claimed 264198 samples in the training set do not match 90 of the full dataset consisting of 334079 samples it is not clear why only subsamples of the validation and test data were used additionally it is not described whether the splits cared for patient distinction andor center distinction through the sets hence it is unclear whether data of the same patient could be in the training and the validation set for example furthermore i do not understand why the ground truth labels in the training dataset could be assessed by simply reading the dicom header while the test data was manually labeled by an expert radiologist if the header information is available i would assume that this data is optimal and hence the expert is not necessary and in the worst case introduces human errors to the ground truth please revise section 61 and make the data used in this study clear even though the paper is well written in general the mathematical notation is partially insufficient and some variables appear without explaining what they are details on this can be found in the detailed comments section docsep there is substantial interest in encoding ordinal relationships between classes into the training process of neural networks eg when training networks that classify images according to some kind of severity grade which is essentially a hybrid between a classification task and a regression task this paper investigates an elegant approach for circular dependencies the experiments demonstrate that encoding prior knowledge about the order of the classes improves the performance especially when a limited amount of training data is available next to explicitly encoding a specific circular relationship of the classes the paper also investigates variants of the approach where the relative weights of the classes are learned as part of the training process the approach is validated with almost 200 ct scans the experiments with learning the optimal encoding are somewhat limited only a single experiment was performed results for different training set sizes and different values for the parameter s would have been valuable docsepthe paper addresses a common challenge faced when training models with labels which are coarse and could benefit from domain knowledge to more correctly define them the method seems overly complicated in its presentation the evaluations could include more common baselines to justify the complication of this method the evaluations are not very convincing ### Summary:
this is a borderline paper while the underlying idea is good and relevant the authors dont do a very good job of selling it their experiments are performed on a very specific task with limited clinical relevance the reviewers had a number of questions regarding experimental setup which were largely answered in the rebuttal
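for readers unfamiliar with the soft ordinal labels discussed in the reviews above, here is a minimal sketch of how sord-style soft targets are typically constructed: probability mass is spread over classes in proportion to how close their rank is to the true rank; the distance function phi, the phase names and the circular ordering are illustrative placeholders, not the exact choices made in the paper

```python
import numpy as np

def sord_soft_labels(true_rank, class_ranks, phi=lambda a, b: (a - b) ** 2):
    """Soft ordinal targets: classes whose rank is close to the true rank
    receive more probability mass than distant ones (softmax of -phi)."""
    dists = np.array([phi(true_rank, r) for r in class_ranks], dtype=float)
    logits = -dists
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

# toy example: 4 contrast phases arranged on a circle (circular ordinal relationship);
# the ranks below are hypothetical and only meant to illustrate the mechanism
phases = {"non-contrast": 0, "arterial": 1, "venous": 2, "delayed": 3}
circular_phi = lambda a, b: min(abs(a - b), 4 - abs(a - b)) ** 2
print(sord_soft_labels(phases["arterial"], list(phases.values()), circular_phi))
```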
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper discusses recurrent networks with an update rule of the form ht1 rx r ht where rx is an embedding of the input x into the space of orthogonal or unitary matrices and r is a shared orthogonal or unitary matrix while this is an interesting model it is by no means a new model the idea of using matrices to represent input objects and multiplication to update state is often used in the embeddingknowledgebases or embeddinglogic literature eg using matrices to model symbolic relationships by ilya sutskever and geoffrey hinton or holographic embeddings of knowledge graphs by maximillian nickel et al i dont think the experiments or analysis in this work add much to our understanding of it in particular the experiments are especially weak consisting only of a very simplified version of the copy task which is already very much a toy i know several people who have played with this model in the setting of language modeling and as the other reviewer notes the inability of the model to forget is an actual annoyance i think it is incumbent on the authors to show how this model can be really useful on a nontrivial task as it is we should not accept this paper some questions is there any reason to use the shared r instead of absorbing it into all the rx can you find any nice ways of using the fact that the model is linear in h or linear in rx docsepthis is a nice proposal and could lead to more efficient training of recurrent nets i would really love to see a bit more experimental evidence i asked a few questions already but didnt get any answer so far here are a few other questionsconcerns i have is the resulting model still a universal approximator providing large enough hidden dimensions and number of layers more generally can one compare the expressiveness of the model with the equivalent model without the orthogonal matrices with the same number of parameters for instance the experiments are a bit disappointing as the number of distinct inputoutput sequences were in fact very small and as noted by the authr training becomes unstable i didnt understand what success meant in this case the authors point that the experiment section need to be expanded but as far as i can tell they still havent unfortunately docsepmy main objection with this work is that it operates under a hypothesis that is becoming more and more popular in the literature that all we need is to have gradients flow in order to solve long term dependency problems the usual approach is then to enforce orthogonal matrices which in absence of the nonlinearity results in unitary jacobians hence the gradients do not vanish and do not explode however this hypothesis is taken for granted and we dont know it is true yet and instead of synthetic data we do not have any empirical evidence that is strong enough to convince us the hypothesis is true my own issues with this way of thinking is a what about representational power restricting to orthogonal matrices it means we can not represent the same family of functions as before eg we cant have complex attractors and so forth if we run the model forward without any inputs you can only get those if you have eigenvalues larger than 1 it also becomes really hard to deal with noise since you attempt to preserve every detail of the input or rather every part of the input affects the output ideally you would want to preserve only what you need for the task given limited capacity but 
you cant learn to do that my issue is that everyone is focused on solving this preserved issue without worrying of the sideeffects i would like one of these papers going for jacobians having eigenvalues of 1 show this helps in realistic scenarios on complex datasets ### Summary:
paper has an interesting idea but isnt quite justified as pointed out by r2 very minimal experiments are presented in the paper pros interesting idea cons insufficient experiments with no real world problems no rebuttal either
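to make the update rule discussed in the review above concrete, here is a minimal numpy sketch of a state update of the form h_{t+1} = R(x_t) R h_t in which every factor is kept exactly orthogonal by exponentiating a skew-symmetric matrix; the linear embedding of inputs into skew-symmetric parameters is a placeholder for illustration, not the construction used in the paper, and the same norm-preservation holds whether or not the shared R is absorbed into R(x)

```python
import numpy as np
from scipy.linalg import expm

def orthogonal_from_params(a):
    """Map an unconstrained square matrix to an orthogonal one via the
    matrix exponential of its skew-symmetric part."""
    skew = a - a.T
    return expm(skew)

d = 4
rng = np.random.default_rng(0)
R_shared = orthogonal_from_params(rng.normal(size=(d, d)))   # shared transition R
W_embed = rng.normal(size=(d, d, d))                          # toy input -> skew parameters

def step(h, x):
    # R(x): input-dependent orthogonal matrix, here from a linear map of x (illustrative)
    R_x = orthogonal_from_params(np.tensordot(W_embed, x, axes=([2], [0])))
    return R_x @ R_shared @ h

h = np.ones(d) / np.sqrt(d)
for _ in range(1000):
    h = step(h, rng.normal(size=d))
print(np.linalg.norm(h))   # stays numerically at 1.0: the state norm is never forgotten
```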
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: in the era of deep learning pretrained models have been regarded as intellectual properties of ai companies thus protecting these models has been more and more important to achieve this aim this paper proposes a nontransferable learning ntl method to capture the exclusive data representation in the learned model and restrict the model generalization ability to certain domains this approach provides effective solutions to both model verification and authorization specifically 1 for ownership verification watermarking techniques are commonly used but are often vulnerable to sophisticated watermark removal methods by comparison the ntlbased ownership verification provides robust resistance to stateoftheart watermark removal methods as shown in extensive experiments with 6 removal approaches over the digits cifar10 stl10 and visda datasets 2 for usage authorization prior solutions focus on authorizing specific users to access the model but authorized users can still apply the model to any data without restriction the ntlbased authorization approach instead provides a datacentric protection which is called applicability authorization by significantly degrading the performance of the model on unauthorized data in general this paper contributes a novel method to the field and experiments verified the success of the proposed method pros the research direction is promising and important in the real world nowadays ai companies will train their own deep models with abundant labelled data that costs a lot of resources thus it is a good timing to research how to protect these models which have become very important and practical this paper proposed a method that can be effective solutions to both model verification and authorization which is general and is promising to be applied in other applications this paper is easy to follow experiments are enough to support the claims made in this paper a plus should be that experiments are conducted with 6 removal approaches over the digits cifar10 stl10 and visda datasets cons the presentation should be improved the first paragraph in intro is too long it is better to divided it into several paragraphs to better demonstrate the key points of this paper i am not sure if it is necessary to list the contributions in the introduction such contributions have been described clearly in intro and abs it seems that you do not need to restate them key related works are missing for an ai company they need to be aware of many adversarial attacks such as reprogramming attacks modelinversion attacks these works are also related to ip protection of deep learning it would be better to conclude these attacks as related works as well some discussions should be also added for general readers of iclr some notations should be changed for example we will not use x or y to present distributions instead we will use them to represent random variables it is better to use spx to represent the distribution corresponding to a random variable x it is unnecessary to use gmmd you can use mmdpq k where k is a gaussian kernel you can follow the notations from recent deep kernel mmd papers how many times do you repeat your experiments i did not see error barstd values of your methods this should be provided to verify that the experimental results are stable if we consider to add bandwidth to your kernel function how does the kernel bandwidth affect your results in general considering 
the significance of the researched problem this paper can be accepted by the iclr2022 however some points should be clarified and strengthened in the revision i would like to strongly support this paper if my concerns can be fully addressed docsepthis paper introduces the idea of nontransferable learning which is roughly what the name indicates the authors explain the value of this as a securityip protection tool to protect the model from being used on unauthorized data in addition this presents a kind of attack against domain adaption works that try to improve generalization bounds without access to source data basically the authors design a clever technique for learning nuisancedependent representations such a representation can be made to perform accurately for a particular source domain but poorly for another target domain furthermore the authors design a gan type technique for generating samples outside the source domain to serve as a kind of generic target domain this is obviously important as one cannot know to which target domain the model would be later adapted to this is a very interesting paper although i have to say im not an expert in this topic at all most of the paper is really nicely written and is pretty easy to follow the experimental verification is clear and detailed but mostly limited to small images so its hard to say how it actually performs in some reallife scenarios couple questions come to mind can you imagine uses of this to other kinds of models eg language models or is this mainly meaningful for image data it sounds like an ntl representation by nature is highly vulnerable to training data privacy attacks like membership inference have you considered if one could use the ntl representation to particularly efficiently generate samples from something close to the training data distribution nontransferable learning is an interesting idea to explore and this is the first step in that direction i can imagine that there will be a lot of followup ideas both for attacking this as well as improving upon it i would definitely recommend accepting this for iclr docsepprotecting the intellectual property of the trained models has received appealing attentions existing researches to protect intellectual property fall into two major categories ownership verification and usage authorization to this end the authors propose to utilize nontransferable learning to achieve both the goal of ownership verification and usage authorization extensive experiments on several representative datasets validate the effectiveness of the proposed method in terms of ownership verification generally this paper proposes a novel idea to address a practical problem in realworld applications which could inspire many readers to follow it and have an important influence on the community of computer vision i support the acceptance of this paper for a better iclr conference this paper could be significantly improved via addressing the following issues 1 in table 1 what is the number of training epoches when transfering mt to mm did you try to increase the epochs of finetuning if you train for enough epochs the model would eventually reach the original accuracy the sensitivity analysis regarding the epoches of your finetuning is necessary when compared to training from scratch and the transfer learning from the original model to the target task 2 the training complexity of using your ntl approach and the gan training should be introduced in this paper the computing time of the mmds during each time step 
is at least twice your training time 3 the propsoed methodology is well presented however the differences between the proposed model and realted sota works should be presented clearly 4 comparing table 2 and table 3 it can be seen that sometimes the sourceonly method shows greater performance compared to the targetspecific method the reasons why would this happen are interesting since providing the targetdomain target should be more accurate when removing some part in the generalization space however the experiments seem does not agree with it 5 a future research section should be added in the revision this paper proposes an interesting question and gives the corresponding solution i recommend the acceptance of this paper ### Summary:
the paper addresses two important aspects of deep learning models namely transferability and authorization for use it presents original solutions for both of these problems all of the reviewers agree that the paper is a valuable contribution minor concerns and critical remarks have been addressed by the authors during the discussion
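one of the reviews above suggests writing the distribution-matching term simply as mmd(p, q; k) with a gaussian kernel k instead of a bespoke gmmd notation; a minimal sketch of the standard biased mmd^2 estimator with a gaussian kernel is given below, with the bandwidth sigma left as the free hyperparameter whose sensitivity the reviewer asks about (the feature dimensions and batch sizes are arbitrary toy values)

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) evaluated for all pairs."""
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of MMD^2(P, Q) from samples x ~ P and y ~ Q."""
    k_xx = gaussian_kernel(x, x, sigma).mean()
    k_yy = gaussian_kernel(y, y, sigma).mean()
    k_xy = gaussian_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2 * k_xy

# toy usage: feature batches from an authorized (source) and an unauthorized (target) domain
src = torch.randn(64, 128)
tgt = torch.randn(64, 128) + 0.5
print(mmd2(src, tgt, sigma=4.0))
```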
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: after looking over authors responses ive decided to increase my rating of the paper the main concern i original had was sufficiently motivating the need for this specific dataset compared to existing alternatives like ace the authors in the comments below have articulated qualitatively how ace is insufficient and demonstrated with experiments that generalization from ace pretraining to this new dataset is poor edit the authors present a corpus of news articles about protest events 10k articles are annotated with document level labels sentencelevel labels and tokenlevel labels coarsegrained labels are protestnot and finegrained labels are things such as triggersplacestimespeopleetc 800 articles are protest articles this is very detailed work i think the resource will be useful the biggest question here is if my focus is to work on protest event extraction what am i gaining by using this corpus vs existing eventannotated corpora eg ace that arent necessarily specific to protest events id like to see experiments of models run on ace evaluated against this corpus an analysis to see where the mistakes are coming from and whether these mistakes are made by those models when trained on this new corpus below this are specific questionsconcerns annotation just a minor clarification question for the tokenlevel annotations how did you represent multitoken spans annotated with the same label for example in stonepelting did you indicate stone and pelting tokens with their own labels or did you somehow additionally indicate that stonepelting is one cohesive unit section 4 mild nitpick can you split the 3 annotation instruction sections into subsections w headings for easier navigation section 6 it says your classifier restricts to the first 256 tokens in the document but your classifier is modified to a maximum of 128 tokens can explain this why is the token extraction evaluation only for the trigger regarding the statement around these numbers illustrate that the assumption of a news article contain a single event is mistaken it was mentioned earlier that this assumption is being made can be more clear which datasets make this assumption can also explain how your limit to 128 or 256 tokens doesdoesnt make sense given multiple events occur per article docsepthis paper provides a detailed guideline for annotating sociopolitical corpus the detailed annotation of documents can be time consuming and expensive the author in the paper proposed a pipelining framework to start annotations from higher levels and get to more detailed annotation if they exist along with their framework they have provided the dataset of annotated documents sentences and tokens showing if the protestrelated language exists or not the author also outlines the baseline line of the transformer architecture regarding the document level and sentence level classifications the paper describes the details very clearly the language is easy to follow so to list the pros will be as follows introduction of a new framework for annotating political documents annotating a large scale corpus they baseline results although they have provided the baseline results on the document and sentence level classifications they have not provided the results of them over the token level task it would have been interesting to see if those results are also promising the author has mentioned that they have three levels of annotations document sentence 
and token to save time and not spent time on detailed annotations of negative labels can they examine how many samples are labeled negative and how much time in percent and money it reduced for annotations some minor comments in page 2 i think result should change to resulted in sentence below moreover the assumptions made in delivering a result dataset are not examined in diverse settings on page 3 who want to use this resources who want to use these resources in page 4 we design our data collection and annotation and tool development we design our data collection annotation and tool development page 6 as it was mentioned above as it is mentioned above you are 1 page over limit but there are some repetition in annotation manual especially when talking about arguments of an event you can just say as mentioned above the author has mentioned that they have three level of annotations document sentence and token to save time and not spent time on detailed annotations of negative labels can they examine how many samples are labeled negative and how much time in percent and money it reduced for annotationsdocsepthe paper describes a corpus of news articles annotated for protest events overall this is an interesting corpus with a lot of potential for reuse however the paper needs some clarifications a key contribution of the paper is that the initial candidate document retrieval is not based purely on keyword matching but rather uses a random sampling and active learning based approach to find relevant documents this is motivated by the incompleteness of dictionaries for protest events while this might be true it would have been good to see an evaluation of this assumption with the current data it is a bit unclear in the paper but were the k and al methods run over the same dataset what are the datasets for which the document relevance precision recall are reported on page 8 i would also like to see a more detailed comparison with more generalpurpose event extraction methods is there a reason why methodologies such as 1 and 2 cannot be reapplied for protest event extraction a small formatting issue the subsections on page 8 need newline breaks in between 1 pustejovsky james et al temporal and event information in natural language text language resources and evaluation 3923 2005 123164 2 inel oana and lora aroyo validation methodology for expertannotated datasets event annotation case study 2nd conference on language data and knowledge ldk 2019 schloss dagstuhlleibnizzentrum fuer informatik 2019 edit thank you for addressing the issues i raised i have changed the review to accept ### Summary:
the paper presents a corpus of 10k news articles about protest events with document level labels sentencelevel labels and tokenlevel labels coarsegrained labels indicate protest or not and finegrained labels are things such as triggers places times people etc all reviewers agree that this paper is interesting and the contributed resource will be useful for the community hence we propose acceptance there were some concerns that the authors fully addressed in their response updating their paper we recommend the authors take the remaining suggestions into account when preparing the final version
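regarding the reviewer question above about how multi-token spans such as stone pelting are represented at the token level, one common convention (not necessarily the one used by the authors) is bio tagging, where the first token of a span is marked b- and continuation tokens i-, which keeps the span a single cohesive unit; the sentence, span labels and the participant tag below are purely illustrative

```python
tokens = ["protesters", "clashed", "with", "police", "during", "stone", "pelting"]
labels = ["O", "B-trigger", "O", "B-participant", "O", "B-trigger", "I-trigger"]
# "stone pelting" forms one trigger span because "pelting" carries the I- continuation tag,
# while "clashed" is a separate single-token trigger
for tok, lab in zip(tokens, labels):
    print(f"{tok}\t{lab}")
```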
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper provides a benchmark dataset that can be used for training evaluation of machine learning models their contribution is that they have a large collection of 12868 sets of natural images across 410 classes obtained from visual genome data and its metadata for annotations this helps in accounting for large natural shifts in the data they also provide a way to measure distance between two subsets to quantize the distribution shift between any two of its data sets the paper provides a good justification for the need of such a dataset for computer vision tasks and motivate the idea well it also talks in detail about the steps taken to generate metashift from visual genome it also provides a generalization of their 4step process of dataset creation on any dataset with multilabel results presented for coco dataset the paper further discusses the use of this dataset for two major cases evaluating distribution shifts assessing training conflicts they provide the impact of shift distance on the domain generalization by keeping test set same and varying training subsets randomly further it talks about subpopulation shifts where the train and test distribution of same domain with different mixture weights they show that no algorithm consistently performs better than other algorithms for larger shifts it provides a detailed understanding of training conflict by analyzing the contribution of each training subset to the change of the validation loss of each validation set during the training process overall its a well written paper about the motivation use cases applicability and generalizability of their proposed data set strengths paper has a strong motivation for building the dataset the authors also present a detailed understanding of major applications of their dataset across ml models it provides a good quantization for the population shift and show with experiments how it impacts domain generalizability the generalizability of the dataset creation for any dataset with multilabel is another strong point of the paper weakness the paper only talks about all advantages for metashift thats derived from visual genome data but dont have any similar comparison or quality analysis for other datasets generated using their 4step dataset creation for example analysis could have been provided on coco dataset too the paper also doesnt address the dependency of dataset performance on metadata what if there are inconsistencies in the metadata of dataset but images are perfect how will the dataset creation and performance be impacted for a dataset with such metadata it would also have been interesting to see the analysis of performance of certain model around underrepresented subsets vs over represented subset setcharacteristics of datum leading to training conflict in general if there is any pattern in images other obvious underlying data bias issue has already been acknowledged in the paper too and i hope the authors will research more into solving it i feel the authors have done a good job highlighting the motivation of such a dataset steps of creation of the dataset from visual genome paying attention to the generalizability of the approach discussing the major applications of the dataset in detail the github link also provides holistic understanding of the work this dataset will help the research in the field of cv once they overcome and address the current weaknesses of the dataset it 
will become even better dataset asset docsepthis work proposes a collection called metashift to study the impact of dataset distribution the major advantage of metashift is that it provides annotationinformation to measure the amount of distribution shift between any two of its data sets in the experiment this work constructs two applications 1 evaluating distribution shifts assessing training conflicts strengths good idea to study the impact of distribution shift this paper is well written and easy to understand section 3 gives a good introduction to the stepbystep construction of metashift the main advantage of metashift is it contains systematic annotation about the differences between different shifts this further helps to study the effects of distribution shifts eg subpopulation shifts the idea of generating a large number of realworld distribution shifts that are wellannotated and controlled is attractive the proposed metashift is well illustrated and the figures are helpful to know the information of metashift section 43 is interesting and the results give some insights weaknesses the major concern is about the experimental evaluation where the constructed tasks are only binary classification in section 41 this work constructs four binary classification tasks to study the impact of the shift under the generalization setting one question is how about constructing more challenging tasks which involve more classes when evaluating subpopulation shifts the tasks are also binary which contain spurious correlation the same question is multiclass classification tasks might be needed metashift is a collection of 12868 sets of natural images from 410 classes why do the experiments only focus on binary classification eg cat vs dog bus vs truck in another word it seems the same settings in sec 4 can be constructed based on other classes it would be helpful if this work could discuss this again more challenging multiclassification setting would be very useful this work introduces a good way to study the effects of distribution shifts specifically this work proposes a framework called metashift which contains systematic annotation about the differences between different shifts however the major concern is about the constructed tasks in the experiment more explanationdiscussion can be included to eliminate the question docsepin this paper the authors introduce a new dataset actully a collection of datasets called metashift metashift is built on top of visual genome and leverages its metadata to cluster images thus providing a context for each image labels are of the form classcontext eg cat in grass dog in bathroom this context is then used to generate dataset shifts besides been much larger than similar openly available datasets metashift explicitly provides the context which can be used to compute a distance score pf distribution shift between any two datasets pros the paper is well written and easy to follow the proposed approach to leverage metadata form previously published largescale dataset to create datasets of domain shifts is simple and well motivated splitting a large dataset with multiple labels in a meaningful ways to study dataset shift is not trivial however the authors came up with an intuitive and relatively simple approach the problem of studying shifts in dataset distribution is very relevant and important to machine learning this dataset can benefit the community by allowing a more system evaluation of dataset shifts cons it would be nice to have more descriptions of the 
methods used to benchmark the dataset erm irm dro coral and cdann architectures used which model pretrained or from scratch what is the model capacity and training details which loss which optimization learning rate batch size etc if not enough space on the main text these informations could be added on appendix it would be nice if the authors would give more detail on how the embeddings of metagraphs are computed for example what is the matrix a and how are the embeddings computed why using spectral embeddings specifically rather than other approaches eg using the word embedding pretrained on large language corpus of each context it could be nice to show some more quantitative or qualitative eg tsne for the metagraph embeddings used to compute the shift between datasets the paper states it generates 12k datasets across 410 classes however experiments are done only on a very tiny number of datasets catdog bustruck elephanthorse and bowlcup the proposed dataset would be much more useful to the community if the authors would provide a much larger subset of premade datasets for easy experimentation i am inclined to accept this paper because of i the simplicity of the approach to generate the datasets and ii the usefulness to the community however i would be more confident with acceptance if the authors would address the weaknesses of the paper se above ### Summary:
this work studies the impact of distribution shift via a collection of datasets called metashift reviewers all agreed that this work is simple effective and wellmotivated and has key implications and will be quite useful to the community there were some concerns about the lack of analysis of metashift and the binary classification setting which were addressed by the authors responses thus i recommend acceptance
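on the reviewer question above about how the metagraph spectral embeddings and the shift distance are computed, a minimal sketch of one standard recipe (leading eigenvectors of a normalized adjacency, followed by a cosine distance between subset embeddings) is shown below; this is an assumption about the general approach, not the authors exact matrix a or implementation, and the toy metagraph weights are invented for illustration

```python
import numpy as np

def spectral_embedding(adj, dim=2):
    """Embed graph nodes using the leading eigenvectors of the
    symmetrically normalized adjacency matrix."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    norm_adj = d_inv_sqrt @ adj @ d_inv_sqrt
    vals, vecs = np.linalg.eigh(norm_adj)
    return vecs[:, -dim:]            # top-dim eigenvectors as node embeddings

def shift_distance(emb_a, emb_b):
    """Cosine distance between two subset embeddings (larger = bigger shift)."""
    cos = emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b) + 1e-12)
    return 1.0 - cos

# toy metagraph over 5 subsets (e.g. cat(grass), cat(sofa), ...), weights = image overlap
adj = np.array([[0, 3, 1, 0, 0],
                [3, 0, 2, 1, 0],
                [1, 2, 0, 4, 1],
                [0, 1, 4, 0, 2],
                [0, 0, 1, 2, 0]], dtype=float)
emb = spectral_embedding(adj, dim=2)
print(shift_distance(emb[0], emb[3]))
```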
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary the paper is presenting a learning algorithm for credal sum product network cspn a type of graphical model that is tractable easy to compute partition function and can encode uncertainty in the network parameters instead of fixed weights the network parameters have range of values or more generally are defined using a set of convex constraints between them prior work maua et al 2017 introduced cspns and provided an inference algorithm and this paper is the first to propose a learning algorithm for cspns pros first paper to introduce a weight learning algorithm for cspns evaluation shows better results than sum product network spns cons evaluation is limited in two aspects baselines and tasks 1 baselines the only baseline considered is spns which is a reasonable but old baseline it would be good to see how well cspn learning works compared to more recent models especially that even cspns inference evaluation maua et al 2017 was similarly limited 2 tasks evaluation avoided large datasts it excluded the largest of the subtasks footnote page 21 and evaluating on large scale textual data is left for future work even though the motivation for spn was that its inference is tractable and fast the proposed learning algorithm for cspns seems to be 10x slower than that of spn and didnt scale to large datasets notes the paper mentioned that cspn avoids the closedworld assumption and can work with incomplete examples i agree with the second but not the first the proposed learning algorithm takes into account that some instances have unknown values but it is still assuming that the world only contains the provided list of instances closedworld assumption the paper use of the term lifting seems different from how it is used in broeck et al 2011 doing inference at the firstorder level without grounding to predicate logic this needs to be clarified docsepin this paper the authors investigate probabilistic representations for learning from incomplete data and specifically investigate credal sum product networks cspn are better able to consider data incompleteness which is an important aspect of knowledge bases the authors perform experiments on a large number of datasets with varying amounts of artificially missing data observing that the optimized log liklihood computed on a learned cspn generally performed the best the paper is generally well written and does a good job of explaining the underlying models and algorithms the paper is not particularly novel but contains a large number of experiments that could be useful to those interested in probabilistic models in regimes with missing data other comments table 4 is a bit busy there could be a clearer way of presenting and highlighting the relevant results section 42 has an occurrence of a cpsn typeodocsepthe paper revisits credal spns and proposed a learning approach for credal spns in the presence of missing information that is now the weights on sum nodes vary in closed and convex set and in turn one gets a imprecise probability model overall the paper is well written and structured the main technical contribution are 1 a groupwise independence test and 2 clustering method both for the credal setting assuming missing data specifically the independence test is a directly application of complete case analysis plus interpreting missing values as contribution to the base population for the clustering thee authors should argue why not 
existing methods for clustering with incomplete data could be use in any case the likelihood approach presented also follow the same logic as the independence test in both cases the arguments are a little bit hand waving and fluffy for instance it is not clear to me what 2is that value that is poorest fit2 page 6 still the clustering is interesting although as said a discussion of related work on clustering incomplete data is missing the empirical evaluation is interesting and follows the standard protocol for spn what i am missing is a repeated argument of why cspns are important furthermore the running time should be reported also the authors should provide some insights into the structures learned also in comparison to the complete data case and the even to the standard spn setting furthermore it might be interesting to use random credal spns based on random spns peharz et al uai 2019 as a baseline to illustrate the benefit of structure learning currently the results just show likelihood but shouldnt we also consider here the number of parameters at least getting some numbers here would be appreciated also sincce you consider the cll one should also show a discriminatively learned spn general the experimental protocol should be described at sufficient details what were the hyperparameters was this crossvalidated to summarize nice direction with follows the standard approach for learning spn for learning cspn with using ideas from data imputation the empirical evaluation is not well presented in the main part some missing related work on the clustering with incomplete data ### Summary:
this paper develops the first structure learning algorithm for credal spns the paper is somewhat difficult to evaluate since the credal paradigm is so different from the usual maximum likelihood paradigm which makes a direct empirical comparison challenging by providing more detailed information about the uncertainty the credal approach certainly has some merit and while upgrading some spn structure learning heuristics to the credal setting may not be technically challenging they are done for the first time in this paper on the other hand the reviewers did find many ways in which the paper can be improved overall we recommend acceptance the authors are encouraged to improve the paper as suggested by the reviewers
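for readers less familiar with the credal setting discussed above, the core object can be written compactly: a sum node in an spn computes a convex combination of its children, and a credal spn replaces the single weight vector by a closed convex set of weight vectors, so inference returns lower and upper likelihoods rather than a point value; the notation below is a sketch assumed for illustration, not copied from the paper

```latex
% ordinary sum node: a fixed weight vector on the probability simplex \Delta
S(x) = \sum_{i} w_i \, C_i(x), \qquad w \in \Delta

% credal sum node: weights range over a closed convex (credal) set K \subseteq \Delta,
% yielding lower and upper likelihoods
\underline{S}(x) = \min_{w \in K} \sum_i w_i \, C_i(x), \qquad
\overline{S}(x) = \max_{w \in K} \sum_i w_i \, C_i(x)
```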
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the topic addressed is interesting the proposed approach is well described and extensively compared against several baselines also an ablation study is performed the approach itself integrates wellknow techniques the difference with approaches in the baselines are not discussed to highlight the novelty of the approach the approach combines a number of wellknow techniques to come up with an approach for finegrained crossmodal retrieval in spite of addressing a relevant problem the novelty of the approach is not sufficiently highlighted in the paper experimentation was done with two standard datasets and the performance of the approach was compared with several baselines the approach outperformed the baselines in most cases however a detailed analysis of those cases in which is not the best performer would have enrich the discussion also the differences of the baselines with the proposed approach need to be explained docsepthe most prominent advantage lies in the consideration of both similarity distance and direction for similarity representation and learning the proposed knowledge graph iterative propagation algorithm is used to explore finegrained modal representation and the similarity is improved by constructing and adaptively reconstructing the similarity diagram the ablation studies and analysis is clear and thorough compared with other methods the advantages of the experimental results are reflected in the multiperspective strategy while the proposed kgid and rgr do not show advantages although the experiment in this paper contains many comparison methods the two data sets are still a little insufficient formula 18 has a clerical error docsepthis paper contributes new ideas and this idea seems works according to authors experiments i think proposed finegrained matching will benefit crossmodal retrieval i am not an expert on this topic and educationally guess that the technique in this paper is ok the experiments in this paper are sufficient quantitative results on flickr30k and mscoco are reported ablation studies disclose the effect of the two modules this paper is well written i can easily follow section 3 consists of 5 sub sections and each subsection is comprised by server points it is hard to understand so many technique points and the title of each subsection is not very friend for readers to understand the relationship of each points i think that section 3 should be reorganized ### Summary:
meta review this paper develops an approach for finegrained matching with multiperspective similarity modeling for crossmodal retrieval it contains two main novel modules one is the knowledge graph iterative dissemination kgid module for iteratively broadcasting global semantic knowledge and learning finegrained modality representations the relation graph reconstruction rgr module is developed to enhance crossmodal correspondence by adaptively reconstructing similarity relation graphs the proposed model is well motivated and novel results also show that the model outperforms state of the art models overall this paper is a nice paper that the uai audience will be interested to hear about
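To make the "similarity distance and direction" idea mentioned in the reviews above concrete, here is a toy sketch that computes both a cosine (direction) and a negative-euclidean (distance) similarity matrix between image and text embeddings. It is not the paper's kgid/rgr model; all names and shapes are placeholders for illustration.

```python
# A toy sketch, not the paper's model: two complementary similarity "perspectives"
# between image and text embeddings -- direction (cosine) and distance (negative L2).
import torch
import torch.nn.functional as F

def multi_perspective_similarity(img_emb, txt_emb):
    """img_emb: (n, d), txt_emb: (m, d) -> two (n, m) similarity matrices."""
    direction = F.normalize(img_emb, dim=-1) @ F.normalize(txt_emb, dim=-1).T  # cosine similarity
    distance = -torch.cdist(img_emb, txt_emb)       # negative euclidean distance, larger = closer
    return direction, distance
```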
Below is given review of a research paper from conference journal. Please write a summary of the review. ### Review: this paper presents a method for enforcing strict orthogonality of convolutional layers by means of a factorization in the spectral domain it shows that the technique can be extended to dealing with practical convolutions with strides dilations and groups the experiments demonstrated the superior performance in terms of adversarial robustness orthogonality is an important problem in the design of neural network architecture that relates to many fundamental properties of the network such as trainability generalizability and robustness while the study of orthogonality in fully connected layers or convolution layers with 4d kernels treated as 2d matrices has a long history it is only very recently in the past 2-3 years that work on orthogonality of convolution layers has emerged this paper provides a solid study in this area by providing a method of enforcing orthogonality in convolutions revealing its technical connections with previous methods designing deep residual lipschitz network architectures and conducting solid experiments i find the presentation to be mostly clear and easy to follow though i feel that there is a tendency of overclaiming the contribution in the abstract intro see below complete parameterization of orthogonal convolution the paper claims that it offers a complete parameterization of orthogonal convolution but this is not really the case as stated in sec 2 it only offers a complete design for separable orthogonal 2d convolutions this puts the technique in an unfavorable position compared to previous methods that do not require separability eg trockman kolter orthogonal networks the paper frequently uses the phrase orthogonal networks but it is not clear what that term entails for example it is claimed that our versatile framework for the first time enables the study of architecture designs for deep orthogonal networks which seems an overclaim since orthogonality in neural networks has already been extensively studied before in addition if orthogonal network means that the entire network is an orthogonal transformation then this is kind of a useless network since orthogonality implies linearity as long as it is surjective if it means approximately orthogonal then it should consider in addition to the convolutional layers the effect of the nonlinear layers right now there is no discussion of whether the groupsort that is used as the nonlinear layer is approximately orthogonal or not solid work on orthogonality of convolution though there seem to be some overclaiming and imprecise statements in the introabstract that may be misleading docsepthis paper proposed a theoretical framework for orthogonal convolutional layers based on the equivalency of orthogonal convolution in the spatial domain and the paraunitary systems in the spectral domain the proposed method parametrizes the orthogonal convolution layer as compositions of multiple convolutions in the spatial domain resulting in exact orthogonality the layers are also more memory and computationally efficient than most previous methods strengths the proposed method is theoretically grounded and relatively efficient in computations the analysis of strided dilated convolution layers is inspiring numerical evidence on orthogonality evaluation of different designs for standard convolution shows that exact orthogonality is achieved weaknesses it is nice to see that exact orthogonality is achieved however it remains
unclear whether exact orthogonality is actually helpful or needed for example from table 2 the proposed scfac achieves the most accurate orthogonality but results in worse performance in both certified and practical robustness even though the authors claim that the method achieves comparable results with other baseline methods the results are consistently worse than the baselines the authors can compare their core idea with related work that is more heuristic such as 1 which also considers achieving orthogonality in the spectral domain as well as 2 and 3 1 liu et al convolutional normalization improving deep convolutional network robustness and training 2 wang et al orthogonal convolutional neural networks 3 bansal et al can we gain more from orthogonality regularizations in training deep cnns even though the method is more computationally efficient it is only compared with methods such as cayley which is known to be computationally heavy the method is still much computationally heavier than the original networks it would be nice to have an extra line in figure 4 showing the train time of the ordinary network the proposed method achieves exact orthogonal convolutional layers through reparametrization the method is theoretically grounded and easy to understand numerical proofs are provided to show that exact orthogonality is achieved by composing a sequence of learnable convolutions docsepthis paper suggests a framework for designing orthogonal convolutions using paraunitary systems the proposed framework studies orthogonalization of different convolution operations the proposed methods were examined on several datasets in comparison with stateoftheart orthogonalization methods the paper proposes a nice framework which covers different orthonormalization methods for convolutional kernels in the analyses the proposed framework performs on par with the stateoftheart however there are several issues with the paper in general first some of the claims should be revised since they are not verified in the analyses second experimental analyses should be improved with additional analyses on additional datasets and tasks in comparison with the other stateoftheart methods more detailed comments are given as follows 1 the paper states that however many previous approaches are heuristic and the orthogonality of convolutional layers is not systematically studied however there is a nice literature on orthogonal cnns which explores these cnns from various aspects including generalization and convergence properties of models in various tasks including image recognition speech processing and nlp including adversarial robustness studied in this paper then could you please describe the systematic study referred to and proposed in this paper which is not covered in the literature 2 please explain the statement there are mature methods that represent orthogonal matrices via unconstrained parameters how does this enable optimizing the parameters of orthogonal convolutions using standard optimizers instead of optimizers designed for optimization on orthogonal convolutions 3 there are some issues with the definition and interpretation of orthogonal layers first an orthogonal layer is defined according to preservation of the norm of input and output this can be achieved using different types of parameters of convolutions even with scaled gaussian parameters in addition orthogonal convolutions proposed in the literature satisfy some particular orthogonality properties of matrices of parameters second the orthogonal convolution proposed in this
paper is associated with paraunitary property of the transfer matrix 4 in the experimental analyses in most of the results stateoftheart outperforms the proposed method while in some results the proposed method outperforms them to show a more clear benefit of the proposed method over the stateoftheart could you please perform analyses on additional larger scale datasets such as imagenet could you please also compare the proposed method with the methods which employ orthogonal matrices 5 it is proposed that related works proposing orthogonalization methods do not guarantee lipschitzness after training however the proposed orth conv employ deep lipschitz networks to guarantee lipschitzness if the proposed orth conv does not employ deep lipschitz networks then does it guarantee lipschitzness 6 how do you optimize learnable columnorthogonal matrices 7 while training models how do you estimate and optimize hz hz ortho factors and model params in the code adam is used to optimize parameters however it is not clear how orth factors are also optimized do you also optimize them using adam 8 how do you apply ztransform on input kernel and output for instance if an input nxn image x is convolved with a 7x7 kernel h then how do you apply z transform on x and h that is do you apply path wise or holistically also if x is a feature map of size cxwxh where c is the number of channels w and h are weight and height of the map then how do you apply the ztransform on the map 9 how do you compute ortho factors efficiently 10 how do you calculate model parameters a 11 in the experiments the proposed methods perform similar to the stateoftheart to show superiority of the methods in comparison with the stateoftheart additional analyses on larger scale datasets and models should be provided the proposed framework is nice and the initial results are promising however there are various unclear parts in the paper in addition some of the claims are not verified and experimental analyses are limited therefore the paper should be improved with additional analyses and in detail revision for clear acceptance docsepthis work proposes a new method for orthogonalizing the convolutional layers by exploring the equivalence between spatial orthogonal and spectral paraunitary the work then empirically demonstrates the effectiveness of the proposed methods by comparing 1 the lipschitzness 2 the results of adversarial robustness and 3 the time and memory cost among different methods this work proposes a new method for orthogonalizing the convolutional layers by exploring the equivalence between spatial orthogonal and spectral paraunitary the experiments are conducted on various networks including the shallow kwlarge networks and slightly deeper wideresnet22 although the reviewer does not check the submitted code in detail the code is wellwritten and clearly commented the major concerns are 1 the work seems to have made some overstatement of the contributions claiming that all the previous work are heuristic and the proposed approach is systematic with theoretical justification the reviewer does not quite buy this point and better explanation on this is needed 2 the experimental results are not consistently showing the advantages of the proposed method also the improvement in terms of computational efficiency seems to be marginal below are some more detailed comments 1 the reviewer found the implementation of the proposed method somewhat hard to follow it could be better to incorporate an algorithm view of the method to clearly 
present it 2 in the paper the q matrix is defined as an orthogonal matrix that is randomly initialized and fixed during training but the reviewer didnt find the associated implementation in the code correct me if i missed anything so the reviewer is wondering how the q matrix is constructed in the experiments 3 when demonstrating the results of adversarial robustness the paper devotes to ell2 norm based attacks the reviewer is curious about the results of ellinfty based attacks 4 the reviewer notices that in the code when considering striding the authors include 2 use cases stridewide strideslim the reviewer is curious about the actual definition of the different use cases besides the code of the proposed method mentions that the kernel size should be a multiple of stride in the stridewide case this constraint is bypassed by letting kernelsize kernelsize stride the reviewer would appreciate it if this part is presented in more detail an algorithm for handling different cases would be nice 5 this paper is missing quite a few citations on related work httpsarxivorgabs181009102 httpsarxivorgabs210300673 httpsarxivorgabs191112207 httpsarxivorgabs190511926 please refer and discuss the relationship the paper is wellpresented overall however better positioning of the work and more convincing experimental results are needed ### Summary:
this paper proposes a method for parameterizing orthogonal convolutional layers that derives from paraunitary systems in the spectral domain and performs a comparison with other stateoftheart orthogonalization methods the paper argues that the approach is more computationally efficient than most previous methods and that the exact orthogonality is important to ensure robustness in some applications the reviewers had diverging opinions about the paper with most reviewers appreciating the theoretical grounding and empirical analysis but with some reviewers finding weakness in the clarity reproducibility and discussion of prior work the revisions addressed many but not all of the reviewers criticisms one point that was highlighted in the discussion is that the method is restricted to separable convolutions the authors acknowledged this limitation justifying the expressivity of the method with a comparison to cayleyconv trockman kolter and a suggestion that more expressive parameterizations are not necessarily available in 2d i am not sure this is entirely accurate in the discussion of related work the paper briefly mentions dynamical isometry and the prior work of xiao et al 2018 who develop a method for initializing orthogonal convolutional layers what the current paper fails to recognize is that algorithm 1 of xiao et al 2018 actually provides a method for parameterizing nonseparable 2d convolutions simply represent every orthogonal matrix in that algorithm in a standard way eg via the exponential map while i think there is certainly value in the connection to paraunitarity systems it seems to me that the above approach would yield a simpler and more expressive representation and is at minimum worth discussing overall between the mixed reviewer opinions and their lingering concerns and the existence of relevant prior art that was not discussed in sufficient depth i believe this paper is not quite suitable for publication at this time
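For readers unfamiliar with the "standard way" of parameterizing orthogonal matrices that the discussion above refers to, here is a small sketch of the exponential-map construction, applied as a 1x1 convolution and checked for norm preservation. This illustrates only that generic parameterization, not the paper's paraunitary method; dimensions and names are arbitrary.

```python
# A sketch of the exponential-map parameterization: expm(A - A^T) is orthogonal
# for any square A, so an unconstrained parameter can back an orthogonal 1x1 conv.
import torch

def orthogonal_from_expm(weight):
    """weight: (c, c) unconstrained parameter -> orthogonal (c, c) matrix."""
    skew = weight - weight.T               # skew-symmetric
    return torch.matrix_exp(skew)          # matrix exponential of a skew-symmetric matrix

c = 16
param = torch.randn(c, c, requires_grad=True)
Q = orthogonal_from_expm(param)

x = torch.randn(8, c, 32, 32)
y = torch.einsum("oc,bchw->bohw", Q, x)    # Q applied as a 1x1 "convolution"
# orthogonality of the layer <=> norm preservation of its input
print(torch.allclose(x.flatten(1).norm(dim=1), y.flatten(1).norm(dim=1), atol=1e-3))
```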
Below is given review of a research paper from conference journal. Please write a summary of the review. ### Review: the paper presents a benchmark for contemporary symbolic regression constructing a set of regression problems on which sr methods are tested the authors use a variety of datasets some from pmlb some from other sr databases and use r2 testing for establishing accuracy overall genetic programmingbased models seem to perform best while aifeynman performs best for finding solutions for synthetic problems overall the paper is clearlywritten with extensive datasets for benchmarking the authors do a good job at bringing together sr methods and comparing their performance clearly explaining drawbacks and limitations the work has good contributions despite the limitations and is a good exposition of presented methods the paper presents limitations in the use of realworld data from models as the authors also explain the presentation of the results in the main text is clear yet a more indepth analysis of different datasets and variance across them would be helpful in further understanding the benchmark although a more extensive analysis is presented in the appendix the applications could be strengthened in the exposition as well docsepthe authors introduce an opensource benchmarking platform for symbolic regression and benchmark an array of different methods on over 200 regression problems the benchmarking code and data are readily accessible and appear to be straightforward to use although i have not tried it the benchmarking results reveal that the best performing methods for realworld regression combine genetic algorithms with parameter estimation andor semantic search drivers in the presence of noise the authors find that deep learning and genetic algorithmbased approaches perform similarly the benchmarking code appears to be easy to apply and the data is made available the proposed way of benchmarking allows more detailed comparison of symbolic regression methods than other available benchmarking software the concrete advantage over commercial symbolic regression benchmarking platforms the paper mentions eureqa and wolfram should be stated more clearly it is also not exactly clear to me how the algorithms used by these platforms differ from the proposed solution and why the proposed solution is more straightforward as mentioned in the paper overall i think the paper spends too little time describing the proposed benchmarking method and its advantages the summary of existing algorithms which spans a large part of the paper could be shortened to make room for these explanations the papers focus is placed mainly on the benchmarking results and not on the benchmarking methodology shifting the focus a little could be beneficial to the wider community given this is a dataset benchmark track docsepthe paper is motivated by the need for an accessible reproducible and realistic benchmark for symbolic regression sr research to help the field evaluate new methods agree on the quality of existing ones and reduce reliance on toy problems to that end it contributes srbench a repository of datasets and 14 sr methods constructed to allow integration of future work the paper states that this represents the largest and most comprehensive sr benchmark effort to date relative to previous work the paper also contributes particular attention to the multiple criteria at play in evaluating sr methods because sr involves not just prediction but learning a generative form these criteria include prediction accuracy model
simplicity and whether the method learns the true underlying model for this reason srbench augments the existing penn machine learning benchmark pmlb of datasets without specified ground truth with a second set of 130 synthetic datasets from physics that have known ground truth the paper also uses the benchmark to compare the performance of the 14 sr methods it describes statistically significant differences among the algorithms in terms of both prediction and groundtruth though some algorithms do well for one task and not the other for the blackboxed prediction tasks it also compares sr methods to nonsr machine learning methods and finds some sr methods can produce models that are both better and simpler first a disclaimer i am not an expert in symbolic regression or genetic programming so this review is written from the perspective of a technically literate audience that is interested in but not wellversed in the details of the symbolic regression algorithms presented here as such i cannot vouch for the technical accuracy of the descriptions of the algorithms in section 3 that said i appreciate the organization of section 3 the descriptions clearly reference back to table 1 which includes both sources for each method and links to its code on github clicking for example on the operon row link yields a wellformatted github page with building and usage instructions and example code these would be valuable resources to the expert or to a person looking to get into this area more broadly the paper and srbench represent a resource for the sr community although the benchmark draws on existing datasets and approaches the project of gathering those together into an accessible reproducible standardized framework incorporating 14 sr methods and making the whole thing open to future contributions represents extensive valuable work the experimental results may also be useful both in guiding practitioners in their algorithm choices and as results that other researchers may seek to replicate or extend the introduction and conclusion note that the benchmark provides a large and diverse set of regression problems and the paper repeatedly mentions the need to assess realworld performance but it does not make it clear what kind of variety the datasets cover or what real means here the datasets derived from physics equations are clearly more real than contrived toy examples but are they a good representation of the main real context that sr is applied to the authors note in the conclusion that future improvements might incorporate more realistic applications settings but the paper would be improved if it gave a bit more attention to this point earlier on in particular the introduction gives little sense of what is at stake in a broader sense in the use of sr methods i would recommend including some mentions here of the kinds of areas where sr methods have been or could be applied or perhaps list a few example tasks more specific than physics engineering statistics in what contexts is it advantageous or necessary to learn an analytical model rather than only predict or do sr algorithms sometimes do better at prediction than ml algorithms and are they being adopted in ml applications it would also be helpful to have a summary of what the datasets cover clearly the groundtruth datasets come from physics but what kinds of application areas are the pmlb datasets drawn from does it have any relevant gaps known limitations or conscious omissions this can all be fairly brief but it would help both motivate the 
paper and make the limitations and social implications of the benchmark more specific there is additional detail on the data in the appendix but it remains unclear to me a whether sr is actively being applied say in criminal justice applications the example dataset context mentioned in the appendix and b whether any information warning of potential biases or contextual factors for particular datasets is provided with the datasets ### Summary:
all reviewers support accepting the paper especially after taking the author feedback into account one concern was how the proposed algorithms compare to those implemented in commercial software platforms such as wolfram the authors correctly point out that the closed source nature of these platforms makes a comprehensive comparison difficult a limited comparison could still be a valuable addition to the benchmark so that researchers can compare their algorithms to the commercial stateoftheart in any case the paper is of high quality and i recommend accepting it
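A minimal sketch of the evaluation protocol the reviews above describe: fit each method on each regression problem, score test-set r2, and aggregate across problems (for example by the median). The dataset iterable and the sklearn-style estimator interface are assumptions for illustration; this is not srbench's actual API.

```python
# Illustrative only: per-dataset test R^2 with a median aggregate across problems.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def evaluate_method(method, datasets, seed=0):
    """method: sklearn-style estimator class; datasets: iterable of (X, y) arrays."""
    scores = []
    for X, y in datasets:                                   # placeholder dataset source
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed)
        model = method()
        model.fit(X_tr, y_tr)
        scores.append(r2_score(y_te, model.predict(X_te)))  # test-set accuracy
    return np.median(scores), scores                        # aggregate + per-problem scores
```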
Below is given review of a research paper from conference journal. Please write a summary of the review. ### Review: this paper proposes a submodel optimization technique in federated learning in the presence of diverse data feature distributions across clients heterogeneity in the features of local data leads to distinct update speeds for individual featurerelated parameters degenerating convergence of the model the paper claims that the global objective function of the typical federated learning framework is illconditioned when each client updates and communicates only a subset of the total model ie a submodel this paper handles the problem by compensating the amount of parameter updates based on the feature heat dispersion it also provides formal analysis for the convergence of the proposed method it demonstrates the effectiveness of the proposed method over baselines on three benchmarks rating classification text sentiment classification clickthrough rate prediction strengths the submodel optimization setup where each client updates a subset of the parameters of the central server is realistic and would be a proper extension with clear motivation in realworld applications edge devices are likely to have limited capacity in memory computation and communication bandwidth experimental results are quite compelling figure 3 presents that fedsubavg converges fast and shows a similar tendency to centralized training centralsgd weakness the assumption that the server knows the indexes for the featurerelated parameters of all clients is unrealistic due to the assumption the proposed algorithm is not compatible with deep models eg dnn mlp with multiple nonlinear layers in a similar vein the proposed method needs to calculate data feature dispersion which can bring about privacy concerns as mentioned above the proposed algorithm is incompatible with modern deep learning architectures cnn mlp a more compatible approach should be proposed without using prior knowledge that the feature dispersion is known and featurerelated parameters are predefined the writing is hard to follow and some details eg implementation details were not explained well docsepthe manuscript considers a specific use case of federated learning in a recommender system or nlp scenario and proposes to scale the model update per coordinate through the ratio between the total number of clients and the number of clients who involve this model parameter coordinate some theoretical results are provided to motivate and justify the necessity of design choices strengths the considered scenario is interesting and important to the community the proposed method is simple yet intuitively effective in alleviating the identified limitation the entire manuscript is generally wellstructured and most claims are wellsupported extensive numerical results are provided for some aspects weaknesses 1 the index set si determines the submodel of x sir however for the four considered datasets the exact distribution of features hot vs cold features is still unknown and some specific treatments line 211 line 212 line 226 line 227 may require some justifications such as we label the samples with the ratings of 4 and 5 to be positive and label the rest to be negative will these treatments magnify the large imbalance phenomenon between hot and cold features 2 given the considered challenges and proposed coordinatewise scaling solution one strong competitor should be considered ie using an adaptive optimizer on the server as in 1 3 the clients index set must be sent to the server and a
clusteringbased fl technique can utilize this information naturally the reviewer is also interested in how clusterbased fl approaches perform as these algorithms may have already addressed the issue of hot vs cold features reference 1 adaptive federated optimization post rebuttal the reviewer acknowledged the authors feedback and checked other reviews the response has addressed concerns 2 and partially 3 however the reviewer still believes that the manuscript needs some revisions to polish its text eg make it selfcontained wellstructured precise etc check weaknesses part docsepthe authors consider a specific federated learning scenario where different data features involve different clients for some features a large number of clients can be involved while other features might involve only limited clients the authors show that in this case the classical fedavg can suffer from slow convergence in the proposed new algorithm the aggregations of parameter updates are weighted per parameter by the ratios of the local clients involved the authors prove that by reweighting the parameter updates in this way the condition numbers of the hessian of the learning objectives become smaller than the original hessian in the experiments with four realworld datasets the authors demonstrated that the proposed algorithm fedsubavg offers faster convergence than existing alternatives strengths improving the convergence of federated learning is an important and active area of research the authors contribute a new algorithm that can potentially improve the convergence of the federated averaging algorithm the experiments were conducted on largescale realworld datasets limitations the paper would need a thorough rewriting for example the example illustration of feature heat dispersion in recommender systems at l4550 is difficult to comprehend what does less than 1 of the average mean please formally define involvement l122 the number of clients who involve this model parameter how is this number determined in general the proposed algorithm seems applicable only when submodels are welldefined ie the individual clients do not have to update the full model parameters but instead can download and update only the required small parts submodel of the complete model on the other hand typical local learning steps tend to require simultaneous updates of all model parameters unless the model is linear the authors should provide a detailed discussion as to when such submodels are welldefined and how the index set si is determined in practice can this algorithm be applied to the standard mlps minor comments the proposed algorithm can be considered as diagonal preconditioning on stochastic gradient descent sgd l167 the authors could discuss connections to existing sgd preconditioning methods eg adagrad adagrad preconditions the sgd update based on the magnitudes of individual parameter updates extending adagrad to the federated learning setting can be straightforward please enlarge the plots in figures 3 and 4 the limitations and potential negative societal impact were not discussed docsepthe authors point out the fact that in the context of federated learning clients local data normally involve a small subspace of the full feature space especially in the case of models that contain large sparse embeddings this would mean that each client downloads and updates only a small part of the full global model ie a submodel as some features are more popular than others eg words in a vocabulary some embedding will then be averaged by a 
larger fraction of clients than others the authors then show that this discrepancy called heat dispersion might result in slower convergence of algorithms like fedavg they then propose a new method where each weight essentially has a different learning rate based on how many clients participate in its update they show both analytically and through an evaluation that this method improves the convergence speed the authors address an interesting problem on training fl models with sparse embeddings the fact that not all of them are equally popular the authors conducted a theoretical analysis of their model they compared with 4 baselines some of them designed to speed up convergence on noniid data the results show that their methodology is promising there are some assumptions in the evaluation see below in questions for details it is unclear how well this methodology would work with privacypreserving mechanisms eg local dp noise please see above questions ### Summary:
this paper considers a particular fl scenario where the model includes a large embedding layer as is typical in nlp and recommendation models to make training feasible or more efficient the fedsubavg method is proposed in particular it deals with the setting where not all features are equally encountered in training data this is leveraged to reduce communication and computation overhead and also to improve optimization dynamics the proposed approach comes with theoretical guarantees and the paper also provides a thorough numerical evaluation demonstrating benefits over other approaches the reviewers raised concerns about the relevance and potential narrowness of the setup assumptions and whether the proposed fedsubavg approach would be comparable with privacyenhancing technologies like secure aggregation and differential privacy it is clear that the setup considered is indeed relevant given the prevalence of models with large embedding layers in nlp and recommendation models and the useful of such models in several applications given this it isnt necessary for the authors to demonstrate any relevance to training standard mlps since that isnt the focus and no claims are made in the paper about such architectures the authors responses also were convincing that the approach can be made compatible with dp and secure aggregation in a reasonable way im happy to recommend that this paper be accepted when preparing the camera ready to make the paper accessible to a broader audience it would be helpful to include in the intro or early in the paper additional material and references to motivate the relevance of models with large embedding layers in addition to the key revisions already made in response to the initial reviews
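A rough sketch of the per-coordinate rescaling idea the reviews above describe: aggregate the sparse client updates and scale each coordinate by the ratio of the total number of clients to the number of clients whose submodel contains that coordinate. This is an illustration of the stated idea only, not the exact fedsubavg algorithm; all names are placeholders.

```python
# Illustrative aggregation: a fedavg-style sum divided by the total number of
# clients, then rescaled per coordinate by total_clients / n_i, which amounts to
# dividing each involved coordinate by n_i (the clients that actually hold it).
import numpy as np

def aggregate_submodel_updates(updates, index_sets, dim, total_clients):
    """updates[i]: client i's update restricted to coordinates index_sets[i]."""
    summed = np.zeros(dim)
    counts = np.zeros(dim)                              # n_i per coordinate
    for upd, idx in zip(updates, index_sets):
        summed[idx] += upd
        counts[idx] += 1
    fedavg_style = summed / total_clients               # every coordinate divided by N
    scale = np.divide(total_clients, counts, out=np.ones(dim), where=counts > 0)
    return fedavg_style * scale                         # = summed / n_i on involved coordinates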
Below is given review of a research paper from conference journal. Please write a summary of the review. ### Review: in this paper the authors propose to use contrastive learning for matching in latent space in the wasserstein autoencoder wae in addition they employ techniques such as momentum contrast in contrastive learning experimental results show that the proposed method moca is more stable and converges faster than existing methods it is also capable of generating highresolution images such as celebahq strengths overall this paper is well organized the method proposed in this paper is a reasonable combination of existing stateoftheart methods experimental results show that the proposed method has stability and faster convergence which is promising weaknesses the authors claim that moca can generate images with high quality however the experimental results in this paper do not show this very well first of all although the authors claim that the results in figure 5 and figure 6 look realistic some of the face images seem to be collapsed and the interpolation between the two images seems to be discontinuous since there is no qualitative comparison with existing methods we cannot judge these results as realistic in table 1 the authors show the quantitative comparison results with the existing methods but there are some puzzling points first why does mocaa2 which has fewer parameters have higher performance also why does waegan perform better than moca since table 1 shows that waegan is better than moca shouldnt it be compared with waemmd furthermore why are the quantitative results of celebahq not shown the authors show in their experiments that moca achieves faster convergence than existing methods but they do not fully explain why moca shows such convergence why does the convergence of the proposed method become better when contrastive learning is included in addition figure 1 shows the line graphs of only one training trial for each method and the variance of each method is not shown therefore i cannot judge whether the difference in results between methods is large or small in section 41 the authors compare waemmd with moca as the original wae algorithm but i do not understand why they do not compare it with waegan which is also original i think this should be done because table 1 shows that waegan has better image generation performance than the proposed method for example the authors employ moco but how effective is this in improving the performance of the proposed method the authors do not seem to have verified such a thing minor comments the significant figures of the results of each method in table 1 should be the same section 3 any fixed t any fixed tau in terms of convergence and stability the proposed method is considered to be effective to a certain extent also the idea of using contrastive learning for wae is interesting however the explanation of the claim and the presentation of the results are insufficient docsepthe paper presents a regularization technique for wasserstein autoencoders based on contrastive learning strengths 1 the paper is wellwritten and easy to follow i do think that some of the notation such as push forward from measure theory is really not needed or particularly useful here simpler terminology such as just using encoding and decoding functions would be more than sufficient 2 some of the experiments are interesting and show the effects of the proposed regularization eg on the singular value distribution of the latent representation 3 using a
contrastive approach is a potentially effective way to match the prior and posterior distributions weaknesses it is unclear that the proposed regularizer results in qualitatively better reconstructions than the baselines fid is not a perfect measure and the samples from baselines should be shown sidebyside with the proposed approach to know whether there is indeed an improvement i found the cifar10 reconstruction results are somewhat poor question the value of the lambda parameter is very large what are the relative loss values during trainingconvergence the reconstruction loss vs regularizer loss the paper is interesting but has some shortcomings i would like to see some results of the baselines to decide if the proposed regularizer does indeed improve results qualitatively i do not believe that fid is a proper measure of quality not just for this paper but for measurement of gan sample quality in general i give the paper a slightly positive score based on the idea but i am looking forward to some samples in the rebuttal to decide my final score docsepthis paper proposes a new approach to train wasserstein autoencoders wae with contrastive learning techniques specifically the paper proposes to enforce the marginal matching constraint of wae by exploiting the fact that contrastive learning objectives optimize the latent space distribution to be uniform over the unit hypersphere i notice this is a resubmission from iclr2021 thus some of my comments are based on the differences between two versions strengths 1 the paper is well written and well motivated 2 i think the idea of using contrastive learning to enforce a hypersphere prior for wae is clever and neat 3 the authors provide extensive ablations on hyperparameters weaknesses 1 my main concern is the performance of the proposed method on cifar10 and celeba the interpolation reconstruction and samples in figure 6 are very blurry and hard to justify the benefit of using the proposed approach the reported fid in table 1 and 2 are very high it would be nice to include a comparison of 1 which has fid of 525 and 2408 on celeba and cifar10 respectively also why is the twostage vae baseline in the previous version removed 2 it would be nice to include waegan in figure 1 and 2 since it outperforms the proposed moca in table 1 3 i think it would be interesting to see how to integrate the instance contrastive loss as in dcvae 2 into the proposed moca 1 aneja jyoti et al ncpvae variational autoencoders with noise contrastive priors arxiv preprint arxiv201002917 2020 2 parmar gaurav et al dual contradistinctive generative autoencoder proceedings of the ieeecvf conference on computer vision and pattern recognition 2021 the main idea of the paper is well motivated however i still find the results on image datasets such as cifar10 and celeba hard to justify the superiority of the proposed method over baselines i lean towards weak rejection but am willing to amend my score if my concerns are addressed ### Summary:
this paper presents a variant of the wae which uses a contrastive criterion to enforce the marginal distribution matching constraint experiments show faster convergence in terms of wasserstein distance more visually appealing samples and better fid scores compared with other wae models the original wae framework leaves open the choice of approximation for enforcing marginal distribution matching and the original paper gives two such algorithms therefore its pretty natural to replace this approximation with something else such as the contrastive criterion used here so a submission would need to show evidence that its significantly better than other approaches reviewers have expressed various concerns about the experiments none of them are major problems but overall the method doesnt seem consistently better than other wae methods eg the fid score is worse than that of waegan i encourage the authors to take the reviewers comments into account in preparing the submission for future cycles
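As a hedged illustration of the general recipe discussed above (a contrastive criterion used as the wae marginal-matching regularizer on latent codes), here is a minimal infonce-style sketch. It omits momentum contrast and other details of moca; the encoder, decoder, the two-view inputs and the weight lam are assumptions, not the authors' objective.

```python
# A minimal sketch: reconstruction loss plus an infonce-style contrastive term
# on l2-normalized latent codes of two views of the same batch (illustrative only).
import torch
import torch.nn.functional as F

def contrastive_latent_loss(z1, z2, tau=0.1):
    """z1, z2: (n, d) latent codes of two views of the same inputs."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / tau                               # (n, n) pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)   # matching pairs on the diagonal
    return F.cross_entropy(logits, targets)

def wae_step(x1, x2, encoder, decoder, lam=100.0):
    """x1, x2: two augmented views; encoder/decoder are placeholder modules."""
    z1, z2 = encoder(x1), encoder(x2)
    recon = F.mse_loss(decoder(z1), x1)
    return recon + lam * contrastive_latent_loss(z1, z2)
```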
Below is given review of a research paper from conference journal. Please write a summary of the review. ### Review: this submission proposes a novel loss function based on maximum mean discrepancy mmd for knowledge transfer distillation from a teacher network to a student network which matches the spatial distribution of neuron activations between the two the proposed approach is interesting but there is significant room for improvement in particular clarity it is not clear how the distribution of neuron activations is matched between the teacher and student network the ct and cs are not defined specifically enough does it include all layers or does it only include a specific layer such as the last convolution layer interpretability section 41 tries to interpret the approach but it is still not clear why matching distribution is better the mmd loss proposed could run into problems if the classification task does not involve spatial variation for example for an extremely simple task of classifying three classes r g and b where the whole image has the same color of r g and b respectively the spatial distribution is uniform and the proposed mmd loss would be 0 even if the student networks channels do not learn discriminative feature maps another example is when a layer has hw1 significance the experiment shows that the polynomialtwo kernel gives better results but sec 432 mentions that it is equivalent to li et al 2017b in this case practical usefulness not justified in the experimental section the student networks number of parameters and flops are not detailed so it is unclear how much efficiency gain the proposed method achieved note in practice small networks such as mobilenet and shufflenet have achieved significantly better accuracyefficiency tradeoffs than the teacher networks considered here either for cifar10 or for imagenet1k improvement not significant the results obtained by the proposed approach are not very significant compared to kd alone docsepthis paper targets knowledge distillation of a large network to a smaller network the approach is summarized by equations 3 and 4 which in short propose that one should use the maximummeandiscrepancy mmd of the network activations as a loss term for distillation when considering cifar image classification tasks it is shown that only when using a specific quadratic polynomial kernel which as described in httpsarxivorgpdf170101036pdf is tantamount to applying neural style transfer the proposed approach is able to match the performance of the seminal paper of hinton et al when embarking to imagenet the proposed approach is only able to match the performance of standard knowledge distillation by adding the quadratic term texture in neural style synthesis jargon this is actually a sensible proposal yet the claims about mmd as a way of explaining neural style transfer have appeared in the paper cited above which the authors mention the idea of transferring from one domain to another using mmd as a regularizer appeared in httpsarxivorgpdf160506636pdf by long et al indeed equation 3 of this paper matches exactly equation 10 of long et al note too that long et al also discuss what kernels work well and which work poorly due to vanishing gradients and propose parametrised solutions this is something this paper failed to do the two works cited above make me wonder about the novelty of the current paper in fact this paper ends up being an application of the neural style transfer loss function to network distillation as such this could be useful if not already
done by someone else previously i find that the paper is poorly written with many typos and lacks focus on a single concrete story the cifar experiments fail to use kdnst ie the thing that works for imagenet neural style transfer and section 53 appears trivial in light of the cited works for all these reasons i am inclined to reject this paper docsepthis paper proposes a simple method for knowledge distillation the teacher and student models are matched using mmd objectives the author demonstrates different variants of matching kernels specializes to previously proposed variants of knowledge distillation the extensive evaluation suggests that the mmd with polynomial kernel provides better results than the previously proposed method it is interesting to see that mmd based transfer has more advantage on the object detection tasks can the author provides more insights into the behavior of different kernels for example visualizing the gradient map might help us to understand why certain kernel works better than another one did you consider translation invariance or other spatial properties when designing your kernels in summary this is an interesting paper with good empirical results the technique being used generalization is quite straightforward but the paper also includes a good amount of discussion on why the proposed approach could be better and i think that really helps the reader ### Summary:
the paper presents a sensible algorithm for knowledge distillation kd from a larger teacher network to a smaller student network by minimizing the maximum mean discrepancy mmd between the distributions over students and teachers network activations as rightly acknowledged by the r3 the benefits of the proposed approach are encouraging in the object detection task and are less obvious in classification r1 and r2 the reviewers and ac note the following potential weaknesses 1 low technical novelty in light of prior works demystifying neural style transfer by li et al 2017 and deep transfer learning with joint adaptation networks by long et al 2017 see r2s detailed explanations 2 lack of empirical evidence that the proposed method is better than the seminal work on kd by hinton et al 2014 3 important practical issues are not justified eg kernel specifications as requested by r3 and r2 accuracyefficiency tradeoff as suggested by r1 4 presentation clarity r3 has raised questions regarding deploying the proposed student models on mobile devices without a proper comparison with the mobilenet and shufflenet light architectures this can be seen as a suggestion for future revisions there is reviewer disagreement on this paper and no author rebuttal the reviewer with a positive view on the manuscript r3 was reluctant to champion the paper as the authors did not respond to the concerns of the reviewers ac suggests in its current state the manuscript is not ready for a publication we hope the reviews are useful for improving and revising the paper
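To make the mmd-based matching discussed above concrete, here is a generic sketch of an empirical mmd between teacher and student activation maps with a degree-2 polynomial kernel, treating each channel's normalized spatial map as a sample. The paper's exact formulation (its eqs. 3-4, normalization and layer choice) may differ; this is illustrative only.

```python
# Generic biased MMD^2 estimator with a polynomial kernel between the sets of
# (normalized) channel activation maps of student and teacher (illustrative sketch).
import torch
import torch.nn.functional as F

def poly_kernel(a, b, coef=0.0, degree=2):
    return (a @ b.T + coef) ** degree

def mmd_distill_loss(f_s, f_t):
    """f_s: (b, c_s, h, w) student maps, f_t: (b, c_t, h, w) teacher maps (same h, w)."""
    loss = 0.0
    for s_map, t_map in zip(f_s, f_t):                      # per image in the batch
        s = F.normalize(s_map.flatten(1), dim=1)            # (c_s, h*w) channel samples
        t = F.normalize(t_map.flatten(1), dim=1)            # (c_t, h*w) channel samples
        loss = loss + (poly_kernel(s, s).mean()
                       + poly_kernel(t, t).mean()
                       - 2 * poly_kernel(s, t).mean())
    return loss / f_s.size(0)
```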
Below is given review of a research paper from conference journal. Please write a summary of the review. ### Review: this paper discusses an algorithm for variational inference of nonlinear dynamical models in this paper the model assumption is to use a single stage markov model in latent space with every latent variable zt defined as gaussian distributed with a mean that depends on zt1 and a time invariant variance matrix lambda the nonlinearity in the transition is encoded in the mean of the gaussian distribution for modeling the likelihood and observation model the poisson or normal distributions are used with xt being sampled from another gaussian or poisson distribution with the nonlinearity being encoded in the parameters of the distribution via the variable zt this way of modeling resembles that of many linear dynamical models with the difference that the transition and observation distributions have a nonlinearity term encoded in them the contribution of this paper can be summarized over the following points the authors proposed the nonlinear transition and observation model and introduced a tractable inference model using a laplace approximation which for every given set of model parameters solves for the parameters of the laplace approximation of the posterior and then updates the model parameters until convergence the second point is to show how this model is successful in capturing the nonlinearity of the data while other linear models do not have that capability novelty and quality the main contribution of this paper is summarized above the paper does not contain any significant theorem or mathematical claims except derivation steps for finding the laplace approximation of the posterior the main challenge here is to address the effectiveness of this model in comparison to other nonlinear dynamical systems for which we can name papers as early as ghahramani zoubin and sam t roweis learning nonlinear dynamical systems using an em algorithm advances in neural information processing systems 1999 or more recent rnn and lstm based papers i think the authors need to distinguish what this paper can give to the community besides the approximate posterior of latent variables that other competing models are not capable of if the aim is to have that posterior the authors should show what type of interpretation they have drawn from it in experiments there is a lot of literature on speech language models and visual prediction which can be used as reference as well clarity the paper is well written and some previous relevant methods have been reviewed there are a few issues that are listed below 1 as mentioned in the quality section the authors should be more clear about what is distinguished in this paper compared to other nonlinear dynamical systems 2 they used the short form rm for recognition model or fpi for fixed point iteration which need to be defined before being used significance and experiments the experiments are extensive and the authors have compared their algorithm with some other linear dynamical systems lds competing algorithms and showed improvement in many of the cases for trajectory reconstruction a few points can be addressed better it can be seen that for many of the experiments exhaustive search is used for finding the dimension of the latent variable this issue is addressed in kalantari rahi joydeep ghosh and mingyuan zhou nonparametric bayesian sparse graph linear dynamical systems arxiv preprint arxiv180207434 2018 that paper can use nonparametric approaches to find the best latent dimension although the paper applied the technique on a linear system the same technique could be adopted
to nonlinear models also that model is capable of finding multiple linear system that model the non linearity by switching between diffrent linear system for switching linear system this paper can be named as well linderman scott et al bayesian learning and inference in recurrent switching linear dynamical systems artificial intelligence and statistics 2017 it is shown that the model can reconstruct the spikes very well while linear model do not have that power which is expected but it is interesting to see how other nonlinear models would compare to this model under those certain conditions it is desired and interesting to see how the model behave one step ahead and kstep ahead prediction please address why it cannot be done if there is difficulties in thatdocsepill start with a disclaimer i have reviewed the nips 2019 submission of this paper which was eventually rejected compared to the nips version this manuscript had significantly improved in its completeness however the writing still can be improved for rigor consistency typos completeness and readability authors propose a novel variational inference method for a locally linear latent dynamical system the key innovation is in using a structured parent distribution that can share the nonlinear dynamics operator in the generative model making it more powerful compared however this parent distribution is not usable since its an intractable variational posterior normally this will prevent variational inference but the authors take another step by using laplace approximation to build a child distribution with a multivariate gaussian form during the inference the child distribution is used but the parameters of the parent distribution can still be updated through the entropy term in the stochastic elbo and the laplace approximation they use a clever trick to formulate the usual optimization in the laplace approximation as a fixed point update rule and take one fixed point update per adam gradient step on the elbo this allows the gradient to flow through the laplace approximation some of the results are very impressive and some are harder to evaluate due to lack of proper comparison for all examples the forward interpolate really forecasting with smoothed initial condition provides a lot of information however it would be nice to see actual simulations from the learned llds for a longer period of time for example is the shape of the action potential accurate in the single cell example it should be since the 2 ms predictive r2 shows around 80 except in fig 2 the 3 other examples are only compared against gflds since gflds involves nonconvex optimization it would be reasonable to also request a simple lds as a baseline to make sure its not an issue of gflds fitting for the r2049 claim on the left to right brain prediction how does a baseline fa or cca model perform was input current ignored in the single cell voltage data or you somehow included the input current as observation model as for the comment on gaussian vind performing better on explaining variance of the data even though it was actually count data i think this maybe because you are measuring squared error if you measured point process likelihood or pseudor2 instead poisson vind may outperform both your forecasting and the supplementary results figure show that poisson vind is definitely doing much better what was the sampling rate of the guo et al data the supplementary material is essential for this paper the main text is not sufficient to understand the method this method relies 
on the fixed point update rule operating in a contractive regime authors mention in the appendix that this can be guaranteed throughout training by appropriate choices of hyperparameters and network architecture this seems to be a crucial detail but is not described please add this information theres a trial index suddenly appearing in algorithm 1 that is not mentioned anywhere else is the adam gradient descent in algorithm 1 just one step or multiple mse msek in eq 13 lfads transition function is not deterministic page 4 log qphivarphi is quadratic in z for the llds case text shouldnt be includes terms quadratic in z misleading regular gradient ascent update need reference page 4 due to the laplace approximation step you dont need to infer the normalization term of the parent distribution this is not described in the methods page 3 eq 4 and 5 are inconsistent in notation eq 16 are not novel but text suggests that it is predictive mean square error page 2 introduction can use some rewriting arxiv papers need better citation formattingdocsepthe paper presents a variational inference approach for locally linear dynamical models in particular the latent dynamics are drawn from a gaussian approximation of the parent variational distribution enabled by laplace approximations with fixed point updates while the parameters are optimized the resulting stochastic elbo experiments demonstrate the ability of the proposed approach to learning nonlinear dynamics explaining data variability forecasting and inferring latent dimensions quality the experiments appear to be well designed and support the main claims of the paper clarity the clarity is below average in section 2 the main method is introduced however the motivation and benefits of introducing a parent and child variational approximation are not discussed adequately it would be helpful to move some of the stuff in the appendix to the main text and present in a neat way i also struggled a little to understand what is the difference between forward interpolate and filtering originality given the existing body of literature i found the technical novelty of this paper rather weak however it seems the experiments are thoroughly conducted in the tasks considered the proposed method demonstrates convincing advantages over its competitors significance the method shall be applicable to a wide variety of sequential data with nonlinear dynamics overall this appears to be a boardline paper with weak novelty on the positive side the experimental validation seems well done the clarity of this paper needs to be strengthened minor comments abstract uncover nonlinear observation maybe change observation to latent dynamics ### Summary:
the reviewers in general like the paper but has serous reservations regarding relation to other work novelty and clarity of presentation given nonlinear state space models is a crowded field it is perhaps better that these points are dealt with first and then submitted elsewhere
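A small sketch of the generative model as the first review above describes it: gaussian latent dynamics with a nonlinear mean, a time-invariant covariance lambda, and gaussian or poisson observations (the poisson case is shown). The function names f_mean and g_rate are placeholders for the paper's nonlinear networks, and the sampler below is only a reading of the review, not the authors' code.

```python
# Illustrative sampler: z_t ~ N(f_mean(z_{t-1}), Lambda), x_t ~ Poisson(g_rate(z_t)).
import numpy as np

def sample_sequence(T, f_mean, Lambda, g_rate, z0, seed=0):
    """f_mean: nonlinear transition mean, g_rate: nonnegative observation rate."""
    rng = np.random.default_rng(seed)
    d = z0.shape[0]
    chol = np.linalg.cholesky(Lambda)          # time-invariant transition noise
    z, zs, xs = z0, [], []
    for _ in range(T):
        z = f_mean(z) + chol @ rng.standard_normal(d)   # latent transition
        zs.append(z)
        xs.append(rng.poisson(g_rate(z)))               # count observations
    return np.array(zs), np.array(xs)
```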
Below is given review of a research paper from conference journal. Please write a summary of the review. ### Review: the authors propose to encapsulate the update rule for a neural net into a lookup table specifying weight changes for each combination of presynaptic input to the weight and postsynaptic activation of the unit receiving that incident connection they learn the elements of this matrix by gradient descent and then use that learned update rule to train neural nets on a new task this is motivated by a separation of timescales biologically wherein learning rules might be evolved over long timescales and then act within each brain over shorter ones there is a nice discussion of related previous work but it misses a few key items that to me diminish somewhat the novelty of this work thats okay being first isnt everything but i think it is important to point out to readers what is new and better about this work vs previous work a the auto ml zero paper from quoc le et al arxiv 200303384 they learn both architectures and learning rules via simulated evolution b andrychowicz m denil m gomez s hoffman m w pfau d schaul t de freitas n 2016 learning to learn by gradient descent by gradient descent in advances in neural information processing systems pp 39813989 they use gd to learn plasticity rules c a few recent papers on bioplausible backproptype algorithms i burstdependent synaptic plasticity can coordinate learning in hierarchical circuits biorxiv httpsdoiorg10110120200330015511 from naud and colleagues ii guerguiev jordan timothy p lillicrap and blake a richards towards deep learning with segregated dendrites elife 6 2017 e22901 iii sacramento joao rui ponte costa yoshua bengio and walter senn dendritic cortical microcircuits approximate the backpropagation algorithm in advances in neural information processing systems pp 87218732 2018 aside from the relation to prior work i have a few technical and conceptual questions comments 1 fig 2 were all three nets given the same initialization that could matter for comparing the training curves of accuracy vs training time because a good initialization could give one learning rule an apparent advantage and given the accuracy at t0 it doesnt look like they are the same 2 i like that the authors studied generalization of the learned rule between tasks that is important although sgd also generalizes well im a bit less impressed by the performance obtained in the mnist and fashion mnist tasks at the same time using twofactor rules where the update is a function just of pre and postsynaptic inputs to solve mnist sounds hard given that theres no credit assignment signal i think that the authors would be well served to read up on the papers on bioplausible deep learning and consider variants of this work that include a credit assignment signal docsepsummary this is a fascinating paper which pushes the bounds of artificial neural network knowledge and understanding while questioning the typical approach of considering various neural network branches in isolation concretely it shows that by using concepts from neuroevolution together with deep learning concepts we can learn how to learn learning rules ie plasticity rules which can generalize across tasks and can train neural networks which are more robust to adversarial attacks than typical networks trained with stochastic gradient descent strong points in my opinion the paper is visionary it answers a few questions while opening the path for a large number of new research directions and new unanswered questions the
paper has a very wellbalanced content of novelty math computer science neuroscience and even philosophy the paper is very well written and anchored in a multidisciplinary literature it has the potential of becoming a must read paper in the future weak points the datasets used including mnist and fashionmnist are rather simple it would be very interesting to see how the approach behaves on more complex datasets during the discussion phase i would recommend to the authors to address the following comments 1 in the limit of time try to perform experiments also on cifar 10100 i believe that it would be interesting to see on cifar 100 the behavior of three types of learned plasticity rules 1 plasticity rules learned on the simple datasets 2 plasticity rules learned on cifar 10 and 3 plasticity rules learned on a subset of the cifar 100 training set 2 are you encountering problems with relu activation in recurrent networks such as exploding or vanishing weights does your approach work also with hyperbolic tangent 3 i believe that it would help the paper clarity if you can add a table towards the end of the paper to summarise the main results in terms of accuracy training time etc 4 perform a proofread of the whole paper to improve the english usage and the presentation for instance typos eg rule the next theorem unit measures for axis labels eg figure 5 accuracy etc docsepsummary this paper uses metalearning to search for novel local learning rules in artificial neural networks this is done by parameterizing local learning rules for feedfoward and recurrent neural networks then using minibatch gradient descent to update the parameters of the learning rule the authors argue that this is a promising strategy for discovering the learning rules used by biological systems with three main contributions 1 they provide proofs that this approach does what we would hope it would do when applied to a single linear layer 2 experiments demonstrate metalearning for simple nonlinear and recursive architechtures and 3 the authors provide an argument that evolution could replace gradient descent as the method of searching over possible learning rules the authors also show through experiments that models trained with these nongradient methods are more robust to gradientbased adversarial attacks overall i thought this paper was wellwritten and provided interesting arguments and proofs however the experiments are not enough to support the main claims so i think this paper is borderline pros while the main idea of using metalearning to search for new biorealistic learning algorithms is not new the particular formulation used here with recursive neural networks was new as far as i know and i found this idea very interesting most of the work in this area has been focused on feedforward networks but as the authors emphasize recurrent neural networks add a whole new dimension to the space of biologicallyplausible local learning algorithms an example of such an algorithm is recirculation described in hinton and mcclelland 1987 learning representations by recirculation which is closely related to feedback alignment baldi and sadowski 2018 learning in the machine recirculation is random backpropagation cons my main criticism is that the experiments are not enough to show that the discovered learning rule does anything useful in the rnn as the authors admit one can achieve good performance in a multilayer nn by fixing the random weights in the hidden layers and by training only the output layer i would have really liked to 
see how the metalearned algorithm compared to fixed random representations the experimental performance of the metalearned algorithm on mnist is quite poor 80 test accuracy figure 3 so its unclear what is going on the results of the adversarial robustness experiments are not surprising adding additive or multiplicative noise during training will also make the trained networks more robust it think these experiments actually distract from the main ideas of the paper it would have been better to more carefully explore whether the learning algorithm can learn more difficult functions section 2 describes two possible alternatives for the plasticity rules one that incorporates information about the error and one that does not i think it is important to highlight the fact that the latter is an unsupervised plasticity rule the metalearning algorithm has access to the target output but the local plasticity rule does not so while a plasticity rule trained on dataset 1 does have some information about the dataset 1 targets by way of the metalearning updates when it is applied to dataset 2 it never receives any information about dataset 2s target and is thus unsupervised this is an important distinction between the two approaches docsepthis paper introduces a new method for metatraining plasticity rules allowing networks to learn new instances of a given domain quickly and efficiently the method consists in implementing plasticity as an arbitrary function of the past few timesteps of local activity at a synapse input and output experiments show that the method finds reasonably successful rules that these rules generalize across some domains and that the learning seems more robust to some adversarial attacks than plain gradient descent while metatraining plasticity rules is not new i believe the method is novel and quite interesting i also appreciate the experiments to demonstrate crossdomain generalization and robustness to adversarial attacks a possible caveat is that the experiments though diverse are still a bit limited the paper only uses relatively small feedforward networks and recurrent networks with only 3 timesteps in terms of real datasets only mnist and fashionmnist are considered similarly the trained plasticity rule seems robust to a certain type of adversarial attack but is it more robust to other forms of distortion such as plain noise deformations etc i suppose this is tolerable for an introductory paper minor comments bengio et al 1992 on the optimization of a synaptic learning rule used gradient descent to metalearn plasticity rules and should be included in the related work section similarly metz et al 2019 httpsopenreviewnetforumidhkndsic9kq sometimes in the figures it is not clear what exactly is shown eg which type of plasticity rule is used networkbased or lookup tables also in figure 4 how exactly are these image generated whats the criterion for a sufficient error speculative comment iiuc the networks in this paper are binary and can be seen as spiking networks interestingly the space of rules explored by this method seems to include standard models of biological plasticity such as spiketiming dependent plasticity as well as more complex triplet rules see eg httpswwwpnasorgcontent1084819383 theres probably some interesting work to be done in this direction for future work of course not for this introductory paper ### Summary:
this paper explores metalearning of local plasticity rules for anns the authors demonstrate that they can metalearn purely local learning rules that can generalize from one dataset to another though with fairly low performance it should be noted and they provide some data suggesting that these rules lead to more robustness to adversarial images the reviews were mixed but some of the reviewers were very positive about it specifically there are the following nice aspects of this work a the metalearning scheme has interesting potential for capturinglearning biological plasticity rules since it operates on binary sequences which appears to be a novel approach that could help to explain things like stdp rules b it is encouraging to see that the learning rules can generalise to new tasks even if the performance isnt great c the authors provide some interesting analytical results on convergence of the rules for the output layer however the paper suffers from some significant issues 1 the authors do not adequately evaluate the learned rules specifically the comparison to gd in fig 2 is not providing an accurate reflection of gd learning capabilities since a simple delta rule applied directly to pixels can achieve better than 90 accuracy on mnist thus the claim that the learned rules are competitive with gd is clearly false the authors do not compare to any unsupervised learning rules despite the fact that the recurrent rules are not receiving information about the labels and are thus really a form of unsupervised learning there are almost no results regarding the nature of the recurrent rules that are learned either experimental or analytical given positive point a above this is particularly unfortunate and misses a potential key insight for the paper 2 the authors do not situate their work adequately within the metalearning for biologically plausible rules field there are no experimental comparisons to any other metalearning approaches herein moreover they do not compare to any known biological rules nor papers that attempt to metalearn them specifically several papers have come out in recent years that should be compared to here httpsproceedingsneuripsccpaper2020filef291e10ec3263bd7724556d62e70e25dpaperpdf httpswwwbiorxivorgcontent10110120191230891184v1fullpdf httpsproceedingsneuripsccpaper2020filebdbd5ebfde4934142c8a88e7a3796cd5paperpdf httpsopenreviewnetpdfidhjlknmfiub httpsproceedingsneuripsccpaper2020fileee23e7ad9b473ad072d57aaa9b2a5222paperpdf and the authors should consider examining the rules that are learned and how they compare to biological rules eg forms of stdp if indeed biological insights are the primary goal 3 the paper needs to provide better motivation and analyses for the robustness results why explore robustness what is the hypothesis about why these metalearned rules may provide better robustness there is little motivation provided also the authors provide very little insight into why you achieved better robustness and insufficient experimental details for readers to even infer this this section requires far more work to provide any kind of meaningful insight to a reader what was the nature of the representations learned how are they different from gd learned representations was it related to the ideas in theorem 4 note theorem 4 is interesting but only applies to a specific form of output rule 4 in general the motivations and clarity of the paper need a lot of work what are the authors hoping to achieve biological insights then do some analyses and comparisons to biology 
more robust and generalisable ml then do more rigorous evaluations of performance and comparisons to other ml techniques some combination of both then make the mixed target much clearer 5 the authors need to tidy up the paper substantially and do better at connecting the theorems to the rest of the paper particularly for the last 2 theorems in the appendix also note theorems 2 4 appear to have no proofs given the above considerations the ac does not feel that this paper is ready for publication this decision was reached after some discussion with the reviewers but the ac and the reviewers want to encourage the authors to take these comments on board to improve their paper for future submissions as the paper is not without merit
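as an illustration of the lookuptable plasticity idea discussed in the reviews above here is a minimal sketch a small table indexed by binarized pre and postsynaptic activity supplies the weight update and the table entries are metatrained by gradient descent through a short inner training loop this is a hedged toy example the single feedforward layer the synthetic binary task the table size and the use of pytorch are all assumptions made for illustration and are not details taken from the paper under review

```python
# toy sketch of metalearning a lookup-table local plasticity rule (illustrative assumptions only)
import torch

torch.manual_seed(0)

# synthetic binary task: targets come from a fixed random linear map thresholded at zero
n_in, n_out, n_samples = 8, 4, 64
w_true = torch.randn(n_out, n_in)
x = (torch.rand(n_samples, n_in) > 0.5).float()
y = (x @ w_true.t() > 0).float()

# the learning rule: one weight change per (presynaptic, postsynaptic) binary combination
table = torch.zeros(2, 2, requires_grad=True)
meta_opt = torch.optim.Adam([table], lr=1e-2)

def inner_train(table, steps=5, eta=0.1):
    """apply the local rule for a few steps starting from zero weights"""
    w = torch.zeros(n_out, n_in)
    for _ in range(steps):
        post = (x @ w.t() > 0).float()            # binarized postsynaptic activity, (n, n_out)
        pre_idx = x.long().unsqueeze(1)           # (n, 1, n_in)
        post_idx = post.long().unsqueeze(2)       # (n, n_out, 1)
        delta = table[pre_idx, post_idx]          # broadcasts to (n, n_out, n_in)
        w = w + eta * delta.mean(dim=0)           # average the prescribed update over the batch
    return w

# outer loop: gradient descent on the table entries through the inner training loop
for _ in range(200):
    w = inner_train(table)
    loss = torch.nn.functional.binary_cross_entropy_with_logits(x @ w.t(), y)
    meta_opt.zero_grad()
    loss.backward()
    meta_opt.step()

print("learned plasticity table, rows index pre = 0/1 and columns index post = 0/1:")
print(table.detach())
```

the sketch only covers the simplest feedforward case the reviews above also discuss applying such rules to recurrent networks and replacing the outer gradient descent with an evolutionary search which this toy example does not attempt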
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper considers the problem of metalearning in a multiagent environment under the assumptions that the learning agents policy evolves over time as a function of the other agents actions the other agents policies evolve potentially using the learning agents actions the policy learning problem is assumed to be markovian the metalearning problem is considered to be that of finding the best initial policy parameters that will subsequently be evolved according to the learning dynamics as to maximize the agents cumulative marginal payoff the paper is very well written easy to read and relatively straightforward in its exposition i do not have any big remarks about writing except that the authors may want to rethink the term defective persona to avoid the weird double meaning a sufficient amount of related work presented and the lineage of the ideas is traced convincingly well the main contribution of this paper is to extend the ideas of alshedivat et al in a way that exposes the other agents learning dynamics to the policy optimization as opposed to treating them as a nonstationarity the policy gradient form corresponding to this setting is derived in theorem 1 the approach is evaluated in a synthetic experiment using iterated games as well as a somewhat less synthetic experiment on a quartercheetah problem each agent controls a leg of the halfcheetah i think that while the paper is incremental the point that is raised within is rather intriguing if anything my main criticism is that the authors could have gone for a more challenging setting that iterated games eg recent results httpsarxivorgpdf190108654pdf indicate that in settings like collaborative exploration being aware of the other players learning dynamics is important for achieving a better outcome perhaps the policy gradient approach can solve issues that cannot be addressed straightforwardly within the bandit framework another question is whether the approach can be used successfully to tune the inner learning process eg by incorporating the policy gradient step size and other hyperparameters into phi0 overall i think this is a solid paper which would benefit significantly from more ambitious problems docsepthis paper makes a full derivation of the metalearning gradient estimation for multiagent adaptation the resulting algorithm combines the metalearning of the opponents updates existing in lola and of oneselfs futur updates existing in metapg while the theoretical part of the paper is clear and well explained the experimental setup is missing a lot of details to be interpreted in each experiment it seems but never explicitly formulated that agent i agent 1 since all experiments are involving 2 players is doing the metalearning algorithm metamapg metapg or lola while the other agent 2 is a naive agent initialised with defectivecooperative policies in that case how are naive agent updated with simple policygradient how many lookahead are used denoted by l in algorithms why did lola failed at learning to cooperate with the cooperative opponents it should have learned to cooperate unless naive agents are still doing selfish pg updates and in that case metamapg results are very impressive are the opponents policies given or learned ie with opponent modelling also i would have been interesting to see an ablation study showing the importance of the own learning and peer learning terms in equation 6 from the same 
implementation with fixed hp does the authors have tried it docsepthis paper studies metalearning in multiagent reinforcement learning it proposes a meta multiagent policy gradient method that considers the learning processes of other agents in the environment for fast adaptation this method can be seen as a unified framework of previous methods alshedivat et al 2018 and foerster et al 2018a the method outperforms previous methods in two matrix games and 2agent halfcheetah pros the method is simple and well motivated it additionally takes into consideration peer learning comparing to alshedivat et al 2018 the method unifies the benefit of alshedivat et al 2018 and foerster et al 2018a the method greatly outperforms these two methods in two matrix games cons like lola the method needs the access to policy parameters of other agents while alshedivat et al 2018 do not this may be impossible in mixed and competitive environments how to deal with this in experiments most questions are answered by the two matrix games it is not fully convinced since the state space is very limited why not choose robosumo in alshedivat et al 2018 as an experiment for two matrix games opponent policy is limited compared to complex environments for example halfcheetah although the out of distribution is tested it is less informative for generalization why not test the out of distribution for halfcheetah the out of distribution has a smaller overlap between metatrainval and metatesting distribution what exactly is the out of distribution the experimental results need to be more elaborated why do metapg and lola perform similarly to reinforce after rebuttal the responses address my main concerns i have increased the score to 6 but i also agree with other reviewers that the novelty of this paper is somewhat limited docsepthis paper points out that a key challenge in marl is the nonstationarity of other agents policies as opposed to previous papers which only account for nonstationarity of the environment the paper extends alshedivat et al 2018 by directly conditioning the metapolicy on a distribution of other agents policies in my opinion the major contribution of this paper is a new multiagent meta learning theoretic framework that explicitly accounts for the dynamics of all agents strengths of the paper 1 a new perspective in marl that considers nonstationarity of marl in terms of dynamics of the other agents policies 2 a new theoretically grounded algorithm that explicitly models the policy dynamics of all agents weaknesses of the paper 1 except for the new perspective of incorporating policy dynamics of other agents the backbone of the paper ie metarl based framework to mitigate nonstationarity of marl is inherently the same as alshedivat et al 2018 the novelty is somewhat limited 2 in experiments the paper answers several questions that show the effectiveness of the new algorithm however this is subject to the twoagent setting it is questionable whether such a framework can perform well in settings where there are multiple agents question does the proposed framework generalize to 2 agents scenarios if yes what is the reason that the authors did not conduct empirical evaluations in these scenarios ### Summary:
this paper studies the problem of multiagent metalearning it can be viewed as extending alshedivat et al 2018 by incorporating the learning dynamics of the other agents the reviewers praised the clear writing and the theory there were two main concerns the first concern is the limited novelty when compared to alshedivat et al 2018 the second concern is the experiments which could be more ambitious and are not always clearly described the reviews were borderline and this was not enough for the paper to be accepted
Below is given review of a research paper from a conference journal. Please write a summary of the review. ### Review: this manuscript proposes a new graph neural net gnn method to learn the dynamics of a spatiotemporal pdedriven dynamical system directly from data the authors propose to do that using the finite element method fem the proposed method builds on using a basis function approximation for the unknown field u the galerkin method with the assumption that the discrepancy between the dynamics f and the basis function approximation is orthogonal to the finite basis functions the method of lines and a message passing gnn as a proxy for the dynamics the use of linear interpolation allows expressing the time derivative of y as the solution of a system of linear equations which is further approximated to gain additional computational efficiency the authors also propose a method to incorporate inductive bias into model learning for models that are assumed to contain a convection component overall the proposed method is wellmotivated and for the most part the description is clear to my knowledge the proposed method is novel and contains some methodologically new ideas and the performance seems to be on par with previous methods that learn freeform dynamics and shows an improvement for models that contain a convection component when such prior knowledge is utilised in the model training the authors could address andor clarify the following aspects 1 i understand that a piecewise linear basis simplifies computational complexity by making some of the computational steps straightforward but on the other hand the selected basis is the simplest and obviously not optimal from an approximation accuracy point of view can the method be extended to other bases for example if we knew that the dynamics f contains a diffusion term nabla2 u we would not be able to introduce it since the second derivative of pwl functions is zero everywhere i think the discussion of possible limitations of the pwl basis and of possible extensions to higherorder bases is missing 2 it seems that the measurement from the initial time point is used as the initial state why not introduce a separate free parameter for the initial state 3 as described towards the end of the manuscript the system is initialized with an initial state and then the pde dynamics define how the system evolves over time however below eq 12 it is noted that where u encodes the data y at time t in its coefficients is u defined based on the dynamically evolving system state or by the data 4 paragraph network architecture on page scimitarlearn is used to compute the inner product between basis functions provide a brief description of how the computation is done 5 end of page 6 gradients are backpropagated through an ode solver why not use the adjoint method possibly with checkpoints as proposed in previous work and implemented eg in the comparison methods discrete backpropagation may not scale to longer sequences is this the reason why data trajectories are shortened 6 table 1 split the methods into two groups group 1 should include padgn gwn ctmpnn and fen which do not assume prior knowledge about the system dynamics fen performs similarly to ctmpnn on scalarflow and perhaps slightly better on black sea data mae values within one std group 2 should include only tfen which is specifically designed to learn convection systems and thus provides a small performance
improvement 7 in experiments the authors make their models time and positiondependent while the strongest baseline models ctmpnn does not utilize neither time nor positions that makes it hard to tell whether the improvements in performance of fen and tfen are due to the modelss structure and inductive biases or due to time and position dependencies authors should provide an ablation study to address this 8 provide additional comparisons on systems with larger variety of dynamics using simulated data for which the ground truth is known to better understand when fen performs betterworse than comparison methods 9 datasets are subsampled to 1000 spatial points provide results for smaller and also larger spatial grids to demonstrate the argument that approximation becomes arbitrarily good as the mesh resolution increases to my knowledge the proposed method is novel and contains some methodologically new ideas and the performance seems to be on par with previous methods that learn freeform dynamics and shows an improvement for models that contain a convection component when such prior knowledge is utilised in the model training docsepthe paper proposes a graph simplicial neural network based on the finite element method for learning dynamics from data when only a finite number of samples exist and the true dynamics are not known or only partially known strenghts the paper looks at a very realistic setting for learning pdes from data a finite number of samples and partially unknown true dynamics tackling these problems concomitantly is of great practical importance and this makes the proposed method relevant for practical applications the introduction does a good job motivating the work and pinpointing the main challenges of learning dynamics from data as someone with little experience with the finite element method section 21 does a great job explaining the required background in the right amount of detail for understanding the paper the connection that the paper makes between the finite element method and message passing neural networks is interesting and to the best of my knowledge original the authors show how inductive biases can be added to the model by using a certain prior over the structure of the function f i like that the paper focuses on realworld datasets also the datasets themselves are extremely interesting to visualize and make the paper more interesting figure 3 provides a very insightful qualitative understanding of the proposed model compared to the baseline i am glad that the paper includes a superresolution experiment often models that work with a discretised space can be very sensitive to changes in the resolution of the mesh figure 4 shows that the proposed model is relatively robust to changes in the number of triangles the authors show in a relatively specific setting that factorizing the dynamics achieves a disentanglement effect which allows some degree of interpretability of the model weaknesses while i appreciate the focus on realworld datasets a synthetic experiment where the model could have been evaluated in a more systematic way would have been useful the trick to stabilize training described at the end of section 3 is slightly peculiar how important is this trick do the authors have any results for when this trick is not used could this trick improve the performance of the baselines as well the importance of the approximation from equation 13 is not studied perhaps that is something that could have been tried in the more controllable synthetic setting i was 
suggesting above in general i would be interested to know what are the costs of this approximation and if more advanced approximations might be worth being considered to boost performance minor suggestion the paper frames the model as a hypergraph neural network however the authors might want to be aware that there is a recent line of work developing simplicial and cell complex neural networks httpsarxivorgabs210303212 icml 2021 httpsarxivorgabs210612575 neurips 2021 httpsarxivorgabs201003633 since the model learns a function over the 2simplices in the simplicial complex the model is probably more accurately described as a type of simplicial neural network the weaknesses reported above are relatively minor and far outweighed by the strengths of the paper therefore i recommend the paper for acceptance docsepthis paper proposes a new model for learning partial differential equations from data the pde is first discretized then solved as an ode the dynamics function is learned with messagepassing neural networks where the function is split into a sum of physically informed terms this splitting both improves model performance and makes the model more interpretable by disentangling the dynamics the model is tested rigorously against multiple baseline models and the results show the new model performs well strengths the paper is well written and well presented on the whole it was relatively easy to understand and the diagrams definitely contribute to that the proposed model is well motivated and nicely extends ideas such as graph neural odes to the pde domain the paper considers the issues with a naive implementation in depth which is that the model would be slow it then provides the solution to this rigorous tests on the proposed model are carried out against robust baselines there is an extensive review of related work a lot of effort has been put in to make the results reproducible including all experimental details and codedatasets to come weaknessesquestions my main concern is that training has been carried out over 10 time steps was this a hyperparameter that was tuned i agree that the correlation over say 30 steps will be minimal however my understanding is that all models learn an update based on the current state and potentially a few steps in the past would it not still be possible to train over more time steps could tfen extrapolate better than gwn if it is trained on a larger time range one of the proposed reasons gwn outperforms tfen at extrapolation is that it can use the past timesteps as input this could be extended to tfen in the form of delay differential equations httpsarxivorgabs210210801 where the dynamics takes the state at t and the state at ttau as input this would make the model use the past as well would it be possible to carry out an ablation study using this the paper could benefit from a discussion section saying what the model is good at and bad at for example we see that it performs very well in the times used to train but is not as good at extrapolation in table 1 for the ode based method the number of function evaluations are provided to show tfen is faster than fen and ctmpnn is it possible to time the evaluation to support this claim further that way one could also compare to the other two baselines gwn and padgn is it possible to move the related work section either to the introduction or to just before the conclusion it breaks the flow what is the reason for using the l1 loss over the l2 loss could we expect betterworse results with l2 could the model be 
improved even further by using more physically informed terms for example like those in electromagnetism dotvqv times b where b is learnt httpsarxivorgabs210907359 or because the experiments use data of fluids could it help to include laplacian or curl terms in the dynamics which appear in navierstokes is it possible to carry out an ablation the mass matrix a is approximately inverted by lumping the matrix are there situations where this approximation could lead to errors is it possible to extend this method where we expect higher order pdes that include terms such as fracpartial2upartial tpartial x the model disentangles the dynamics into a convection term and the remainder how does this relate to disentangling the dynamics of an augmented neural ode httpsarxivorgabs190401681 into a velocity and an acceleration httpsarxivorgabs200607220 which can also aid the interpretability of the ode i dont entirely understand the black sea dataset is the mean temperature the mean temperature of the entire sea or of a region say 1m2 if it is of the whole sea does this not remove the spatial element of the task additionally are there any interesting effects appearing over long time periods due to global warmingconcept drift given that the training regime is taken over 20122017 and the testing regime is in 2019 minor points the paragraph just above equation 15 has a small mistake the manuscript says this would prohibit training even with the adjoint equation chen et al 2018 while the paragraph is about the speed of training the benefit of the adjoint method is that it is memory efficient but slow compared to directly backpropagating through an ode solver which is fast but uses a lot of memory there is a typo at the bottom of page 8 disappears extrapolationat the there is a typo at the bottom of page 16 odebase models should be odebased models the paper is well written the model to my best knowledge is novel with the method building on existing work the experiments rigorously test the model against the necessary baselines and information is given in the appendices on reproducing the results therefore i recommend acceptance with a few clarifications to be made edit i have increased my confidence score from 3 to 4 after my initial questions have been answered docsepthe author proposes a method for forecasting in partial differential equations by coupling finite element method on an arbitrary grid with the learning of the dynamics from data for this purpose a variant of message passing based graph networks is used it is show in the paper that its possible to incorporate priors on the structure of the pde that results in an interpretable solution the model also show more stability to changes of the mesh structure in test time like superresolution and to extrapolation than competitors the paper uses message passing neural networks to implement a finite element method with learnable dynamics it combines different models and techniques from the literature but it clearly does that in a nontrivial way including modifications therefore it is more than just derivative work i find it valuable that the author also considers performance of implementation and discusses consideration on gpu architecture related performance issues also the effort put into making the work reproducible adds value to the paper the method can clearly have practical value and the discussion of the method is clear and quite detailed i am not as familiar with the pdenn literature as for example the neural ode literature therefore i cannot rule 
out entirely that something similar already exists questions actionable comments i the results show that fen and tfen provide the smallest prediction error on both datasets with a further boost due to the separate transport term in tfen i feel this is a bit too strong statement while i tend to accept the paper with the results in table 1 given the other clear benefits of the method like robustness and interpretability i am not comfortable with this statement the table is based on 3 repeats and have results for example black sea dataset ctmpnn 0944 0003 vs fen 0938 0005 these confidence intervals are clearly overlapping and using 3 samples i am a bit skeptical this is one of the strongest gaps in the table in the scalarflow fen is really on par with ctmpnn did you used a statistical test to bold the results again i have no problem supporting the paper even if the method is on par with the best competitor but the statement in the paper should be supported by the statistics similarly in the case of nfe ctmpnn have approx 60 deviation on the scalarflow dataset i am not too comfortable to compare these numbers ii what is the motivation of using l1 loss instead of like mse iii in experimentsmultistep forecasting section time horizon choice is well motivated but still feels arbitrary in some sense does the author see some way to formalize what we accept as meaningful dynamics how one should choose a comparison horizon for example can it be done at least approximately without domain knowledge on the system iv in modelnetwork architecture section order invariance of cell vertices is assured by ordering the nodes canonically would the author expect improvement if a permutation invariant network like a set transformer be used here why or why not in some sense the message aggregation step summation being permutation invariant is a set network but not a very expressive one this however makes the cell order invariant for a given node and not the other way around the paper give valuable contribution the method expected to be practical robust and in some cases interpretable i find the statement on raw prediction error overly strong ### Summary:
this paper introduces a graph neural network gnn based on the finite element method fem for learning partial differential equations from data the proposed finite element network is based on a piecewise linear function approximation and a message passing gnn for dynamics prediction the authors also propose a method to incorporate inductive bias when learning the dynamical model eg including a convection component the paper received three clear accept and one weak accept recommendations the reviewers discussed possible extensions of the method and also raised several concerns regarding the experiments eg the added value of a synthetic dataset implementation tricks and hyperparameter settings the rebuttal did a good job in answering the reviewers concerns and after the rebuttal there was a consensus among the reviewers to accept the paper the acs own reading confirmed the reviewers recommendations the paper is well written and introduces a solid contribution at the frontier of gnns and finite element methods in particular a pioneering graphbased model for spatiotemporal forecasting derived from fem therefore the ac recommends acceptance
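one technical point raised in the reviews above is that the mass matrix of the piecewise linear fem discretization is approximately inverted by lumping the sketch below shows on a toy 1d mesh what lumping means the row sums of the consistent mass matrix are placed on a diagonal so that the semidiscrete system can be advanced with a cheap pointwise division instead of a linear solve the mesh the placeholder dynamics term and the explicit euler step are illustrative assumptions and not code from the paper

```python
# toy sketch of mass lumping for a piecewise-linear fem semidiscretization (illustrative assumptions)
import numpy as np

n = 50                                   # nodes of a uniform 1d mesh on [0, 1]
h = 1.0 / (n - 1)

# consistent mass matrix for linear hat functions: element contributions h/3 and h/6
M = np.zeros((n, n))
for i in range(n - 1):
    M[i, i] += h / 3
    M[i + 1, i + 1] += h / 3
    M[i, i + 1] += h / 6
    M[i + 1, i] += h / 6

# lumping: collapse each row onto the diagonal so the "inverse" is a pointwise division
m_lumped = M.sum(axis=1)

def rhs(u):
    # toy upwind transport load vector standing in for whatever a learned dynamics net would output
    return -(u - np.roll(u, 1))

u = np.exp(-((np.linspace(0, 1, n) - 0.5) ** 2) / 0.01)   # initial bump
dt = 1e-3
for _ in range(100):
    dudt_consistent = np.linalg.solve(M, rhs(u))   # full solve with the consistent mass matrix
    dudt_lumped = rhs(u) / m_lumped                # the cheap lumped approximation
    u = u + dt * dudt_lumped                       # explicit euler step on the lumped system

print("max gap between consistent and lumped derivative at the last step:",
      np.abs(dudt_consistent - dudt_lumped).max())
```

the point of the comparison is only to show the kind of approximation the lumping introduces relative to solving with the full mass matrix which is the tradeoff the reviews refer to when discussing computational efficiency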
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: strengths 1 the experiments are extensive and clearly demonstrate the merits as compared to prior benchmarks for off policy rl 2 the contextual discussion is clear wellmotivates the proposed approach and gives a nice overview of how importance sampling and off policy rl intersect weaknesses 1 theorem 1 seems vacuous the proof is a simple exercise in elementary calculus one may easily show the minimizer of a quadratic is the least squares estimator the authors need to better explain what is the technical novelty of this statement and why it is meaningful upon inspection it does not seem to qualify as a theorem this is also true of theorems 2 and 3 therefore i feel the conceptual contribution is not enough to warrant acceptance 2 the notion of successor representation seems identical to the occupancy measure which in order to estimate requires density estimation which is extremely sample inefficient can the authors comment about how to estimate the successor representation efficiently there is very little discussion of sample complexity throughout which is somewhat alarming because a key selling point of offpolicy schemes for rl is that they alleviate the need for sampling from the mdp transition dynamics 3 the actual algorithm pseudocode is missing from the body of the paper which is permissible because it is in the appendix however the structural details of how the algorithm works iteratively and how it departs from previous works are also not explained that is while derivation details are presented iterative details are not in my opinion this should be strictly required in the body of the paper as well as contextual discussion of what is similardifferent from previous works but all i could find was high level presentation of objectives minimized at various sections but not how they are interlinked 4 the background discussion is disjointed there is a preliminaries section on page 5 as well as a background section 3 minor comments 1 references missing related to the sampleparameterization complexity issues associated with importance sampling koppel a bedi a s elvira v sadler b m 2019 approximate shannon sampling in importance sampling nearly consistent finite particle estimates arxiv preprint arxiv190910279docsepsummary the paper proposes an approach to employ successor representation combined with marginalized importance sampling the basic idea exploited in the paper consists of expressing the occupancies in terms of the successor representation and to model it via a linear combination of some features this allows handling although approximately continuous stateaction spaces after having derived the objective function an experimental evaluation on both mujoco and atari domains is presented including an ablation study major about the linearity of the weight linear representations expressed in terms of a feature function are common in rl as the reward function can be often seen as a tradeoff of different objectives encoded in the features however the choice of the linear representation in equation 7 is based on the assumption that the marginalized weight is linear in the feature function this assumption seems to me less justified compared to the one for the reward function clearly a suitable feature design could overcome this limitation can the authors explain how the features phi are selected or learned experimental evaluation the results presented in the experimental evaluation 
are partially unsatisfactory as also the authors acknowledge it seems that there is no clear benefit in employing the marginalized importances sampling both the baselines and the proposed approach compared to standard deep temporal difference approaches the authors suggest that this phenomenon can be ascribed to the fact that the quality of the marginalized weights is affected by the successor representation learned i dont think this is the main weakness of the paper but a reflection of the usefulness of the method in complex scenarios is necessary alternatively it would be interesting to compare the proposed approach with dualdice and gradientdice on simpler tasks maybe toy ones in which dualdice and gradientdice work well minor the related work section should be moved later in the paper maybe after section 4 pag 2 two lines above equation 2 the transition model is here employed as a distribution over the next state s and the reward r but the reward function is considered separately in the definition of mdp presented before figures 2 3 and 4 the plots are not readable when printing the paper in grayscale i suggest using different linestyles andor markers typos pag 2 isnt is not pag 2 doesnt does not pag 8 the the the overall the paper can be considered incremental compared to dualdice i did not find any fault but i feel that the significance of contribution is currently insufficient for publication at iclr in particular for a paper that proposes a practical variation of a theoretically sound algorithm the experimental evaluation is essential i think that the results are currently unable to clearly show the advantages of the proposed methoddocsepthe paper proposes srdice which uses a successor representation to compute dice discounted stationary distribution correction term i am worried about both the technical and experimental qualities of this work the theorems presented are either obvious or previously presented in other works while the authors argue that the marginalized importance ratio is independent of the horizon i assume that they are talking about the variance mis only alleviates the estimator variances exponential dependence on the horizon to become the polynomial dependence on the horizon as proved in xie et al towards optimal offpolicy evaluation for reinforcement learning with marginalized importance sampling 2019 in the experiments it is hard to believe that the gradientdice and dualdice perform that poorly having log mse larger than 0 while the gendice paper reports log mse less than 4 halfcheetah the paper uses phi and psi learned by the previous deep successor representation learning algorithm which is not meant to be used to learn marginal importance ratio in particular phi is learned by minimizing state action and reward reconstruction error and psi is the discounted sum of phi if we consider a case where pi only exploits a very small subset of stateaction space it is easy to see that the reconstruction error minimization in the dataset is not an optimal representation for the marginal importance ratio learning in this sense only the linear vector w is used for the learning of marginal importance ratio the experiment setting is not fair directsr and srdice in their implementation have effectively 2 hidden layers where dualdice and gradientdice in their implementation have a single hidden layer the paper is hard to follow especially notation abuse between the real reward and the virtual reward which is optimized to give a marginal importance ratio is very confusing abuse 
between real q and minimizer q as well section 42 is also confusing because the authors imposes the problem of dualdice that is not actually handled by srdice the idea of adopting successor representation for learning marginal importance ratio seems quite novel some people will be interested in this work but i think the paper would not have much impact on the field overall pros the idea of using successor representation for learning marginal importance ratio is novel avoids minimax formulation of other dice algorithms which makes the optimization very hard cons not very meaningful theoretical results are presented which mostly just confuse readers uses the representation that is not learned for marginal importance ratio learning questionable experiment results minor details y axis label is log mse for figures although the y axis is log scaled mse most of the concerns are addressed by the authors and i raised my score accordingly docsepthe authors propose srdice based on deep sr for density ratio learning empirical advantages are observed in tested domains overall i think the idea is interesting and theoretically sound but the experiments are not fully convincing it looks the main claim is that srdice is better than other mis methods because srdice delegates the update propagation over the mdp to sr while other mis methods consider update propagation and density ratio learning together to me this claim is not coupled with function approximation at all so i would like to first see some experiments in the tabular setting srdice is a twostage learning algorithm ie sr learning density ratio learning both have hyperparameters to be tuned gradientdice and dualdice are onestate learning algorithm if in the tabular setting we can empirically verify that under the best hyperparameter configuration of each algorithm guaranteed by a thorough grid search srdice is more data efficient counting the samples used in both stages than gradientdice and dualdice in terms of the density ratio prediction error then the argument can be well backed wellcontrolled experiments like this however do not appear in the current submission once deep networks are used for function approximation we run into the problem of representation learning the authors should at least include one more experiment where mis methods run directly on the pretrained deep sr features psipis andor phis in this way we can distinguish whether the empirical advantage of srdice comes from srdice itself or the improved representation learning im also interested in seeing experiments for larger gammas eg 0999 09999 im wondering if srdice can consistently outperform gradientdice with increasing discount factors overall im happy to increase the score if i have any misunderstanding or more convincing results are presented i appreciate that the authors include deep td and behavior rpi as baselines the empirical study has independent interest beyond srdice moreover deep td is also referred to as fittedqevaluation fqe in x x voloshin cameron et al empirical study of offpolicy policy evaluation for reinforcement learning arxiv preprint arxiv191106854 2019 nov 24 the author response addressed my concerns and i therefore raised my score from 5 to 6 i particularly like the idea of using successor representation for density ratio learning ### Summary:
the paper is about an approach that combines the successor representation with marginalized importance sampling although the reviewers acknowledge that the paper has some merits namely an interesting idea a good discussion and an extensive experimental analysis and the authors responses resolved most of the reviewers issues the paper remains borderline and the reviewers did not reach a consensus about its acceptance in particular the reviewers feel that the contributions of this paper are not significant enough i encourage the authors to revise their paper by taking into consideration the suggestions provided by the reviewers and to submit it to one of the forthcoming machine learning conferences
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: in this paper the authors assess the quality and reliability of open ended systems such as gpt3 and codex from the perspective of cognitive biases the authors primarily focus on the application of code generation and design examples that elicit commonly made errors by these systems for the task of code generation and completion the authors primarily focus on the 4 different set of cognitive biases 1 framing effect 2 anchoring bias 3 availability heuristic and 4 attribute substitution the main contributions creation of framework that contains hypothesis and prompts that extend existing cognitive bias methodology the hypothesis and examples created can measure the model sensitivity to these transformations the framework is also capable of discovering high impact errors and other types of errors made by the model strengths weakness the paper studies an important problem of robustness of large scale lms such as codex and gpt3 from the task of of code generation using cognitive bias as a way to measure robustness of these models is an interesting and novel approach results in the paper show that by manipulating exhibits different learning effects and thus resulting in significant drops in performance however claiming these to be exhibits of cognitive biases seems to be a bit of a stretch most of the errors discovered could be novel from the perspective of code generation but prior literature in the area of robustness and fairness and prompt engineering shown that these large scale model are susceptible to unwanted context added to prompts the authors address limitations of systems such as gpt3 and codex however there is no mention of limitations of designing prompts docsepthe paper aims to evaluate generation models in text using qualitative categories patterns that demonstrate systematic errors as opposed to individual instance errors the main casestudy is performed on opanais codex where the authors find that codex does predictable mistakes which are dependent on a how the input prompt is framed b outputs are closer to anchors or c biased towards instances that are similar to highly frequent training samples the paper also claims that the proposed framework which uses the concept of human cognitive biases to elicit hypotheses of errors can unearth highimpact errors which for codex amounts to generations that can delete files strengths essentially the paper attempts to understand how large language models are sensitive to the context provided do they focus on irrelevant information or distractors and can their failures be categorized into frequent modes the novelty claimed here is that the paper analyzes these failures from the lens of cognitive biases the paper can be seen as an extension of checklist ribeiro et al here the authors rely on cognitive bias literature to design adversarial prompts with expected outputs unearthing highimpact errors is an important usecase weaknesses the nature of the prompts or the output tendencies mimicking patterns from training corpora etc have been studied in the literature before on different applications for example checking for sensitivity to context and the noise present in them is a known approach the contribution that this paper makes is probably in collating these testing strategies and clubbing them under the cognitive bias framework the connection to cognitive biases while interesting at times seem forced nevertheless it is an 
interesting way to look at adversarial examples and might be beneficial as it summarizes some important patterns of failure particularly patterns such as framing noise in context and anchoring are interesting as they have wide applicability there is no clear section dedicated to the limitations of the current work maybe the authors should think about situations where cognitive bias framework would fail to elicit errors also are there cases where output of certain prompt attacks can never be anticipated docsepthis paper explores the robustness of large language models through the lens of 4 human cognitive biases framing anchoring attribute substitution and availability results show that openais codex program synthesis model is not robust to changes in prompts that are inspired by these 4 biases strengths wellwritten and easy to understand for the most part especially introduction and some figures categorizes a few types of errors for program synthesis models and develops interesting evaluations for them ie functional accuracy custom evaluations for each error type experiments demonstrate that codex and likely other language modelbased program synthesis models are not robust to certain types of perturbations example of anchoring applied to gpt is especially interesting since it shows a similarity between how language models lms and humans behave weaknesses paper seems to overgeneralize contribution eg claims to study failure modes of large language models but contributions are rather specific to prompt robustness i would expect a paper with this general of a title and abstract to study more errors more comprehensively eg anaphora resolution entailment long context etc i recommend narrowing down the claims made in this paper contribution is relatively simple changing prompts and measuring robustness could expand scope by increasing types of cognitive biases tested or more experiments on gpt missing some key details for reproducibility and understanding needs details on how functional accuracy is checked eg environment open source code figure 3 unclear what is going on maybe add full example andor more explanation in caption figure 5 clarify what colors mean and what is input vs generated section 5 what examples are used here and how many how were examples generated and evaluated section 323 on availability bias unsubstantiated l213214 our results suggest that codex can err by outputting solutions to related frequent prompts in the training set need to show evidence that unary operations applied before binary in training set to make this claim recommend sampling training set and counting unaryfirst vs binaryfirst yes docsepthis paper proposes to use human cognitive biases as inspiration for types of errors that large language models may make as a means of finding and catching such errors they run experiments on four such cognitive biasinspired errors on code generation with codex models they also include two additional sets of smaller experiments one with gpt3 with a preexisting human cognitive bias experiment and another with codex on code deletion as a means of emphasizing highimpact errors as a foreground my view on the use of human cognitive biases as a motivation in this paper is that it is a useful and valid approach for coming up with potential error categories but ultimately the results in the paper can be interpreted and have value independent of the associated human cognitive biases because of this and my relative unfamiliarity with human cognitive bias work i will focus my comments 
on evaluating the experiments primarily from the point of a view of a machine learning researcher strengths the paper is exceedingly clear wellwritten and easy to follow the examples of the prompt formats is extremely welcome and papers that do promptbased experiments would do well to follow this example the experiments are wellstructured and wellmotivated while i do have concerns regarding some of the experimental setup described below the logical flow from human cognitive biases to a proposed equivalent model experiment to the metrics being measured are generally clear reasonable and cover the questions a critical reader might have eg also measuring copying of the irrelevant propmt weaknesses experimental setup evaluation my major concern with this paper is the experimental design and whether the experiments correspond to the research question being asked i will discuss my concerns with each of the experimental setups here this section includes both what i consider weaknesses as well as questionsadditional results i would like to see 1 framing i have an issue with the choice of irrelevant preceding function and the interpretation of model behavior in that i do not share the authors interpretation that the prepended functions are irrelevant for instance given a prompt def addx y adds numbers raise notimplementederror def multiplyx y multiplies numbers i would expect the model to output raise notimplementederror this looks like a file that is workinprogress and to my mind the model is correctly extrapolating from the prompt in the authors words i do not believe this is semantically irrelevant information a better example and this overlaps somewhat with anchoring would be something like def dividex y divides numbers assert y 0 return x y def multiplyx y multiplies numbers in this case if the model outputs assert y 0 then we can conclude that the model is actually using the prepended function in an erroneousunhelpful way put another way if the experimenter wants the model to generate functional code they should to prompt with functional code this should be a pretext for all experimental involving functional accuracy 2 anchoring i think the setup and results in this experiment are clearer than for framing one concern i have is with the confounding factor of the anchor function having the same name as the function to be completed an experiment with a similar name instead eg common1 vs common2 would clarify this result 3 availability the authors conjecture without evidence that programmers tend to apply unary operations first and use this as the basis for their conclusion that codex has learned to output related prompts that occur more frequently in the training set from the results presented we can take away that the model appears more biased toward unaryfirst solutions but it seems difficult to draw conclusions relating to the training set without further evidence i also have a milder concern regarding the formatting of docstrings in this and the attribute substitution experiments docstrings should go under the function signature and describe the function the docstrings in these two experiments are provided above the function signature and written in the form of an instruction write a function that which is not how docstrings are used in practice and therefore should not be part of the expected behavior of the model 4 attribute substitution my primary concern with this experimental setup is that it introduces contradictory function names and docstrings see also the discussion above and 
asserts that the model ought to take the docstrings as the ground truth and that the function names are misleading it is not clear to me that this has to be the case i could conversely argue that in practice documentation updates often lag behind code changes and that the function name is a more reliable sourceoftruth than docstrings under this interpretation what the results in table 2 show is that the model mostly correctly adheres to the function name and is only mildly affected by the incorrect docstring information even through this lens i think the results are still valuable because the conflicting information appears to introduce a set of other errors i recommend the authors revisit and rethink the design and interpretation of this experiment admittedly this is confounded in the docstring subexperiment since the only source of information is the docstring 6 deletion this suffers from the same issue as the attribute substitution experiments though not to the same degree delete_all can reasonably be seen to be context dependent that said i wonder if the results would be different if the function were named more helpfully eg delete_all_with_libraries in summary while i think this approach for finding model errors is valid and worthwhile the current experiments fall short of demonstrating the biases in a convincing manner addressing these issues would make a meaningful difference to my review scale of the experiments the experiments tend to be on a small scale with a smallish number of examples particularly given the potential of models to be sensitive to the exact phrasing of prompts this is a moderate but not large concern lack of evaluation on open models as the authors acknowledge the codex model is private and many details regarding the architecture and training are not public this makes it difficult to ascertain for instance that the models have not had additionally curated data to avoid some of these errors ie a more naive model may perform even worse than reported in the paper while i acknowledge that this paper is primarily focused on codex and duplicating experiments on an additional model with a separate setup is significant additional work for the authors i think it is fair to say that experiments solely on a set of nonpublic models with many unknowns does hurt the scientific contribution of the paper and calls into question whether these results generalize even a small set of experiments on a similar more public model eg codegen that demonstrate equivalency of results would help assuage the reader the authors have framed the work in terms of being helpful to catch errors of large language models which i think is reasonable the work does have experiments testing the propensity for models to indiscriminately delete files but i believe the experiments are conducted within a reasonable and contained setting docsepauthors highlight the fragile nature of the gpt3 style code generation models inspired by cognitive bias research authors design different manual input transformations prompt tuning where they add some irrelevant information to the model using experiments with the codex model they show that the models code generation performance drops significantly when they feed modified input to the model its worth highlighting that the proposed transformations are generally nonadversarial in nature strengths proposed transformations are very intuitive and are wellmotivated by prior research in cognitive science weaknesses while the transformations described in the paper are intuitive some of them
are hard to scale in particular one has to design such transformations manually for a given programming language setup further authors have not provided an easy to use list of transformations or dataset version which other researchers can reuse for future experiments prior work 1 show that problem description context are crucial to execution accuracy this papers experiments also underline that execution accuracy highly depends upon the prompt context fed to the model and minor changes in the context can lead to big drop in execution accuracy i am not sure that this paper offers new insights to the community authors crafted these prompts manually and show that all of these transformations lead to significant accuracy drop its not clear 1 how many different prompts they tried for a given experiment 2 have they reported results for all transformations or only for those transformations which lead to accuracy drop paper is missing crucial details related to how the code was generated in particular its not clear which decoding method greedy sampling with temperature etc is used to generate the data without such crucial details its hard to replicate the results presented in the paper i dont see any major concerns related to negative societal impact ### Summary:
this paper aims to qualitatively categorize errors by large language models for generation tasks ie summarization program synthesis drawing inspiration from 4 human cognitive biases framing effect anchoring bias availability heuristic and attribute substitution the work includes case studies on open ais codex and gpt3 and demonstrates that codex makes predictable mistakes based on the framing of the input prompt outputs that are closer to anchors and instances that are similar to frequent training examples the paper uses the proposed framework to elicit highimpact errors the paper is interesting and is clearly presented the experiments are welldesigned the rebuttal includes additional experiments suggested by the reviewers with different prompts and additional models
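As a concrete illustration of the evaluation pipeline the reviews above discuss (prepending a distractor function to form a "framed" prompt and then checking functional accuracy of the model's completion), here is a minimal Python sketch. The distractor text, function names, and test cases are illustrative assumptions, not the prompts or evaluation harness used in the paper under review.

```python
# Minimal sketch of a framing-style prompt plus a functional-accuracy check.
# All prompt contents and tests are illustrative assumptions.

DISTRACTOR = '''def add(x, y):
    """Adds two numbers."""
    raise NotImplementedError

'''

TARGET = '''def multiply(x, y):
    """Multiplies two numbers."""
'''

def build_prompt(with_framing: bool) -> str:
    """Prepend the distractor function to create the 'framed' prompt variant."""
    return (DISTRACTOR + TARGET) if with_framing else TARGET

def functionally_correct(completion: str, tests=((2, 3, 6), (0, 5, 0))) -> bool:
    """Execute the completed definition and compare outputs against expected values."""
    namespace = {}
    try:
        exec(TARGET + completion, namespace)
        fn = namespace["multiply"]
        return all(fn(x, y) == expected for x, y, expected in tests)
    except Exception:  # syntax errors, NotImplementedError, wrong signature, ...
        return False

if __name__ == "__main__":
    print(build_prompt(with_framing=True))
    print(functionally_correct("    raise NotImplementedError\n"))  # False: copied the distractor body
    print(functionally_correct("    return x * y\n"))               # True: correct completion
```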
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper evaluates two moving average strategies for gan optimization since exact theoretical analysis is difficult for this case some informal considerations are provided as an explanation of the performance gain experiments confirmed the high performance of averaging the basic idea seems to be reasonable a moving averagebased strategy would stabilize the optimization process the obvious weakness of the paper is technical novelty although the experimental improvement is confirmed i would have to say just comparing two known averaging methods would not have strong novelty section 31 would be the most important part of the paper but it only mentions a quite general tendency of averaging that seems not specific to gandocsepthis paper tries to adapt the concept of averaging well known in the game literature to gan training in a simple minmax example the iterates obtained by gradient method do not converge to the equilibrium of the game but their average does this work first provides intuitions on the potential benefits of exponential moving average ema on a simple illustrative example and explores the effect of averaging on gans i think that the approach of this paper is interesting i particularly like the experiments on celeba fig 6 and 7 that seem to show that the averaged iterates change more smoothly with respect to the attributes of the faces during the training procedure nevertheless i have some concerns about the claims of the paper and the experimental process im surprised by the values of the inception score provided in table 2 which do not seem to correlate with the sample quality in fig 3 why did you not use the standard implementation of the inception score provided in salimans et al 2016s paper i think that the effectiveness of ema over uniform averaging is a bit overclaimed from a theoretical point of view uniform averaging works better at least in your example in 31 if you uniformly average the periodic orbit you get a converging iterate moreover concerning this toy example note that this continuous analysis has already been introduced in goodfellow et al 2016 and the hamiltonian interpretation has already been provided in balduzzi et al 2018 however i think that the intuition on the vanishing magnitude of the oscillation provided by ema is interesting the continuous dynamics is actually different from the discrete one i think that an analysis of the discrete case that is used in practice might be more insightful the comparison with uniform averaging is not fair in the sense that uniform averaging has no hyperparameter to tune in figure 6 uniform averaging performs better than a not well tuned ema a fair comparison would be for instance to propose a parametrized online averaging $\theta^{ma}_t = \frac{t - \alpha}{t}\,\theta^{ma}_{t-1} + \frac{\alpha}{t}\,\theta_t$ and to tune it the same way beta is tuned in ema refs salimans tim et al improved techniques for training gans advances in neural information processing systems 2016 goodfellow i 2016 nips 2016 tutorial generative adversarial networks arxiv preprint arxiv170100160 balduzzi david et al the mechanics of nplayer differentiable games icml 2018 minor comments in the introduction gradient vector fields of the game may not be conservative mescheder et al 2017 and the related work mescheder et al 2017 states that a reason for nonconvergence is the nonconservative gradient vector of the players the notion of conservative vs nonconservative vector field is never mentioned in
mescheder et al 2017 i think you are actually referring to the blog post on that paper httpswwwinferencevcmynotesonthenumericsofgans in the related work can not in fact it has recently been established that the smooth continuoustime analogues of first order methods such as online gradient descent followtheregularized leader in bilinear zerosum games are recurrent ie effectively periodic with trajectories cycling back into themselves can you provide a citation some published papers are referenced as arxiv papers for instance mescheder et al 2017 and mescheder et al 2018 you should cite the published versions docsepthe submission analyzes parameter averaging in gan training positing that using the exponential moving average ema leads to more wellbehaved solutions than using moving averages ma or no averaging none while reading the submission the intuitively given explanations for using ema cycling mainly seem reasonable however i do not think there is sufficient understanding of the nonconvergence behavior in realworld gan settings and this submission does not contribute much to it the theoretical underpinnings in section 31 are quite thin and focus on describing one particular example of a bilinear saddle problem which is quite far from a typical gan as used eg in computer vision problems although interesting to read i would not draw any widerreaching conclusions from this carefully constructed example instead the submission serves mainly as an experimental study on why ema works better in some of the tested cases than ma/none the main quantitative measures are the oftenused is and fid it is clear from both the provided quantitative values as well as the provided qualitative images that either averaging method is likely better than no averaging unfortunately is and fid contradict each other somewhat for ema vs ma in table 2 which is attributed to is being more flawed than fid neither measure is flawless however which diminishes the usefulness of the numeric results somewhat well designed human studies may be complicated to set up and costly to conduct but these could provide additional confirmation of the usefulness of the proposed method ema introduces an additional hyperparameter beta which is only discussed very briefly and only in the context of qualitative results i missed a more thorough discussion of the impact of beta overall the submission makes an interesting proposition usage of ema during gan training but falls short in convincing me that this is a useful thing to do in broader contexts overall originality is minor projected significance is minor to medium edit after the rebuttal resulting in several changes and additions to the paper i am changing my rating from 5 to 6 ### Summary:
this work analyses the use of parameter averaging in gans it can mainly be seen as an empirical study while also a convergence analysis of ema for a concrete example provides some minor theoretical result but experimental results are very convincing and could promote using parameter averaging in the gan community therefore even if the technical novelty is limited the insights brought by the paper are intesting
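The reviews above contrast exponential moving averaging with uniform (cumulative) averaging of generator iterates. The following sketch shows both update rules applied to a list of parameters; the value of beta, the placement of the updates, and the initialization of the averaged copies are illustrative choices, not the paper's exact settings.

```python
import torch

@torch.no_grad()
def ema_update(avg_params, params, beta=0.999):
    """theta_ema <- beta * theta_ema + (1 - beta) * theta_t (exponential moving average)."""
    for p_avg, p in zip(avg_params, params):
        p_avg.mul_(beta).add_(p, alpha=1.0 - beta)

@torch.no_grad()
def uniform_avg_update(avg_params, params, t):
    """theta_ma <- ((t - 1) * theta_ma + theta_t) / t (running mean over all iterates, t >= 1)."""
    for p_avg, p in zip(avg_params, params):
        p_avg.mul_((t - 1) / t).add_(p, alpha=1.0 / t)

# Usage sketch inside a GAN training loop, with the averaged copies initialized
# as detached clones of the generator parameters:
# ema_copy = [p.detach().clone() for p in generator.parameters()]
# ma_copy  = [p.detach().clone() for p in generator.parameters()]
# for t, batch in enumerate(loader, start=1):
#     ...generator / discriminator optimization step...
#     ema_update(ema_copy, generator.parameters())
#     uniform_avg_update(ma_copy, generator.parameters(), t)
```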
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the authors proposed a normalization method that learns multimodal distribution in the feature space the number of modes k is set as a hyperparameter each sample xn is distributed softly assigned to modes by using a gating network each mode keeps its own running statistics 1 in section 32 it is mentioned that the mn didnt need and use any regularizer to encourage sparsity in the gating network is mn motivated to assign each sample to multiple modes evenly or to a distinct single mode it would be better to provide how the gating network outputs sparse assignment along with the qualitative analysis 2 the footnote 3 showed that individual affine parameters doesnt improve the overall performance how can this be interpreted if the mn is assuming multimodal distribution it seems more reasonable to have individual affine parameters 3 the overall results show that increasing the number of modes k doesnt help that much the multitask experiments used 4 different datasets to encourage diversity but k2 showed the best results did you try to use k1 where the gating network has a sigmoid activationdocsepthe paper proposes a generalisation of batch normalisation bn under the assumption that the statistics of the unit activations over the batches and over the spatial dimensions in case of convolutional networks is not unimodal the main idea is to represent the unit activation statistics as a mixture of modes and to reparametrise by using mode specific means and variances the posterior mixture weights for a specific unit are estimated by gating functions with additional affine parameters followed by softmax a second similar variant applies to group normalisation where the statistics is taken over channel groups and spatial dimensions but not over batches to demonstrate the approach experimentally the authors first consider an artificial task by joining data from mnist fashion mnist cifar10 and svhn and training a classifier lenet for the resulting 40 classes the achieved error rate improvement is 269 231 when comparing with standard bn in a second experiment the authors apply their method to single classification tasks like cifar10 cifar100 and ilsvrc12 and use large networks as eg vgg13 and resnet20 the achieved improvements when comparing with standard bn are one average 1 or smaller the paper is well written and technically correct further comments and questions to the authors the relevance of the assumption and the resulting normalisation approach would need further justification the proposed experiments seem to indicate that the node statistics in the single task case are less multimodal as compared to the multitask otherwise we would expect the comparable improvements by mode normalisation in both cases on the other hand it should be easy to verify the assumption of multimodality experimentally by collecting node statistics in the learned network or at some specific epoch during learning it should be also possible to give some quantitative measure for it please explain the parametrisation of the gating units more precisely paragraph after formula 3 is the affine mapping x rk a general one assuming that x has dimension cxhxw this would require a considerable amount of additional parameters and thus increase the vc dimension of the network even if its primary architecture is not changed would this require more training data then i miss a discussion of this aspect when comparing different 
numbers of modes sec 41 table 1 the size of the batch size was kept constant the authors explain the reduction of effectiveness of higher mode numbers as a consequence of finite estimation decreasing number of samples per mode would it not be reasonable to increase the batch size proportionally such that the amount of samples per mode is kept constantdocsepsummary batch normalization bn suffers from 2 flaws 1 it performs poorly when the batch size is small and 2 computing only one mean and one variance per feature might be a poor approximation for multimodal features to alleviate 2 this paper introduces mode normalization mn a new normalization technique based on bn it uses a gating mechanism similar to an attention mechanism to project the examples in the minibatch onto k different modes and then perform normalization on each of these modes clarity the paper is clearly written and the proposed normalization is well explained novelty the proposed normalization is somewhat novel i also found a similar paper on arxiv submitted for review to ieee transactions on pattern analysis and machine intelligence 2018 m m kalayeh m shah training faster by separating modes of variation in batchnormalized models arxiv 2018 i didnt took the time to read this paper in details but the mixture normalization they propose seems quite close to mn could the authors comment on this pros and cons clearly written and motivated try to address bns weakness which is an important direction in deep learning i found similar papier in the literature the proposed method aims to make bn perform better but pushes it toward small batch settings which is where bn performs poorly misses comparisons with other techniques see detailed comments detailed comments 1 multimodality it is not clear if the features are multimodal when performing classification tasks some histograms of a few features in the network would have help motivate the proposed normalization however it seems indeed to be an issue when training gans to make bn work when placed in the discriminator the real and fake examples must be normalized separately otherwise the network doesnt train properly moreover when dealing with multimodal datasets such as the one you created by aggregating different datasets one can use the film framework v dumoulin et al featurewise transformations distill 2018 and compute different means and variances for each datasets how would the proposed method perform against such method 2 larger scale it would be nice to see how mn performs on bigger networks such as the resnet50 or a densenet and maybe a more interesting fullyconnected benchmark such as the deep autoencoder 3 small batch regime it seems that the proposed method essentially pushes bn towards a regime of smaller minibatch size where it is known to performs poorly for instance the gain in performances on the imagenet experiments drops quite a lot already since the training is divided on several gpus and thus the effective minibatch is already reduced quite a lot this effect gets worse as the size of the network increases since the effective minibatch size gets smaller this problem also appears when working on big segmentation tasks or videos the minibatch size is typically very small for those problems so i fear that mn will scale poorly on bigger setups i also think that this is the reason why you need to use extremely small k 4 validation set what validation sets are you using in your experiments in section 41 the different dataset and their train test splits are presented but 
what about validation conclusion given the similarity with another paper already in the literature i reject the paper also it seems to me that the technique actually pushed bn towards a small batch regime where it is known to perform poorly finally it misses comparison with other techniques revision after the rebuttal i increased my rating to a 6 i feel this paper could still be improved by better motivating why multimodality is important for single tasks for example by plotting histograms of activations from the network i also think that the paper by kalayeh shah should be presented in more details in the related work and also be compared to in the experimental setup for example on a small network especially because the authors say they have experience with gmms ### Summary:
the paper develops an original extensiongeneralization of standard batchnorm and group norm by employing a mixtureofexperts to separate incoming data into several modes and separately normalizing each mode the paper is well written and technically correct and the method yields consistent accuracy improvements over basic batchnorm on standard image classification tasks and models reviewers and ac noted the following potential weaknesses a while large on artificially mixed data improvements are relatively small on single standard datasets 1 on cifar10 and cifar100 b the paper could better motivate why multimodality is important eg by showing histograms of node activations c the important interplay between number of modes and batch size should be more thoroughly discussed d the closely related approach of kalayeh shah 2018 should be presented and contrasted with in more details in the paper also comparing to it in experiments would enrich the work
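As a rough sketch of the mechanism discussed above (a gating network softly assigns each sample to one of K modes and each mode is normalized with its own statistics), here is a simplified mode-normalization layer for fully connected features. The running statistics used at test time, the convolutional variant, and the paper's exact parameterization are omitted; this is an illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModeNorm(nn.Module):
    """Simplified mode normalization for (batch, features) inputs, training-time statistics only."""

    def __init__(self, num_features, num_modes=2, eps=1e-5):
        super().__init__()
        self.gate = nn.Linear(num_features, num_modes)     # gating network -> soft mode assignments
        self.gamma = nn.Parameter(torch.ones(num_features))
        self.beta = nn.Parameter(torch.zeros(num_features))
        self.eps = eps

    def forward(self, x):                                   # x: (batch, features)
        g = F.softmax(self.gate(x), dim=1)                  # (batch, K) soft assignments
        w = g / (g.sum(dim=0, keepdim=True) + self.eps)     # weights summing to one within each mode
        mu = w.t() @ x                                       # (K, features) per-mode means
        diff = x.unsqueeze(1) - mu.unsqueeze(0)              # (batch, K, features)
        var = torch.einsum('nk,nkf->kf', w, diff ** 2)       # (K, features) per-mode variances
        xhat = diff / torch.sqrt(var + self.eps).unsqueeze(0)
        out = torch.einsum('nk,nkf->nf', g, xhat)            # recombine modes via the soft assignments
        return self.gamma * out + self.beta

# quick shape check
y = ModeNorm(num_features=8, num_modes=2)(torch.randn(16, 8))  # -> (16, 8)
```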
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary the authors propose latephase weights a method of updating the weights near the end of training via a splitting and ensembling mechanism they analyze the benefits in the noisy quadratic setting the method improves validation performance on a range of image recognition tasks and on enwiki8 comments the weight interaction functions h should be more explicitly defined rather than just described in text the paper is overall well written and flows smoothly i think there should be more discussion on the choice of t0 for example in table 1 why does sgd perform worse when t00 it would be good to get a sense of robustness to this hyperparameter good results on cifar latephase weights are shown to boost performance over sgd and to be complementary with swa there are some benefits in the ood setting as well recommendation justification i vote to accept the paper the idea is interesting wellmotivated and seems straightforward to incorporate into existing pipelines however the improvements seems modest in some settings eg imagenet and for the best performance it seems like we should still stick to deep ensembles questions on the imagenet experiments what is the validation accuracy of the pretrained model can you comment on the computaional and memory complexity of your algorithm versus vanilla sgd in the comparisons between late phase weights and sgd do both algorithms consume the same amount of data if so this would be good to mention could the entire network be treated as latephase weights would this help performance minor comments i would consider alluding to possible choices of the weight interaction functions h when it is first introduced at the start of 21 in algorithm 1 how does the loss function consume three inputs this is different from when it is initially described its a bit unclear what is being compared in figure 2 increased score from 6 to 7 docsepsummary the paper proposes a method to improve solutions found by sgd by ensembling subsets of weights in latephase a family of lowdimensional latephase methods are analyzed and shown to improve generalization in cifar10100 imagenet and enwik8 authors also analyze the method in more tractable noisy quadratic settings contribution of the authors is that rather obtaining ensemble they utilize efficient ensemble to guide sgd training and ultimately obtain a single model reason for score while the paper discusses efficient ways of utilizing latephase weight ensemble and improving sgd training the demonstrated benefit is not significant enough for practitioners to pursue the method without strong practical application potential merit of the proposed method is weak since it does not obviously elucidate some aspects of neural network training pros the paper is clearly written and easy to understand the proposed method is it is well structured that helps to improve the clarity proposed method tackles a significant problem in the standard ensemble method in which both traininginference computation can be quite costly the papers method only ensembles subset of weights therefore added training cost is minimal and since inference is done on averaged weight it becomes essentially a single model among various latephase schemes batchnorm latephase seems to work well which is widely used among vision models so easily applicable also since latephase can be applied postpretraining it can be used to improve pretrained models as far as i can 
tell various experimental conditions are very well controlled and thoughtfully designed cons the idea of weight averaging is not so novel as duly noted by the authors main question arises for the paper is whether the proposed method is worth the effort while all experiments show that the proposed method improves the baseline somewhat deep ensemble baselines remain strong also quoted difference between methods does not mean statistically significant effect see vincent vanhouckes article on reporting significant figures httpstowardsdatasciencecomdigitsignificanceinmachinelearningdea05dd6b85b according to this article results reported in table 1 cifar10 in wrn a significant figure with a 10k test set should be around 02 and differences between different methods are at best marginal this can be applied to most tables and except for deep ensembles improvement other differences are not very significant i wonder as discussed by the authors this is due to mostly the benefit of ensembles is through incorporating different modes as argued in fort et al 2020 rather than a single mode i imagine a single mode ensemble could be beneficial when variance within the mode is large however for models considered by the authors seem to have small model variance which minimizes effect of technique utilizing single mode while sigma0 and t0 are hyperparameters of the algorithm no good way to determine it is explained the role of section 31 is not clear for one thing the legend in figure 1 is confusing where the role of noninteger k is mysterious to me i would suggest clarifying what the message of the section would be in context of understanding latephase weight models nits and additional feedback anonymized link is neither there in the main paper or included as supplementary material if the authors intended to include the code this is a note that code can not be found to the reviewers for models that do not use batchnorm i believe most interest to practitioners would be using transformer based models i wonder if rank1 latephase or layernorm latephase would show improvements in this case was latephase classification layers ever evaluated or discussed in the main paper i find some discussion on the appendix but seem to be missing in the main text i thank the authors for their hard work addressing issues raised by the reviewers authors have answered many issues pointed out by improved performance and showing robustness to hyperparameters and ive increased my score from 5 to 6 and support accepting the paper docsepthis work suggests a variant of ensembling that is more computeefficient specifically it involves forking an ensemble only in the late stage of training and forming this ensemble via a lowdimentional family that is instead of maintaining independent networks maintain only lowrankstyle perturbations of the base network for various instanciations of lowrank the experimental results are somewhat limited but appear to be competitive with current efficientensembling approaches like swaswag the absolute improvement of this method is not very large 03 on cifar 02 on imagenet and there is a large gap to deep ensembles i weakly recommend acceptance because the method appears promising for future work and the experiments seem correct there is also a theory section included though i am generally unconvinced by results in such simple toy examples such settings can usually be contrived to exhibit any desired behavior weaknesses the experimental section would be greatly strengthened by additional experiments for 
different models and settings there are only 2 architectures tested on cifar10 for example it would also be informative to see the performance of these methods in harder settings for example cifar10 with fewer train samples the ood uncertainty results could be expanded uncertainty estimation and robustness are some of the most relevant practical uses of ensemble methods so it is especially important to evaluate ensembles in this context currently aggregate results are shown in table 4 but it would be good to explicitly see for example how the performance of this method degrades with increasing cifar10c corruption severity as opposed to deep ensembles also reporting the mean corruption error mce for each dataset individually will allow standard comparison to prior methods comments which do not affect the score it seems that starting the ensembling at a late phase in training is the main contribution of this work this could be applied to any ensemble method and you propose several explicit instantiations it could help to focus the writing in terms of this contribution and also to further investigate the role of t0 the time at which ensembling starts edit after rebuttal increased score from 6 to 7 docsepto improve the generalization performance of sgd methods  this paper proposes to use an efficient ensemblelike approach  which computes an average of an ensemble of sgd weights when retrained from some latephase of sgd dynamics  this idea is different to most recent ensemblebased approaches which  aim to average the predictions of the models  the paper focuses on some specific layers of neural networks  in order to apply the latephase training  the batch normalization layers are shown to be  simple and effective some other layers are also analyzed  including a recently introduced rank1 multiplicative matrix  weights idea for fullconnected layers  section 3 presents the numerical results and show that the generalization of sgd  is moreorless improved on various benchmarks explanation of why the generalization is improved in relation with the flatness of energy landscape is also discussed   i find that this approach is quite sensitive the choice of the hyperparameters  such as the beginning of the latephase t0 and the noise perturbation sigma0  it is written in section 21 that in practice sigma00 yields a set of models this results in improved final generalization however in the result of imagenet in section 33  the sigma0 equals to 0 thus it is not conclusive that sigma00 is better  as the improvement in section 33 seems marginal compared to the baseline and the  standard deviation it thus does not fully support the effectiveness of the batch normalization layers  i would recommend using some other dataset or models  but with a more consistent set of hyperparameters  in terms of writing i would recommend to write out the full algorithm of alg 1 or at least in the appendix  including the variant of the sgd momentum and adam  the swa is also worth writing out clearly which is not clear to the reader  is the deepensemble result in table 1 from sgd or swa  this is not clear from the text overall i think both the methodology and the writing need to be improved the revisions made by the authors have addressed all my concerns ### Summary:
this paper proposes to learn an ensemble of weights given a set of base weights from some point late in normal training the authors apply this approach to a number of configurations and find modest performance improvements for normal test settings and larger improvements for out of distribution settings while reviewers had some concerns about the size of the improvement relative to baselines all reviewers agreed that the proposed method is interesting and will likely impact future work especially given the new experiments provided by the authors i recommend that the paper be accepted
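A rough sketch of the general recipe the reviews describe: fork K copies of a small subset of weights (batch-norm affine parameters here) late in training, train them on different minibatches, and average them back into a single model for evaluation. The parameter-name filter, the perturbation sigma0, and the simple round-robin member selection are illustrative simplifications, not the paper's exact algorithm.

```python
import torch

def is_late_phase(name: str) -> bool:
    # illustrative filter: treat batch-norm affine parameters as the late-phase subset
    return "bn" in name.lower() or "batchnorm" in name.lower()

def fork_late_phase(model, K, sigma0=0.01):
    """Create K perturbed copies of the late-phase weights at step t0."""
    base = {n: p.detach().clone() for n, p in model.named_parameters() if is_late_phase(n)}
    return [{n: v + sigma0 * torch.randn_like(v) for n, v in base.items()} for _ in range(K)]

@torch.no_grad()
def load_member(model, member):
    for n, p in model.named_parameters():
        if n in member:
            p.copy_(member[n])

@torch.no_grad()
def store_member(model, member):
    for n, p in model.named_parameters():
        if n in member:
            member[n] = p.detach().clone()

@torch.no_grad()
def average_members(model, members):
    for n, p in model.named_parameters():
        if n in members[0]:
            p.copy_(torch.stack([m[n] for m in members]).mean(dim=0))

# Training-loop sketch, starting at step t0:
# members = fork_late_phase(model, K=10)
# for step, batch in enumerate(loader):
#     member = members[step % len(members)]
#     load_member(model, member)
#     ...usual optimizer step on the shared weights plus this member...
#     store_member(model, member)
# average_members(model, members)   # single averaged model used at test time
```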
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper shows a latentplanning rl model to learn control policies directly from pixels to learn a better representation it uses recurrent model contrastive learning approach which enhances the representation learning performance of single frame based contrastive methods this was tested robotic control suites with challenging distracting backgrounds the main contribution of the work is the addition of recurrence to the model and the extensive testing and explanation on the reason it tends to work better strong points the paper is well written and easy to follow i particularly liked the more straighttothepoint and honest approach on writing the authors could have easily try to mask their similarity with some other methods in the literature but they were upfront with that this made the understanding of the paper way easier and also easier to understand the main contributions there is a substantial comparison with relevant models in the literature that already deal with the dcs the results obtained are really impressive and are a new stateoftheart on interactive environments with distractive backgrounds the ablations specifically the ones concerning the usability of the recurrent contrastive method are very useful specially the intuition on the negative samples per mini batch further the appendix provides even more ablations making very clear that a recurrent state has a major impact on the general performance the visualizations are also good standard to be used on this field which performs control directly from pixels they also show that their model is capable of directly detecting the object of interest weak points the proposed model is indeed very similar to dreamer1 the authors address specifically one point from dreamer the contrastive learning strategy which failed to produce better results as reported by 1 however i do believe that this closer look into this specific point can be useful for the community and insightful in general questions i would be curious to find more references on the idea that hard negatives needs to be found on the mini batch in order to contrastive learning to have a good performance i wonder how the performance of this method would be in a more open task than mujoco style robotic control tasks with a different initial state or with other elements in the scene that the agent need to interact might have a negative impact on this method results 1 hafner danijar et al dream to control learning behaviors by latent imagination arxiv preprint arxiv191201603 2019 i think this is a very useful and well written paper even though the scope is small the results are convincing and it shows a very clear way on how to effectively use contrastive representation learning methods while learning to control directly from pixels after rebuttal after reading the other reviewers comments and the rebuttal i see that i missed some literature that needed further comparison i think it is a good and well written paper but i would lean for rejection given this new data docsepthe paper proposes a recurrent statespace model that learns robust representations for robotic control the proposed method builds on top of prior works on worldmodels which learn a latent dynamics model of the agent which can be used for planning and action selection different from prior work such as dreamer and slac which rely on pixelbased observation reconstruction this paper highlights that a simpler 
contrastive loss for the nextobservation prediction achieves better results if a recurrent statespace model is used for the latent space results are presented on the distracting control suite benchmark and show strong improvements over prior approaches strengths the clarity of the paper is good and the approach has been described well the experiments have been well designed and indicate strong improvements in the model robustness particularly figure 2 and table 3 table 2 was helpful to understand the key design choices and their impact on final performance the mask visualizations in figure 5 provide good clarity on how the robustness is achieved by the proposed method the ablation studies in the main paper and supplementary clearly highlight the benefits of individual components of the method weaknesses novelty the novelty of the approach isnt quite clear in sec 1 page 2 the authors highlight that one of the key findings is that contrastive learning can in fact lead to surprisingly strong robustness to severe distractions provided that a recurrent statespace model is used this finding itself is valuable in my opinion but the method core does not seem novel to me relative to dreamer and slac the key novelty appears to be the contrastive loss term from eqn 1 which predicts future observations instead of autoencoding however the idea of using a recurrent model to predict encodings of future observations ie not an autoencoder has been studied in cpc 1 actionconditioned cpc aka cpca 2 predictions of bootstrapped latents aka pbl 3 etc cpc does not use action conditioning but uses contrastive learning cpca uses action conditioning contrastive learning very similar to core pbl uses action conditioning reconstruction similar to recon baseline but has an additional loss to predict state conditioned on observation representation given the above the methodological novelty of core is not clear to me particularly relative to cpca is there something different about the contrastive loss in core relative to cpca without which the performance degrades severely or is the novelty only in the observation that recurrent models are needed for contrastive loss to work well note while some recent work like dbc does not use recurrent models prior approaches already use them with contrastive loss intuition behind why recurrent state models are needed for contrastive learning end of section 1 the authors suggest that when recurrent models are used along with contrastive learning the smoothness of the statespace ensures the presence of informative hard negatives in the same minibatch of training this isnt clear to me positives and negatives are obtained from the nextstep real observations encodings just a single frame and does not use the recurrent model how is the smoothness of the statespace related to hard negatives while the performance degrades without the recurrent model in figure 3 top isnt this more likely due to a poorer state representation caused by the lower capacity no rnn and lack of information aggregation over time this would affect all models not just the core missing baselines related works a few related methods have not been compared with or discussed augmented temporal contrast aka atc 4 does not use recurrence but uses future observation prediction and has been shown to achieve good improvements over curl cpca 2 uses recurrence future observation prediction contrastive and action conditioning this is very similar to the proposed method and should be compared with pbl 3 uses recurrence future 
observation prediction reconstruction and action conditioning this method introduces an additional pstate observation term that results in bootstrapped learning might improve over the recon baseline other concerns why is core worse than pse on 4 6 tasks for davis 2 videos in table 1 why is core better with 60 videos why are only 2 baselines used in figure 4 can the authors include the complete set do we observe similar trends as figure 2 are the findings from figure 5 specific to core or do other methods like pse also learn similar masking functions 1 cpc oord aaron van den yazhe li and oriol vinyals representation learning with contrastive predictive coding arxiv preprint arxiv180703748 2018 2 cpca guo zhaohan daniel et al neural predictive belief representations arxiv preprint arxiv181106407 2018 3 pbl guo zhaohan daniel et al bootstrap latentpredictive representations for multitask reinforcement learning international conference on machine learning pmlr 2020 4 atc stooke adam et al decoupling representation learning from reinforcement learning international conference on machine learning pmlr 2021 i am concerned about the lack of clear novelty in the paper and other experimental issues highlighted in weaknesses i will update my rating based on the authors responses docsepthis paper has presented core contrastive recurrent state space model for modelbased robust modelbased reinforcement learning for robotic control standard reconstructionbased state space models are less robust in the unstructured realworld scenarios because of the highfrequency details instead core learns the state space model with contrastive learning which greatly improves robustness in addition to this a policy is being learned with sac experiments on distracting control suites and several robotic control tasks demonstrate the better robustness of core weaknesses the major issue is that the proposed idea and the experiment setup is not novel they highly overlap with a prior corl 2020 paper contrastive variational reinforcement learning for complex observations cvrl 1 which however has not been cited in the submitted manuscript they overlap in the following aspects 1 the idea is the same cvrl also extends rssm using contrastive learning and aims to improve the robustness of the learned model against realworld observations with high frequency noise both of them use infonce for contrastive learning and the same rssm structure in addition both cvrl and the proposed method use the policy loss from dreamer for learning the policy network the resulting equations and loss functions of the two algorithms are almost the same 2 cvrl also experimented on natural mujoco games which introduces moving backgrounds into the standard dmcontrol suites this is exactly the same with the distracted dmcontrol suites used in the submitted manuscript beyond what has been discussed in the paper cvrl has mathematically shown that by replacing the generative observation likelihood with a contrastive objective we can lower bound the original elbo there are some other weaknesses but i believe the issues discussed above are sufficient to make it a clear rejection the paper is highly similar to a prior work as mentioned above and as a result the contribution of the paper is very limited i would vote for a rejection ### Summary:
meta review of robust robotic control from pixels using contrastive recurrent statespace models this work investigates a recurrent latent space planning model for robotic control from pixels but unlike some previous work such as dreamer and rnnvaebased world models they use a simpler contrastive loss for nextobservation prediction they presented results on the dmcontrol suite from pixels with distracting background settings all reviewers including myself agree that this is a wellwritten paper with clear explanation of their approach the main weaknesses of the approach are on the experimental side see review responses to authors rebuttal by skrv and cjx3 another recommendation from me is to strengthen the related work section to clearly position the work to previous work there is clear novelty in this work but this should be done to avoid confusion the positive sign is that in the discussion phase even the very critical cjx3 had increased their score and acknowledged the novelty from previous related work in the current state i cannot recommend acceptance but im confident that with more compelling experiments recommended by the reviewers and better positioning of the paper to previous work i believe that this paper will surely be accepted at a future ml conference or journal im looking forward to seeing a revised version of this paper for publication in the future
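For concreteness, here is a minimal sketch of the contrastive next-observation objective the reviews refer to: the recurrent model's predicted latent for step t is scored against the encoded observation at step t, with the other observations in the minibatch serving as negatives (an InfoNCE loss). The projection heads, the temperature, and the commented usage lines are assumptions for illustration, not the exact architecture of CoRe or CVRL.

```python
import torch
import torch.nn.functional as F

def infonce_loss(pred_states, obs_embeddings, temperature=0.1):
    """pred_states, obs_embeddings: (batch, dim); row i of each forms the positive pair."""
    pred = F.normalize(pred_states, dim=1)
    obs = F.normalize(obs_embeddings, dim=1)
    logits = pred @ obs.t() / temperature                  # (batch, batch) similarity matrix
    labels = torch.arange(pred.shape[0], device=pred.device)
    return F.cross_entropy(logits, labels)                 # diagonal entries are the positives

# Usage sketch inside model training (rssm, encoder, project are placeholders):
# s_t = rssm(prev_state, prev_action)                      # recurrent latent prediction for step t
# e_t = encoder(obs_t)                                      # per-frame observation embedding
# loss = infonce_loss(project(s_t), project(e_t))

# standalone check with random features
loss = infonce_loss(torch.randn(32, 64), torch.randn(32, 64))
```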
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper focuses on understanding the tail behavior of normalizing flows through a mathematical and statistical way motivated by jaini et al 2020s work on learning longtailed distribution via triangular flows this work proves that the marginal tailedness can be controlled by the tailedness of the marginals of the base distribution in flowbased models based on this theoretical insight the authors propose a new algorithm by leveraging a datadriven permutation scheme to enable a correct tail behavior of the target distribution strength 1 solid and rigorous mathematics foundation the theoretical proofs are very helpful and provide a clear insight which explains the motivation and intuition 2 careful discussion about the related work and current limitations the authors provide a good review and comparison of the existing works also the limitations and future directions are helpful and insightful 3 wellwritten and easy to follow the paper is wellstructured and easy to follow the theoretical proof strongly connects with the experiments which makes the papers much easier to understand weakness 1 the novel contribution is marginal this work is mainly inspired by jaini et al 2020 who proposed to model longtailed distribution via normalizing flows although the theoretical contribution is strong the new proposed mtafs did not show a significant improvement as shown in table 1 compared with vanilla and taf 2 lack of baseline methods and comparison the sota flow models and architectures are not included in the baselines although it is argued by the authors in the potential work i still believe the comparison is necessary many papers have shown that the affine coupling layers such as realnvp have limited expressivity in handling complex distribution either lighttailed or heavetailed distribution would be more challenging so the worse performance might be due to the limited representation capability of the vanilla flows 3 experiments need to be improved with largescale and highdimensional datasets currently only synthetic toy examples are provided to demonstrate the performance the dimensionality is also low if the proposed algorithm is able to scale to highdimensional problems it would be very helpful to increase the impact the paper provides a strong theoretical insight but the experiments and baselines are weak the contribution seems limited with a marginal improvement compared with the current work docsepthe paper proposes an extension to tailadaptive flows for learning the tail behavior of target distributions using normalizing flows the authors propose to learn the tail behavior by learning flows that match the tail properties of the marginal distributions they achieve this by using a source distribution consisting of marginal distributions with tail properties matching the target distribution the tail coefficient of the source distribution is set in a datadriven manner using estimators that can estimate this tail coefficient the paper is the problem of estimating the tail behavior pros modelling tailphenomena or rare events in general is a challenging problem made even more difficult in higher dimensions due to the lack of any definition of heavylight tails in higher dimension the problem considered by the authors of capturing tail behavior by normalizing flows and limitations of the chosen architecture for flow layers imposes is an interesting and valid problem the proposed solution 
that uses estimators to estimate the tailcoefficient for the choice of base distribution is interesting and a nice addition to start the optimization of the nu at a favorable location and to be closer to whats needed overall the paper is very well written and easy to follow the paper develops the idea in a natural and easy to understand manner cons 1 motivation i found that the problem under consideration was not properly motivated and this issue lingers throughout the paper essentially making the paper come across as just an extension of jaini etal 2020 for example it is not clear from the paper why capturing tails in variational inference paradigms is of importance it can be shown that the error in modelling a probability density can be bounded arbitrarily well by learning the density properly on a bounded subset of the support of the density thus what are the drawbacks in the model if it is unable to capture the tail phenomena present in the problem the authors do make statements regarding the limitations of the pushforward density given the choice of base density and the transformation map however i believe a more thorough discussion about the implications of these results both in general and particularly for normalizing flows and some recipe or ideas to alleviate these problems will help the paper tremendously 2 significance another weakness i believe is the significance of the paper itself the main result of the paper ie that of specifying and ensuring that marginal distributions have the correct tail coefficient is a direct extension of the work of jaini etal 2020 in some ways the optimization problem presented in jaini etal can already encompass the correct marginal tails by optimizing over the nu vector this weakness of significance is further amplified by the lack of strong empirical results in the present form the experimental results come across more as proofsofconcept rather than providing strong empirical support furthermore id be interested to see the gain in starting the optimization with tailcoefficients estimated using the various estimators vs random initialization and letting the process figure these out will the first step of using estimators lead to any significant gains other comments definition 4 it seems is a bit restrictive as well since it completely sidesteps any issues with differences in tailcoefficients for example different marginals can be heavy tailed but have different degrees of heaviness in that case is it correct to say that the two distributions have the same tail behavior it seems in the experiments that if the tail estimator estimates that some marginals are lighttailed a normal distribution is used however again there can be degrees of lighttailedness see jaini etal 2020 eg uniform vs normal in these scenarios too a lighter tailed distribution cannot be pushedforward to another lighttailed distribution but with higher tail coefficient with lipschitz maps thus the problem of mismatched tails may still persist overall the paper studies a pertinent and difficult problem however in the current form the present manuscript provides only initial proofsofconcept for potentially interesting ideas these ideas need to be demonstrated and explored in more detail both in theory and empirically to make the manuscript stronger docsepthis paper introduces marginally tailadaptive flows mtafs which extend existing work on tafs to better learn a generative model of heavytailed distributions in particular they propose a new type of normalizing flow nf that can learn
marginals with mixedtail behavior strengths i thought the paper was interesting nfs definitely do have their limitations despite their expressivity and i dont think the problem of generating distributions with a mixture of both heavy and lighttailed marginals has been considered before the paper provides a more general definition of heavy tailedness that extends existing work and uses it to construct their mtaf method weaknesses that being said i think the paper still requires a significant amount of work in order to demonstrate the efficacy of mtaf first its not clear to me when you would run into situations where you want to generate distributions with mixedtail behavior in the marginals i understand that it would be desirable to generate distributions with heavy tails but when do we encounter cases where we would like to do both i think making this clear would definitely strengthen the paper and could also guide some downstream evaluation tasks the experiments were probably the weakest aspect of the paper mtaf was only evaluated on a synthetic dataset of 16 dimensions which seems too small even for tabular datasets commonly used for evaluating nfs also the evaluations conducted in the experiments did not clearly demonstrate the advantage of mtaf over existing methods for example in table 1 does mtaf capture both the lighttailed and heavytailed components better than tafthe base method this is hard to tell with just a simple average additionally its hard to tell the difference between mtaf and taf in figures 2 and 3 i think the paper would be much stronger if the authors could find some compelling use cases of the method beyond synthetic gaussians and demonstrate that mtaf both captures all marginals more faithfully via likelihoods and can generate samples properly in the tails i also think a big limitation of the method is that mtaf essentially requires separating out the lighttailed marginals from the heavytailed marginals the permutation step where such marginals are grouped into 2 categories this seems particularly problematic as the real advantage of using nfs is to learn complicated dependencies between all dimensions of the data to best capture the overall density this is also why i was asking whether there are realworld examples where such mixtures occur and whether this kind of independence assumption makes sense in these scenarios it seems like mtaf is very restrictive and i am wondering if maybe that is why it doesnt significantly outperform taf and the vanilla baseline questions im also curious if the method performs worse relative to conventional flows eg maf when the distribution in question is only lighttailed or heavytailed it seems like if the tail index estimator is correct mtaf should return the correct tail behavior of each marginal and generate either a lighttailed or heavytailed distribution only is this the case or does mtaf do a worse job at modeling say the lighttailed components etc additionally the vanilla baseline has pretty high variance and sometimes seems to perform on par with taf would the authors elaborate upon this point miscellaneousminor typos allow us to in section 21 by out theory in section 5 although the paper extends an existing approach to learn generative models of distributions exhibiting mixedtail behavior the paper has a number of weaknesses 1 its not clear when such mixedtail behavior arises in the realworld 2 the class of flows considered are quite restrictive affine coupled with a permutation that requires lighttailed and heavytailed 
marginals to be split into two consecutive blocks and 3 the empirical results are lacking they only provide experiments on datasets of dim16 docsepthe paper is composed of two main parts 1 a theoretical section where the authors prove that lipschitz triangular normalizing flows cannot map either heavy or lighttailed base distributions into target distributions with different tail indices for different marginals 2 an algorithmic section where the authors introduce a new method for modeling distributions with different tail indices for different marginals both the theoretical and the algorithmic parts are a straightforward extension of the analysis and methods introduced in the paper tailadaptive flows jaini 2020 strengths the addressed problem is important vanilla normalizing flows such as other deep density estimators are poor at tailestimation which limits their applicability to many problems in science and engineering the paper is very well written and it serves as a good introduction to both normalizing flows and heavytailed distributions the offered solution is technically sound the theoretical analysis is sound and convincing weaknesses the main weakness of the paper is its very limited novelty all theoretical analysis and methodological improvements are relatively minor modifications of the work in jaini 2020 the present paper does not contain major new ideas the proposed method is somewhat inelegant as it proposes the use of a separate offtheshelf tail estimator prior to the flow training i do agree that it could be the right approach in many applications however it is a rather obvious idea not really worth of a top conference publication i highly appreciate the clarity and technical soundness of the paper however i cannot recommend acceptance given the very limited novelty ### Summary:
this paper addresses the performance of normalizing flows in the tail of the distribution it does this by controlling tail properties in the marginals of the highdimensional distribution the paper is wellmotivated and the key theoretical insight has merit however the general perspective and methodology appears to be incremental relative to past results furthermore some concerns over correctness remain after discussion with authors also clear baselines and more realistic settings are lacking in the experimental results thus while the paper generally has promising ideas on a pertinent topic it appears to be not developed enough to merit dissemination
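The reviews above describe a data-driven step in which a tail index is estimated for each marginal and the base distribution is then assembled from Student-t marginals for heavy-tailed dimensions and Gaussian marginals otherwise, with a permutation grouping the two kinds. The sketch below uses a simple Hill-type estimator and an arbitrary heaviness threshold purely for illustration; the estimator, the threshold, and the choice of degrees of freedom are assumptions, not necessarily those used in the paper.

```python
import numpy as np
from scipy import stats

def hill_tail_index(x, frac=0.05):
    """Hill estimator of the tail index from the largest order statistics of |x|."""
    x = np.sort(np.abs(np.asarray(x)))[::-1]
    k = max(10, int(frac * len(x)))
    return 1.0 / np.mean(np.log(x[:k] / x[k]))

def build_base_marginals(data, heavy_threshold=4.0):
    """data: (n_samples, dim). Returns per-dimension base marginals and a permutation
    that puts light-tailed dimensions first and heavy-tailed dimensions last."""
    tail_index = np.array([hill_tail_index(data[:, j]) for j in range(data.shape[1])])
    heavy = tail_index < heavy_threshold                # smaller index <=> heavier tail (threshold is arbitrary)
    perm = np.argsort(heavy)                            # light-tailed dimensions first
    marginals = [stats.t(df=tail_index[j]) if heavy[j] else stats.norm() for j in perm]
    return marginals, perm, tail_index

# synthetic mixed-tail example: one Gaussian marginal, one Student-t(2) marginal
rng = np.random.default_rng(0)
data = np.column_stack([rng.standard_normal(5000), rng.standard_t(df=2, size=5000)])
marginals, perm, tail_index = build_base_marginals(data)
print(tail_index, perm, [m.dist.name for m in marginals])
```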
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this is a very interesting paper and it suggests a novel way to think of implicit regularization the power of this paper lies in its simplicity and its inspiring that such almosteasy arguments could be made to get so much insight it suggests that minimizers of the bregrman divergence are an alternative characterization of the asymptotic endpoints of stochastic mirror descent smd when it converges so choice of the strongly convex potential function in smd is itself a regularizer its a very timely paper given the increasing consensus that implicit regularization is what drives a lot of deeplearning heuristics this paper at its technical core suggests a modified notion of bregmanlike divergence equation 15 which on its own does not need a strongly convex potential then the paper goes on to show that there is an invariant of the iterations of smd along its iterations which involves a certain relationship equation 18 between the usual bregman divergence and their modified divergence i am eager to see if such relationships can be shown to hold for more complicated iterative algorithms but there are a few points in the paper which are not clear and probably need more explanation and let me list them here and these are the issues that prevent me from giving this paper a very high rating despite my initial enthusiasm 1 can the authors explain how is the minimax optimality result of theorem 6 and corollary 7 related to the main result of the paper which is probably proposition 8 and and 9 is that minimax optimiality a different insight separate from the main line of the arguments which i believe is proposition 8 and 9 2 is the gain in proposition 9 over proposition 8 is all about using loss convexity to ensure that the smd converges and winfty exists 3 the paper has highly insufficient comparisons to many recent other papers on the idea of implicit bias like httpsarxivorgabs180208246 httpsarxivorgabs180600468 and httpsarxivorgabs171010345 it seems pretty necessary that there be a section making a detailed comparison with these recent papers on similar themes docsepthe authors look at sgd and smd updates applied to various models and loss functions they derive a fundamental identity lemma 2 for the case of linear model and squared loss sgd and in general for nonlinear models smd non squared loss functions the main results shown are 1 sgd is optimal in a certain sense for squared loss and linear model 2 sgd always converges to a solution closest to the starting point 3 smd when it converges converges to a point closest to the starting point in the bregman divergence the convergence of smd iterates is shown for certain learning scenarios pros shows implicit regularization properties for models beyond linear case cons 1 the notion of optimality is wrt a metric that is pretty nonstandard and it was not clear to me as to why the metric is important to study the ratio metric in eq 9 2 the result is not very surprising since smd is pretty much a gradient descent wrt a different distance metric docsepoptimization algorithms such as stochastic gradient descent sgd and stochastic mirror descent smd have found wide applications in training deep neural networks in this paper the authors provide some theoretical studies to understand why sgdsmd can produce a solution with good generalization performance when applied to highparameterized models the authors developed a fundamental identity for sgd with 
least squares loss function based on which the minimax optimality of sgd is established meaning that sgd chooses the best estimator that safeguards against the worstcase disturbance implicit regularization of sgd is also established in the interpolating case meaning that sgd iterates converge to the one with minimal distance to the starting point in the set of models with no errors results are then extended to smd with general loss functions comments 1 several results are extended from existing literature for example lemma 1 and theorem 3 have analogues in hassibi et al 1996 proposition 8 is recently derived in gunasekar et al 2018 therefore it seems that this paper has some incremental nature i am not sure whether the contribution is sufficient enough 2 the authors say that they show the convergence of smd in proposition 9 while gunasekar et al 2018 does not it seems that the convergence may not be surprising since the interpolating case is considered there 3 implicit regularization is only studied in the overparameterized case is it possible to say something in the general setting with noises 4 the discussion on the implicit regularization for overparameterized case is a bit intuitive and based on strong assumptions eg the first iterate is close to the solution set it would be more interesting to present a more rigorous analysis with relaxed assumptions ### Summary:
the authors give a characterization of stochastic mirror descent smd as a conservation law 17 in terms of the bregman divergence of the loss the identity allows the authors to show that smd converges to the optimal solution of a particular minimax filtering problem in the special overparametrized linear case when smd is simply sgd the result recovers a recent theorem due to gunasekar et al 2018 the consequences for the overparametrized nonlinear case are more speculative the main criticisms are around impact however im inclined to think that any new insight on this problem especially one that imports results from other areas like control is useful to incorporate into the literature i will comment that the discussion of previous work is wholly inadequate the authors essentially do not engage with previous work and mostly make throwaway citations this is a real pity it would be nice to see better scholarship
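A small numerical sketch of the update under discussion may help; the q-norm potential, step size, and iteration budget below are arbitrary illustrative choices and are not tuned, so treat it as a schematic of stochastic mirror descent rather than the paper's setup.

```python
# illustrative sketch of stochastic mirror descent (smd) on an overparametrized
# linear least-squares problem, to make the implicit-regularization claim concrete.
import numpy as np

rng = np.random.default_rng(0)
n, d, q, lr = 20, 100, 3.0, 0.05              # more unknowns than equations
X = rng.standard_normal((n, d)) / np.sqrt(d)
y = X @ rng.standard_normal(d)                # an interpolating solution exists

grad_psi = lambda w: np.sign(w) * np.abs(w) ** (q - 1)            # mirror map
grad_psi_inv = lambda z: np.sign(z) * np.abs(z) ** (1.0 / (q - 1))

w = np.zeros(d)                               # initialisation
z = grad_psi(w)
for _ in range(100000):
    i = rng.integers(n)                       # single-sample stochastic gradient
    g = (X[i] @ w - y[i]) * X[i]
    z = z - lr * g                            # update in the dual (mirror) space
    w = grad_psi_inv(z)

# if smd has converged to an interpolating solution, the claim discussed above is
# that it is the one closest to the start point in the bregman divergence of psi,
# which for this initialisation is the minimum q-norm interpolant
print("train residual:", float(np.linalg.norm(X @ w - y)))
print("q-norm of the smd solution:", float(np.sum(np.abs(w) ** q) ** (1 / q)))
```

With the squared-norm potential this update reduces to plain SGD, which is the overparametrized linear case the summary refers to.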
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: in this paper the authors developed a probabilistic programming framework for stein variational gradient descent and its variants using difference kinds of kernels ie nonlinear kernels or matrix kernels simple experiments are included that the repository is effective and scalable for various problems followings are a few of my questions and comments 1 how is the new implementation compared with other frameworks using black box variational inference for example what is the speed of the training comparing with previous frameworks such as edward in large scale dataset tasks and the report does not give us a more thorough guide of the performance of each kernels for difference tasks 2 the authors mentioned that the framework can be extended to use other objective function such as rnyi elb tailadaptive fdivergence or wasserstein pseudodivergence i am extremely confused about this part since actually there is no objective function for svgd based methods unless you design a new loss based on ksd or related things how is this possible to combine other objective function using svgd it would be great if the authors write down the derivations and have a detailed discussion 3 does the current framework implement amortized svgd and other related steins paper that can be utilized to train neural networks based applications such as steinvae steingan or kernel stein generative modeling 1 2 3 this implementation can be important since it can be quite helpful for many other applications such as meta learning also the authors give the public code link of their implementation in the paper which may expose their identity but i am not sure if this violates anonymous requirement of iclr submissions 1 feng yihao dilin wang and qiang liu learning to draw samples with amortized stein variational gradient descent arxiv preprint arxiv170706626 2017 2 wang dilin and qiang liu learning to draw samples with application to amortized mle for generative adversarial learning arxiv preprint arxiv161101722 2016 3 chang weicheng et al kernel stein generative modeling arxiv preprint arxiv200703074 2020docsepsummary the paper shows how a particlebased nonparameteric variational inference methodology known as stein variational inference is integrated in a fullfeatured probabilistic programming language numpyro the paper goes into a fair amount detail describing a number of enhancements that have been made into numpyro using the general technique of particlebased representation of nonparameteric approximating distributions they describe how geometric transforms of the parameter space can fit into their scheme how matrixvalued kernels can be integrated also they describe a new variant of stein vi which they call elbowithinstein this introduces a new line of research for stein vi they also describe a stein mixture extension to deep markov models smdmm and demonstrate on a very large dataset for the latter method strengths integrating a more powerful variational approximation has clear benefits for probabilistic inference and integrating this into a fullfeatured ppl allows users of bayesian modeling to get access to a cutting edge technique with minimal programmatic effort the integration of stein vi into numpyro seems to have been very well designed given the very large number of ideas that have become easy to add including some innovative approaches showing stateoftheart results on the high dimensional 
jsbchoralesdataset is a very impressive achievement for any ppl and it certainly lends credence to this work weaknesses the only claim in the paper that is well supported is that the authors have extended numpyro with svi the presentation style in the paper sometimes fails to draw a clear distinction between implementations of prior work in numpyro versus new innovations it is somewhat unclear whether the authors are making claims about the following points in their paper nonlinear stein matrixvalued kernels parameter transforms enumerationbased integration of discrete random variables the objective function of elbowithinstein is not well motivated see discussion below and there is no direct comparison to the previous stein variational gradient descent which this method seeks to improve there is no way to objectively evaluate the results on the first three experiments recommendation reject rationale experiments dont directly validate the main innovations of the paper supporting arguments the main innovation in this paper appears to be the elbowithinstein method this appears to be different than svgd stein variational gradient descent the difference appears to be that in the current paper both the entropy term and the stein repulsion term are in the general objective page 5 first equation unlike in svgd where the entropy term is not there philosophically it doesnt look right to include both of these terms that are serving the same purpose prevent the collapse of the variational approximation on the mode i could be misreading these equations but if there are other difference the authors should clearly state and motivate these differences most importantly the authors should show an experiment directly comparing to svgd svgd reference qiang liu and dilin wang stein variational gradient descent a general purpose bayesianinference algorithmneural information processing systems nips 2016 the neals funnel should show the posterior marginal which is well known so that the reader can judge whether the samples are of good quality not clear how to interpret the dual moons plot what are we looking at in this plot right plot figure 2 the posterior density or the true density how do we know if this is a good posterior for the lda example there dont seem to be any results questions for authors please provide motivation for the modification to svgd objective please clearly state which of the many enhancements to numpyro are being claimed as novel extensions any results worth sharing for the lda additional suggestions not part of the review rating the abstract mentions that this work is better than stochastic vi but this claim is not actually supported explicitly i had to read many of the referenced papers to realize that jankowiak and karaletsos 2019 had implemented a version of svi im assuming that this is what the abstract was referencing please do make such connections explicit in figure 1b variables x and y are not actually used in the guide space permitting you could make a note as to why these are there in the guide the first paragraph of the introduction mentions nuisance variables in fig 4a not clear which variables in 4a were nuisance variablesdocsep summary this paper introduces einstein vi a lightweight composable library for stein variational inference stein vi the library is built on top of numpyro and can take advantage of many of numpyros capabilities it supports recent techniques associated with stein vi as well as novel features the paper provides examples of using the einstein vi library 
on different probabilistic models strengths im not aware of other ppls that support stein variational inference einstein vi can provide an easier way to compare different stein vi algorithms and make research in the area easily reproducible   concerns the paper states that it provides examples that demonstrate that einstein vis interface is easytouse and performs well on pathological examples and realistic models while it is true that there are several examples described in my opinion there are not enough details to support the claims that einstein vi is easy to use and performs well a concrete comparison between einstein vi and other methods is missing it would have been helpful to have for example some concrete numbers eg time taken to do inference posterior predictive checks posterior mean convergence plots etc that showcase why it is useful to use stein vi for those examples as opposed to other already existing methods another concern is that it is difficult to judge from the paper what the difference to standard numpyro is there is only a highlevel explanation of the examples in the paper so its hard to imagine what the actual code looks like most importantly i would have liked to see a comparison between einstein vi code and what the code would have looked like without einstein vi reasons for score unfortunately there is not enough to go on in this paper which is why i recommend reject there is no strong evidence to support either the usability of the system through elaborate examples and contrasting einstein vi to other systems or its performance through experiments this paper will be much stronger and will have a better chance of reaching more people if it includes either 1 more elaborate code examples that demonstrate that using einstein is indeed better and easier than vanilla numpyro or 2 experiments comparing different stein vi techniques to other inference algorithms as evidence that a dedicated stein vi library is indeed empowering our inference toolkit however i do appreciate that writing a paper about tools libraries is difficult as the contribution of tools is typically a longerterm improvement in the workflow of developing new methods and techniques i am open to increasing my score during rebuttal depending on the answers of the questions listed below questions for the authors why has stein vi not been implemented in ppl systems previously is it a matter of timing or is there something particularly challenging about integrating stein vi into a ppl the paper mentions compositionality several times i was a little confused about what you mean by that can you explain perhaps with an example the paper mentions novel features second to last paragraph page 8 can you elaborate the paper shows an example of using neutra in combination with stein vi can you elaborate on the kind of problems that neutra wont be able to handle on its own what about more lightweight approaches that can be applied in the context of probabilistic programming such as automatic reparameterisation of probabilistic programs gorinova maria i dave moore and matthew d hoffman icml 2020 when will we see benefits of both applying a reparameterization that improves the posterior geometry and using a more sophisticated inference algorithm like stein vi suggestions for improvement and typos that have not affected the score i gave to the paper perhaps the most important change that would improve the paper is adding more concrete examples that would showcase the importance of using einstein vi as opposed to simply 
numpyro other libraries it would be nice to see a model where stein vi gives us better inference results than a range of other algorithms techniques and compare the code to what the user would have to write otherwise to achieve the same results the examples of composing stein vi with reparameterization marginalization in numpyro can be improved by comparing the results to stein vi without reparameterization marginalization and to other inference algorithms with reparameterization marginalization typos last line of the abstract should be 500 000 as opposed to 500000 url in footnote 3 does not lead to the correct page docsepunfortunately the authors link directly to the code and the code is not anonymous this might be a deskreject as this is not a double blind review this work is a description of a library for developing variational inference algorithms using the elbowithinstein framework developed in nalisnick et al 2017 the library is evaluated on on neals funnel and two moons and on a polyphonic music dataset comments nalisnick et al was published in 2017 i assume this was a typo on the authors part table a in the appendix describing different kernels should include a column with computational and memory requirements for each kernel if they differ this can affect the scalability the work describes lda but does not evaluate it it would be helpful to include heldout log likelihood numbers on a standard topic modeling dataset such as 20 newsgroups this would help people compare to prior work similarly the library is evaluated by fitting to a standard polyphonic music dataset please report these numbers in a table alongside a reasonable approach using standard variational inference and stein vi using the library sidebyside for example the numbers here are much better and use standard variational inference with the kl divergence httpspapersnipsccpaper6039sequentialneuralmodelswithstochasticlayerspdf stein variational inference can be difficult to understand as can be pyro which is built on jaxpytorch and the library developed here is built on top of all of these moving parts before embarking on using the library a machine learning researcher should be very convinced that all this additional effort is worth it benchmarking this new library against existing work is important and will go a long way toward justifying its existence the references are very poorly formatted please clean up ### Summary:
all reviewers have carefully reviewed and discussed this paper they are in consensus that this manuscript merits a strong revision i encourage the authors to take these experts thoughts into consideration in revising their manuscript
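Since the reviews above discuss kernels, repulsion terms, and the ELBO-within-Stein variant without writing the basic update down, here is a bare-bones SVGD step in the style of Liu and Wang (2016). The RBF kernel with a median-heuristic bandwidth and the toy Gaussian target are assumptions for illustration, not the library's defaults.

```python
# minimal svgd sketch: each particle is moved by a kernel-weighted score term
# (driving the particles toward high density) plus a kernel-gradient term
# (repelling particles from each other so they do not collapse on a mode).
import numpy as np

def grad_log_p(x):                                  # toy target: standard 2-d gaussian
    return -x

def svgd_step(particles, step=0.1):
    n = particles.shape[0]
    diff = particles[:, None, :] - particles[None, :, :]        # diff[i, j] = x_i - x_j
    sq = np.sum(diff ** 2, axis=-1)
    h = np.median(sq) / np.log(n + 1) + 1e-8                    # median-heuristic bandwidth
    k = np.exp(-sq / h)                                         # rbf gram matrix
    grad_k = (2.0 / h) * k[:, :, None] * diff                   # grad_k[i, j] = d k(x_j, x_i) / d x_j
    phi = (k @ grad_log_p(particles) + grad_k.sum(axis=1)) / n  # driving + repulsive term
    return particles + step * phi

rng = np.random.default_rng(0)
x = 5.0 + 3.0 * rng.standard_normal((100, 2))       # badly initialised particles
for _ in range(500):
    x = svgd_step(x)
print("mean ~ 0:", x.mean(axis=0).round(2), "cov ~ I:", np.cov(x.T).round(2))
```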
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the authors aim to reduce the gap between clean accuracy without adversarial training and with adversarial training to improve the robustnessaccuracy tradeoff the authors introduce helperbased adversarial training the main idea is to use adversarial examples mathbfxtextadv mathbfx 2 mathbfr where mathbfr is standard pgd adversarial perturbation as helper adversarial examples the model is trained to classify these helper adversarial examples as the adversarial label predicted by the model trained without adversarial training in the experiments the authors show that hat improves clean accuracy and robust accuracy on cifar10 and cifar100 datasets when compared with trades defense strengths extremely simple method which can be useful for practitioners a slight improvement over baseline defences on cifar10 and cifar100 datasets weaknesses the method is based on intuition and the authors didnt provide any theoretical justifications for the proposed defence based on my intuition i believe the method is fundamentally flawed as its assumptions are incorrect for example it is incorrect to assume that all adversarial examples with perturbations 2 epsilon should be labelled with its adversarial label the authors should compare the robustness of their method for moderate size perturbations as well eg epsilon 12255 and epsilon 16255 on cifar10 and cifar100 it is quite likely that their method will be less robust for moderate size perturbations the overall procedure is adhoc and requires training and storing the model trained without any regularization first the model is then finetuned with the proposed training procedure some references are missing and the comparison is outdated the method should also be compared with 1 2 and 3 defenses which improve upon trades defense the experimental comparison can be improved the authors evaluated the models with autoattack the authors can also compare their method against gama 4 attack the authors should also include the gradient masking checks in the experimental results or at least discuss gradient masking 1 amirreza shaeiri rozhin nobahari and mohammad hossein rohban towards deep learning models resistant to large perturbations arxiv preprint arxiv200313370 2020 2 jingfeng zhang xilie xu bo han gang niu lizhen cui masashi sugiyama and mohan kankanhalli attacks which do not kill training make adversarial learning stronger in international conference on machine learning pp 1127811287 pmlr 2020 3 dongxian wu shutao xia and yisen wang adversarial weight perturbation helps robust generalization advances in neural information processing systems neurips 2020 4 gaurang sriramanan sravanti addepalli arya baburaj and r venkatesh babu guided adversarial attack for evaluating and enhancing adversarial defenses in advances in neural information processing systems neurips 2020 update after the authors response the authors addressed all my concerns in particular the authors added adversarial robustness results with epsilon 12 added adversarial robustness results with other attacks overall based on the new results for the larger perturbations and the authors comments to other reviewers i am discarding my doubts about the papers approach that it is somewhat adhoc i believe the empirical contributions of this work are significant and novel therefore i recommend accepting the revised paper the authors proposed a simple technique to improve clean accuracy however the 
method is based on intuition which in my opinion is flawed not all large perturbations should be labelled with its adversarial label the authors should provide a theoretical justification for their intuition besides that the experimental comparison is outdated with few recent defenses missing which improve upon trades defense update after the authors response i sincerely thank the authors for addressing the majority of my comments and concerns the experimental results are undeniable and clearly demonstrate the advantages of the proposed technique based on the new results for the medium perturbation epsilon 12 and additional experiments with other attacks i tend to overlook my doubts about the papers approach i recommend accepting the revised version of the manuscript docsepthe paper highlights the presence of excessive invariance in the prediction of robust models along the initial adversarial directions initial adversarial directions refers to the directions in which adversarial images generated using a standard trained model are present based on this hypothesis the authors propose a training method where the excessive invariance is minimized using the cross entropy loss between the predictionmade using the standard trained model of larger epsilon adversarial image and the prediction of the adversarial image in addition to the trades loss formulation this additional loss term indeed improves the accuracyrobustness tradeoff by giving a significant boost in clean accuracy along with a slight boost in adversarial robustness as compared to the existing methods overall the paper is well written and easy to follow strengths the paper is well written and easy to understand the motivation behind the design choices is clear all the related works are properly addressed and the baselines are also strong the paper achieves a significant boost as compared to existing methods on strong attacks like autoattack the approach shows consistent gains across multiple datasets weaknesses i think the results shown in figure4 are quite expected as initially when the perturbations are generated from a standard trained model they will be nonsmooth similar to random noise thus the final model will have high invariance to the directions of these random noise as compared to the perturbations which are smooth in nature and have features these smoother perturbations would be generated by the adversarially trained models and thus the model would be easily fooled as we go in the direction of these perturbations this is addressed by the final margin of figure 4c although the proposed approach reduces the invariance in the directions of the initial perturbations which are similar to random noise as shown in table 1 5 and stated in section 3 i think ideally the model should focus on reducing the invariance in the direction of smooth perturbations which have semantic features could the authors clarify a bit on this i dont think it would matter much if the model will reduce the invariance in the directions of initial perturbationssimilar to random noise since they wont change the semantics of the image to some other class image while the smooth perturbations which have some semantics and are generated using an adversarial model have the potential to change the semantics of an image and thus change the true class of the image as well as shown in 1 and thus it is desired to reduce invariance in these directions some minor concerns could the authors clarify how they plotted the class boundaries in figure 3 i think this is 
plotted by examining the predictions of all the points possible in the 3d space in table 5 it is shown that the models are trained so that they have the same robustness i think this is not a good idea for an ideal comparision could the authors show the same table with the median margin in rinit r5 and r15 where the models do not have any constraint on having the same robust accuracy if possible could the authors share the results of table1 for r5 and r15 also i think the training budget for the results reported in table 6 is only 50 epochs if possible could the authors share the results of hat for all three datasets for 200 epochs training budget this will help in better understanding the proposed approach i think the activation used without additional data is relu if this is true could the authors also share the cifar10 200 epochs without additional data results for silu activation in case the authors have used silu can they share the results with relu if possible could the authors share the prn18 and wrn2810 for cifar10 and cifar100if possible results as shown in table 4 using the relu activation this would help in understanding the influence of silu activation an ablation study on using different perturbation bounds for getting the helper label in algorithm1 can also help a lot in better understanding the proposed approach 1 tramr f behrmann j carlini n papernot n jacobsen j 2020 fundamental tradeoffs between invariance and sensitivity to adversarial perturbations arxiv abs200204599 overall i think the paper is well written it shows a significant boost as compared to existing art and has some minor issues at present if the concerns are properly addressed i am willing to increase my score docsepfew sentences summary the paper proposes a new training loss for adversarial training in the lpnorm setting based on the observation that adversarial training increases the classification margin in a disproportionate manner compared to the nominal training setting the authors introduce additional samples called helpers to reduce the classification margin the helper samples which are samples translated further away in the worst adversarial direction can change labels compared to the adversarial samples helper samples get assigned labels from a standardly trained model thus acting as a constraint coming from this standardly trained network regarding the results and contributions novel method using helpers to define a new training loss for adversarial training on par or better results in robust accuracy compared to the sota training loss trades on cifar10100 svhn tinyimagenet and a subset of imagenet much improved results in clean accuracy compared to trades thus reducing the gap between clean and robust accuracy which is primordial for the application of lp norm models to practical uses clear analytical tools based on the margin analysis to investigate the proposed method and howwhy it works strengths clean original and novel idea leading to good experimental results very well written paper with a clear story with clear arguments and experiments to support the story very extensive experiments in the main paper and in the appendix it gives a lot of intuitions about the problem and lpnorm robustness in general the proposed analytic tools are useful beyond the analysis of the proposed algorithm big plus for the toy problem giving interesting intuitions the margin analysis in figure 4 and the per epsilon analysis in figure 6 the code is attached in the supplementary materials and anyway the experimental 
details and code are very well described in the paper hence the paper seems reproducible weaknessessuggestionsquestions 1 in the bullet points in page 2 and other parts of the paper please specify when the accuracy is the clean or robust accuracy otherwise there is an ambiguity 2 it would be great to see how the proposed method performs compared to trades on larger models such as wrn7016 maybe by finetuning an already pretrained model to avoid expensive computations 3 in figure 6 maybe specify that the variable epsilon on the xaxis is used for the testtime robust accuracy and not the training procedure 4 in figure 13 in the appendix why do the curves trades and hat gamma0 do not match while they are the same method is the difference due to the variance in the results 5 very optional but curious to check i would be curious to see the performance of an alternative helper x x r r where r is the adversarial perturbation computed at x r in this way helper samples could possibly look more natural rather than when using x 2r thus possibly improving the final results it would require twice as more computations but would be interesting to check paper enjoyable to read with extensive experiments supporting a clear and novel idea leading to improved results the authors also propose great analytical tools to investigate their hypothesis hence i vouch for acceptance docsepthis paper proposes a helperbased adversarial training hat method to alleviate the tradeoff between robustness and accuracy empirical evaluations are done on several datasets and under autoattack and common corruptions strengths the writting is easy to follow while the illustration of the idea of hat is clear and reasonable i especially admire the empirical evaluations in this paper which involve largescale experiments using ddpm generated data and 80m ti extra data the improvements are significant and the sanity check for eg gradient masking is also presented weaknesses the modifications introduced in hat are simple which is good but they depend on an assumption that the model should not be robust beyond the threat model namely under an 8255 ellinftynorm threat model an adversarial example with 16255 perturbation is encouraged by hat to fool the model while the label of the adversarial example may not change for me this assumption is quite adhoc and introducing another standard model fthetatextrmstd seems not an elegant solution in conclusion i think the pros and cons of this paper are quite clear strong empirical evaluations and promising improvements but the method itself is somewhat adhoc and not very principled so i would like to recommend an acceptance but the method could be further polished strong empirical evaluations and promising improvements but the method itself is somewhat adhoc and not very principled ### Summary:
the authors propose a simple addition to adversarial training methods that improves model performance without significantly changing the complexity of training the initial reviews raised some questions about whether experiments were sufficiently extensive but these issues were resolved during the rebuttal and discussion period resulting in a strong consensus that the paper should be published
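A schematic of the helper term debated in these reviews, for concreteness. The single-step attack, the plain cross-entropy base term, and the fixed weight gamma are simplifications; the actual method builds on TRADES with multi-step PGD, so treat this as an outline of the helper idea rather than the authors' exact objective.

```python
# sketch of a helper-style loss: train to be robust within eps, but explicitly
# discourage invariance at 2*eps by pushing the prediction there toward the label
# assigned by a separately (standardly) trained model.
import torch
import torch.nn.functional as F

def hat_style_loss(model, std_model, x, y, eps=8 / 255, gamma=0.5):
    # 1) find an adversarial direction r with a one-step linf attack (simplified)
    x_req = x.clone().requires_grad_(True)
    r = eps * torch.sign(torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)[0])

    x_adv = (x + r).clamp(0, 1)                    # usual adversarial example
    x_helper = (x + 2 * r).clamp(0, 1)             # helper example, twice as far out
    with torch.no_grad():
        y_helper = std_model(x_adv).argmax(dim=1)  # label supplied by the standard model

    robust_term = F.cross_entropy(model(x_adv), y)             # stay correct within eps
    helper_term = F.cross_entropy(model(x_helper), y_helper)   # do not stay invariant at 2*eps
    return robust_term + gamma * helper_term

if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    std_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
    print(hat_style_loss(model, std_model, x, y))
```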
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper proposes a new algorithm for sentence segmentation which can be applied to various sequential tagging problems the motivation and the description of the algorithm are clearly given and the proposed method achieved stateoftheart results for most of the problems and datasets the proposed method tries to find all possible segments in the given input sequence to estimate the scores of the segments using pretrained bert representations and to find the best sequence of segments using the dynamic programming algorithms the proposed method is general enough to apply to various sequential tagging problems and natural language sentence analysis while the proposed method looks new to apply to the sequential tagging problems in natural language processing the dynamic programming approach to sequential analysis is a wellknown method in the speech recognition community where a sequence of phonemes are segmented into a word sequence also a similar method has been applied to the segmentation of character sequences into word sequences for the languages that have no delimiters between words such as chinese and japanese in these views the novelty of the paper is not high on the contrary the application of the bertbased representation to the sequence segmentation tasks such as sentence segmentation and sequential labelling may be new and the finding that this method can attain a stateoftheart performance in those problems could be worth reporting docsepthe paper is wellwritten easy to follow and clear however the novelty and main contribution of the paper is not clear the authors used a scoring model to score the composition of each segment as well as the probability of having a specific label for the segment the bert language model is used in the paper to encode the input sequence the training part is a more like a supervised training and a dynamic programming dp approach is used for inference it is not clear how dp contributes to the success of the model as the scores for segments are derived during the training which seems most of the success is coming from the labeled data ie supervised training and bert encoding one other thing about formatting and citing references some of the references are published in conference proceedings not sure why authors cited their arxiv versiondocsepthis paper presents a method called lua lexical unit analysis for general segmentation tasks lua scores all the valid segmentation of a sequence and uses dynamic programming to find the segmentation with the highest score in addition lua can incorporate labeling of the segment as an additional component for span labeling tasks pros 1 lua overcomes the shortages of sequence labeling as a tokenbased tagging method and spanbased models as well by treating them separately 2 the decomposition of scoring label and scoring span allows the precomputation of the maximum label score for each span reducing the complexity 3 this method achieve the state of the art performance on 13 out of 15 data sets empirically cons 1 the novelty is incremental as the idea of calculating spanbased score and labelbased score with dp has been used widely in constituent parsing which applies interval dp in a similar way also check semicrf model sunita sarawagi and william w cohen 2004 2 the way of using neural model to calculate the spanbased scoring seems to be very arbitrary eq3 without any explanation why it is designed in this way 3 
label correlations are used to mimic correlation scoring however the transition between spans are not explicitly modeled questions 1 lua is only used in inference stage do you think by using lua in training as well though slower the performance can be further improved 2 do you have any intuition of why designing the scoring function eq3 in that way ### Summary:
this paper is concerned with sequence segmentation the authors introduce a framework which they call lexical unit analysis a neural network is used to score spans and then dynamic programming is used to find the best scoring overall segmentation the authors present extensive experiments on various chinese nlp tasks obtaining better results than the systems they compare to reviewers raised concerns including about novelty in my view beyond beating the state-of-the-art baselines on the chosen tasks it is hard to extract an actionable insight or novel conceptual understanding therefore the paper is not recommended for acceptance in its current form
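For readers unfamiliar with the segment-level dynamic program the reviewers compare to semi-CRFs, a stripped-down decoder is sketched below. The additive span-plus-label scoring and the random score tensors standing in for the BERT-based scorers are assumptions for illustration, not the exact LUA model.

```python
# viterbi-style decoding over all candidate segments: best[j] is the score of the
# best segmentation of the first j tokens, built by choosing the last segment and
# its best label; this is the interval dp the reviews describe.
import numpy as np

def decode(span_score, label_score, max_len=8):
    """span_score[i, j] scores the segment covering tokens i..j inclusive;
    label_score[i, j, k] is the extra score for giving that segment label k."""
    n = span_score.shape[0]
    best = np.full(n + 1, -np.inf)
    best[0] = 0.0
    back = [None] * (n + 1)
    for j in range(1, n + 1):                       # j = number of tokens consumed so far
        for i in range(max(0, j - max_len), j):     # last segment covers tokens i..j-1
            k = int(np.argmax(label_score[i, j - 1]))
            s = best[i] + span_score[i, j - 1] + label_score[i, j - 1, k]
            if s > best[j]:
                best[j], back[j] = s, (i, k)
    segs, j = [], n
    while j > 0:                                    # follow the back-pointers
        i, k = back[j]
        segs.append((i, j, k))                      # segment tokens[i:j] with label k
        j = i
    return best[n], segs[::-1]

rng = np.random.default_rng(0)
n_tok, n_lab = 12, 5
score, segments = decode(rng.standard_normal((n_tok, n_tok)),
                         rng.standard_normal((n_tok, n_tok, n_lab)))
print(score, segments)
```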
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the authors define the task of solving a family of differential equations as a task of gradientbased metalearning generalizing the gradientbased model agnostic metalearning to problems with differentiable solvers the problem is well formulated numerical integrators of differential equations can be sensitive to choices of parameters or initial conditions at the same time numerical integration is a computationally complex task that can benefit from metalearning iterative solvers and their surrogate metasolvers can both be implemented by a neural networks which makes the problem approachable by gradient based meta learning the authors demonstrate their approach on a family of 1d poisson equations and incompressible flow simulations the authors successfully demonstrate the advantages of using gradient based meta solving with a neural network architecture for the metasolver over a baseline learner and over regular supervised learning on this task the presented work is solid the main concern is with its limited audience in the scope of the conference and potential applications beyond those presented in the paper docsepthis paper introduces a framework for learning the parameters of computational algorithms in order to improve runtime andor error they study the specific case of linear systems that arise in pde solvers showing an objective whose solution is an initialization that decreases the number of jacobi iterations required to solve the poissson equation as well as empirical results for both jacobi and sor on several pde systems strengths 1 the paper applies gradientbased methods to important problems of learning initializers for iterative methods in scientific computing 2 the authors provide a guarantee for initializing the jacobi iteration albeit under what seems like a restrictive assumption on the model capacity 3 the authors demonstrate improvement over simply learning one mapping supervised learning rather than back propagating through iterations weaknesses 1 while the improvement over the supervised learning setting is interesting the evaluation largely seems to be in regimes where the error is far too high for practical applications for example in table 1 the mse of even the best approach seems quite high although it is difficult for me to get a sense of what a good scale is would the advantage continue to hold and be significantenough to be interesting if the methods were given sufficient number of iterations for practical purposes 2 there is no demonstration of practical application utility ie whether going through the trouble learning this initialization is actually useful is it more useful for me to spend the likely substantial amount of effort of back propagating through a lot of classical solves in order to get a better initialization or just to use the classical solver to begin with as an example in the field of neural pde solvers there is often a demonstration of endtoend computational savings provided cf li et al 2021 3 while the claimed framework is very general it is only studied for linear system solving the authors also do not compare their overall framework to the substantial work on datadriven algorithm design which has been studying these problems both theoretically and empirically for quite some time eg hutter et al 2011 balcan 2020 mitzenmacher vassilvitskii 2020 references balcan datadriven algorithm design in roughgarden beyond the worstcase analysis 
of algorithms 2020 hutter et al sequential modelbased optimization for general algorithm configuration icllo 2011 li et al fourier neural operator for parametric partial differential equations iclr 2021 mitzenmacher vassilvitskii algorithms with prediction in roughgarden beyond the worstcase analysis of algorithms 2020 while the problem setup is reasonably wellmotivated and some of the empirical results are interesting it is not clear to me how practically relevant the empirical results are for the problems being studied the very general framework is also only discussed in the restricted case of linear system solving for pdes as a result i tend to lean against acceptance docsepthis paper proposes leveraging data from previous problem instances to improve efficiency of solving similar ones in the future a general gradientbased method is proposed which is applied to generating initial guesses to differential equation solutions this problem is formulated as a metalearning problem strengths the paper proposes a very general formulation of metasolving numerical problems thorough theoretical foundation and justifications are provided weaknesses the only usecase that is thoroughly empirical validated is solving pdes as the paper mentions other applications such as rootfinding are applicable only evaluating the framework on one application does not showcase its general applicability data augmentations are required for the incompressible flow simulation experiment why isnt it possible for the metasolver to learn without these augmentations the formulation of the dataset for the experiment in 32 seems arbitrary why are the two previous timesteps required are there stronger baselines that can be compared against for example are there problemspecific heuristic initial guesses that can be used that leverage domain knowledge about the particular problem typos section 22 is a algorithm to find a good initial weights for the metasolving problems may not be an initial weights section 23 theta does not depend on task tau paragraph 2 theta is weights of another paragraph 3 in this work metalearning approach paragraph 4 tested with multi steps of phi paragraph 4 algorithm 2 differntiable solver section 312 tends to ignore high frequencies and more focus on low frequencies paragraph 4 the paper proposes a general framework for efficiently finding solutions to numerical problems but only evaluates the framework on pde problems furthermore additional tricks such as data augmentations and using the previous two timesteps of the solution are required to make the method work well empirically im not very familiar with metalearning or pde solvers so im not very confident in my assessment docsepthe paper proposed a gradientbased algorithm gbms to solve pdes based on the solutions of other similar problems in gbms a network is trained to produce good initial guess for the iterative solver of the pde numerical experiments are performed to show the effectiveness of the method strengths the paper proposed to predict a good initial guess for traditional pde solvers so the pde solver would converge fast also by using traditional pde solver the obtained solution is usually more accuracy than other purely datadriven ml methods weaknesses the authors spent a lot of effort to create a new terminology metasolving which has a board meaning and many other algorithms can be formulated in this way however this is only a new terminology but it is not a new idea or a new algorithm from the paper there is no clear evidence that why we 
would need this new terminology or for what problems we have to use this new terminology in fact in the paper it only tested the problem of generating good initial guess which is not really a new idea the paper only tested the algorithm on a 1d poisson equation and 2d incompressible flow other challenging problems should be tested there is no comparison between the proposed method and other methods in terms of inference speed and accuracy the paper didnt provide the details of networks used the paper introduced a new terminology and is more like a perspective paper instead of a comprehensive research article ### Summary:
the authors define the task of solving a family of differential equations as a task of gradientbased metalearning generalizing the gradientbased model agnostic metalearning to problems with differentiable solvers according to the reviews there were some concerns regarding the practical value of the paper for example 1 the proposed technology is restricted to linear systems and relatively easy problems 2 there is no demonstration of practical application utility 3 it lacks systematic comparison with other methods 4 some technical details are missing there were quite a lot of discussions on the paper among the reviewers and the consensus is that the paper is not solid enough for publication at iclr in its current form the reviewer who gave the highest score is less confident and does not want to champion the paper
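To make the "learn an initial guess and backpropagate through the iterative solver" idea concrete, here is a small differentiable-Jacobi sketch for a family of 1D Poisson problems. The network architecture, problem distribution, unrolling depth, and loss are illustrative assumptions and are not tuned; they are not the paper's setup.

```python
# a small network proposes an initial guess u0 for -u'' = f on a 1-d grid and is
# trained by unrolling a few jacobi sweeps and minimising the remaining residual.
import torch

N, K, h = 64, 20, 1.0 / 65                      # grid size, unrolled jacobi steps, spacing

def jacobi_steps(u, f, steps):
    for _ in range(steps):                       # u_i <- (u_{i-1} + u_{i+1} + h^2 f_i) / 2
        left = torch.nn.functional.pad(u[:, :-1], (1, 0))
        right = torch.nn.functional.pad(u[:, 1:], (0, 1))
        u = 0.5 * (left + right + h * h * f)
    return u

def residual(u, f):                              # ||A u - f||^2 for the 1-d laplacian
    left = torch.nn.functional.pad(u[:, :-1], (1, 0))
    right = torch.nn.functional.pad(u[:, 1:], (0, 1))
    return ((2 * u - left - right) / (h * h) - f).pow(2).mean()

meta = torch.nn.Sequential(torch.nn.Linear(N, 128), torch.nn.Tanh(), torch.nn.Linear(128, N))
opt = torch.optim.Adam(meta.parameters(), lr=1e-3)
for step in range(2000):
    f = torch.randn(32, N)                       # a family of right-hand sides
    u0 = meta(f)                                 # learned initial guess
    loss = residual(jacobi_steps(u0, f, K), f)   # error left after K jacobi sweeps
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 0:
        base = residual(jacobi_steps(torch.zeros_like(f), f, K), f)
        print(step, float(loss), "vs zero init:", float(base))
```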
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the authors propose a recourse methodology to deal with model biasesfairness issues in producing equitable outcomes for all users classes the paper addresses an important issue of dealing with model bias in producing fair user outcomes my main concern with the paper is the lack of clear supporting arguments on why the choice of cost models including knowing the distribution over the cost functions is the right one i am not clear about the assumptions made in the proposed cost function that each user adopts first how is the cost function effectively computed even with the use of a recourse set next how is the recourse set itself guaranteed to always produce at least one reasonable counterfactual in the set more specifically even as the authors acknowledge knowing the exact cost function by each user is difficult their explanation of using the recourse set to get around this problem with high confidence is unclear to me intuitively it seems a measure like diversity will be more effective when the cost functions are unknownprivate to the user i am not completely convinced the proposed model of computing a recourse set to minimize the expected cost for the user is always effective in the absence of knowing the cost function even approximately the experiments to show the effectiveness with respect to the proposed baseline are inconclusive with respect to natural measures like diversity docsepin this paper the problem of algorithmic recourse is studied where the goal is to find best recourse counterfactual set that is optimized for user cost the author proposed new userincurred cost evaluation method expected minimum cost emc which approximate user satisfaction without assuming a fixed global user cost function and instead consider user cost functions as hidden and userspecific specifically the authors define cost function for each user as a set of featurespecific functions of userincurred cost when transitioning between feature states and define mincost as the minimum transition cost across possible recourses to cover diverse user cost functions they propose to model user cost distribution with a hierarchical sampling procedure and estimated expected minimum cost by drawing samples from it next they formulate a discrete optimization problem using emc as objective and propose a search algorithm cols for best recourse generation they introduce three new metrics for user satisfaction all related to mincost fsk coverage and pac finally they test with two realworld datasets and show that cols achieves significant outperformance against baselines that optimize for distancebased metrics on the newly proposed metrics and show that their method is doing better in fairness as well pros this paper proposes a new way of evaluating user satisfaction which differs from existing methods that measures on heuristics such as distancediversity or assume a fixed global user cost function it is more flexible and realistic and thus could be an interesting direction to follow the proposed formulation is quite novel and technically nontrivial with some theoretical grounding the experimental results are very strong on the 3 newly proposed metrics the authors also conduct extensive ablation studies on different aspects of the problem although many of them are deferred to the supplementary the discussion is pretty comprehensive and they also included a fairness analysis concerns one major issue is on the 
readability of the paper it is certainly good that the paper contains a lot of information however currently it seems that the main text is a bit too packed such that very limited detail about the main methodology is provided in fact both the core sampling and optimization algorithms are described in the appendix and it is very hard for this reviewer to understand them solely based on the descriptions in section 4 perhaps the authors could reorganize the content such that less space is spent on repeating the contributionsmotivations as algorithmic recourse is a rarely new domain and may not be wellknown by general audience it might be better to translate the domainspecific terminologies into plain language or more general language in ml in the introduction part it seems the newly introduced evaluation metrics are generated using the same sampling distribution used for computing emc wouldnt that be a bit circulated to evaluate something where the groundtruth is closely related to the objective used for optimization is there any way to evaluate on more realistic user cost rather than simulating it with the same distribution as the one used in emc the authors talk about distributional shift regime in the appendix that still the groundtruth distribution is from the same family of the emc distribution mixture of percentile shift and linear cost it might be more convincing if it is from a totally independent distribution questions what is used as the initial starting set for cols in the problem formulation in 2 does it mean that the best recourse set would consist at least one desired outcome solution but it may not be the one with lowest cost if so how do one achieve balance between outcome and satisfaction what is the computational complexity of the algorithm is there any downside from the underperformance in distancebased metrics in this paper the author proposed a new way for evaluating and optimizing user satisfaction the technical contributions are solid and the results are rather promising despite the potential bias toward emc the paper contains fruitful discussion and ablation studies although it can be further improved in terms of clarity therefore i would like to give a weak accept docsepthis work introduces a new method for identifying actionable recourses for users with userspecific cost functions users cost functions are hidden from the recourse method the paper proposed a discrete optimization algorithm cols to solve the objective emc it further used a popular realworld dataset to illustrate the performance i enjoyed reading this paper in general my major comments are 1 in section 41 it is assumed that there is a distribution over all the cost functions dc for the population is the distribution dc known or unknown a more practical setting is that dc is unknown then how to use monte carlo estimation to approximate the expectation of the mincost for different users u it is assumed that cu follows distribution dc however in the introduction it claims that we propose a method for identifying a userspecific recourse set that contain at least one good solution for the user however it seems inconsistent between the motivation and the assumption why do all users share the same distribution of the cost function is the framework generalizable to the setting with different distribution 2 theorem 41 proves the monotonicity of costoptimized local search algorithm but how does expmincostsu stbest cii1m converge theorem 41 does not imply that but it is a very important question 3 why choosing 
equation 3 and equation 4 as metrics to measure recourse quality what is the advantage of choosing a threshold function how to choose k in real cases 4 in the numerical experiments could you compare with other functions that measure the recourse quality in previous recourse papers i think my main concern is on fsk is using fsk equivalent to the following assume there exists a blackbox algorithm that can output the indicator that if the total cost is smaller than k then the distance function can be used to measure the recourse quality could you use numerical experiments to emphasize the advantage of using fsk compare to other measure functions such as weighted sum of costs 5 what is the computational complexity of your algorithm how is that compared to other benchmarks 6 why is fairness an important issue in this work could you comment more on this part to motivate this paper studied an interesting problem to improve the paper the author may want to illustrate the advantage of using fsk and how it is very different from stateofart measure functions both conceptually and numerically docsepthis paper aims to find algorithmic recourse that has lowcost to the users unlike previous work the authors do not assume that there is a known global cost function that is shared by all users i do not think this papers achieves what it sets out to do it is mentioned in the related work how the closest literature to the paper is other costbased approaches to finding recourse and different from those approaches this paper drops the assumption that there is a known global cost function that is shared by all users first the paper formulates mathrmmincostcdotmathcalcu as the cost function of user u which is characterized by the unknown transition matrices mathcalcu however then the paper assumes a distribution mathcald over mathcalcu s of different users is known and proposes to optimize mathbbemathcalcsimmathcaldmathrmmincostcdotmathcalc which is effectively a known global cost function with respect to user state mathbfsu and recourse set mathcals is this not the case having said that the proposed cost function has a certain structure and it is still novel i authors propose a hierarchical cost distribution as the particular mathcald they consider and ii by only considering the element with the minimum cost for each sample from mathcald they exploit the fact that each users only really requires one recourse that they are happy with to be satisfied proposing a new costbased objective like this could still be a valuable contribution but then the paper needs to be positioned accordingly and highlight the merits of optimizing a costbased objective structured in this new way note that the current experiments are not helpful in comparing against other costbased objectives proposed in previous work cost functions of the users are simulated according to the proposed cost function then of course a method that optimizes it would perform better than methods optimizing other cost functions some of the conclusions made in the results section suffer from user preferences being simulated as well for instance at the end of q2 the authors conclude that high diversity is not necessary to satisfy individual users this is of course true for the simulated users since their cost function is designed to ignore diversity in the first place i believe the claim that the paper relaxes the assumption of knowing a global cost function is not true however it still introduces an interesting new objective to optimize for when finding 
algorithmic recourse ### Summary:
this paper makes an interesting contribution to the literature on algorithmic recourse more specifically while existing literature assumes that there is a global cost function that is applicable to all the users this work addresses this limitation and models user specific cost functions while the premise of this paper is interesting and novel there are several concerns raised by the reviewers in their reviews and during the discussion 1 while the authors allow flexibility to model user specific cost functions they still make assumptions about the kind of cost functions eg they consider three hierarchical cost sampling distributions each of which model percentile shift linear shift and a mixture of these two shifts the authors do not clearly justify why these shifts and a mixture of these shifts are reasonable prior work already considers a lot more flexible ways of modeling cost functions in a global fashion for example rawal et al 2020 actually learns costs by asking users for pairwise feature comparisons isnt this kind of modeling allowing more flexibility than sticking to percentilelinear shifts and their mixture 2 several reviewers pointed out that the main paper does not clearly explain all the key contributions while the authors have updated their draft to address some part of this concern reviewers opine that the methods section of the paper still does not discuss the approach and the motivation for the various design choices eg why a mixture of percentile and linear shifts clearly 3 reviewers also opine that some of the evaluation metrics need more justification for instance why is fraction satisfied measured only at k 1 ie fs1 and not at fs2 or fs3 will the results look different for other values of k here 4 given that rawal et al 2020 is a close predecessor of this work it would be important to compare with that baseline to demonstrate the efficacy of the proposed approach this comparison is missing given all the above we are unable to recommend acceptance at this time we hope the authors find the reviewer feedback useful
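A toy Monte Carlo sketch of the expected-minimum-cost idea the reviews debate. Representing hidden user cost functions as sampled linear per-feature weights is a deliberate simplification of the paper's hierarchical percentile/linear transition-cost model, and the candidate recourses below are not constrained to flip a classifier.

```python
# sample user cost functions from an assumed distribution and score a candidate
# recourse set by the expected cost of the *cheapest* option in it (emc): each
# sampled user only needs one affordable recourse to be satisfied.
import numpy as np

rng = np.random.default_rng(0)
d = 5                                   # number of actionable features
user_state = np.zeros(d)

def sample_cost_fn():
    w = rng.lognormal(mean=0.0, sigma=1.0, size=d)      # hidden per-feature weights
    return lambda x: float(w @ np.abs(x - user_state))  # cost of moving to x

def expected_min_cost(recourse_set, n_samples=2000):
    draws = []
    for _ in range(n_samples):
        c = sample_cost_fn()
        draws.append(min(c(x) for x in recourse_set))   # user picks their cheapest option
    return float(np.mean(draws))

# a set of individually pricier options can still have lower emc than one
# "on-average-cheap" recourse, because only the minimum over the set matters
single = [np.array([1.0, 1.0, 0.0, 0.0, 0.0])]
diverse = [np.array([2.0, 0, 0, 0, 0]), np.array([0, 2.0, 0, 0, 0]),
           np.array([0, 0, 2.0, 0, 0])]
print("single:", expected_min_cost(single), "diverse:", expected_min_cost(diverse))
```

The example is only meant to show why optimizing against a distribution over hidden cost functions is not the same as optimizing one known global cost function, which is the point of contention in the reviews above.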
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper proposes an effective deliberated domain bridging ddb for domain adaptive semantic segmentation dass to this end it takes advantage of two data mixing techniques regionlevel mix and classlevel mix to train two corresponding teacher models which eventually guide one student model on the target domain it has been tested on several benchmarks gta5 to cityscapes gta5 synscapes to cityscapes gta5 to cityscapes mapillary strengths 1 it is a wellwritten paper that addresses the limitations of previous methods eg global interpolation pixelwise ambiguity toy game to show the justification of using both coarsegrained and finegrained db and proposes novel learning architecture with multiteacher and singlesource distillation method 2 the proposed method is well proven to be effective with several benchmarks eg singlesource multisource and multitarget settings by widening the gap with the previous state of the arts in each benchmark weaknesses 1 as the author illustrated in limitations i am also a bit concerned with the training efficiency and complexity since the proposed method requires alternating optimization processes one of the simple endtoend optimizations is conducting ema training on one model by combining regionlevel and classlevel mixing techniques it would be better to show a brief study about how the authors can extend this to endtoend simple learning architecture 2 lack of ablation study on some hyperparameters 1 alpha in equation 5 the authors suggested updating the teacher models with ema to avoid a denoised pixelwise pseudo label to prove it an ablation study on alpha needs to be explored 2 xaug in equation 9 need to empirically show justification using augmentation input for a student model yes the authors addressed the limitation docsepto address uda in semantic segmentation this work uses two types of data mixing strategies to artificially create intermediate bridging domains between source and target the paper starts with a detailed analysis comparing different data mixing strategies either done globally mixup61 or locally cowmix 13 fmix 17 cutmix 60 and classmix 42 the analysis demonstrates favorable results when using local data mixing strategies for uda in segmentation in particular cutmix coarse regionwise mixing and classmix fine classwise mixing based on results of the analysis this work proposes a simple way to combine the two mixing strategies cutmix and classmix in the course of training there are five models two teacher models trained with cutmix and classmix two ema models of the two teachers one student model trained using teachers pseudolabels training is done in multiple rounds fixed as 4 in the experiments in each round the two teachers are first trained separately with cutmix and classmix the student is then trained with pseudolabels of two ema models of the two teachers pseudolabel of a given target sample is determined as a weighted combination of softmax scores of the two ema models eqn 12 the weights have size h times w k with k classes at each spatial position the weight vector over k classes is the softmax over the feature distance to class centroids eqn 11 color jittering and gaussian blurs are used on target sample when training the student the two teachers are initialized by the student strengths overall this is an interesting technical paper that combines multiple existing strategies namely cutmix 60 classmix 42 mean teacher 42 
prototypical weighting 62 and pseudolabelling 30 empirical results demonstrate better performance than previous sotas on comparable backbone resnet101 and segmentation framework deeplabv2 experiments are extensive the paper is wellwritten and easy to follow weaknesses my main concern is with the technical novelties of this work the analysis comparing different mixing techniques claimed as the first contribution is somewhat interesting however the main proposed approach is merely a mix of previous works actually there are no new insights that i could get from this work its not clear to me how is the intermediate model selected at each stage is the targets validation set used to select the best model if true is there a risk of supervision leak from target validation set missing details for the multisource and multitarget experiments im currently on the borderline slightly leaning toward the positive side thanks to the good results my final decision will be adjusted based on the feedback from the authors and the discussion with other reviewers typos l185 eqn 3 instead of eqn 2 suppmat algo1 l7 l12 eqn 3 instead of eqn 2 postrebuttal i thank the authors for being active during the rebuttal and addressing all of my concerns im happy to increase my score limitation on training complexity is discussed in the supplementary material no discussion on potential negative societal impacts was given docsepthis paper proposes a deliberated domain bridging ddb method for domain adaptative semantic segmentation where the target labels are not available during the training in ddb there are two parts 1 a dualpath domain bridging step to train two teacher models with two intermediated domains using the coarsewise and finewise ie regionlevel and semanticlevel data mixing techniques 2 a crosspath knowledge distillation step to adaptively transfer the knowledge from the two teacher models to a student model the two steps are repeated for several rounds for a good performance extensive experiments on both singlesource domain and multisource multitarget domain settings are conducted to validate ddbs superiority pros 1 this paper proposes an effective method to significantly boost the uda segmentation performance in various settings 2 the comprehensive ablations are done to clearly show 1 the complementarity between the two teacher models and 2 the effectiveness of the distillation step cons 1 since gta5 to cityscapes and gta5 synscapes to cityscapes are done what is the performance in synscapes to cityscapes this experiment shows which dataset contributes more to adapt to the real dataset 2 there are too many symbols which makes the paper hard to follow what do the numbers righter after the approach name in tables 2 3 and 4 mean for example advent19 bdl 19 fada20 etc 3 the authors claim that soft distillation and hard distillation are compared in table 5 however the soft distillation choice and the explanation are missing in that table which is a bit confusing 4 ddb requires two rounds for a good convergence in each round it needs to train three individual models and calculate two groups of category centroids by scanning the target training set for two teacher models respectively this makes the approach cumbersome and may require more training time than others the authors are encouraged to discuss the above issue with detailed analysis besides this cumbersome training process seems in conflict with the stated elegant method 5 the following paper can be included for comparison since it also studies the data mixing 
technique in uda semantic segmentation besides the difference between dacs which also utilizes the data mixing technique in uda is not well stated in the paper dsp dual softpaste for unsupervised domain adaptive semantic segmentation proceedings of the 29th acm international conference on multimedia 2021 28252833 the authors discuss the limitations of this paper in the supplementary material no negative social impact has been discussed docsepthis paper is about unsupervised domain adaptation for the task of semantic segmentation the paper argues for the importance of gradually bridging the domain gap instead of a direct attempt to transfer a model from the source to the target domain motivated by an empirical analysis about different data mixing technologies the paper explores two regionbased mixing strategies coarse regions and finer classwisemask regions as domain bridges specifically two models are trained on the two different domain bridges dualpathdomainbridge which act as ensembled supervision for a single student model crosspath knowledge distillation this student model can then initialize the teacher models for another round of these two steps experimental results in three different settings confirm the effectiveness of the proposed approach with stateoftheart results on standard benchmarks strengths the ablation studies in tables 57 are great and showcase the impact of individual components the results are impressive with a clear improvement on the standard benchmark gta5cityscapes as well as other settings multisource and multitarget the experiments in table 1 are a good motivation for the choice of data mixing strategies cutmix and classmix weaknesses the writing specifically the motivation and positioning with respect to prior work needs improvement there may be alternative domain bridges than data mixing like methods that rely on selftraining and choose confident target pseudo labels as intermediate source domain the justification for exploring new domain bridges in lines 3741 is vague and unclear what are unexpected artifacts in the global input space what optimization constraints are referred to i do not see why the paper only evaluates two domain bridging strategies in the ensemble one could also include more relating to ensemble methods one could expect improvements if an additional data mixing strategy is orthogonal one recent successful example for a global mixing strategy is a and could be easily integrated there is a related work on domain bridges for semantic segmentation that was not included b in line 185 shouldnt the reference go to eq 3 i do not quite understand why the mixing weights in eq 1112 help arent the softmax values ie scores already an indication how far away a sample is from the decision boundary it would be good to point the reader to the supplemental material for a detailed description of the training strategy its hard to understand and see details in figure 1 references a fda fourier domain adaptation for semantic segmentation yang and soatto cvpr20 b domain bridge for unpaired imagetoimage translation and unsupervised domain adaptation pizzati et al wacv20 potential societal impacts are not discussed i think a paper on domain adaptation should include a discussion about potential biases that are carried over from source domains specifically because these are often synthetic data which often contains some handcrafted components like object sampling distributions etc ### Summary:
summary this paper proposes an effective deliberated domain bridging ddb approach for domain adaptive semantic segmentation dass it leverages two data mixing techniques regionlevel mix and classlevel mix to train two corresponding teacher models which then guide one student model on the target domain it is evaluated on multiple benchmarks strengths the paper is wellwritten and wellmotivated based on the limitations of previous methods the proposed approach is novel interesting and effective the experiments with the toy game are solid weaknesses training efficiency and complexity; lack of ablation study on some hyperparameters and design choices; some missing references/comparisons; unclear positioning of the work wrt prior work recommendation the paper receives consistently positive ratings after rebuttal most of the reviewers concerns are addressed and the paper clearly has strengths the ac thus suggests acceptance the ac strongly suggests that the authors incorporate their rebuttal eg additional results into their cameraready version
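For readers unfamiliar with the two mixing operations the reviews keep referring to, here is a minimal PyTorch-style sketch of a region-level (CutMix-style) bridge, a class-level (ClassMix-style) bridge, and the weighted fusion of the two teachers' pseudo-labels. It is written from the reviews' description rather than the authors' code; the function names, the ignore index 255, and the prototype-based weights are illustrative assumptions.

```python
import torch

def region_mix(src_img, src_lbl, tgt_img, tgt_lbl, ratio=0.5):
    # coarse (CutMix-style) bridge: paste a random rectangle from the source
    # image and label onto the target image and label
    _, h, w = src_img.shape
    ch, cw = int(h * ratio ** 0.5), int(w * ratio ** 0.5)
    y = torch.randint(0, h - ch + 1, (1,)).item()
    x = torch.randint(0, w - cw + 1, (1,)).item()
    img, lbl = tgt_img.clone(), tgt_lbl.clone()
    img[:, y:y + ch, x:x + cw] = src_img[:, y:y + ch, x:x + cw]
    lbl[y:y + ch, x:x + cw] = src_lbl[y:y + ch, x:x + cw]
    return img, lbl

def class_mix(src_img, src_lbl, tgt_img, tgt_lbl, ignore_index=255):
    # fine (ClassMix-style) bridge: paste the pixels of a random half of the
    # source classes onto the target image and label
    classes = torch.unique(src_lbl)
    classes = classes[classes != ignore_index]
    chosen = classes[torch.randperm(len(classes))[: max(1, len(classes) // 2)]]
    mask = torch.isin(src_lbl, chosen)
    img, lbl = tgt_img.clone(), tgt_lbl.clone()
    img[:, mask] = src_img[:, mask]
    lbl[mask] = src_lbl[mask]
    return img, lbl

def fuse_pseudo_labels(prob_t1, prob_t2, w1):
    # crosspath distillation step as described by the reviewers: combine the
    # two ema teachers' softmax maps (each K x H x W) with per-pixel, per-class
    # weights w1, assumed here to come from prototype/centroid distances
    fused = w1 * prob_t1 + (1.0 - w1) * prob_t2
    return fused.argmax(dim=0)
```

As described in the reviews, each mixed view trains one teacher, and the EMA copies of the two teachers then jointly supervise a single student on the target domain.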
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper investigates reinforcement learning with a perturbed reward signal in particular the paper proposes a particular model for adding noise to the reward function via a confusion matrix which offers a nuanced notion of rewardnoise that is not too complicated soas to make learning impossible i take this learning setting to be both novel and interesting for opening up areas for future work the central contributions of the work are to 1 leverage a simple estimator to prove the convergence of qlearning under the rewardperturbed setting along with the samplecomplexity of a variant of phased qlearning which they call phrased qlearning and 2 an algorithmic scheme for learning in the rewardperturbed setting algorithm 1 and 3 an expansive set of experiments that explore the impact of various reward models on learning across different environmentalgorithm combinations the sample complexity term extends phased qlearning to incorporate aspects of the reward confusion matrix and to my knowledge is novel further even though theorem 1 is unsurprising as the paper suggests i take the collection of theorem 1 2 and 3 to be collectively novel indeed the paper focuses on an interesting and relatively unexplored direction for rl apart from the work cited by the paper and perhaps work like krueger et al 2016 in which agents must pay some cost to observe true rewards there is little work on learning settings of this kind this paper represents a first step in gaining clarity on how to formalize and study this problem i did however find the analysis and the experiments to be relatively disjointed the main sample complexity result presented by the paper theorem 2 was given for phased qlearning yet no experiments actually evaluate the performance of phased qlearning i think the paper could benefit from experiments focused on simple domains that showcase how traditional algorithms do in cases where it is easier to understand and visualize the impact of the reward perturbations simple chain mdps grid worlds etc and specifically experiments including phased qlearning pros general interesting new learning setting to study initial convergence and sample complexity results for this new setting depth and breadth of experimentation in terms of diversity of algorithms and environments includes lots of detail about the experimental setup cons clarity of writing lots of typos and bits of math that could be more clear see detailed comments below fixed the plots in section 4 are all extremely jagged more trials seem to be required moreover i do think simpler domains might help offer insights into the reward perturbed setting fixed the reward perturbation model is relatively simple some high level questionscomments why was phrased qlearning not experimented with why use majority voting as the rule when this was introduced it sounded like any rule might be used have you triedthought about others your citation to kakades thesis needs fixing it should read kakade sham machandranath on the sample complexity of reinforcement learning phd thesis university of london 2003 right now it is cited as gatsby 2003 throughout the paper you might consider picking a new name for phrased qlearning right now the name is too similar to phased qlearning from kearns and singh nips 1999 as mentioned in the cons section the confusion matrix is still a somewhat simple model of reward noise i was left wondering what might be the next 
most complicated form of adding reward noise how might the proposed algorithms respond to this slightly more complex model that is its unclear how general the results are or if they are honed too tightly to the specific proposed reward noise model i was hoping the authors could respond to this point section 0 abstract not immediately clear what is meant by vulnerability or noisy settings might be better to pick a clearer initial sentence same can be said of the sources of noise section 1 introduction "adversaries in realworld" → "adversaries in the realworld" you might consider citing loftin et al 2014 regarding the bulleted point about applicationspecific noise "unbiased reward estimator aided reward robust reinforcement learning framework" this was a bit hard to parse consider making it more concise like "unbiased reward estimator for use in reinforcement learning with perturbed rewards" "our solution framework builds on existing reinforcement learning algorithms including the recently developed drl ones" cite these up front so cite qlearning cem sarsa dqn dueling dqn ddpg naf and ppo and spell out the acronym for each the first time you introduce them "layer of explorations" → "layer of exploration" section 2 problem formulation "as each shot of our" what is shot in this context "in what follow" → "in what follows" "where $0 < \gamma \leq 1$" usually $\gamma \in [0, 1)$ or $[0, 1]$ why cant $\gamma = 0$ the transition notation changes between $\mathbb{P}_a(s_{t+1} \mid s_t)$ and $\mathbb{P}(s_{t+1} \mid s_t, a_t)$ id suggest picking one and sticking with it to improve clarity "to learn a stateaction value function for example the qfunction" why is the qfunction just an example isnt it the stateaction value function that is id suggest replacing "to learn a stateaction value function for example the qfunction" with "to learn a stateaction value function also called the qfunction" "qfunction calculates" → "the qfunction denotes" "the reward feedbacks perfectly" → "the reward feedback perfectly" i prefer that the exposition of the perturbed reward mdp be done with c in the tuple so $\tilde{M} = \langle \mathcal{S}, \mathcal{A}, \mathcal{R}, C, \mathcal{P}, \gamma \rangle$ this seems the most appropriate definition since the observed rewards will be generated by c the setup of the confusion matrix for reward noise over $\mathcal{R}$ is very clean it might be worth pointing out that c need not be markovian there are cases where c is not just a function of $\mathcal{S}$ and $\mathcal{R}$ like the adversarial case you describe early on section 3 learning w perturbed rewards theorem 1 builds straightforwardly on the qlearning convergence guarantee it might be worth phrasing the result in those terms that is the addition of the perturbed reward does not destroy the convergence guarantees of qlearning "we firstly" → "we first" value iteration using q function value iteration "definition 2 phased qlearning" → "definition 2 phrased qlearning" i think unless youre talking about phased q from the kearns and singh 99 work "it uses collected m samples" → "it uses the collected m samples" theorem 2 it would be helpful to define t since it appears in the sample complexity term also i would suggest specifying the domain of epsilon as you do with delta "convergence to optimal policy" → "convergence to the optimal policy" "the idea of constructing mdp is similar to" this seems out of place the idea of constructing which mdp similar to kakade 2003 in what sense the unbiasedness the use of unbiased estimators "number of stateaction pair which satisfies" → "number of stateaction pairs that satisfy" the above procedure continues with more observations arriving → the above procedure continues indefinitely
as more observation arrives also which procedure updating tildecij if so i would specify is nothing different from eqn 2 but with replacing a known reward confusion replaces a known reward confusion 4 experiments diverse experiments thats great lots of algorithms lots of environment types i expected to see phrased qlearning in the experiments why was it not included the plots are pretty jagged so im left feeling a bit skeptical about some of the results the results would be strengthened if the experiments were repeated for more trials 5 conclusion despite of the fact despite the fact finite sample complexity of qlearning with estimated surrogate rewards are given its not really qlearning though its a variant of qlearning id suggest being explicit about that appendix it is easy to validate the unbiasedness of proposed estimator directly it is easy to verify that the proposed estimator is unbiased directly for the simplicity of notations for simplicity the phrased qlearning could converge to near optimal policy the algorithm phrased qlearning can converge to the near optimal policy using union bound using a union bound same comment regarding gamma its typically 0 leq gamma 1 bottom of page 16 the second equation from the bottom far right term cj cj using cauchyschwarz inequality using the cauchyschwarz inequality references loftin robert et al learning something from nothing leveraging implicit human feedback strategies robot and human interactive communication 2014 roman the 23rd ieee international symposium on ieee 2014 krueger d leike j evans o salvatier j 2016 active reinforcement learning observing rewards at a cost in future of interactive learning machines nips workshopdocsep summary the authors present work that shows how to deal with noise in reward signals by creating a surrogate reward signal the work develops a number of results including showing how the surrogate reward is equal in expectation to the true reward signal how this doesnt affect the fixed point of the bellman equation how to deal with finite and continuous rewards and how the convergence time is affected for different levels of noise they demonstrate the value of this approach with a variety of early and stateoftheart algorithms on a variety of domains and the results are consistent with the claims it would be useful to outline how prior work approached this same problem and also to evaluate the proposed method with existin approaches to the same problem i realise that this is the first method that estimates the confusion matrix rather than assuming it is known a priori but there are obvious ways around this eg the authors first experiment assumes the confusion matrix is known so this would be a good place to compare with other competing techniques also the authors have a way of estimating this so they could plug it into the other algorithms too i also have some concerns about the clarity and precision of the proofs although i do not have any reason to doubt the lemmatheorem correctness see below the weakest part of the approach is in how the true reward is estimated in order to estiamate the confusion matrix it uses majority vote which is only really possible in the case of finite rewards with noise sufficiently low that this will be a robust estimate perhaps some other approaches could be explore here too finally there is discussion about adversarial noise in rewards at the beginning but i am not sure the theory really addresses it nor the evaluations nonetheless given that i do not know whether the claim of 
originality is true in terms of the estimation of the confusion matrix if it is then the work is a significant and interesting advance and is clearly widely applicable in domains with noisy rewards it would be interesting to see a more tractable approach for continuous noise too but this would probably involve assumptions smoothness gaussianity and doesnt impact the value of this work detailed notes there is a slight sloppiness in notation in equation 1 this uses $\tilde{r}$ as a subscript of $e$ but r is $-1$ or $+1$ and the error variables are $e_-$ and $e_+$ not $e_{-1}$ and $e_{+1}$ the noise levels in atari figure 3 show something quite interesting which could be commented upon for noise below 0.5 the surrogate reward works roughly similarly to the noisy reward but when the noise level goes above this the surrogate reward clearly exploits the increased information content similar to a noisy binary channel with over 0.5 noise this may have implications for adversarial noise there are also some issues with the proofs which i spotted outlined below lemma 1 proof the proof of lemma 1 i think fails to achieve its objective the first pair of equations is not a rewrite of equation 1 i believe that the authors intend for this to be a consequence of equation 1 but do not really demonstrate this clearly also the authors seem to switch between binary rewards $-1$ and $+1$ and two levels of reward $-r$ and $+r$ leading to some confusion i would suggest the latter throughout as it is more general but involves no more terms i suggest the following as an outline for the proof it would help for them to define what they mean by the different rhats as they currently do and explain that these values are therefore $\hat{r}_+ = \frac{(1-e_-)\,r - e_+\,(-r)}{1 - e_+ - e_-}$ and $\hat{r}_- = \frac{(1-e_+)\,(-r) - e_-\,r}{1 - e_+ - e_-}$ from equation 1 what is left is for them to actually prove the lemma namely that the expected value of rhat is $\mathbb{E}[\hat{r}] = \mathbb{P}(\hat{r} = \hat{r}_+)\,\hat{r}_+ + \mathbb{P}(\hat{r} = \hat{r}_-)\,\hat{r}_- = \mathbb{E}[r]$ where the probabilities relate to the surrogate reward taking their respective values and just stylistically i would avoid writing we could obtain and simply write we obtain lemma 2 achieves this more clearly with greater generality theorem 1 proof at the end of p13 the proof of the expected value loses track of the chosen action a i would suggest the authors replace $\mathbb{P}(s' \mid s)\,\hat{r}$ with $\mathbb{P}(s' \mid s, a)\,\hat{r}$ then define it likewise $\mathbb{P}(s' \mid s)$ should be $\mathbb{P}(s' \mid s, a)$ and also defined i am also a little uncomfortable with the switch from $\max_{b \in \mathcal{A}} |Q(s, b) - Q^*(s, b)|$ in the second to last line of p13 which refers to the maximum q value associated with some state $s$ to $\|Q - Q^*\|_\infty$ in the next line which is the maximum over all states and actions the equality should probably be an inequality there too throughout this the notation could be much better defined including how to interpret the curly f and how it acts in the conditional part of an expectation and variance finally there is a bit too free a use of the word easily here if it were easy then the authors could do it more clearly i think otherwise please refer to the appropriate result in the literature docsepthe paper aims at studying the setting of perturbed rewards in a deep rl setting studying the effect of noise in the reward function is interesting the paper is quite wellwritten however the paper studies a rather simple setting the limitations could be discussed more clearly and there are one or two elements unclear see below the paper assumes first the interesting case where the generation of the perturbed reward is a function of $(s, r)$ into the perturbed reward space but then the confusion matrix does not take into
account the state which is justified by to let our presentation stay focused i believe these elements should at least be clearly discussed indeed in that setting the theorems given seem to be variations of existing results and it is difficult to understand what is the message behind the theorems in addition it is assumed that the confusion matrix c is known or estimated from data but its not clear to me how this can be done in practice in equation 4 how do you have access to the predicted true rewards additional comments the discount factor can be 0 but can not in general be equal to 1 so the equation in paragraph 21 0 1 is wrong the paper mention that an underwhelming amount of reinforcement learning studies have focused on the settings with perturbed and noisy rewards but there are some works on the subject eg httpsarxivorgabs180503359 and a discussion about the differences with the related work would be interesting ### Summary:
this paper studies rl with perturbed rewards where a technical challenge is to revert the perturbation process so that the right policy is learned some experiments are used to support the algorithm which involves learning the reward perturbation process the confusion matrix using existing techniques from the supervised learning and crowdsourcing literature reviewers found the problem setting new and worth investigating but had concerns over the scopesignificance of this work mostly about how the confusion matrix is learned if this matrix is known correcting reward perturbation is easy and standard rl can be applied to the corrected rewards specifically the work seems to be limited in two substantial ways both related to how the confusion matrix is learned the reward function needs to be deterministic majority voting requires the number of states to be finite the significance of this work is therefore mostly limited to finitestate problems with deterministic reward which is quite restricted as the authors pointed out the paper uses discretization to turn a continuous state space into a finite one which is how the experiment was done but discretization is likely not robust or efficient in many highdimensional problems it should be noted that the setting studied here together with a thorough treatment of an even restricted case could make an interesting paper that inspires future work however the exact problem setting is not completely clear in the paper and the limitations of the technical contributions is also somewhat unclear the authors are strongly advised to revise the paper accordingly to make their contributions clearer minor questions in lemma 2 what if c is not invertible the sampling oracle assumed in def 1 is not very practical as opposed to what the paper claims there are more recent work at nips and stoc on attacking rl including bandits algorithms by manipulating the reward signals the authors may want to cite and discuss
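Since all three reviews and the meta-review revolve around the surrogate (unbiased) reward built from the reward confusion matrix, a small numpy sketch of that construction may help. It follows the reviewers' description of lemmas 1-2 rather than the paper itself; the row/column convention of C and the example noise rates are assumptions.

```python
import numpy as np

def surrogate_rewards(C, reward_values):
    # C[i, j] = P(observed reward = reward_values[j] | true reward = reward_values[i]).
    # Solving C @ r_hat = r gives one surrogate value per *observed* reward so that
    # E[r_hat | true reward r_i] = r_i, i.e. the estimator is unbiased.
    # Requires C to be invertible (e.g. e_plus + e_minus < 1 in the binary case).
    C = np.asarray(C, dtype=float)
    r = np.asarray(reward_values, dtype=float)
    return np.linalg.solve(C, r)

# binary example matching the reviews: true reward is +1 or -1,
# P(+1 flipped to -1) = e_plus, P(-1 flipped to +1) = e_minus
e_plus, e_minus = 0.2, 0.1
C = np.array([[1 - e_plus, e_plus],
              [e_minus, 1 - e_minus]])
r_hat = surrogate_rewards(C, [+1.0, -1.0])
# during (phased/phrased) q-learning, each observed reward is replaced by the
# corresponding entry of r_hat before the update
```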
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper studies structural fairness on graph contrastive learning gcl the study is motivated by the finding that gcl is fairer to low degree nodes than gcn based on that the authors first present theoretical analysis on such structural fairness for gcl through intracommunity concentration theorem and intercommunity scatter theorem guided by the theoretical analysis the authors propose grade by enriching the neighborhood of tail nodes and purifying the neighborhood of head nodes experimental results on realworld datasets demonstrate the effectiveness of grade strengths the paper is wellmotivated through empirical experiments the design of the proposed method is inspired from theoretical analysis the proposed method is effective in mitigating structural unfairness as shown by experimental results weaknesses please see below more understanding on the performance for graphs with heterophily is needed there are too many notations in the paper the authors may want to find a way to organize them so the readers wont need to check notations back and forth the authors should make sure the theorems are selfcontained significance test on the experimental results would be helpful to showcase the effectiveness of grade docsepgcn is primarily beneficial to highdegree nodes but biased against lowdegree nodes which causes a performance bottleneck as a promising paradigm in the graph domain gcl integrates the power of gcn and contrastive learning displaying sota performance in a variety of tasks this paper investigates the question of whether will gcl present the same degree of bias as gcn they surprisingly find out that a smaller performance gap exists between tail nodes and head nodes in gcl methods than that of gcn they intuitively and theoretically analyze the reason for this interesting finding particularly intracommunity concentration theorem and intercommunity scatter theorem prove that node representations learned by gcl conform to a clearer community structure and establish the relation between graph augmentation and representation concentration these analyses yield profound insights into solutions to this important degreebias problem and imply that gcl is a promising direction therefore they further propose a graph contrastive learning for degree bias grade to concentrate augmented representations specifically they enlarge limited neighbors of tail nodes to contain more nodes within the same community and purify head nodes by removing neighbors from a different community extensive experiments on various benchmark datasets and several evaluation protocols validate the effectiveness of grade the paper is wellwritten in general and their finding is exciting weaknesses 1 how to get a small degree of bias from a clear community structure needs more explanations theorem 1 and 2 prove that gcl conforms to a clearer community structure via intracommunity concentration and intercommunity scatter but its relationship with degree bias is not intuitive enough 2 there is some confusion in the theoretical analysis why is the supremum in definition 1 gammafracbhatdminkfrac12 based on this definition how to prove that the proposed grade reduces this supremum 3 there is a lack of significance test in table 1 despite the weaknesses mentioned above i believe that this paper is worth publishing they consider an important degreebias problem in the graph domain given that node degrees of realworld graphs 
often follow a longtailed powerlaw distribution and they show an exciting finding that gcl is more stable wrt the degree bias and give a preliminary explanation for the underlying mechanism although the improvement does not seem significant in table 1 they may inspire more future research on this promising solution a learnable augmentation is an interesting direction for improvement docsepthis paper investigates the great potential of graph contrastive learning to solve the degreebias problem and proposes a new graph augmentation for further improvement the motivation of this paper is clear they discover that node representations obtained by gcl methods are fairer to degree bias than those learned by gcn and explore the underlying cause of this phenomenon based on the theoretical analysis they further propose a novel gcl model targeting the degree bias experimental results clearly show the merit of the proposed model and the source code is attached i have not run it though concerns 1 this paper focuses on semisupervised gcn to motivate their investigation do other semisupervised gnns suffer from the same severe degreebias problem in other words i doubt the finding is limited 2 some simplifications have been made in the theoretical analysis for example they only consider the topology augmentation does it affect the analysis results 3 there is only one traintest split in section 2 but two in section 5 4 the complexity of the proposed model the authors have pointed out the limitations of their work docsepnode degrees of realworld graphs follow a longtailed distribution but gcn exhibits a performance disparity between highdegree nodes and lowdegree nodes ie degree bias this paper discovers an interesting phenomenon that graph contrastive learning methods have already a smaller degree bias based on this discovery this paper theoretically analyzes the reason and proposes a tailored contrastive learning method grade experiments validate the effectiveness of the proposed method strength 1 the finding is interesting and may inspire a new paradigm for alleviating degree bias 2 the paper is wellwritten and the conclusion is clear weakness 1 the conclusion seems to be only for gcn i wonder gat1 may exhibit a smaller degree bias even smaller than graph contrastive learning methods 2 from figure 6 in appendix a the advantage of graph contrastive learning methods over gcn on photo dataset is not obvious the numerical values of their slopes are close 3 there is a small gap between degree bias and theoretical analysis of clear community structure 4 the improvement of the proposed method in table 1 does not seem statistically significant because of high variance 5 there are some related works designed for degree bias such as sldsgcn2 but these methods are not set as baselines in the experimental comparison 1 velikovi p cucurull g casanova a et al graph attention networkscinternational conference on learning representations 2018 2 tang x yao h sun y et al investigating and mitigating degreerelated biases in graph convoltuional networkscproceedings of the 29th acm international conference on information knowledge management 2020 14351444 in addition to the limitations mentioned in the paper the generalization of the conclusion should be taken into consideration ### Summary:
this paper identifies a fairness problem in graph contrastive learning gcl ie gcn often performs badly for lowdegree nodes the key to solving this problem is the observation that gcl can offer fairer representations for both low and high degree nodes the authors also support their claims with theoretical analysis all reviewers appreciate the contributions made by this submission it is suggested to simplify notations and to make the theorems selfcontained in the final version
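As an illustration of the degree-bias evaluation that the reviews and the meta-review refer to (the accuracy gap, or slope, between low-degree tail nodes and high-degree head nodes), here is a small numpy sketch; it is not from the paper, and the equal-frequency binning is just one reasonable choice.

```python
import numpy as np

def degree_bias_report(degrees, correct, n_bins=5):
    # group test nodes by degree and report per-group accuracy, a simple way to
    # quantify the head-vs-tail performance gap discussed above
    # degrees: (N,) int array of node degrees; correct: (N,) array of 0/1 flags
    degrees, correct = np.asarray(degrees), np.asarray(correct)
    # equal-frequency bins so tail (low-degree) and head (high-degree) groups
    # contain comparable numbers of nodes
    edges = np.quantile(degrees, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, degrees, side="right") - 1, 0, n_bins - 1)
    accs = [correct[bins == b].mean() for b in range(n_bins) if (bins == b).any()]
    return accs, accs[-1] - accs[0]   # per-group accuracy and head-tail gap
```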
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper claims that in the context of multiclass image classification using neural networks in the final classification layer if we use randomly initialized parameters with normalization without any training we can achieve better performance than if we train those parameters this is an intriguing claim that can potentially have a very broad impact the authors provide some motivations based on the errorsimilarity plots but no theoretical backing without convincing theoretical support such a claim can only be established through extensive and rigorous experimentation and i find the experiment description in this paper is short on delivering strong evidence for example how many runs to achieve the results in tables 13 what are confidence intervals on the results any statistical significance test done how were hyperparameters selected what about the performance on the imagenet dataset which has more classes than the datasets reported in the paper what distribution was used to initialize the random weights in the classification layer is the performance sensitive to the distribution is the performance sensitive to the complexity of the model used to learn the representation how does this compare to other ways of improve multiclass classification such as softmax temperature annealing label smoothing adding regularization etc or as a stretch does this claim generalize to problems with categorical features details 1 page 2 line 9 do you mean maximizing the cosinesimilaritydocsepthe paper explores deeper into the specific classification layer of a standard supervised learning system the core idea of the paper is to randomly initialize and then fix the classification layer weights and train the network leading improved discrimination the writing is satisfactory and the paper develops the ideas sufficiently well to help any reader who is a beginner in this area one of the major concerns regarding the work is that it seems to have is the relatively limited amount of contribution given the context of the current venue this is no doubt an interesting phenomenon however previous works investigating cosine similarity losses have tested their approaches on much larger problems such as large scale face recognition and full imagenet the paper currently derives its intuitions from object recognition problems which have very different behavior than problems like face recognition where the number of classes is large yet the number of samples per class is much lower that said given the limited scale of the experiments the paper does offer a wider variety of results supporting its claims nonetheless given the simplicity of the idea the paper fails to push envelope of results on any of these datasets lastly the performance gains in table 3 seem limited given that only one run was performed for each dataset docsepsummary this paper introduces a new approach to learn a multiclass image classification model by fixing the weights of the classification layer the authors propose to draw the class vectors randomly and set them as fixed during training instead of training them they analyze this approach when a model is trained with a categorical crossentropy and or softmaxcosine loss the proposed approach is tested on 4 datasets stl cifar10 cifar100 tinyimagenet reasons for score i do not think the technical contribution is strong enough for iclr the idea is interesting but the empirical validation of the 
idea should be improved and some claims should be proved pros the idea of using fixedrepresentation is interesting it can help to reduce the training time the authors explain why cosinesimilarity maximization models cannot converge to 0 cons the title looks very interesting redesigning the classification layer by randomizing the class representation vectors but after reading the paper it is only about mullticlass image classification there is no study about other types of data or the multilabel setting the authors should use a title more accurate about the content of their paper overall the structure of the paper should be improved it is quite difficult to read because several sections are a mix of model contributions and experimental results maybe using subsections can help to separate the model contributions and experimental results also some information is not at the right place and some sections should be reorganized for example the datasets and models are presented in section 41 but some results are presented in section 2 the authors should also add a related work section to clearly state the motivations and explain the difference with other approaches the authors proposed to randomly initialize the weights of the classification layer but they do not clearly explain how the weights are initialized there are several standard approaches to initialize weights like uniform normal xavier uniform xavier normal kaiming uniform kaiming normal it can improve the paper if the authors compare these initialization mechanisms similarly the authors should analyze the results for several runs to see how the fixed weights approach is sensitive to the random initialization i have a conceptual problem with fixing the bias the bias is sampled so it means it can have a large or small value lets take an example with 2 classes the class a can have a large bias eg 05 but other class b can have a small value eg 05 it means that the class b has a negative bias and will usually have lower scores than a just because there is a difference of 1 between these biases i am not sure that it is a good idea and there is no motivation about that in the paper the authors should analyze the bias initialization because it is important it is important to show the variance when the model is evaluated on several runs section 4 it can help to understand how the model is sensible to the initialization it is well known that the sgd is sensible to its hyperparameter and in particular the learning rate the model will not converge if the learning rate is too large or too small the authors should explain how they choose the hyperparameters i also wonder how the results are specific to the optimizer are the conclusions of the analysis the same for other popular optimizers like adam these observations can provide an explanation as to why nonfixed models with s 1 fail to converge page 6 for me it explains why the model cannot converge to 0 but it does not explain why the model fails to converge they are two different problems in the abstract and in some other parts of the paper the authors claim they improve the compactness of the model but they never show it they did not define how they measure the compactness of a model they should clearly present the definition of compactness and what approach they used to compute it based on my knowledge measuring the compactness of a model is not easy the authors should results on low resolution dataset less than 100100 i wonder if the results can be generalized to larger resolution dataset for 
example does it also work on imagenet that has more images larger resolution images and more classes 1000 i also wonder if it works on other type of datasets like finegrained datasets eg cub200 stanford cars fgvc aircraft also how does it adapt to new domains like medical images and natural scenes i am not convinced that ignoring the visual similarities between classes is a good idea i think it is important to build spaces that encode some semantic structure for example i think it is important to encode that two bird species are more semantically similar than a bird and a car it is not clear why the authors decided to focus on the cosinesimilarity maximization models they should motivate this decision more because these models are not so popular the authors claimed that the low range of the logits vector is not the cause preventing from cosinesimilarity maximization models from converging page 5 but they did not show results to prove it the authors should analyze the range of the logits the current analysis does not allow us to understand if it is because of the range of value or the normalization of the weights or a bad tuning of some hyperparameters minor comments the authors should give more information on how they generated the figures 3 and 5 docsepthis paper proposed a classification layer by randomizing the class representation vectors this paper first analyses the class vector distributions between different training strategies and then proposed the randomized class vector to improve the representation learning performance the proposed model is further extended and analyzed for the fixed cosinesimilarity maximization setting the experiments demonstrate the effectiveness of the proposed method compared with the basicvanilla baselines pros the motivation of this paper is comprehensive some quantitative and visual experimental results introduced the motivation of the proposed model the randomization weights also provide a novel view for solving more machine learning problems this is a good point cons my main concern is the experimental results the experiments are mainly done for evaluating fixed and nonfixed models without any other stateoftheart methods and the exact performance is considerably low compared with other stateoftheart methods to this end it is hard to confirm the effectiveness of the model for example 1 even though the visualization results are good figure 3 it does not mean the final performance is better 2 the original model could be overfitting and a random and fixed weight layer could be considered as a regularizer there are some experiments should be done 1 compare this method with relevant methods such as normface and arcface to proof the effectiveness of this approach 2 compared with exact performance in face relevant datasets and compared with other sota methods ### Summary:
the reviewers are in consensus that this paper is not ready for publication cited concerns include: the ideas are simple and interesting but need to be carefully analyzed empirically; the work should be better contextualized since other similar studies exist; and convincing empirical evidence still needs to be identified the ac recommends reject
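For concreteness, the idea debated above (a classification layer whose class vectors are drawn at random, normalized, and never trained, with cosine-similarity logits) can be sketched in a few lines of PyTorch. The initialization distribution and the scale s below are assumptions, since, as one reviewer notes, the paper does not pin these down.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FixedCosineClassifier(nn.Module):
    """Classification head with class vectors that are randomly initialized,
    L2-normalized, and never trained; logits are scaled cosine similarities.
    The standard-normal initialization and the scale s are illustrative choices."""
    def __init__(self, feat_dim, num_classes, s=10.0):
        super().__init__()
        w = F.normalize(torch.randn(num_classes, feat_dim), dim=1)
        self.register_buffer("weight", w)  # a buffer, not a Parameter: no gradient updates
        self.s = s

    def forward(self, features):
        z = F.normalize(features, dim=1)       # (B, feat_dim) unit-norm features
        return self.s * z @ self.weight.t()    # (B, num_classes) cosine logits
```

The backbone producing `features` is then trained with an ordinary cross-entropy loss on these logits while the head stays fixed.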
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper deals with how to estimate the noise transition matrix under a multilabel setup like a singlelabel setup the authors attempted to make an idea without using anchor points for matrix estimation they leverage the sample selection method ie gmm for extra clean supervision and provide a detailed mathematical derivation for the matrix estimation the experimental section includes voc and mscoco datasets they injected a synthetic label noise with two factors controlling the flip ratio of 10 and 01 under the proposed synthetic noise the method works well and outperforms other simple extensions of similar works but developed under the singlelabel setup strengths 1 this paper is the first approach to estimating the noisy transition matrix for handling noisy labels with multilabels 2 leveraging mismatched label correlation is useful 3 the method shows higher performance than other simple baselines weaknesses i agree that estimating the noise transition matrix helps make a statistically consistent classifier however i felt some major weaknesses of this paper as below 1 assumption of instanceindependent noise as the authors said many previous studies make the same assumption for matrix estimations but there have been numerous recent studies to overcome this in the same direction 13 i agree that dealing with instancedependent label noise in a multilabel setup is very difficult to address at least the author should mention what is more challenging in this setup compared to the singlelabel case and why making this assumption is reasonable with a more specific reason just saying the vast majority of current algorithms mainly focus is not enough as a research paper 2 noise injection protocol according to the paper the authors assume the classdependent label noise and aim to estimate an accurate noise transition matrix under this assumption however the noise injection protocol in line 217 looks like classindependent label noise specifically the rho just means the probability of 0 or 1 being flipped to 1 or 0 without any consideration of class pairs like i j only class j associates with the flipping and no connection between classes therefore this is not very realistic and not classdependent label noise did i get it wrong 3 unrealistic evaluation related to the second weakness the author should have included at least one realistic real noisy data there are benchmark data with noisy multilabel instances called openimages database the author can find other better datasets if possible without realistic evaluation i am not convinced of the robust results in the paper since the injection protocol is very unrealistic 4 class imbalance problem low map unlike singlelabel classification the biggest difference in multilabel setup is class imbalance the authors also mentioned that this problem is very severe in lines 44 102 however there is no detailed mention of how to resolve this issue with the proposed idea throughout the paper next when i am looking at the results on mscoco the map value is too low compared with other multilabel papers using the same resnet50 pretrained models as an example 4 shows resnet50 with gap head achieves around 80map on ms coco dataset refer to figure 6 however in this paper the map was less than 70 with standard at the easiest noise setup 00 02 this is a 20 missing label scenario there are 20 missing labels but i think 20 of missing labels do not make 10map drops 
could you report the performance of your method and others under 0 0 setup this is a very good reference to check whether your implementation is correct and whether your method is still comparable with a zeronoise setup 1 approximating instancedependent noise via instanceconfidence embedding arxiv 2021 2 a secondorder approach to learning with instancedependent label noise cvpr 2021 3 an information fusion approach to learning with instancedependent label noise iclr 2022 4 mldecoder scalable and versatile classification head arxiv 2021 please see the weakness section above in addition the paper writing should be improved particularly the theoretical section too many results are omitted and borrowed from 16 24 25 31 docsepthe paper proposes a new estimator to estimate the transition matrix in noisy multilabel learning the main idea of the estimator is to utilize the clean label cooccurrence as a constraint to help the estimation of the transition matrix specifically authors derive a twostage method to estimate then transition probabilities at the first stage the paper utilizes an existing technique to select clean labels for estimating the label occurrence although the estimated label occurrence is biased due to the selection bias authors claims that the selection biased cannot lead to large estimation error at the section stage the paper obtains the label cooccurrence with frequency counting and use the mismatch of label correlation to estimate the transition matrix strength 1 the motivation of the proposed method is reasonable the information of label cooccurrence is useful for the transition matrix estimation 2 the paper reports extensive experiments to validate the effectiveness of the proposed method 3 the paper conducts comprehensive analyses for noisy multilabel learning weakness 1 in section 32 authors claim that the label cooccurrence estimated by using selected clean labels is unbiased this is based on the assumption that given yj the features about class j is biased while the features about another class i is unbiased the assumption is unreasonable since the features are biased with respect to class i and class j cooccur in an image 2 authors select a small number of clean labels for estimating the label cooccurrence however the estimated label cooccurrence may be imprecise due to the insufficient labels used for frequency counting especially for positive labels 3 there are many typos such as page 4 line 168 ys algorithm 1 line 4 dt page 2 line 48 while bird and sky are always cooccurrence 4 the references are not cited properly in experiments the citations for multilabel with missing labels and partial multilabel are missed none docsep this paper presents a way of estimating the noise transition matrix for noisy multilabel learning which makes use of label correlations through a bilinear decomposition based on frequency counting it first theoretically studies the identifiable problem of estimating the noise transition matrix under the multilevel setting which then leads to the development of the bilinear decompositionbased method for estimating the noise transition matrix experiments results derived from several image datasets show that the proposed method can better estimate the transition matrix strengths labelnosing learning is an important and challenging problem not just in multiclass classification and also in the multilabel setting this paper presents a bilinear decompositionbased approach based a simple frequency counting with a clearly described algorithm the proposed 
estimation method is well motivated with a set of theorems that study the identifiable problem in the multilabel learning setting which seems to be thorough with proper proof in the appendix the proof is also easy to follow overall the paper is written well and most of the notations were clear and wellorganized to guide the readers through the text weaknesses my major concern is around the performance of the proposed estimation method in terms of estimation of the transition matrices it outperforms the competitors in most of the settings which is good however when coming to the classification performance the proposed method is not a clear winner particularly compared with those without considering noise labels across the datasets for instance for varying levels of noise it is hard to conclude the behaviour of the proposed method ablation studies showing how the proposed model benefit from label correlation and deal with imbalance issue are missing for example there is a running parameter tau used in stage 1 to select the sample set mathcaldjt which i believe will have an impact on the performance of the proposed method the authors have addressed the limitations and border impact in the appendix docsepthis paper discusses the estimation problem of the transition matrices in the noisy multilabel setting they study the identifiability problem of the classdependent transition matrix in noisy multilabel learning and propose a new estimator by exploiting label correlations without both anchor points and accurate fitting of noisy class posterior inspired by the identifiability results experimentally the effectiveness of the proposed method is also illustrated strenghths 1 they utilize the mismatch of label correlations to identify the transition matrix without both anchor points and accurate fitting of noisy class posterior in noisy multilabel learning 2 the method is effective to the issue with both theoretical analyses and empirical results support this method 3 clear writing and structure weaknesses 1 some formulas like in line 192 are recommended to be written in more simplified way 2 the experimental results are recommended to be presented in different forms but not just tables they address it adequately in appendix ### Summary:
estimating the noisy transition matrix for handling noisy labels with multilabels good experimental work illustrating the estimation of transition matrices reviewers liked the theory and the writeup the paper has improved its citations and writing there was some discussion about the assumptions; nuances of this should be addressed in the revised paper, for instance your comments about class imbalance regarding reviewer kmfhs q5 note that retrieval metrics eg r@k have been widely used in multilabel classification although versions of f1 are probably more common they give alternative looks at the errors regarding reviewer n5crs weakness 2 it would be nice to do summary plots and/or win/loss tables and put some of the big tables in appendices
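To make the discussion of the noise model and the frequency-counting step concrete, here is a small numpy sketch of the synthetic flipping protocol questioned by reviewer kmfh and of label co-occurrence estimation. It is an illustration based on the reviews, not the authors' code, and the option of per-class flip rates is an assumption.

```python
import numpy as np

def inject_label_noise(Y, rho_pos_to_neg, rho_neg_to_pos, rng=None):
    # synthetic multilabel noise of the kind discussed in the reviews:
    # independently for each (sample, class) entry, flip a true positive to 0
    # with probability rho_pos_to_neg and a true negative to 1 with probability
    # rho_neg_to_pos; pass per-class vectors to make the rates class-dependent
    # Y: (N, K) binary label matrix
    rng = np.random.default_rng() if rng is None else rng
    Y = np.asarray(Y)
    rho_10 = np.broadcast_to(np.asarray(rho_pos_to_neg, dtype=float), Y.shape[1:])
    rho_01 = np.broadcast_to(np.asarray(rho_neg_to_pos, dtype=float), Y.shape[1:])
    flip = rng.random(Y.shape)
    noisy = Y.copy()
    noisy[(Y == 1) & (flip < rho_10)] = 0
    noisy[(Y == 0) & (flip < rho_01)] = 1
    return noisy

def cooccurrence(Y):
    # label co-occurrence by frequency counting (the statistic the proposed
    # estimator is said to exploit): estimates P(y_i = 1, y_j = 1) from a set
    # of (selected, ideally clean) labels Y of shape (N, K)
    Y = np.asarray(Y, dtype=float)
    return (Y.T @ Y) / Y.shape[0]
```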
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper presents a novel algorithm for approximating posterior distributions via weighted subsets of the data the proposed algorithm samples points uniformly at random and then computes the weights using a quasinewton method crucially it scales well with respect to the number of data points both in terms of time and storage the authors provide a rigorous theoretical analysis of their proposed algorithm and empirically evaluate it against commonlyused methods for posterior approximation overall i found this paper to be an excellent read and believe it to be a strong candidate for acceptance 1 the proposed method is not particularly creative but frankly i think the simplicity of the ideas being presented here are a strength of this work the sum of the contributions in my mind are sufficient to get this paper above the originality bar for acceptance 2 the theoretical and empirical analysis are compelling although i do have a few suggestionsconcerns regarding the experiments that if addressed could raise my score of this submission even higher i would have liked to see some comparison against svi even if that meant scaling down the experiments to settings where svi is computationally tractable it would be very telling if qnc was competitive with or even outperformed svi in smalldata regimes if not then finding some sort of costaccuracy tradeoff between the two methods could be useful for practitioners i would have liked to see some sort of sensitivity analysis to the model hyperparameters specifically s ktune about which very little is said and tau 3 the manuscript is very wellwritten i genuinely found this paper to be an enjoyable read and applaud the authors effort to include intuition wherever possible 4 i believe this work will have significant impact on a nontrivial percentage of the neurips population the proposed algorithm is simple and easytoimplement and thus accessible to people who might want to useexpand upon it furthermore the scalability makes this algorithm broadly applicable in settings where previouslyproposed methods might not be yes i believe the authors discussion of their works limitations is sufficient related to my question above i would have found it helpful if the authors had provided some intuition about settings in which their theoretical results would not hold much like how they did for settings in which the assumptions do hold docsepin the area of bayesian coresets this paper presents an algorithm that proposes 1 to uniformly select m data points that will be used to approximate the posterior distribution and 1 to use a quasinewton method to refine the weights that will be assigned to each of the selected data points for this quasinewton method the authors propose using an estimated covariance matrix to approximate the objectives hessian finally the authors provide theoretical justification for their algorithm via three different theorems i think this is a very solid submission the authors propose an algorithm and prove that their approximation might behave as the full posterior and provide a convergence rate a notable feature is that assumptions are deeply discussed in the body of the paper section 4 or the appendix in general i would say that it is a very precise paper on the weak side i found the proof in the appendix quite hard to follow many inequalities were not evident at all and limited the possibility of doing a complete verification of the 
papers contributions cf questions below it is important to highlight that when i was able to derive the bounds the proofs are correct as in thm 43 the authors mention some of the papers limitations above im suggesting mentioning a few more docsepthis paper addresses the problem of approximating a bayesian posterior with a large number of datapoints they do so by selecting a sparse weighted subset of datapoints a coreset previous methods have addressed this problem but the authors note that previous methods suffer from requiring some subset of 1 potentially expensive loops over the whole dataset 2 user input or parameter tuning 3 a large number of potentially expensive mcmc simulations the authors address 1 by noting that a coreset can be formed on a random subset of the data 2 by proposing a quasinewton method that requires one mcmc simulation per iteration and 3 just by virtue of providing a different algorithm without the same tuning user input requirements the authors prove conditions under which 1 still allows for exact or approximate recovery of the posterior and they prove conditions under which their optimization algorithm 2 does not require many iterations in experiments the authors show that their method is both faster and more accurate than other coreset methods strengths the writing in the paper is good the material is pretty technical but i found it relatively easy to follow the method proposed by the authors intuitively fixes clearly explained problems with previous work and has desirable theoretical properties mostly see weaknesses below and this intuition and theory actually seem to hold up in practice in nonexhaustive but at least detailed and careful experiments thats a pretty solid combination based on this i vote for the paper to be accepted weaknesses 1 the method is claimed to be black box with far less user input required than other algorithms i think this is overselling things a little bit i see five different hyperparameters at least five since one of them is a sequence the number of coreset points m the step size sequence gammak the hessian regularization tau or equivalently the condition number threshold mentioned on line 142 the number of iterations to tune the stepsize for ktune and the size of the subsampling set t since there are no sensitivity studies for these hyperparameters i dont think its fair to suggest that little user input is required relatedly some competing methods are criticized for needing an initial userspecified posterior approximation hatpi as an input such as the laplace approximation i dont think this requires much user input given access to an optimization library and automatic differentiation tools a laplace approximation can be constructed without user input 2 theorem 43 which studies the convergence rate of the authors proposed algorithm an important selling point of the work essentially assumes the smallest eigenvalue of the covariance gw is not too small for example if xi is zero and the optimal coreset perfectly approximates the posterior then theorem 43 predicts no convergence of their algorithm but it seems and maybe the authors can correct me on this that as n to infty we should expect xi to 0 for example say we expect the posterior to concentrate to a point mass as n to infty and some w in w perfectly approximates the posterior if the posterior is contracting to a point mass wont gw approach the matrix of 0s im writing this as if the inf over w is actually the inf over wn if its the inf over w then the inf contains arbitrarily 
large ws even for n 1 smaller things there are a lot of derivatives with respect to w taken throughout the paper but w sits in a constrained set w how are these derivatives defined when w is on the boundary of w should the left hand side of equations 6 and 7 be indexed 1m and 1m 1m respectively line 113 its not intuitively clear to me why the first term should dominate the expression here algorithm 1 is input k supposed to be ktune this is sublinear in n i usually take sublinear to mean on whereas this runtime is on in the equation right after line 188 i dont think a k x and x1 were ever defined in theorem 43 could the inf over w in w be replaced by the inf over w in w w w leq w0 w since theres monotone improvement towards w it seems like youll never leave this ball and so you can ignore any bad behavior outside of it in the experiments it would be good to include the runtime of full to see how marginal nonmarginal the gains here are in the proof of theorem 43 there are a couple places where gw lambda i is written should this lambda be a tau it would be helpful if the equations in the appendix were contiguously numbered with those in the main text otherwise its not immediately obvious which equation eq 1 refers to the letter t shows up in a few places in proofs it seems like this is a number between 0 and 1 but i dont think this is ever stated i think the authors have adequately discussed the social impact of their work docsepthis paper proposes a method for approximating a posterior distribution in the framework of bayesian statistics specifically this posterior distribution involves a sum over n potentials sumn1n fntheta which has to be evaluated each time an expectation wrt the posterior has to be calculated it is proposed to replaced this sum of another weighted sum over m ll n terms this corresponds to selecting a subset of data called a coreset these points are sampled uniformly at random and their weights are computed by minimizing the kl divergence between the full posterior and the approximated posterior a quasinewton optimization method is proposed to calculate these weights statistical guarantees are given for the size of the coreset in order to reach a small kl divergence the convergence of the quasinewton method is also discussed numerical simulations illustrate the accuracy of the posterior approximation originality this paper discusses interesting ideas being familiar with coresets in general but less with this specific application i cannot completely assess the originality of this paper quality from a global perspective the main paper is wellwritten and the results are well presented however from the technical viewpoint the interpretation of some mathematical claims is not straightforward for instance in the case of thm 41 the numerical simulations are convincing clarity the main paper is easy to follow concerning the technical details i have several questions about the proofs of the theoretical results in supplementary material these proofs are long eg the proof of thm 42 takes more than 8 pages in principle this is not an issue however these proves deserve extra clarifications at several places and are not easy to read see the questions below to guide the reader it would also be beneficial if a brief description of the proof strategy is given before each proof significance this method is certainly interesting and probably useful in the context of bayesian inference the limitations of this work are summarized in the checklist and are briefly discussed in section 54 i do 
not see any potential negative societal impact ### Summary:
the paper has generated unanimous enthusiasm and we are happy to recommend acceptance please make sure that all comments in the reviews and discussion threads are taken into account in the final version of the manuscript
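as a purely illustrative aside, a minimal sketch of the weighted coreset log posterior discussed above is given below; it is not the authors implementation, the quasi-newton refinement of the weights and the mcmc-based expectations are omitted, and all names are hypothetical.

```python
import numpy as np

def coreset_log_posterior(theta, subset_idx, weights, log_lik, log_prior):
    # replace the sum of n log-likelihood terms by a weighted sum over the
    # m uniformly subsampled points; the weights are what the paper would
    # refine with its quasi-newton / kl-minimization procedure (not shown here)
    return log_prior(theta) + sum(
        w * log_lik(theta, i) for w, i in zip(weights, subset_idx)
    )

rng = np.random.default_rng(0)
n, m = 10_000, 100
subset_idx = rng.choice(n, size=m, replace=False)   # uniform subsampling step
weights = np.full(m, n / m)                          # naive unbiased initialization
```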
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper studied selfsupervised learning a semantic shift problem in the aggressive augmentations of selfsupervised learning is considered this paper inherits the memorization effect in tackling noisy labels and gradually reduces the weights of aggressively augmented pairs extensive experiments verify the effectiveness of the proposed method strengths 1 selfsupervised learning is a practical and much important research topic in the community 2 the motivation of this paper is clear the semantic shift problem is common in multiple selfsupervised learning methods and needs to be addressed 3 experimental results on benchmark datasets and realworld datasets show the effectiveness of the proposed method besides ablation studies are provided to better understand the proposed method weaknesses 1 the writing and organization need to be improved to enhance this paper 2 the intuition of the advantages of the proposed method should be supplemented the weaknesses of this paper are detailed below see above weaknesses docsepthe data augmentation transformations used in some selfsupervised learning models for vision can generate pairs that are not semantically consistent for example cropping or blurring too aggressively can produce an image that is not identifiable as its class label this work addresses this shortcoming by decreasing the degree to which a model relies on aggressive augmentations later on in training the authors show doing so improves top1 linear classification accuracy for imagenet as well as on object detectionsegmentation for coco over other selfsupervised baselines the authors highlight a problem arising when data augmentations are too aggressivethus producing samples that not semantically meaningful noisy samples the authors propose a solution to address this problem by reducing the weight given for aggressive augmentations later on in training the authors motivate doing so by citing recent work illustrating that deep neural networks overfit to noisier samples later on in training the authors define weak augmentation to be random crop and horizontal flipwhile aggressive augmentations additionally include other color based augmentations such as blurring and color jitter this delineation is 1 at odds with the original motivation since even weak augmentations can produce crops that are not semantically meaningful thus violating the motivation that only aggressive augmentations produce noisy samples 2 at odds with more recent ssl methods masked autoencoders mae that only use weak augmentations if we instead want aggressive to capture the extent to which the augmented sample is noisy then a more precise approach is to control the magnitude of the cropping degree to which gaussian noise is added etc such a definition of an augmentations aggressive extent would more directly validate the authors claims i find the method an unnecessarily complicated and memoryintensive approach to achieve the stated goal of controlling the extent to which a model relies on aggressively augmented samples rather than introduce two asymmetric networks with twice the memory footprint did the authors attempt to adjust the extend to which the samples fed into the original models were aggressively or weakly augmented or other simpler variants before settling on this two asymmetric network approach at the very least id like to see stronger motivation for the proposed methods implementation overall the 
paper is wellorganized and clearly written i found the sentences motivating the impact of noisy samples on the online network around line 144 confusing i think the wording here can be improved algorithm 1 would be easier to follow if it was selfcontainedit doesnt include the loss the portion of the diagram in figure 2 on the right with numerous arrows and boxes is not easy to follow overall i find the experiments wellmotivated and convincing putting aside the weak versus strong augmentation definition appropriate baselines are used authors assess sensitivity to beta and show performance across several tasks however several of the experimental results tables 3 4 5 are used as evidence of msrs superior performance but do not include error bounds the authors conclude msr is a novel method to property utilize aggressive augmentations if the motivation is that aggressive augmentations produce semantically noise samples then the methods success is instead to decrease models reliance on aggressive augmentations the claim that is property utilizes or neutarlizes the semantic shift problem strikes me as an exaggeration i suggest the authors appropriately qualify these claims the papers title suggests the proposed method directly improves models robustness instead the approach balances the extent to which models rely on aggressively augmented samples during training there are no experiments directly measuring robustness to augmentations i suggest the authors amend the title to more directly reflect the contribution overall i find the problem of ensuring data augmentation produces semantically meanginful pairs for selfsupervised learning important the proposed method attempts to avoid overfitting to noisy samples shows some performance gains over existing baselines on the other hand i find the aggressive versus weak augmentation definition used here to miss the mark i also find the proposed method to be needlessly complex based on the stated objective minor inconsistent notation v and v describe two augmented samples but their representations are designated using subscript z1 and z2 its easier for readers to follow the same designation either subscript or prime is used for pairs throughout line 96 typo by utilizes line 157 typo to further against yes docsepthe authors find that in most selfsupervised learning methods when applying aggressive augmentations to further improve the diversity of training pairs there would exist severe semantic shift problem thus degrading the transfer performance to address this problem the authors propose a new ssl paradigm which counteracts the impact of semantic shift by balancing the role of weak and aggressively augmented pairs with the training going on the authors gradually reduce the weights of aggressively augmented pairs experiments have been done on the small datasets cifar10100 and medium datasets stl10 and tiny imagenet and large datasets imagenet100 and imagenet1k which have validated the effectiveness of the proposed method the motivation of the proposed method is strong and clear the authors fully consider the semantic shift problem thus propose to minimize the negative impacts of noisy positive pairs from aggressive augmentations while taking advantage of aggressive augmentations it has been achieved by different weights while training but the weakness is that i dont think the novelty is strong enough as for compared with the previous work byol i can find that the only difference is to add an extra aggr aug stream the main architecture almost keeps the 
same and for the reweighting strategy the authors use equ6 to achieve it it is not flexible enough which doesnt consider the quality of each augmentation pair and in figure 2 is there a mistake i think in the row three vm and va should be exchanged yes docsepthis paper proposes a new learning strategy that assigns different weights for aggressively augmented data pairs at different training stages to deal with the semantic shift problem caused by aggressive data augmentation methods for ssl strengths 1 this work contains extensive experiments to verify the effectiveness of the proposed strategy weaknesses 1 although this work has a good motivation the proposed method is quite incrmental which does not meet the standard of neurips for example this work claims that aggressive augmentations are retained which is different from ressl a lines 3942 however ressl does not give up the aggressive augmentations ressl also feeds the weakly augmented images to the teacher model similar to the target network and inputs the aggressively augmented images to the student model similar to the online network which is exactly the same as msr the proposed learning strategy is more like a trick which cannot support a paper to be accepted by neurips 2 according to figure 3 the proposed strategy seems not to work very well on some datasets as the increments brought by the decayed beta seems to be negligible 3 as for the experiment it would be great to show some results compared to ressl a and wmse 4 b with the same batch size and training epochs concerning the second weakness it is not quite clear where the improvement comes from does it come from the different training settings the batch size usually has a large impact on the performance in ssl overall since i have some concerns about the effectiveness of the proposed strategy the missing comparisons i think it is hard to supplement them during the rebuttal stage as well as the limited novelty i vote for a rejection for this paper but i may increase my score after the discussion stage especially if i misunderstand something ref a mingkai zheng shan you fei wang chen qian changshui zhang xiaogang wang and chang xu ressl relational selfsupervised learning with weak augmentation 2021 b aleksandr ermolov aliaksandr siarohin enver sangineto and nicu sebe whitening for selfsupervised representation learning in icml pages 30153024 2021 the novelty is limited and some comparisons are missing please refer to the comments in the weaknesses for more details ### Summary:
this paper aims to improve ssl pretraining by adjusting the strength of augmentations applied at different points in training providing a large number of aggressive augmentations early in training with this rate decreasing over time to prevent the model from overfitting to noisy examples using this approach the authors demonstrate substantial improvements over prior methods all reviewers recognized the soundness of the motivation and were generally convinced by the experiments though there were some concerns about whether the approach is too incremental since it is relatively simple i strongly agree with the authors that simplicity is not a downside of an approach but rather a benefit and the fact that the approach works with such a small modification makes it more likely that this result is not caused by an obscure mix of hyperparameters i also note that the authors engaged extensively with the reviewers providing a number of additional experiments comparing to other approaches and providing further tests of the impact of the hyperparameter they introduce i think this is a worthwhile paper which will have impact going forward and i recommend acceptance
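for concreteness, a toy sketch of the kind of decaying weight on the aggressively augmented pair described above could look as follows; the linear schedule and all names here are assumptions rather than the papers exact loss.

```python
def msr_style_loss(loss_weak, loss_aggressive, epoch, total_epochs, beta0=1.0):
    # down-weight the aggressively augmented pair as training proceeds, so the
    # model stops fitting semantically shifted (noisy) positives late in training
    beta = beta0 * max(0.0, 1.0 - epoch / total_epochs)
    return loss_weak + beta * loss_aggressive
```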
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this work aims to study the characteristics of the class manifolds provided by a multiclass classifier in particular the main goal is to determine the effective dimensionality of these manifolds for the case of neural networks that outputs normalized probabilities to achieve this authors introduce the cutting plane method that following some assumptions allow them to relate the dimensionality of a random affine hyperplane to the effective dimensionality of the manifold for each class authors support their main findings including extensive experimentation the theoretical foundation behind the paper seems to be sound however as a disclaimer this research area is not close to the main expertise of this reviewer in terms of writting the exposition is clear however my main doubt is related to the process to infer the manifold dimensionality for a whole class using a process that depends on each specific instance it will be good to clarify this point and also the computational complexity involved during the process as a recommendation it will be great to test the method using an artificial case with known ground truth about the effective dimensionality of the class subspaces so it will be possible to directly validate the findings of the cutting plane method the current analysis focuses on the case that the cutting plane dimension leads to 50 of the target class d50 which is not the only choice specially considering that a highly relevant goal is to quantify the effect of manifold dimensionality on generalization the use of metrics or scores related to generalization will lead to valuable conclusions in summary this is an interesting area of research to shed lights on the process used by learning models to transform from input space to classprobability space in particular the potential relation between manifold dimensionality and generalization is worth to pursue this work will be of interest to iclr and i recommend to be accepted as a poster contribution docsepthis paper proposes to understand the behavior of deep networks for classification tasks by studying the dimensionality of the class manifolds ie regions in the data space that are mapped to the same onehot output to measure such dimensionality the paper proposes a method that is based on intersecting the class manifold with a random affine subspace of varying dimension the idea is that when there is a intersection then the dimension of the random affine subspace is roughly the codimension of the class manifold the paper then studies how different factors in data architecture training etc affects such dimensionality strength the development of the paper is solid in the sense that it studies the effect of a wide range of design choices see the list 1 9 in paper abstract weakness the whole paper is based on the assumption that each class manifold is a lowdimensional manifold however the paper did not provide a justification for this assumption nor do i think it is a valid assumption the manifold assumption is a fundamental assumption for machine learning and data science and that assumption is made for data rather than classes learned by neural networks one intuitive justification for that assumption in the case of data is that if i take a data point say an image and i do a perturbation then if the perturbation is in a direction that is meaningful say by a translation rotation and distortion then the class label for that data 
point remains the same but if you go towards another direction then likely the image is no longer meaningful and the class label changes however this same line of argument does not seem hold for the class manifolds learned by neural networks if i consider a random input to a network then because the decision boundary is piecewise linear it is with high probability that you can go towards all directions and maintain the class label if the lowdimensionality assumption is not valid then the premise of the entire paper becomes problematic the intuition given in fig 1 is no longer valid and the theory in sec 3 is no longer meaningful even if the lowdimensionality assumption is true to some degree the proposed dimension estimation is still very much problematic both the intuition in fig 1 and the theory in sec 3 are based on assuming that such lowdimensional manifold is close to being linear but the ability of deep networks for performing complicated nonlinear mapping which is the key to its great success likely makes such lowdimensional manifolds to be highly nonlinear therefore a discussion of how such nonlinearity affects the proposed dimension estimation is quite necessary but is missing additional comments how is x0 in the cutting plane method generated it is said in the paper that it is generated at random so perhaps that means a iid gaussian vector but presumably the variance of the gaussian distribution could have an impact on the result as it captures how far the affine subspace is from the origin sec 31 which contains the main theoretical result of the paper is presented in vague terms eg what is highly likely under what statistical model perhaps it is better to make it precise by writing the result as a theorem update after rebuttal i would like to thank the authors for the detailed rebuttal but my feeling now is that the rebuttal is making it even more complicated and sometimes conflicting with itself i believe the paper needs some careful rewriting and updates to clarify its points and assumptions concretely the paper is built upon the premise that each class manifold is a submanifold with dimension lower than that of the ambient space i pointed out in my review that this premise may not hold at al therefore the paper is fundamentally problematic then r2 in one of hisher responses raise the same question perhaps after reading my question then i see a difference in response to r2 and my comments for r2 the response is the intrinsic dimensionality of class manifolds is absolutely the full dimension of ambient input space which is effectively acknowledging that my critique is valid however the response to me is this is very easily refuted by the ubiquitous and universal existence of adversarial examples i dont really see why there is a discrepancy here besides the argument that is used to refute my argument namely existence of adversarial examples implies class manifolds are lower dimensional than the ambient space is apparently wrong and can be easily refuted by and large the existence of adversarial examples only means that the decision regions are thin at every location i can totally have a fine mesh of the data space that achieves this docsepthis paper proposes an empirical method for estimating the dimensionality of a class manifold defined here as a collection of points for which the last softmax layer of a neural network maps them to a membership probability vector associated with a specific class their approach involves the generation of a randomly oriented cutting plane of 
dimension d passing through a randomly generated source point the authors note that if the sum of the dimensions of the class manifold and the cutting plane exceeds the full spatial dimension the chance of an intersection of the two is high conversely if the sum falls short of the full dimension the chance of an intersection is very low using a gradient descent technique starting at the source point a location within the cutting plane is sought that minimizes the cross entropy loss between it and a target class membership vector representing the class manifold a low minimum loss would indicate a likely intersection d too high whereas a high loss would indicate a likely miss d too low although the dimension of the class manifold is in general unknown the process is iterated for many choices of the initial cutting plane and many choices of the cutting plane dimension d the value of d achieving the median loss value is chosen as the estimate of dimensionality in their experimental validation of the approach the authors examine the effects on the estimated manifold dimensionality due to various factors including data noise label noise training set size interestingly their method also allows them to produce an estimate of the class boundary dimensionality by specifying the average of two class onehot vectors as the target probability vector pros 1 this is an interesting approach to the problem of dimensional modeling estimation of dimensionality using cutting planes without an explicit parameterization of the subspace is an attractive idea that if performed efficiently and reliably could be particularly impactful a strong point of the model is that it considers as its class manifold the regions of deep latent space that have sufficient probabilities of being in one or more classes of interest the method thus supports assessments of dimensionality in border regions in an elegant way 2 the optimization procedure proposed does seem practical enough each optimization run is efficient and the number of runs can be tailored to an execution budget 3 the paper is generally well organized and presented the descriptions are clear and accessible cons 1 as acknowledged by the authors in the caption of fig 2 the dimensional estimates seem much higher than the typical estimates of intrinsic dimensionality as determined by local estimators eg lid levina bickel etc this discrepancy could be due to a number of factors that are not taken into account curvature of the class manifold its boundedness its disconnectedness etc all these factors could cause the gradient descent to terminate at high crossentropy loss values which would drive the estimate of dimensionality too high even approaching the representational dimension of the latent space 2 following from 1 some of the conclusions reached from the experimental analysis are not fully convincing for example in 45 an inverse relationship is reported between the training set size and the effective dimension however nonuniformity of distribution within the manifold could lead to configurations that trap solutions at unrealisticallyhigh values of d in 46 adding gaussian noise to each pixel is a fulldimensional transformation that is known to strongly bias the local intrinsic dimensionality upward to unrealistically high values 3 again following from 1 the authors have not situated their work with respect to the recent literature on the use of intrinsic dimensional estimation in deep learning settings for example local intrinsic dimensionality has been proposed as a 
characterization of learning performance ma et al icml 2018 adversarial perturbation ma et al iclr 2018 amsaleg et al wifs 2017 and in ganbased image infilling li et al ijcai 2019 how does their estimator compare in practice to other estimators already in use other comments questions 1 the paper should be more selfcontained in places for example equation 6 is referred to in the main paper but appears only in the appendix 2 like distances distributions themselves loss functions may exhibit a bias due to the local intrinsic dimensionality discuss docsepthe authors propose a cutting plane method inspired by intersection theory from algebraic geometry to analyse the properties of neural networks specifically the method allows to estimate the dimensionality of the class manifolds induced by the networks an empirical analysis shows that the estimated dimensionality correlates with the generalisation performance and the robustness of neural networks thus opening the door to potentially new perspectives on deep learning theory the paper is well structured and clearly written also the authors are planning to release the code to reproduce their experiments last but not least in terms of importance the paper provides an original and novel method for the analysis of the properties of neural networks in fact while previous works have used a similar strategy to estimate the intrinsic dimensionality of the loss landscape in the weight space see cited works in the paper this work focuses on the analysis of neural networks in the input space in general there are no major issues with the paper however there are some points that need to be discussed which can be helpful i to identify more precisely the conditions about the validity of their results and ii to relate with other existing work on the analysis of deep learning using spline theory please see below for more detailed comments and also suggestions to increase the quality of the paper based on these considerations i recommend for the acceptance of the paper with an initial score of 6 im willing to considerably increase the score and award the authors if they can address my questions detailed comments please let me make two simple pedagogical examples to analyse the behaviour of the proposed method and to possibly seed further thought first example consider a twodimensional real space where the classmanifold is a line then generate a second line by randomly sampling its intercept and slope coefficient and refer to them as the line parameters now if you consider the parameter space of this second line you have two regions one of zero measure which contains all the cases where the two lines are parallel and the remaining one which contains all the intersecting cases this means that the two lines are almost always intersecting each other consequently the estimated dimension of the classmanifold is correct second example consider the same example as before but now the classmanifold is a parabola similarly to the previous example there are two regions namely the ones defined by the intersecting and the nonintersecting cases between the parabola and the randomly generated line but differently from the previous one both regions have nonzero measure therefore we may end up to generate lines that do not intersect the class manifold this would result in considering a higher dimensional object in this case a plane to guarantee the intersection with the parabola consequently we would underestimate the dimension of the class manifold this phenomenon can be even 
more pronounced when moving to higher dimensions therefore i agree with the authors that the whole analysis is exact when considering hyperplanes but what are its limitations when moving to the nonlinear regime how can we guarantee that the estimated dimension is accurate it seems that the proposed method provides a lower bound on the true dimensionality of the class manifolds is that correct if so when can this bound be tight also there is some recent line of work trying to analyse the behaviour of deep leaning in terms of decision boundaries and their curvatures from the perspective of spline theory 12 could you please discuss this and add the explanation in the paper suggestions to increase the quality of the paper i proceed following the order of the sections section 22 is it possible to provide the full details of the algorithm to estimate the cutting plane dimension like an algorithmic table also where is equation 6 in the appendix what are the mean and covariance parameters section 31 can you be more precise when you use the terms highly likely and generically and discuss what happens in the nonlinear regime section 31 ddadb should be 2ddadb section 31 can you rephrase the sentence for the subspaces a and b intersecting transversallysatisfying the upper bound and therefore leading to equation 2 and make it more clear specifically which upper bound and how does this lead to equation 2 section 32 can you consider to remove it as the purpose is not clear and it does not seem to introduce any additional information section 33 can you make an example to explain the difference between dimension and effective dimension section 43 is there any concrete insight on the analysis of the classboundary and multiway classboundary manifolds i would appreciate to see more discussion on that section 46 is there any specific reason why you chose to show only class 0 and 1 in figure 8 can you provide the figures for the other classes as well maybe in the appendix section 47 similarly to figure 8 can you add the other cases for figure 9 always for this subsection which initialisation did you use in the experiments this is an important information that could be of interest for those studying initialisation strategies for deep learning section 48 do you have any experiment with ensemble of classifiers with different architectures if so do the same findings hold may it be possible that you are underestimating the dimension of the class manifold in the set of experiments shown in the paper section 48 can you provide a plot of the generalisation performance versus the ensemble size or better correlating the cutting plane dimension with the generalisation performance for the different ensemble size 1 balestriero and baraniuk a spline theory of deep learning icml 2018 2 balestriero et al the geometry of deep networks power diagram subdivision neurips 2019 update the discussion phase has highlighted several major issues 1 there has been a significant conceptual shift in the problem definition ie from estimating the intrinsic dimensionality of the class manifold to quantifying its geometrical properties 2 im not convinced about the validity of some argumentsstatements used by the authors to support point 1 for example the statement the intrinsic dimensionality of class manifolds is absolutely the full dimension of ambient input space but this is a completely uninteresting observation is not fully supported and im not even sure that is true 3 furthermore the paper is still in its original form it has been 
difficult to keep track about the modifications that the authors should do to conclude the article is not ready for publication yet and therefore recommend for its rejection i encourage the authors to further investigate the topic and carefully consider whether the statements provided in the discussion phase are true ### Summary:
this paper aims to study the dimension of the class manifolds cm which are defined as the regions classified as particular classes by a neural network the authors develop a method to measure the dimension of cm by generating random linear subspaces and computing the intersection of the linear subspace with cm all reviewers agree that this is an interesting problem and worth studying however there are major concerns one question raised by several reviewers is that the goal of this paper is to analyze the dimension of the region that has the same output for the neural network while the method and analysis are for a single datum so it is not clear if the obtained result is what the paper really aimed at another issue is that the experimental results are different from those of local analysis the dimension estimated by using the method in this paper is much higher based on these i am not able to recommend acceptance but the authors are highly encouraged to continue this research
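to make the cutting-plane procedure summarized above concrete, a rough sketch is given below; the optimizer, the loss threshold, and the exact search over d are assumptions and this is not the authors code.

```python
import torch

def plane_reaches_class(model, x0, d, target_class, steps=500, lr=0.1, tol=1e-3):
    # search a random d-dimensional affine plane through x0 for a point that the
    # network assigns to target_class; a low final cross-entropy is taken as
    # evidence that the plane intersects the class region
    basis = torch.linalg.qr(torch.randn(x0.numel(), d))[0]   # random orthonormal directions
    coeffs = torch.zeros(d, requires_grad=True)              # coordinates within the plane
    opt = torch.optim.Adam([coeffs], lr=lr)
    target = torch.tensor([target_class])
    loss = None
    for _ in range(steps):
        opt.zero_grad()
        x = x0 + (basis @ coeffs).view_as(x0)
        loss = torch.nn.functional.cross_entropy(model(x.unsqueeze(0)), target)
        loss.backward()
        opt.step()
    return loss.item() < tol

# sweeping d and recording the smallest dimension that typically reaches the class
# gives the estimate of the class-manifold codimension that the reviews discuss
```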
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the authors present isagrasp a method for performing dexterous grasps on arbitrary objects using a pretrained network from a large dataset augmented by a generative model which outputs corresponding grasp points on deformed meshes this is achieved with a combination of labeled human grasp demonstrations with poseretargeting to get corresponding robot allegro grasps to generate novel objects and grasps they use difnet building upon prior work by deng et al 21 which returns pointwise deformations of the objects used in the human demonstrations they compute new grasp locations that minimize the total offset to the original grasp over a local patch of points experimentally their method outperforms rl heuristic and grasping baselines strengths their grasping policy generalizes to novel object instances provided that an accurate point cloud of the object can be generated from the scene their method extensively compares to different grasping approaches with a wide range of objects in addition to ablating the input features used by the grasping policy these comparisons show strong benefits to shape augmentation weaknesses by using an openloop policy their method is unable to react to dynamiccluttered environments or recover from failed grasps their simulation does not account for kinematic infeasibility with the environment both for the arm and the hand this method does not consider functionaldexterous grasps of objects and therefore does not leverage the dexterity of the hand their method uses demonstrations but does not leverage an rl approach which is designed for use with demonstrations docsepthis paper studies the problem of dexterous grasping it proposes to use correspondenceaware implicit deformation networks to propagate a small number of human grasp demonstration to grasping configurations on a diverse deformed object set trained with the generated objects and corresponding grasp configurations the grasping policy can better generalize to unseen objects in human demonstration strengths 1 its a smart idea to augment human grasping demonstrations with correspondenceaware shape deformation networks it simultaneously creates novel shapes and the corresponding grasp configurations 2 simulation experiments demonstrate the efficacy of the proposed shape augmentation in the comparison between last two rows in table 1 3 this paper conducted realworld experiments of dexterous grasping which makes the results much stronger weaknesses 1 table 1 the comparison with baselines seems not very fair random heuristic and graspit should be compared with transformed grasps gd when performing the rejection sampling 133 collect data with rejection sampling on augmented shapes using random heuristic and graspit with a same budget and compare the result 2 248 what does refinement rate mean minors 220 baseelines baselines fig 8 caption miss one before middle panel near the end of the second line docsepthe paper proposes a system implicit shape augmentated grasping isagrasp to augment limited human demonstrations of dexterous graspsthe implicit shape augmentation is built on difnet 11 a correspondenceaware implicit generative model novel shapes are generated via deformation and the resulting dense correspondences helps transfer the human demonstration to novel objects the transferred grasps are refined via simulation and a grasp prediction model is trained on the augmented dataset by supervised learning 
strengths human demonstration of dexterous grasping is expensive to collect and always require specialized set up the paper proposes an interesting way to extrapolate the limited demonstrations to a large dataset of novel objects the correspondenceaware generative model is used in a reasonable way and the effectiveness of the method is shown by experiment the system transfers well to real world and achieves a decent result weakness analysis of the results is insufficient the authors only show that method a gets a better score than method b but doesnt really explain why why do the baselines such as heuristic and graspit perform so poorly are there any specific failure mode why doesnt data augmentation improve the performance on rescaledycb is it because the distribution of augmented objects are different from the ycb dataset or its something else i can understrand that augmentation can make the grasping policy generalize better to novel objects but what are the source of remaining perfomance gap 30 on 3 datasets the limitation part is not very satisfactory i want to see more limitation on the algorithmic side for example what are the failure mode of the method and how can we further improve the success rate questions i like the second term relative poses of the regarting optimization my question is why do you choose the center of the object as anchor point instead of closest point on object surface isnt the latter one more robust to shape variation and better modeling the contact docsepthis paper proposed a dataaugmentation framework to learn dexterous grasp pose from point cloud observation first the dexycb dataset which consists of human hand object interaction trajectories is used to provide an initial demonstration second hand motion retargeting is utilized to convert the human hand motion to robot hand joint angles third difnet an implicit neural network that can keep dense correspondence while deforming an implicit shape representation is used to deform the original object mesh with sampled latent gaussian vectors with correspondence after deformation the grasp pose can also be modified according to the object then a rejectionsamplingbased grasp refinement step is performed inside pybullet simulator to eliminate physicallyunfeasible grasp candidates finally the author trained a pointnet to predict the palm pose and finger joints from point cloud input this paper proposes a data augmentation strategy for dexterous grasping and motivates the problem well in the introduction the proposed objectcentric deformation method is very general and agnostic to tasks and manipulators there are also several issues that need to be addressed in the paper as follows strength 1 the proposed method is simple and elegant it does not rely on dynamics analysis of the contact wrench space or other forcebased measurements the dynamics correctness checking is achieved solely by a physical simulator while the underlying troublesome computation is hidden below 2 in principle the implicitbased deformation network can generate infinite object mesh and corresponding grasp pose this method is quite general 3 the observation that appending object points with additional information regarding the153 alignment between the robot hand and the local object surface is beneficial for many robotorientated regression tasks with point cloud input weakness 1 the original demonstration dataset dexycb seems not so useful in the whole pipeline it only provided the initial objectgrasp pair the following data generated is achieved 
in a sample and reject fashion sample with difnet and reject with physical simulator so the original dataset can be replaced by any method that can provide roughly okay grasp proposals even if the grasp proposal itself is not dynamicallycorrect or successful for example the author can use any grasp proposal network eg contact grasp net to generate a grasp pose and use the same data augmentation procedure for the following steps in this case the training data is not even limited to only ycb objects but any object meshes which may lead to better generalization performance due to the increasing diversity of the training data i assume that maybe one benefit of using human demonstration is for functional grasp when you do not want some grasp poses even if they are successful in simulator dynamically but not semantically feasible 2 the ppo baselines with sparse reward seem too weak for manipulation tasks with a dexterous hand sparse reward will lead to nearly zero success rate in this task writing a distancebased dense reward is also very simple and straightforward especially the authors can train the algorithm inside a pybullet simulator 3 the graspit baseline seems not used in a fair way in the experiments the author utilizes the results predicted by graspit and other heuristic to evaluate the grasp performance however the major contribution of this paper is the implicit shapebased augmentation so a better way to compare graspit is as follows i replace deycb and implicit shape augmentation with graspit to generate diverse grasp poses ii use pybullet to reject sample the unfeasible grasp poses iii train the same pointnet based on the data generated by graspit since the contribution is on the training data side the graspit should also be used to generate training data but not evaluation otherwise the experiments may show the value of the grasp refinement for dynamics consistency but not implicit shape augmentation ### Summary:
all the reviewers acknowledge to some extent the novelty and originality of the paper but have all questioned the proposed approach in terms of baselines and fair comparisons eg ppo graspit in the rebuttal phase the authors have significantly strengthened the comparison against several wellknown baselines suggested by the reviewers as a result all the reviewers reached a consensus of weak accept
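as a simplified illustration of how a grasp can follow a shape deformation through dense correspondences (a rough stand-in for the correspondence-aware implicit model discussed above, with all names hypothetical):

```python
import numpy as np

def transfer_grasp_contacts(grasp_pts, source_pts, deformed_pts):
    # move each grasp contact point to the deformed shape by following a
    # point-to-point correspondence; here the correspondence is approximated
    # by the nearest source point, whereas the paper relies on the deformation
    # networks dense correspondences
    dists = np.linalg.norm(source_pts[None, :, :] - grasp_pts[:, None, :], axis=-1)
    nearest = dists.argmin(axis=1)
    return deformed_pts[nearest]
```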
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: i was searching for relevant work but found an arxiv paper that is very similar and being reviewed under neurocomputing journal httpsarxivorgabs191106475 this paper is poorly written and not well organized it is unclear to me how the method works and the results section is also not informative docsepthis short paper proposes exploit dependencies among abnormality labels and used the label smoothing regularization for a better handling of uncertain samples pros 1 the proposed model gains 4 improvement in auc from the label smoothing regularization compared with pure uones 2 the proposed work achieves the highest auc for 5 selected pathologies 3 the proposed work is on average better than 26 out of 3 other individual radiologists cons 1 all 14 labels are trained but the model only has 14 outputs does that mean parent labels in the paper are labels included in the dataset if so is it guaranteed that parent is positive when at least one child is positive this is the essential assumption in the adapted model chen et al 2019 2 terms not consistent we propose the uzeroslsr approach at the end of section 22 but uoneslsr is evaluated in ablation study 3 lack ablation study with the model ignoring all uncertain cases defined as uignore in the paperdocsepthe authors present a work that classifies chest xray images with 14 different labels and uses hierarchical labelling and label regularization in an attempt to improve results a leading performance on the public chexpert challenge is claimed but while the authors may have created a nice model the claims they make in this paper are not well proven or explained the method for using hierarchical labelling appears to follow a previously published scheme cited except with a different hierarchy no details of the new hierarchy are provided the method for label regularization is also previously published and cited therefore there is not methodological novelty in the paper the authors apply their methods to the chexpert public dataset from section 23 it is not clear to me precisely what experiments were carried out were all of these models trained withwithout the hierarchical labelling and also withwithout the label regularization that is not described at all section 3 claims that extensive ablation studies were carried out however there is not a single table or figure to illustrate the results of these the text provides a few auc values but the precise gain from the hierarchical labelling and from the label regularization is unclear what is meant by uonesctlsr this is mentioned in results but not explained the paper has no abstract docsepthis paper presents a multilabel classification framework based on deep convolutional neural networks cnns for diagnosing the presence of 14 common thoracic diseases and observations in xrays images the novelty of the proposed framework is to take the label structure into account and to learn label dependencies based on the idea of conditional learning in chen et al 2019 and the lung disease hierarchy of the chexpert dataset irvin and al 2019 the method is then shown to significantly outperform the stateoftheart methods of irvin and al 2019 allaouzi and ahmed 2019 the paper reads well and the methodology seems to be interesting i only regret the fact that this is a short paper and there is therefore not enough space for a more formal description and discussion of the methodology ### Summary:
first of all this paper does not follow the midl template as the abstract section is missing major concerns from the reviewers lie in the unclear presentation of results and the large overlap with an arxiv paper nevertheless i think that taking into account the structural dependencies of labels is interesting
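as an illustration of the label smoothing regularization for uncertain labels discussed above, one simple variant maps each uncertain entry to a soft target drawn from an interval near one; the interval bounds and the coding of uncertain labels as -1 are assumptions, not the papers exact setting.

```python
import numpy as np

def smooth_uncertain_labels(labels, low=0.55, high=0.85, seed=0):
    # labels: array with entries 1 (positive), 0 (negative), -1 (uncertain);
    # uncertain entries are replaced by a random soft target in [low, high]
    # instead of a hard 1, in the spirit of the u-ones + lsr idea the review mentions
    rng = np.random.default_rng(seed)
    labels = labels.astype(float).copy()
    mask = labels == -1
    labels[mask] = rng.uniform(low, high, size=mask.sum())
    return labels
```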
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the novelty of the network structure is marginal the decomposition way of feature is very common in computer vision just utilizing the latent vector of the encoder with only the comparator loss to decompose the feature into two feature types is limited the authors should show the visual differences between these two feature types the expression of the article is very clear but some basic theories need not be explained in detail such in section 34 one more concern hid and hor are both used for reconstruction its best to prove that only using identity feature hid is better than the overall latent vector hid hor docsep it is well presented the idea of splitting the encoding feature space into task related features and nontask related features is probably not new but the use of it in estimating rank might be new and intuitively it makes sense to use it they also propose an extension to the clustering algorithm using a repulsive term and propose map estimation algorithm to assign a rank based on the output probabilities of the comparator when the max possible rank is known experiments are conducted on 3 data sets the results show the effectiveness of the approach the experiments i feel are sufficient to show that clustering instances based on nonrank related features will help improve effectiveness of comparison based ranking of new instances they also show the effectiveness of their proposed map estimation rule for assigning a rank the effectiveness of the repulsive clustering on ranking performance is not clear the authors discuss that using the repulsive term in the objective for clustering produces more distinct clusters but how does this improved cluster quality translate to better performance in ranking as this is one of the key contributions of the paper a comparison of ranking performances with and without the use of the repulsive term in clustering would be useful how sensitiverobust is the proposed approach to the number of clusters chosen how can one choose the right number of clusters to use a discussion on these would be useful in each experiment what was the dimensions of the orderrelated feature and identityrelated feature in general i think this paper is above the borderline but i would also like to see the comments from other reviewers docsepsummary of paper this paper considers the task ordered learning making predicting a class label for a point among an ordered graph of classes the paper proposes a clustering objective that encourages the model to separate data into groups such that classification prediction is easier within each cluster the method is intuitive clearly explained and well motivated the paper indicates state of the art results on a task of estimating ages of individuals from photographs review summary missing crucial discussion on discussion of use cases broader impact of task of estimating ages from photographs otherwise intuitive and effective method for ordered data effective empirical results limited novelty exploration of methodological approach strengths the authors describe an intuitive and effective method for making predictions on ordered data the approach uses a intuitive clusteringbased method that groups data into subsets where items are easier to order the paper is clearly written and explains the approach clearly the paper shows several examples of predicted output of the method and shows results on two tasks estimating ages aesthetic 
score regression the method achieves state of the art results on the task of estimating ages and is competitive on the other task the authors show further results on age transformation weakness broader impacts of applications one of the primary applications of the paper is estimating ages of individuals based on their photographs while this is paper is not the first to focus on such a task it is very remiss of this paper to not discuss the motivations for this task and the broader impacts and ethical considerations of this task i would very strongly encourage the authors to add a discussion of the potential uses of their system and the benefits as well as harms that come from these uses i think that it is crucially important to discuss this both in the context of this work as well as previous work on the task in particular it would be important to mention how the use of clustering into groups based on genderrace in this model factors into potential biases when the model is used i think it would be necessary to include this discussion in the body of the paper itself rather than an appendix i greatly believe that this discussion is necessary and the lack of it is one of my top concerns about the paper distinctions between total ordering and partial ordered related work the presentation of the approach indicates that observations are not directly comparable across clusters however the overall model does in fact provide a total ordering each point is mapped one of the clusters and then compared within that cluster i think the presentation would be greatly improved if it were described not in a way that implies a partial ordering only within each cluster is there but instead that the total ordering function is this multimodal clusterbased ordering further i think it would important to discuss the relationships between this work and work on partially ordering sets particularly work on combining partially ordered sets it might also be good to consider more related work on ordering such as learning to order things httpspapersnipsccpaper1431learningtoorderthingspdf also i think that it is especially important to address other work such as that in extreme classification that organizes class labels into groups that are easier to discriminate between ie logarithmic time oneagainstsome httpsarxivorgabs160604988 novelty of approach depth of exploration the core novelty of the approach is in the use of clustering to separate the data into groups that are easier to rank this is a nice idea and appears give strong empirical benefits i worry that since the clustering component is the core contribution of the paper that the analysis of the method of clustering is not very deeply explored empirically the idea is intuitive but i feel the limited deviation from classic approaches that combine clustering classification would benefit from additional analysis of the approach along the dimension of the clustering objective that is selected questions for the authors what are the potential use cases for the system its applications to age prediction what are the fairnessethicalsafety concerns of such an application were clustering objectives other than the repulsivebased one considered how does your work connect to papers such as logarithmic time oneagainstsome httpsarxivorgabs160604988 which also organize classes into clusters docsepsummary this paper considers the problem of order learning which learns an ordinal classification function this paper proposes to learn separarted orderrelavent and orderirrelavent latent 
representations to improve the performance of existing methods which is a very interesting and promising idea however the approach lacks novelty and convincing theoretical guarantees as well as not showing convincing performance even in the insufficient empirical evaluation main concerns the orid model structure the latent representation is separated into hor and hid and the comparison loss is defined on hor however this need not exclude orderrelevant information from hid also it needs to be clarified to what extent introducing a discriminator helps as this turns a minimization problem into an unstable minmax optimization problem how does it work without the discriminator normalization of hid normalizing vectors in a space may result in a totally different cluster structure different clusters may appear to be overlapped with each other by normalization euclidean distance can be the natural dissimilarity metric without normalization the drc algorithm the idea of encouraging innercluster similarity and intercluster dissimilarity of eq 9 is not new also the statement right after algorithm 1 in the paper that drc is guaranteed to converge to a local maximum is quite suspicious is it true that different rules optimizing the same objective alternately are guaranteed to converge at least some references need to be provided as it is a crucial point of the main contribution the decision rule eq15 loops over all y so what is the point of selecting a yi in eq13 experimental results seem to be fine and the authors are honest to report unfavorable results however in my humble opinion results for a sufficient number of repetitions 5 or 10 are needed to be at least convincing minor comments in eq4 the rightmost inequality should be thetax thetay r ### Summary:
this paper is overall well written and clearly presented the problem of ordered data clustering is relevant and the proposed method is effective during the discussion all reviewers agree on the strengths of this paper and share the positive impression the authors successfully addressed reviewers concerns through the careful author response which i also acknowledge one of the reviewers raised a concern about the broader impacts which is also well addressed in the author response i therefore recommend acceptance of the paper
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the authors formulate a general framework that unifies inference actionperception control and several other tasks the framework is based on minimizing the kl divergence between a parameterized actual distribution and a target distribution the authors argue that this formulation unifies a wide range of previously proposed objectives they also argue that it has some advantages when compared to fristons free energy principle framework with which it shares many similarities in particular that probability matching is preferred to surprise minimization the paper is clearlywritten and provides a very thorough literature review however generally i question the scientific value of such allencompassing unifying frameworks and this paper in particular offers no concrete formal or empirical results while promising a lot at the end of the day the divergence minimization objective is nothing more than maxent decorated with various interpretations and decompositions without empirical support i do not find the interpretations and decompositions very convincing as one example does divergence minimization really mean that expressive world models lead to autonomous agents that understand and inhabit large niches one of the issues is that the paper appears to treat the heart of the matter ie the source of interesting solutions as if it lay in the elegant and generic objective in my opinion however the real heart of the matter will be encoded in 1 the structure of the target distribution 2 the structureparameterization of the actual distribution and 3 the optimization algorithm that can actually minimize the typically highdimensional objective the quality of resulting solutions depend on 13 all of which need to be exogenously specified because divergence minimization cannot on its own produce interesting behavior at the end of the day i do think there is some value in providing a unifying framework and developing informationtheoretic decompositions and interpretation however i think the paper would be much stronger if it was considerably longer and had more room to breathe which it doesnt have right now given all the connections it tries to make and if qualitative statements of the type discussed above were accompanied by empirical results even if simulations with simple toy modelsdocsepthe authors proposed to use the joint kl divergence between the generative joint distribution and the target distribution containing latent variables which could correspond to latent parts we wanted to model eg beliefs it was illustrative to discuss decomposing the joint kl into different ways and thus forming information bounds in different scenarios the decomposition of past and future in eq6 also provided a unified perspective for looking at the most currently used objectives the examples shown in the paper and appendix give a good illustration of how people can make assumptions or design the terms to convert prevalent objectives into objectives that follow from this joint kl divergence framework this is in my mind one of their key contributions for connecting the past progress in a general and unified way however one concern about this paper is that the proposal of such a unified kl minimization framework is in fact a bit too general and abstract in fact many methods mentioned in this work shared a similar insight of deriving objectives from a klminimization perspective but some factors are omitted to better fit 
the corresponding tasks the general decomposition discussed in this paper provides little hint on how new objectives could be derived for problems the general framework does somehow serve as the guideline but my worry is that its effect will be limited as we still need to design the mapping for the terms in the general objective accordingly in different tasks given the pros and cons of this paper im putting a borderline decision for now the authors should clear any of my misunderstandings and perhaps show the potential for this general framework as a source for new objectives after reading the authors rebuttal my major concerns are fully addressed and i decide to keep my decision as weak acceptdocsepthe authors of this paper propose a unified optimisation objective for sequential decisionmaking ie action and representation learning ie perception built on joint kl divergence minimisation as also mentioned by the authors this is a concept paper and it includes no empirical study in particular the authors demonstrate how existing ideas and approaches to sequential decisionmaking and representation learning can be expressed as a joint kl minimisation problem between a target and actual distribution such examples are a maxent rl b vi c amortised vi d kl control e skill discovery and f empowerment which are all cases of the kl minimisation between a target and an actual distributions concerns 1 although the proposed perspective and language is rich and expressive i question the novelty of the proposed framework since the informationtheoretic view of decisionmaking and perception is a rather established and old idea even the termidea of perceptionaction cycle is already defined 1 2 the power of latent variables for decisionmaking and their interpretation is also a known idea 1 references 1 tishby n and polani d 2011 information theory of decisions and actions in perceptionaction cycle pp 601636 springer new york ny docsep summary in this manuscript the authors propose a unifying framework for a large class of inference and reinforcement learning objectives which have been studied in prior works by various authors they demonstrate that approaches and central ideas from many different fields in the mlai community can be derived as limiting cases of their framework reasons for score overall i vote for acceptance 7 like many i have employed various variational approaches in the past and see its merit while i agree with the main idea this work is not without problems this is especially problematic for such a broadly applicable work that will most likely influence plenty of future research my main problems with this submission are 1 presentation while the paper is for the most part well written and well organized there are some gaps jumps that render understanding difficult two examples a the parameters phi the authors start by introducing parameters phi as abstract placeholders for i parameters of the true joint distribution of data and latents of the underlying system and ii a set of actions an agent can perform to interact with this world the agents target distribution has no explicit parameter dependence so far so good then one is redirected to the appendix a1 and a2 section a1 is already a bit confusing because suddenly additional latents w are introduced that were not mentioned before then suddenly in a2 the target tau is suddenly dependent on the parameters phi which were initially parameters of the underlying systems true joint distribution this also happens in figure 2 c which is also never 
referenced in the text i find this strange mixing of parameters of agent and system very confusing it also sheds some doubt about the generality of the framework b i have read the paper carefully and still do not understand figure 1 completely this may also be due to the reason that it is only referenced in the appendix related why does information gain play such a central role if all derived objectives only contain upper bounds for it appear 2 unsupported claims in the abstract the authors promise to offer a recipe for designing novel objectives as much as i can see they only come back to this promise in the conclusion where they say that one could look at other divergence measures to arrive at new objectives and they will leave it for future work i would not call this a recipe but an outlook at most 3 too many ideas it is hard if not impossible to explain a broad framework well in a conference proceeding this work contains so many ideas and establishes many connections that following this work and understanding them in detail becomes very hard i would suggest sacrificing some connections in favor of a more concise presentation 4 fixation on kldivergence this is more of a suggestion i understand that many works use the nonsymmetric kl due to its favorable analytic properties thus i agree that it makes sense to focus this framework on this measure however i believe this works main idea still holds if one would exchange the kl with some other measure of similarity between distributions maybe it would make sense to first introduce and discuss the abstract idea of aligning target and belief before fixation on a particular measure this would also go well with resolving my concern 2 pros 1 unifying framework of many inference and rl objectives 2 well written 3 will be impactful to a lot of future research cons 1 see my reasons for score questions during the rebuttal period please address and clarify the cons above minor please consider citing toussaint m storkey a 2006 probabilistic inference for solving discrete and continuous state markov decision processes international conference on machine learning icml 945952 httpsdoiorg10114511438441143963 in the control as inference section to my knowledge it is one of the first to establish the connection between planning and inference ### Summary:
the paper presents a kldivergence minimisation approach to the actionperception loop and thus presents a unifying view on concepts such as empowerment entropybased rl optimal control etc the paper does two things here it serves as a survey paper but on top of that puts these in a unifying theory while the direct merit of that may not be obvious it does serve as a good basis to combine the fields more formally unfortunately the paper suffers from the length restrictions with more than half of the paper in the appendix it should be published at a journal or directly at arxiv not having a page limit would greatly improve the readability iclr may not be the best venue for review papers
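Editorial note: for readers unfamiliar with the objective the reviews above refer to, the following is a minimal sketch of a joint divergence objective and one standard chain-rule decomposition of it. The notation (q for the actual distribution, tau for the target, x for observations, z for latents) is generic and chosen here for illustration; the paper's own decomposition over past and future is more elaborate.

```latex
% generic joint divergence objective over observations x and latents z,
% with q the actual distribution and \tau the target distribution
\[
  \min_{q}\; \mathrm{KL}\big[\, q(x, z) \,\|\, \tau(x, z) \,\big]
  \;=\; \min_{q}\; \Big\{ \mathbb{E}_{q(x)}\, \mathrm{KL}\big[\, q(z \mid x) \,\|\, \tau(z \mid x) \,\big]
  \;+\; \mathrm{KL}\big[\, q(x) \,\|\, \tau(x) \,\big] \Big\}
\]
% the equality is just the chain rule for KL; choosing which factors are fixed,
% learned, or controlled by actions recovers familiar special cases
% (variational inference, kl control / maximum-entropy rl, skill discovery),
% which is the unification the reviews above discuss
```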
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper focuses on improving the adversarial robustness of prefix tuning li et al 2021 which is a recent parameterefficient tuning method specifically the paper proposes to add extra batchlevel prefixes that are tuned for each test batch on the fly to minimize the distance between hidden activations of the test samples and the canonical manifold obtained from the hidden activations from correctly classified training samples the intuition is to optimize the added batchlevel prefixes so that the geometry of hidden states from adversarial examples is closer to that of training examples experiments on three text classification benchmarks across several different adversarial attacks demonstrate the effectiveness of the method below are the detailed strengths and weaknesses strengths 1 adversarial robustness is an important problem and has not been explored much for relatively new prefixprompt tuning approaches thus the topic of this paper can be of interest to a general audience also this paper is timely given the recent attention on prompts 2 the idea of optimizing the geometry similarity to defend against attacks is interesting and novel from my perspective particularly i like testtime tuning which could adapt to different types of attacks on the fly 3 the experimental results are strong updates after rebuttal most of the following concerns have been addressed in the revision and i have increased my score weaknesses 1 what is the batch size at test time tuning is the added robust prefix the same for the entire test set or the same within a batch but different across batches or unique for every test example this is an important point to assess whether the experiments are in an online setting where test data arrives instream or not 2 section 5 is not very convincing to me 1 there are only several case studies without any quantitative results i think it may appear in the appendix only or just uses a short paragraph in the main body because the statements in section 5 can only be good hypotheses which are not wellsupported by quantitative evidence also attention is not a convincing proxy for the explanation see point 2 yet it takes more than one page in the current version 2 the interpretation from the perspective of attention weights bothers me a bit attention as an explanatory tool is known to not be faithful 1 a larger attention weight does not necessarily mean the final prediction depends on it more than others and vice versa thus maybe not overread it too much as said above it is ok to do it this way in the appendix but using over one page of the main content for this is not convincing to me 3 in the attention visualization figures eg figure 4 why does a word attend itself as well instead of only attending to the contexts for lm i think that the diagonals in the attention figure should be zero or am i missing something also the presentation could be improved here in the figure caption you can explain what the rows and columns mean in the visualization to make it easier to read 4 section 6 is difficult to follow without reading the appendix and it is disconnected from the rest of the paper in terms of the paper structure having a theoretical interpretation section after experiments at the end of the paper is not a good presentation structure in my view if you think section 6 is important to have move it before the experiment section and clarify more details to make it more 
selfcontained if it is not very important you can just put it into the appendix while briefly mentioning it in the main body 5 besides the presentation issues above there are some other minor presentation places which could be improved eg 1 eq 4 an undefined variable xc suddenly comes in without explanation 2 better to explain eq 8 with more text given that this is an important equation for the proposed method 1 serrano et al is attention interpretable acl 2019 there are novel contributions of the paper both technically and empirically however some major analysis sections are not convincing and a significant portion of the presentation needs to be improved docsepthe paper investigates the robustness of prefixtuning methods and proposes a simple yet effective method to improve the robustness the experiments show that the proposed method can largely improve the performance in adversarial settings and slightly improve the performance in clean settings the authors study a novel problem in lightweight finetuning methods most studies aim to match the performance of full model tuning via updating a subset of parameters but rarely study the robustness of lightweight finetuning methods strengths 1 the authors study an important and novel problem 2 the authors provide a simple yet effective method with motivations the proposed method is shown to be effective even combining with adversarial methods weakness 1 the authors argue that robustness is important for lightweight tuning methods but i still think it is better to provide a comparison between lightweight tuning methods will full tuning methods provide a basic starting observation about whether lightweight methods bring more challenges on robustness or not the studied problem is important and novel the proposed method is simple and clear the experiments justify the effectiveness of the proposed method docsep this paper introduces a tweak to prefixtuning to make it more resilient to adversarial perturbations of the input the idea is to add a batchlevel prefix at inference to the original one which enhances robustness critically robust prefixtuning rpt does not require auxiliary model updates or storage in contrast with other robustness methodsthus this approach makes prefixtuning more robust while preserving its modularity and low storage requirements the authors conduct experiments on 3 text classification tasks 5 textual attacks and different training regimes normal training adversarial training and adversarial data augmentation in nearly all instances their method improves robustness sometimes considerably so while preserving the accuracy on the original text the authors also present rpt from an optimal control perspective and conduct a qualitative study that shows how rpt impacts attention weights review note i am not familiar enough with optimal control theory to evaluate the soundness of section 6 appendix e i will leave it outside the scope of my review and hope other reviewers can fill in the gap pros the results are quite convincing in nearly all settings aside from important caveats in cons while some scores are still quite low after rpt they are consistently better than the baselines notably the method can provide additional gains when used with other defense methods such as adversarial data augmentation and adversarial training the need for this method is wellmotivated the paper is good at emphasizing different priorities rather than only the end accuracy for instance time to accuracy in fig 2 although there are some questions about 
the validity of using attention weights see httpsarxivorgabs190210186 which likely should be mentioned as a caveat i found section 5 insightful cons inference batch batch size and its importance the batch size at inference seems like an important variable indeed my understanding of section 3 and eq 8 is that the specific batches used at inference will play an important role as they impact ppsi this is my key concern with this paper as it brings up quite a few issues that should be mentioned inference for a datapoint depends also on other datapoints in the batch this is different enough from other ml setups that it should be highlighted it also causes reproducibility issues for instance batch norm avoids this by fixing batch statistics at inference it is not clear that the method works for low inference batch size opt of eq 8 the inference batch size used is not mentioned anywhere if the method requires batch size some n or if performance varies widely with batch size this seems a strong assumption that should be made clear you do not always get several samples of the attack on your system it is not clear how well the method works when the inference batches are a mix of unperturbed samples and perturbed ones this seems like a more realistic attack scenario it is not clear how well the method works if there are different attacks in the inference batch which also seems like a more realistic threat model overall i feel like some answers to the above would diminish my concerns most notably what is the inference batch size used in the experiments how does performance vary with inference batch size one or two settings should be enough does the method work when only x of n samples in the test batch are adversarial does the example work when they are two types of attack dataset statistics and mode prediction it would be helpful to remind the reader for each dataset how many classes they are what the moderandom pred accuracy is to help interpret results for instance on table 1 under pwws improves from 1664 to 50 for sst2 and 25 to 34 on snli but that is the same as the accuracy of a random predictor writing the writing could be improved substantially some examples p2 remaining the pretrained models unmodified keeping also on p3 remaining its lightweightness i suggest doing another pass as it does not align with the quality of the rest of the paper related work i feel like other methods for parameter efficient transfer learning such as adapters normal prompting should be quickly mentioned same goes for comparable approaches such as ptuning as mentioned earlier i would caveat section 5 with discussions of the validity of using attention for explanation such as the attention is not explanation paper edit given the authors response i am raising my score slightly there are still some concerns over the threat model but some of my questions on the impact of test batch size have been answered the motivation experimental settings and results look good overall one major caveat however is that it is not clear how flexible the inference setup is this is critical as the predictions for a datapoint at inference depend on the other datapoints in the batch currently it seems like the inference is being done with a batch of all the same attack these assumptions are simply not realistic as a threat model without understanding how this performs under a more realistic threat model i cannot score this paper higher i have highlighted experiments that would provide more realistic results despite my marginally below score 
i do not think the paper can be accepted without an answer on this point reject is too harsh for a paper that is otherwise promising the paper would also benefit from another writing pass mentioning missing related work and clarifying some properties of the dataset docsep the paper is a focused contribution at the intersection of defending against text attacks and prompt tuning the paper requires the reader to understand the context and motivation of several different things before understanding the contribution of the paper first adversarial examples can attack a text classifier such as uat second various techniques defend against these attacks in different way however these techniques requiring modifying the parameters of the lm or other additional computational burdens these techniques can be used with prompt tuning but then the benefits of prompt tuning go away hence there ought to be a technique that improves the robustness of prompt tuning without removing its benefits over regular finetuning the paper proposes such a technique and do experiments for three text classification tasks and various adversarial attacks strengths the problem being addressed is cutting edge we are barely understanding prompt tuning and this paper already jumps ahead to adversarial defenses the approach seems novel though i havent searched for related literature carefully clear logic motivating the problem and what constraints need to be accounted for while solving it clear research question weaknesses the experiments are ok but could be a bit more streamlined i guess two things came up when i was looking at them it may be good to expand the scope of the paper to generation tasks as these are likely more suceptible to adversarial attacks what i mean by this is that the worse case scenario is much worse for generation tasks while in binary classification the worse case is bad accuracy for adversarial attacks on generation tasks potentially very harmful text could be generated which is much worse than simply getting the answer wrong the other thing is that it is hard to contextualize how good the numbers in your framework are for example i have no idea how good 52 on vipr for sst2 is if previously proposed defenses are able to get 90 on the same task then it is unlikely that the proposed method will gain traction even if it is more computationally efficient with prefix tuning so it would strengthen this paper a lot to have this comparison other comments i think the paper may benefit from spending just a little more time describing how susceptible prefix tuning is to adversarial attacks compared with regular finetuning it seems like it is very susceptible especially for easy tasks like text classification appreciate the candidness in figure 2 in showing that adversarial training takes longer it will be good to see if the proposed method not only improves performance against adversarial attacks but also if it improves performance against different paraphrases of expressing the context and question for example minor points the grammar of the paper will benefit from having a native english speaker proofread it eg as well as remaining the pretrained models unmodified can be rewritten as without modifying the pretrained model parameters since a lot of work was put into the paper i will pay my respect and make a picky point the bibliography could be cleaned up a bit eg capitalize roberta correctly add urls consistently capitalize all conference names eg ieee transactions the paper studies the very timely topic of 
adversarial attacks against prefix tuning the paper proposes a method that maintains the advantages of prefixtuning against finetuning a formidable problem the experiments are ok as of now though expanding the scope to generation tasks could make the impact substantially larger the characterization of the method could also be further improved by providing finetuning with and without defenses as baselines overall i lean towards acceptance though i am not an expert in adversarial attacks and defenses ### Summary:
this paper tackles a relatively novel problem that is the result of recent work on prefix tuning specifically the need to be robust to adversarial perturbation in the context of prefix tuning and they show a method for achieving this without requiring more storage and obtain good results there were some clarity issues that were addressed by the reviewers during the rebuttal the main issue that was pointed out was the effect of batch size on the success of the model the authors gave experiments with batch size 1 where results are less impressive but still outperform the baseline also the authors say that for now they are not considering the case where only some of the elements in the batch are adversarial which i think is ok for a research paper on such a cuttingedge topic thus the result of the discussion is to lean to accept this paper given that it is now more clear has experiments that make it clear what the benefits are in realistic settings and obtains improvements
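Editorial note: the following is a minimal, schematic sketch of the test-time mechanism the first review above describes (tuning an extra batch-level prefix so that hidden activations move toward a canonical manifold estimated from correctly classified training data). All names (hidden_fn, proj, etc.) are hypothetical, the distance-to-subspace loss is a stand-in, and the paper's actual layer choices and objective may differ.

```python
import torch

def tune_batch_prefix(hidden_fn, batch, proj, steps=10, lr=0.1, prefix_len=4, dim=768):
    """Tune an extra batch-level prefix at inference time (schematic sketch).

    hidden_fn(batch, extra_prefix) -> hidden activations of shape (B, T, dim)
        from the frozen LM with the frozen task prefix plus the extra prefix.
    proj: (k, dim) orthonormal basis of a "canonical" activation subspace
        estimated from correctly classified training examples (assumption).
    """
    extra_prefix = torch.zeros(prefix_len, dim, requires_grad=True)
    opt = torch.optim.Adam([extra_prefix], lr=lr)
    for _ in range(steps):
        h = hidden_fn(batch, extra_prefix)   # LM and task prefix stay frozen; only extra_prefix is updated
        h_canon = h @ proj.T @ proj          # projection onto the canonical subspace
        loss = ((h - h_canon) ** 2).mean()   # pull activations toward the manifold
        opt.zero_grad()
        loss.backward()
        opt.step()
    return extra_prefix.detach()             # used for the final predictions on this batch
```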
Below is given review of a research paper from conference journal. Please write a summary of the review. ### Review: this paper studies the problem of graph generation and proposes a new model using both micro and macro level supervision information in a graphvae architecture fitting the adjacency matrix is the micro supervision and three kinds of graph statistics ie degree histogram number of triangles and higherorder proximity relations are adopted as macro supervision the objective consists of elbos modeling the micromacro loss and a kldivergence between the prior and the approximate posterior of the hidden representation the proposed model is validated on 3 synthetic and 2 realworld graph datasets the experimental results show the proposed model generates graphs with a lower discrepancy between generated and test graph embeddings than graphs generated by competitors in terms of mmd rbf and f1 pr strong points s1 the macro objective of fitting graph statistics in graph generation is novel to me s2 the paper proposes a general micromacro elbo as the objective and then implements the elbo by graph neural networks s3 the experimental results show the proposed model outperforms the competitors weak points w1 it is not clear how the graph generation task benefits from fitting graph statistics in other words what is the limitation of only fitting the adjacency matrix in graph generation along this line i have a concern about what kind of graph statistics should be chosen as targets this paper selects three graph statistics but does not present an explanation for this selection w2 the efficiency both calculating and fitting graph statistics bring new computing costs eg the complexity is o(n^3) to compute the transition probability matrix w3 it is not clear how to form descriptor functions with respect to vector label histogram and triangle count and how to guarantee the descriptor functions are differentiable yes docsepthe contributions of this paper were to model graph data jointly at two levels a micro level based on local information and a macro level based on aggregate graph statistics positives 1 the idea of this work is interesting and novel as it tries to use a probabilistic model to explore the local and global graph statistics 2 the performance of this work is very good compared to the existing graphvae and the code is available negative 1 the scalability of this work may be a challenge the complexity of the descriptors is either o(n^2) or o(n^3) also the algorithm requires predefined graph descriptors to compute the graph statistics 2 the algorithm part is straightforward basically it designs a mm loss into one unified framework it seems that many ganbased models can achieve a similar function any discussion yes docsepthis paper jointly models micro and macro level graph information for graph generation a principled joint probabilistic model for both levels is proposed and an elbo training objective is derived for graph encoderdecoder models extensive experiments and visualization results validate the efficacy of adding micromacro modelling to graphvae models for graph generation strengths 1 this paper is well motivated and the idea of utilizing nodelevel properties and graphlevel statistics to constrain graph generation seems reasonable 2 the design of the micromacro mm loss is clear and theoretically solid 3 the authors have done a thorough analysis of the proposed model and validated its effectiveness through qualitative and quantitative evaluation the main claims are supported by the experimental results weaknesses my main
concern is that the proposed objective function is only applied to graphvae following an ab design although the experimental results are satisfactory on graph generation it remains unclear whether the benefits of micromacro modeling would generalise to other models the authors have adequately discussed the limitations of their work docsepthe authors of this paper newly presented a function that can reflect graph statistics in the graph generative model they have shown various experiments and visualizations proving graph statistics are wellreflected in addition designing an objective function that reflects different graph statistics in a simple way is a significant contribution originality yes the proposed method seems to be original in that the authors proposed a new but simple vaebased objective function to reflect graph statistics quality neutral since the purpose of this study is to generate graphs that reflect graph statistics theoretical support and experiments for the purpose are well shown however the performance on realworld datasets such as the molecule dataset is marginal in particular when only one graph statistic is used the performance degradation is greater than that of graphvae which needs clarification since it shows good performance only when all three statistics presented in the paper are used it is necessary to explain why these three combinations were selected and what synergy they show clarity yes there was no difficulty in understanding what the paper was trying to say and it shows sufficient proof of the formula i think it would be easier to understand if an architecture overview was attached i suggest adding a picture of the structure for the readers clear understanding significance neutral this model seems to have particular strengths in experiments using synthetic datasets in addition it seems to be a good contribution that it showed a higher performance improvement compared to graphvae however as discussed in the paper performance on realworld datasets seems to be more important to contribute in practical areas such as molecule discovery and medical discovery however the experimental results presented in the paper do not support this additional experiments will be needed to show that graphs are well generated using the qm9 dataset shown in graphvae the positive social impacts presented in this paper include molecular presentation and medical discovery however since the proposed model shows weak performance on realworld datasets this is seen as an important limitation ### Summary:
this paper proposes a new generative model for the generation of graphs different from most existing approaches the proposed method considers both node and graph level properties to capture highorder connectivity and overcome the sparsity of any observed graph the writing is generally clear and the results are convincing the reviewers are overall positive with some concerns on the motivation which has been addressed well by the authors in the rebuttal some other questions raised by the reviewers are also appropriately addressed which led to the increase of some scores the downside of the approach lies in the time complexity of collecting the macrolevel statistics but overall it is a good paper worth accepting
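Editorial note: one of the reviews above asks how descriptor functions such as a degree histogram can be made differentiable. The snippet below is a generic soft-binning construction shown for illustration only; it is not claimed to be the descriptor used in the paper.

```python
import torch

def soft_degree_histogram(edge_probs, n_bins, temperature=1.0):
    """Differentiable degree-histogram descriptor from a predicted adjacency.

    edge_probs: (n, n) symmetric matrix of edge probabilities (diagonal ignored).
    Returns a length-n_bins vector; soft binning keeps gradients flowing,
    unlike a hard histogram. Generic construction, not the paper's descriptor.
    """
    n = edge_probs.size(0)
    probs = edge_probs * (1 - torch.eye(n))            # drop self-loops
    degrees = probs.sum(dim=1)                          # expected node degrees, shape (n,)
    centers = torch.linspace(0, n - 1, n_bins)          # bin centers over possible degrees
    # soft assignment of each degree to the bins (softmax over negative squared distance)
    logits = -(degrees.unsqueeze(1) - centers.unsqueeze(0)) ** 2 / temperature
    assign = torch.softmax(logits, dim=1)               # (n, n_bins)
    return assign.sum(dim=0)                            # expected counts per bin
```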
Below is given review of a research paper from conference journal. Please write a summary of the review. ### Review: the authors study a prevalent medical problem where the treatment aims to keep a physiological variable in a safe range and preferably close to a target level they propose escada a multiarmed bandit algorithm tailored for the above leveling task to make safe personalized and contextaware dose recommendations strength 1 they consider constraints on instantaneous outcomes and propose efficient algorithms to achieve their goal 2 they provide safety guarantees and upper bounds on cumulative regret weakness 1 is alpha a tuning parameter there are too many tuning parameters which makes the method hard to use in practice how would the choice of tmin tmax affect the results of the algorithm 2 what is the complexity of the algorithm how does it depend on the cardinality of dose sets please see the weakness and questions docsepthe authors investigate a problem that they refer to as leveling in short it is a prevalent medical problem in which the treatment aims to keep a physiological variable in a safe range and preferably close to a target level their proposed algorithm is a multiarmed banditbased method for the leveling task which aims to make safe personalized and contextaware dose recommendations as a theoretical contribution they derive probability upper bounds on its cumulative regret and safety guarantees additionally they conducted in silico experiments on the bolusinsulin dose allocation problem in type1 diabetes mellitus disease by comparing their algorithm against the gpucb baseline the dosefinding problem is challenging given the ethical concerns from online experimentation with real patients this paper uses a simulator to provide experimental results however it compares its performance against a clinician for virtual adult patients it would be worth including the qualifications of the clinician that evaluates the unseen meal events possibly in the appendix as a minor improvement the authors may increase the font size of figures such as figures 3-6 the authors need to include the limitations of their method in the concluding remarks docsepconsidering the safe dose allocation problem in precision medicine this paper proposes a contextual multiarmed bandit algorithm with the objective to keep the outcomes close to a target level the proposed algorithm has high probability upper bounds on cumulative regret and also possesses a twosided safety guarantee strengths the paper is well organized the objective is an interesting problem instead of maximizing the outcome this paper aims to keep the expected outcome close to a target level the taco algorithm is novel both exploration and exploitation are addressed the action when all safe doses are suboptimal is interesting weaknesses the figures in section 5 are too small to read none docsepthe paper introduces and evaluates a multiarmed bandit mab algorithm for insulin dose allocation related to type1 diabetes t1d its technical contributions in enhancing safety in t1d by preventing hyperglycemia and hypoglycemia through contextaware and personalised dosing of insulin before meals are fairly convincing based on evaluations on an appropriately chosen simulator however the evaluation should have differentiated the 30 simulated patients instead of presenting averaged results over all these patients regardless of their age for instance glucose control of adolescents is substantially harder than for adults and hence i am used to separating adults adolescents and children in evaluating algorithms and medical interventions similarly in order to contribute to precision medicine as argued by the paper for example the target ppbg level should have been adjusted by this age group finally i would have wanted to know more about the clinician that was one of the comparisons in the evaluation i really liked this more humancentric aspect of the evaluation but would like to know more about this experiment so that future studies could follow the same experimental design if they wanted also it was unclear how the simulated patient cases to be analysed by this clinician were chosen and how the human judgement and decisionmaking was implemented in this part of the experiments last but not least i could not find details about this experiment eg the cases analysed implementation of the assessment task etc involving a human participant ie the clinician as a participant or obtaining the related ethics approval and informed consent from the paper or its supplement clinical aspects of the study could be clearer for example the guidelines at https://www.jmir.org/2016/12/e323 may help in particular i encourage clarifying the rationale clinical implications and limitations of the model see the jmir paper for further details about these topics ### Summary:
i have read all comments and responses carefully the reviewers recognized that the problem was a challenging one and that the paper provides both practical and novel tools and theoretical analysis however the reviewers pointed to the lack of numerical studies in the paper for example more details about the human clinicians and the patients that being said the authors have addressed most constructive comments given by reviewers overall reviewers agree that this is an important and yet underexplored problem and the authors have provided useful contributions i therefore have decided to recommend the acceptance of the paper
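Editorial note: a generic sketch of the "leveling" idea the reviews above describe (recommend a dose whose predicted outcome stays in a safe range around a target level, using posterior confidence bounds). This is an illustration only and is not the paper's ESCADA algorithm; the GP model, the safety rule, and the fallback action are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def recommend_dose(doses_tried, outcomes, candidate_doses, target, margin, beta=2.0):
    """Pick a dose whose predicted outcome is close to the target while its
    confidence interval stays inside the safe range [target - margin, target + margin]."""
    gp = GaussianProcessRegressor(normalize_y=True)
    gp.fit(np.asarray(doses_tried).reshape(-1, 1), np.asarray(outcomes))
    mu, sd = gp.predict(np.asarray(candidate_doses).reshape(-1, 1), return_std=True)
    # a dose is deemed safe if both its optimistic and pessimistic bounds stay in range
    safe = (mu - beta * sd >= target - margin) & (mu + beta * sd <= target + margin)
    if not safe.any():
        return candidate_doses[int(np.argmin(np.abs(mu - target)))]  # fallback: closest mean
    idx = np.where(safe)[0]
    return candidate_doses[int(idx[np.argmin(np.abs(mu[idx] - target))])]
```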
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: summary they investigated the effectiveness of selfsupervised learning ssl for one class classification occ here is what i think are contributions relative to existing literature empirically improved auc for multiple occ datasets here are the techniques that were useful used distribution augmentation distaug for learning representation for occ and in ablation studies show distaug leads to improvement over standard augmentation used kde and ocsvm on top of learned representation and showed improvement over using the classification head training during ssl used a smaller batch size 32 used a mlp head during ssl the authors also included a section of visualizing explanation using existing techniques to illustrate how their method leads to more reasonable decisions strength the paper is well written i appreciate the clarity and good coverage of the current literature the ablation studies are thorough which make the empirical improvement solid concerns the uniformity argument is weak the authors state the empirical improvement on occ using their method hinges on the distaug technique which is motivated to reduce the uniformity of the learned representation when achieved the inliers will live in the dense regions on the hypersphere and outliers will live on the nonoccupied region this assumes all the test inputs are projected onto the hypersphere including the outlier from my understanding the authors used f for occ not phi which is the normalized ie hypersphere output in this case there are many ways that occ can be achieved even if phi of the training inputs are uniform on the hypersphere suppose both the inliers and outliers after f live on hyperspheres just with a different radius then after normalization they can both be uniformly distributed on the same hypersphere one question is if there is a difference in using f or phi for occ furthermore the authors try to back this claim up using figure 4 but i cannot seem to connect the dots here they authors used mmd to a uniform distribution to measure how uniform the representations are the less uniform ie higher mmd the better it should be for occ the correlation between mmd and auc does not seem to be very strong eg for the da gf variant the 2 metrics actually seem negatively correlated this again makes me wonder if less uniformity really is why their technique led to an improvement in occ if this is not why then we should find another explanation for why there was an improvement there is always the concern that the improvement comes from extra hyperparameter tuning did the author also tune for good hyperparameters for the non distaug version as described in a3 overall a fairly thorough empirical investigation into better techniques for using ssl for occ it can be a decent contribution along the lines of one of the improve techniques papers if the above concerns can be addressed in fact i think not focusing on selling distaug but really identifying what contributes to the gain empirically makes this paper stronger references distaug heewoo jun rewon child mark chen john schulman aditya ramesh alec radford and ilya sutskever distribution augmentation for generative modeling in proceedings ofmachine learning and systems 2020 pages 1056310576 2020 docsepthis paper presents a twostage representation learning approach to deep oneclass classification in the first stage a mapping f to a versatile highlevel latent representation is learned 
using selfsupervised learning for a contrastive learning proxy task in the second stage the same mapping f is used to map the data to the latent space whereafter a traditional oneclass classifier such as ocsvm or kde is applied it is shown that the oneclass task puts somewhat different requirements on the representation than with a multiclass classification task both 1 in terms of uniformity of the data points in the representation which is desired for multiclass tasks but not fully beneficial for oneclass tasks and 2 in terms of minimizing or maximizing the distance between different instances of the negative class for multiclass tasks you want the distances maximized while for oneclass tasks you want the negative inlier examples close together 1 is addressed by using smaller batch sizes in training while 2 is addressed by distribution augmentation that will render a compact inlier distribution in the representation this paper is overall a good paper that will be interesting to a certain audience at iclr it is well written well motivated with a clear argument and as far as i can see technically correct the experiments are well designed valid and exhaustive with comparison to a range of baselines as well as an ablation study moreover the visual explanation of what the different representations have focused on is highly interesting i appreciate the comprehensive grounding of the contribution in both new and old related work the reference list contains all the relevant state of the art as well as references to more classical work such as 1314294753 the paper is not highly seminal but more incremental in nature putting together and modifying existing methodology however since it is very well done the work is absolutely worth acceptance a criticism is that there are some repetition in the line of argument for example between 212 second paragraph and 213 first paragraph a more compact eg section 21 would render more space for results which now have been pushed to the appendix to a large degree another suggestion for improvement could be to indicate more clearly in figure 1b that f is kept fixed in this step this could be done eg with a different color of the f box in figure 1b docsepthis paper proposes a framework for deep oneclass classification an example application being anomaly detection the basic idea is to combine selfsupervised representation learning eg through a proxy task such as rotation prediction or contrastive learning with a classical approach to oneclass classification such as oneclass svm or kde this is in contrast to existing methods for deep oneclass classification that use simulated outliers to form a surrogate classification loss and then train endtoend the paper further improves on the first stage of representation learning by introducing modifications to contrastive learning to make it more appropriate for oneclass classification the main insight is to introduce distribution augmentation where geometric transformations of images such as rotation are treated as separate instances to be separated from the original view this is motivated from the perspective of reducing uniformity of the inliers across the unit hypersphere to allow for better separation from outliers positives strong empirical results with improved performance over existing methods for oneclass classification validation of two stage framework by showing improved performance with rotnet representation with kde detector versus rotnet endtoend 20 validation of improvements to contrastive learning for oneclass 
classification such as distribution augmentation batch size selection and use of an mlp projection head minor negatives i think the paper would flow a little better if the related work section was moved earlier in the paper rather than coming only after the detailed description of the method in describing distribution augmentation and contrasting it with standard data augmentation for contrastive learning it is clarified that the two sets of augmentations are disjoint i would have found it helpful if the paper was explicit about which data augmentations were used for the contrastive learning as this did not seem to be stated in the paper overall i found this to be a nice paper with strong empirical results docsepthis paper proposes an anomaly detection approach that has two stages a first stage for learning a feature representation and a second stage to train a oneclass classifier based on either ocsvm or kde the main contribution of the paper is the feature representation learning that relies on contrastive learning to optimise a selfsupervised loss function which minimises the distance of samples from the same image augmented with different data augmentation functions and maximises the distance of samples from different images augmented with the same augmentation functions the data augmentation functions used were horizontal flip and rotation by 0 90 180 or 270 degrees results on the public datasets cifar10 cifar100 fashion mnist and catvsdog show that the proposed method has better anomaly detection measured with auc than the state of the art the paper also displays qualitative anomaly detection results and an ablation study that shows a how close to a uniform distribution on the hypersphere the feature representations are as a function of batch size and b how auc is affected by batch size and depth of mlp projection heads this paper has outstanding results on the datasets cifar10 cifar100 fashion mnist and catvsdog but it is missing results on a challenging dataset such as mvtec 51 it is also missing results on anomaly localisation eg venkataramanan shashanka et al attention guided anomaly detection and localization in images arxiv preprint arxiv 1911.08616 2019 so it scores slightly below acceptance for results given that it is hard to assess how the method would perform in a more realistic anomaly detection problem in terms of the proposed method it is quite similar to 2021 with the difference that it uses more data augmentation functions and relies on a contrastive loss therefore it scores slightly below acceptance on novelty as well one argument that seems contradictory is the one for class collision and uniformity in particular if pretraining forces all inlier samples to be on a hypersphere wouldnt it be advantageous to have a uniform distribution given that outliers could be easily detected as not lying on the hypersphere of course this would probably require a change in the ocsvm classifier can the authors comment on that also the argument in sec 213 on the effect of projection heads says that I(g(f(x)); x) <= I(f(x); x) so f can retain more information than g thus more suitable for downstream tasks that are not necessarily correlated with the proxy tasks if we push this argument then I(f(x); x) <= I(x; x) so we should use x for downstream tasks can the authors comment on that ### Summary:
this paper investigates the oneclass classification problem proposing to learn a selfsupervised representation and a distributionaugmented contrastive learning method thorough results and analysis show that the method is effective and back up the authors claims about the underlying mechanism for why it works in general reviewers thought the paper was wellwritten wellmotivated and wellargued and presents a thorough related work comparison and experimentation though the novelty was found to be somewhat low several reviewers brought up some possible weaknesses in terms of demonstrating uniformity of the representations as well as suggesting additional datasets through an interesting discussion the authors provided additional visualizations and results on the mvtec dataset this further bolstered the arguments in the paper overall this is a strong paper with a clear argument and contribution and so i recommend acceptance
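Editorial note: a minimal sketch of the two-stage recipe discussed in the reviews above: freeze a self-supervised encoder, then fit a classical one-class model (OC-SVM or KDE) on its features. Function names and hyperparameters are illustrative; whether to score raw features f(x) or l2-normalized features phi(x) is exactly the question one review raises, and this sketch simply picks the normalized option.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neighbors import KernelDensity
from sklearn.preprocessing import normalize

def fit_occ_on_features(train_feats, detector="kde", nu=0.1, bandwidth=1.0):
    """Second stage: fit a classical one-class model on frozen 2D feature arrays."""
    feats = normalize(train_feats)                     # l2-normalize (one of the two options)
    if detector == "ocsvm":
        model = OneClassSVM(kernel="rbf", nu=nu).fit(feats)
        score = lambda x: model.decision_function(normalize(x))   # larger = more inlier-like
    else:
        model = KernelDensity(bandwidth=bandwidth).fit(feats)
        score = lambda x: model.score_samples(normalize(x))       # log-density as inlier score
    return score

# hypothetical usage with features from a frozen self-supervised encoder:
# scores = fit_occ_on_features(train_feats)(test_feats)
```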
Below is given review of a research paper from conference journal. Please write a summary of the review. ### Review: the paper proposes a nonconvex strongly concave minmax formulation for opauc and tpauc maximization and employs a stochastic minmax algorithm with o(1/epsilon^3) complexity strengths 1 the paper is wellorganized and written clearly 2 the formulation conversion of pauc is novel weaknesses 1 the paper employs an algorithm with a very strong bad assumption that lg in assumption 1 can be infinity 2 contribution is not significant enough please see above docsepthis paper proposes novel algorithms to improve the efficiency of partial auc optimization specifically they present a reformulation scheme to transform the pairwise nondifferentiable objective function into an instancewise differentiable one with an approximation scheme moreover they provide generalization and optimization guarantees for their proposed method the extensive experiments in this paper show that the proposed method can outperform the state of the art most of the time pros this paper presents an efficient reformulation scheme to make a complicated problem much more practical to solve in other words both the number of epochs and the periteration running time could be reduced significantly this proposed method also has a strong and comprehensive theoretical guarantee in terms of convergence and generalization moreover technical details are nontrivial i believe these merits can benefit the audience from a broad range of the ml community the experiments are extensive most of the competitors are quite sota the paper presents a solid work with the possibility to be employed in realworld problems i only have some minor concerns which i hope can be addressed during the rebuttal cons the math is dense even in the main paper though i can understand most of the details i think the authors can add more details and intuitive content to guide readers unfamiliar with auc i only see the performance comparisons in the main paper i think efficiency is more important in this paper since the goal is to accelerate so i would also like to see the running time comparisons in the experiments docsepthis paper focuses on optimizing the oneway twoway partial auc metric which is challenging since a ranking constraint is involved in the objective function interestingly this paper presents a simple instancewise reformulation of the original objective which is unbiased in an asymptotic sense it turns out that the complicated problem could be solved with an accelerated minimax optimization problem moreover the experiments also show its superiority in most cases strength 1 the reformulation of the original problem is impressive to me where the ranking constraints are canceled by conditional expectation and a differentiable reformulation of the topk bottomk ranking 2 the generalization analysis is interesting where the minimax reformulation can also simplify the derivation of uniform convergence bounds moreover the
differentiable formulation also allows the analysis to deal with realvalued hypothesis classes which previous works often fail to do 3 though the convergence analysis is an existing result it is also good to see that the convergence rate could decrease to ot3 due to the reformulation weakness 1 it seems that there are some typos in the proof for example in line 529 in the decomposition of the conditional risk i think ell should be replaced with ell01 the same problem exists in line 578 2 in figures 4-5 i can only see the efficiency improvement in the number of iterations but the authors also claimed that the reformulation could improve the periteration efficiency which i do agree with i think it may be better if they could give some empirical comparison in terms of this all my concerns are presented in the weaknesses and questions this paper focuses on designing an efficient and asymptotically unbiased algorithm for pauc which seems to have no potential negative social impact ### Summary:
the paper presented a novel reformulation of maximizing pauc in an asymptotically unbiased and instancewise manner based on this formulation the authors presented an efficient stochastic minmax algorithm for opauc and tpauc maximization convergence and generalization analyses were conducted the concerns and questions were well addressed in the rebuttal following the recommendation from the reviewers i recommend its acceptance
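Editorial note: for context on the metric discussed above, the snippet below computes the (normalized) one-way partial AUC empirically by comparing positives only against the top-scoring fraction of negatives, i.e. the false-positive range [0, beta]. This is the evaluation metric only; the paper's contribution is a differentiable instance-wise training surrogate, which this sketch does not implement.

```python
import numpy as np

def one_way_partial_auc(scores_pos, scores_neg, beta=0.3):
    """Empirical normalized one-way partial AUC: the AUC restricted to the
    floor(beta * n_neg) highest-scoring negatives (false-positive rate in [0, beta])."""
    scores_pos = np.asarray(scores_pos)
    scores_neg = np.sort(np.asarray(scores_neg))[::-1]
    k = max(1, int(np.floor(beta * len(scores_neg))))
    hard_neg = scores_neg[:k]                                    # hardest negatives only
    wins = (scores_pos[:, None] > hard_neg[None, :]).mean()      # positive ranked above negative
    ties = (scores_pos[:, None] == hard_neg[None, :]).mean()
    return wins + 0.5 * ties
```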
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper proposes a method to use clip for ordinal regression using a combination of soft labels and prefix tuning with an interpolation scheme added to enforce order between the learnt prompts strengths the idea of using context from natural language for ordinal regression tasks is interesting and worth exploring the proposed interpolation method to allow regressing in between ordinal ranks is also clever weaknesses the main weakness to me is that an obvious baseline is missing from an engineering perspective the paper uses a vgg16 network pretrained on imagenet as a trainable vision encoder for several datasets the vgg16 network has 138m parameters similar to the vit in the official clip release the natural baseline would then have been to train a linear probe atop the clip vitb to simply predict the rank as a classification task second the results in fig 3 are very surprising if i am reading it correctly about 35 of the rank prototypes violate the ordinal property this is a substantial portion and suggests the proposed method cannot grasp anything on the long tail of the distribution third why do language priors help with this task at all its difficult to conceive that clip has learned a meaningful representation of say the number 72 even if clip did have a very meaningful representation of some arbitrary number the rank embeds are entirely learned how is the language information being used t1 is missing variance numbers for the comparison with coop as evident in t2 coop and ordinalclip are very close with large variances the authors should include a statistical test to confirm that the proposed method is indeed superior to coop there are no variance numbers again in table 7 summary the technique presented is a somewhat straightforward extension of coop with the main novelty being that the literal rank names are replaced with soft class names the results are in general very close to that of coop and some tables are missing variance information while others have variance information given how close the results of the method are to coop and how high the included variances are variance results should be included for all tables and statistical significance tests conducted an important baseline linear probing is missing furthermore it is unclear to me why language information can help with this kind of task at all i do not see any experiments to explain why language information can help with this kind of task or provide insight into what the language model is adding here finally the result showing that 35 of the rank embeddings do not obey the ordinal property is troubling especially since the broken rank embeddings are clustered in the long tail this suggests the error distribution of the model is highly biased and the model does not work well at all for the tail limitations are not discussed docsepthe authors propose a languagepowered paradigm for ordinal regression tasks by learning rank prompts named ordinalclip the ordinalclip can leverage rank categories of language to explicit learning ordinal rank embeddings which will preserve the order of the language prototypes in the language latent space in the three regression tasks including age estimation historical image dating and image aesthetics assessment the experimental results show good performance than other baseline models in addition for fewshot learning the method also gains improvement the overall structure is 
wellorganised the paper has a clear motivation and is innovative for the regression field strength 1 the innovative languagepowered paradigm for ordinal regression uses language prototypes and learned rank prompts which are interesting and valuable 2 the good performance shows the effectiveness of ordinalclip 3 the approvement and experiments of the appendix are detailed weakness 1 the statement of introduction related works and problem statements should narrow down the ordinal regression to the visionlanguage or cv ordinal regression task cuz there are some pure language ordinal regression tasks 2the two loss imagetotext loss and a texttoimage loss should be introduced in detailed and the reason for using kl 3 we choose to maintain the order of rank embeddings to preserve the order of the language prototypes this statement is unclear how to maintain the order of the languag yes docsepthe authors propose a languagepowered model for ordinal regression based on clip the language prototypes are constructed from sentences with rank categories via the clip paper encoder and then optimizing the clip model by language prototype and image feature matching to further boost the ordinality this paper introduces the learnable rank prompts by interpolation from the base rank embeddings multiple experiments on age estimation image aesthetics assessment and historical image dating show that the proposed paradigm surpasses other related methods strengths 1 this paper introduces the contrastive languageimage pretrained clip model as a paradigm for ordinal regression is novel to me 2 the proposed language prototypes and learnable rank prompts are insightful extensions of the clip model for the ordinal regression task 3 the proposed interpolation learning rank prompts contribute to the output of smooth language prototype similarity trends which represent well learned ordinality weaknesses 1 the writing of this paper should be improved including motivation related work and task description etc 2 the motivation for the two proposed interpolations linear interpolation and inverseproportion is unclear and lacks visualization for comparison other than numerical comparison the authors have discussed the limitations and potential negative effects of their work ### Summary:
the paper proposes a languagepowered model for ordinal regression tasks based on clip language prototypes are constructed from sentences with rank categories via the clip text encoder and the clip model is then optimized by matching language prototypes and image features to further boost the ordinality the paper introduces learnable rank prompts obtained by interpolation from the base rank embeddings while the proposed approach builds on coop reviewers agree the contribution is significant enough and original enough for neurips regarding the experimental section the paper shows good performance compared to baseline models on three regression tasks age estimation historical image dating and image aesthetics assessment concerns regarding the writing of the manuscript were raised by reviewers pfax and rx3e but seem to have been addressed during the rebuttal phase
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this work concerns the asymptotic behaviours of gaussiansmoothed 2wasserstein distance in particular the authors provide bounds on the difference between wasserstein distances of a pair of discrete measures and their smoothened versions when the variances of gaussian kernels are small the authors consider two scenarios when a perfect matching between two discrete measures exists and when it does not for the former case they show that the asymptotic gap decays exponentially in some nearzero regions and linearly otherwise for the latter case they show that the gap is linear even in a region around zero strengths the paper is wellwritten and wellpresented the addressed questions are of theoretical importance and the given answers to these questions are novel and complete to the best of my knowledge the authors covers all possible scenarios and the tools used for proofs stronger notions of cyclical monotonicity and implementability robustness of optimality and their relations are interesting to me weaknesses in the experiment the toy example is indeed beneficial for understanding but i think the authors should also verify their theory in largescale settings ie large m n maybe in the onedimensional space if the design of points statisyfing the uniqueness of optimal plan is an issue the authors have adequately addressed the limitations and potential negative societal impact of their work docsepthe paper provides an approximation rate for gaussiansmoothed wasserstein distances for discrete measures it shows that the approximation rate can be exponential in the perfectmatching plan case with a phase transition to linear rate if sigma is large and is linear otherwise strength the results in this paper are interesting and all the theoretical results are technically correct weakness the setting considered in the paper appears to be restrictive both measures of interest are assumed to have finite number of support points it makes the developed theory difficult to apply to general setting of gaussiansmoothed optimal transport some points in presentation that can be improved 1 in proposition 11 the constant c is dependent on d dimension as well the author should make this point clear so it is coherent with the argument about the curse of dimensionality of the wasserstein distances 2 in the presentation of section 3 case i we need reasons why you choose to present proof of theorem 33 but not of other results like theorem 31 which appears to be one of the main results if the proof of theorem 31 is not presented in the main text then why do you present lemma 34 in general the writing style in sections 3 and 4 needs to be improved so that the reader can know what proof is going to be presented and what is deferred to the appendix 3 the order of results in the appendix is not consistent dividing those results into several small sections may help to keep track of this issue because of the restriction of the setting in the paper it is not obvious to see implications in using gaussiansmoothed optimal transport for general measures i recommend considering the problem in a more general setting and trying to see if the current approach still works docsepthis paper studies the approximation of the 2wasserstein distance between two discrete probability measures mu and nu by the 2wasserstein distance between the same distributions smoothed by a gaussian measure that is called gaussiansmoothed optimal 
transport in particular they prove the existence of a phase transition in the small noise regime of the variance parameter which depends on the existence of a perfect matching between the distributions mu and nu studying the behaviour of gaussiansmoothed ot distance is particularly interest as it approximates the true transport and does not present a curse of dimensionality in the sampling complexity in particular proving the curious result of the existence of a phase transition in the context of finitely supported measures mu and nu is very nice moreover this paper is well written the problem and presentation are clear and the choices made for the study the finitely supported framework are precisely justified the proofs based on strong cyclical monotonicity and perfect matching are quite elegant the simulation study is also concise and convincing overall the results constitute a modest because limited to gaussiansmoothed ot distance but very interesting contribution to the study of approximations of the classical wasserstein distance minor comments the notations for the parameter sigma of the gaussian and the permutation sigma can be confusing the order and presentation of proofs in the appendix could be improved the authors decided to limit their study to finitely supported measures which is well justified docsepthis paper presents an analysis of the gaussiansmoothed wasserstein distance got in the framework where the gaussian kernel parameter sigma is small it is already known that got approximates the true wasserstein distance and that the difference between the two is of order sigma the objective of this paper is to refine this bound and to show under certain assumptions of uniqueness of the transport plan that this bound can be improved and that got approximates wasserstein exponentially well in a certain regime more precisely the authors show that there is a phase transition on sigma such that below the bound is exponential and above it is linear in sigma this paper completes the understanding of got with respect to sigma which has already been studied in the sigma to infty regime overall i find this article quite well written the thread of definitions and proofs is clear the ideas are well linked from a purely theoretical point of view the results are i think really interesting they complete the statistical understanding of got with respect to sigma which is a nice contribution moreover the ideastricks introduced and the theorems go far beyond the study of the got distance and can certainly be used to establish other theoretical results in optimal transport in particular i think that the notions of strong implementabilitystrongly cyclically monotonic and robustness of the transport plans are useful and richthey allow us to establish the uniqueness of the transport plan which is in the discrete case a key property that is not much addressed by the community as far as i know the fact of having cleared all these properties around the strong implementability is for me a contribution that is useful in itself the phase transition of got opens also the door to other studies notably on sample complexity or on approximations of the wasserstein distance the main criticism i would have is that this article focuses on a really specific and technical problem related to a subproblem of optimal transport for a reader who is interested in got the contributions are certainly really interesting but the article does not discuss the potential applications for optimal transport in general neither for 
machine learning to be more precise the article strings together theoretical results without giving much insight nor discussing the propositions and theorems it also lacks a conclusion that could perhaps bring some perspectives around this work in this context i think it would be interesting to save space by moving the proof of theorem 33 to an appendix to make a conclusionperspective part and for discussing the different results and their implications for example it could be interesting to discuss even informally the possible generalization to the case of continuous measures or to explain their implications for the wasserstein distance approximation moreover the numerical results are very succinct and i find difficult to read i find for example that the phase transition which is the center of the contributions is not really visible on figure 2 i think that this part should be more complete by better illustrating this phase transition or for example the notion of robustness of a transport plan for these reasons i rather recommend a weakaccept but i am ready to change my mind depending on the authors answer small remark in terms of notation the sigma is both used to define a permutation and for the sigma of the gaussian kernel moreover the fact that sigma is a permutation in definition 24 is not clearly stated in the article typos t instead of gamma in proposition 212 and 213 the authors did not discuss the potential negative societal impacts of their work however this is not really relevant in this context as the article is quite theoretical and specific concerning the limitations i find that the authors could discuss how in practice can we check if we are in the regime of fast approximation ie when the transport plan is unique after rebuttal as written below i am satisfied with the authors answer so that i change my score to 7 ### Summary:
all reviewers are in agreement that the main factors in particular the results and their presentation are above the bar for neurips no significant concerns remain following the author response and the discussion period i encourage the authors to carefully take into account all of the minor comments when preparing the cameraready version
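editorial note, not part of the original review record: the quantity analyzed in this record is, under the usual convention, the 2-wasserstein distance between the two measures after each is convolved with an isotropic gaussian,

\[
W_2^{(\sigma)}(\mu,\nu):=W_2\big(\mu * \mathcal{N}(0,\sigma^2 I_d),\ \nu * \mathcal{N}(0,\sigma^2 I_d)\big),
\]

so that W_2^{(\sigma)} recovers W_2 as \sigma \to 0. the reviews describe the paper's contribution as sharpening the rate of this small-\sigma convergence for finitely supported measures: an exponentially small gap when the unique optimal plan is a perfect matching, and a gap linear in \sigma otherwise, with a phase transition between the two regimes.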
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: the paper surveys recent offline rl algorithms and seeks to analyze different factors contributing to their performance novel evaluation protocols are designed to analyze their representation and behavior based on this analysis the authors propose a wellmotivated modification to iql which achieves strong results on several d4rl datasets strengths wellmotivated range of analysis on the representation learned by an offline rl algorithm proposed algorithms riql uss variants show strong performance and are motivated by earlier findings in the paper novel and insightful analysis of how to integrate modelfree methods into a modelbased framework and how to address the failures of a naive approach weaknesses on line 208 and 269 the uncertainty estimate used is the max meandiscrepancy this measure is unusual as we would use the standard ensemble variance in supervised learning 1 evaluation is solely on medexp datasets a good understanding of the strengths of each algorithm on mixedrandom data where we may expect a higher extrapolation gap would be useful minor td3bc should td3bc typo on line 182 dose does line 204 262 the learned probabilistic model and modelbased training are more accurately attributed to mopo rather than combo in all tables and results total random seeds should be included 1 simple and scalable predictive uncertainty estimation using deep ensembles balaji lakshminarayanan alexander pritzel charles blundell the paper provides good insights on the datasets it tests however limited evaluation on medexp datasets this could be expanded to include more d4rlsuite datasets incomplete information on seeds used during evaluation possible inaccuracy of using online qfunctions as groundtruth tabular settings where precise qfunction can be determined could elucidate this better docsepthe paper provides a comprehensive analysis of the stateoftheart offline reinforcement learning algorithms in particular the paper evaluates the critics and policies from offline rl algorithms using several metrics including representation probing when the learned representations are used to predict various quantities effective rank action ranking etc the authors observe a mismatch between the quality of critics predictions and the performance of the policy for some methods based on this insight the authors propose a modification of iql that achieves significant improvements over the original version strengths the paper is well written and easy to follow i believe that the analysis of offline reinforcement learning methods is very valuable to the community and has received only limited attention so far the analysis covers a variety of metrics for evaluating the critic and policy separately the metrics introduced in the paper can help tune the components of offline reinforcement learning methods separately the analysis has been applied in practice to the improvement of iql weaknesses the paper focuses only on a subset of tasks considered in cql and iql the paper focuses only on improving iql to sum up i believe this paper is relevant to the offline reinforcement learning community and the pros outweigh the cons therefore i recommend this paper for acceptance the paper can be improved by considering a wider variety of datasets from d4rl and applying the same analysis to enhance the other methods docsepintroduces a set of experiments to evaluate diagnose the bottleneck of offline rl algorithms along 
three axes 1 representations representation probing use the second last layer of the critic network to predict the next state reward and action via linear regression similarly use the actor embedding to predict the optimal action value function and qfunction which is approximated by a trained td3 policy representation metric feature dotproduct and effective rank of critic representations 2 value functions rank actions using the learned qs a observation td3bc and iql learn more accurate value functions policy evaluation might be more effective but achieve worse performance policy improvement might be limited more generally an offline rl algorithm can be higherperforming but learn poor representations and value functions 3 policies policy ranking experiment average mse of the selected action vs optimal action how often does the policy take ood actions observations combo selects most optimal worst actions performant policy is good at selecting better actions even if the actions are suboptimal the paper also introduces a new offline rl algorithm called riql relaxed insample qlearning based on a simple modification of iql the method is motivated by empirical heuristicbased observations that awrbased policy improvement is effective to avoid taking ood actions but is sometimes overconservative it add extra policy constraints and use a less conservative actor loss to enable learning from ood actions note that there is no theoretical justification or provable guarantees for the method the paper also investigates when a learned dynamics model helps modelfree offline rl introduces an uncertaintybased sample selection method that is more robust to model noises strengths the paper presents thoughtful experiments analyzing different offline rl algorithms it is an interesting result that performant offline rl algorithms often exhibit poor representations and inaccurate value functions however i wonder if this is due to the environmentstasks which are all similar statebased locomotion tasks overall i think the empirical experiments analyses are useful for better understanding offline rl algorithms on simple classic control tasks although it is unclear if these analyses still hold for more challenging tasks weaknesses 1 my main concern is that all evaluations are done on the toy classic control locomotion tasks in the d4rl benchmark which are limited due to the fact that simple filtered behavioral cloning outperforms sota offline rl methods on d4rl and therefore good performance on these simple d4rl tasks does not necessarily translate to good offline rl performance the papers contribution would be greatly improved if the authors add a more challenging and different task other than toy locomotion such as d4rl antmaze navigation manipulation tasks or imagebased tasks such as atari 2 the proposed method riql performs similarly as other baseline methods on the toy locomotion tasks table 10 which shows that riql is a reasonable offline rl algorithm to use on the other hand riql is a heuristicbased modification of existing offline rl objectives and there is no theoretical justification eg provable guarantees about policy improvement for the proposed method riql yes ### Summary:
the main strengths of this paper are that 1 it provides some interesting analysis that leads to some somewhat surprising findings and 2 it presents and evaluates some new technical algorithmic ideas based on this analysis that lead to improved performance after the author discussion the main weakness is that the new antmaze results are somewhat disappointing showing that the algorithmic ideas dont improve over iql on a more complex problem setting the ant maze tasks are a lot more interesting and complex than the standard locomotion tasks and so we see this as a fairly major weakness of lesser importance the title is not particularly descriptive and could be used to describe a lot of papers so i would like to suggest that the authors make the title more specific to the contributions of this paper overall the reviewers and ac think the strengths outweigh the weaknesses especially since the analysis is interesting on its own and since there is some new analysis on more complex imagebased settings irrespective of the technical ideas only providing benefits on simplistic locomotion tasks nonetheless we encourage the authors to use our feedback to further improve the paper
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper proposes an algorithm for offpolicy reinforcement learning using the hellinger distance between the sampling policy and optimized policy as a constraint the motivation for the proposed method is explained in the preliminaries section the actual algorithm and experiments run using the proposed algorithm are also provided the derivation is easy to follow and this is because of the wellknown lower and upper bounds on the hellinger distance the writing of the paper needs work for example the abstract talks about the sampling policy and current policy by current policy what the authors mean is the policy that is being optimized the sampling policy is the policy that was run offline clarifying these terms would help similarly i did not follow return for the new policy is improved comparing to kl in paragraph 3 with the use of lagrangian have been derived needs proofreading in eqn 13 what is beta in the figures what are the axesdocsep summary the paper provides a new metric hellinger distance to be combined with trust region ideas in policy optimization the major difference from prior work is the change of this distance metric the paper shows that with this distance metric along with lagrangian relaxation one could show analytic results of improved policies the paper also shows similar lower bound improvement results and compared with baselines on offline rl tasks reasons for score overall i vote for rejection i think the idea of changing the distance metric is not novel enough critically i do not think so far in the paper there is a strong enough motivation to use this distance metric both innovationwise and resultwise i will explain in details below pros 1 idea is not novel the overall idea of using an alternative metric does not seem novel though the authors motivated an improved version of the trust region lower bound by using the fact that the hellinger distance is upper bounded by kl i think such an improvement in the lower bound is a bit trivial and does not provide new perspectives on the old results 2 this new lower bound also might not provide additional benefits in practice because in practice such lower bounds are generally too conservative 3 experiment results are also not strong enough i will explain below cons 1 the final performance of all three baseline algorithms are fairly bad in terms of final rewards eg for halfcheetah all returns are negative yet we know that online algorithms could achieve 3000 at least and in some cases 6000 i wonder if this general inferior performance is a result of using offline dataset in that sense does the agent learn anything meaningful at all 2 from both fig 1 and fig 2 about for half of the tasks the performance seem to drop or stay at the same level as the case where no training is done xaxis at the origin does this also corroborate my previous concern that these agents do not learn much at all 3 from the curves presented in fig12 as well as meanstd results in table 12 it does not seem that the new method provides much significant gains either questions during rebuttal period please address and clarify the cons above thanks docsepthe authors propose the use of the hellinger distance instead of kl divergence to constrain new policies to remain close to the behavior policy the technical aspects are straightforward noting that hellinger provides tighter bounds on total variation than kl and can straightforwardly be plugged 
into the cpitrpo bounds for policy improvement they also propose an offline reinforcement learning algorithm based on enforcing a hellinger constraint to the data policy deriving iterative optimization procedure and evaluate it on offline i find the experimental evaluation highly lacking it seems with the datasets and envs evaluated policy performance actually drops as policy optimization is conducted so it is not clear to me that these evaluations actually provide meaningful information towards which methods perform better in scenarious where we would want to use offline rl i would like to see much more extensive evaluation of this method compared to other offline rl algorithms like bcq httpsarxivorgabs181202900 brac httpsarxivorgabs191111361 or cql httpsarxivorgabs200604779 over a much wider variety of datasets in general im not convinced that simply using the hellinger distance instead of kl will lead to significant improvements on its own given that in the brac paper the authors experimented with different trust regions including wasserstein mmd and kl and didnt find huge differences in the tested domains overall the contribution does not seem significant enough to warrant publication without strong experimental results which this paper lacksdocsepsummary this paper proposes a supervised learning for offpolicy reinforcement learning it exploits the hellinger distance instead of kl divergence thus it achieves tighter lower bound of the expected culmulative return than that using the kl divergence moreover the new lower bound is policy independent the experimental results show that the proposed method slightly outperforms other baselines when only small amount of data are given while the algorithms fail to learn on several environments reasons for score though it has some advantages i vote to reject this paper this is because it has low novelty the experiments are wrongly designed and thus it is hard to believe the results the specific details are below pros hellinger divergence is used instead of kl divergence and thus the lower bound become tighter than that using kl divergence the loss function for policy can be derived by theory cons changing kl distance to hellinger divergence has low novelty also the derivation of the loss function using hellinger distance isnt difficult hellinger distance and kl divergence are all under the class of amari alphadivergence when alpha 1 amari alphadivergence becomes kl and when alpha0 amari alphadivergence becomes the hellinger distance integral sqrtp sqrtq2 dx indeed hd is symmetric and satisfies the axioms of distance basically when we consider the hd on the space of probability distribution we consider euclidean geometry on the space of probability distribution whereas the kld induces the boltzman interpretation ie p exp kld in addition to the issue of significance in novelty the numerical results show that the performance improvement is insignificant or negligible the experiments used data sampled by random policies or first few samples of onpolicy data but i think that this is a little strange training setting most of the previous works in this line use samples at a certain performance not drawn by random policy for example in abm paper1 it used first 10000 episodes if the length of an episode is 1000 it uses first 10 million samples or first 2000 episodes first 2 million samples to show its performance when it uses high performed samples or low performed samples respectively these contain good performed samples relative to the random samples 
however experiments in this paper use almost random samples to train policies we cannot expect a good policy at a certain performance using these random samples this expectation is also shown in the results some learning curves go down as learning proceeds and this means that the learning fails on these environments if the proposed method learns successfully while the others fail to learn it is a meaningful result but it is not otherwise i think that the authors should evaluate performance using better samples to prove that the proposed method outperforms others reference 1 noah siegel et al keep doing what worked behavior modelling priors for offline reinforcement learning in international conference on learning representations 2020 ### Summary:
the reviewer concerns generally centered around the novelty of replacing the distance metric for a policy constraint while the authors clarified many of the reviewer concerns and added some additional comparisons in the end it was not clear why the proposed approach was interesting while it is true that this particular distance metric has not been evaluated in prior work the result would only have been interesting if it had produced some clear benefit either empirically or theoretically and in the absence of a clear and unambiguous benefit it is not clear how valuable this concept really is after discussion the reviewers generally found the paper to not be ready for publication in its present state
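editorial note, not part of the original review record: several reviews in this record appeal to standard relations between the hellinger distance, total variation, and kl divergence, and one quotes the defining integral in a garbled form. for reference, with densities p and q the usual conventions are

\[
H^2(P,Q)=\tfrac{1}{2}\int\big(\sqrt{p(x)}-\sqrt{q(x)}\big)^2\,dx,\qquad H^2(P,Q)\le \mathrm{TV}(P,Q)\le \sqrt{2}\,H(P,Q),\qquad H^2(P,Q)\le \tfrac{1}{2}\,\mathrm{KL}(P\,\|\,Q).
\]

the bound on total variation in the middle is what allows a hellinger trust region to control total variation and hence be substituted into cpi/trpo-style policy improvement bounds, and the rightmost inequality is the sense in which the reviews say the hellinger distance is upper bounded by the kl divergence; as one review notes, both divergences sit inside the amari alpha-divergence family.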
Below is given review of a research paper from cnoference journal. Please write a summary the review. ### Review: this paper presents the nethack learning dataset nld which has 3 parts i 15 million human trajectories recorded from the nao public nethack server ii 100000 trajectories from the symbolic bot winner of the nethack challenge 2021 iii code for users to apply these trajectories in a compressed format to demonstrate the utility of nld the authors train and compare several algorithms spanning online rl offline rl imitation learning and learning from observations only 1 the paper presents a largescale dataset for a fast and challenging rl environment nethack the authors also provide highperformance code to load this dataset and train agents which makes largerscale experiments more accessible to those with smaller compute budgets 2 the dataset and code is welldocumented 3 baseline results cover a range of different approaches to train agents online rl offline rl learning from observations only combinations of offline and online learning 1 appendix g6 mentions a sql interface to consider subsets of the datasets according to the metadata based on table 2 the performance of learned policies is quite below the dataset average i think the paper would benefit from additional investigation and discussion on why agents trained on nldaa more narrow data distribution generated by symbolic bot cleaner demonstrations outperform those trained on nldnao human demonstrations for instance cutting nldnao trajectories short so the length of the trajectories are similar to nldnao to exclude the laterstage states of nethack from figure 1 it seems that nldnao trajectories may be several times longer than nldaa 2 only 5 seeds are reported in figure 2 it seems that appo methods have very high variance blue and orange lines and would likely benefit from running more seeds the discussion would also benefit from exploring why appo has higher variance bw 0 and 600m steps after which the variance suddenly drops docsepthe paper introduces a dataset called nld to be useful for imitation learning and offline rl study for nethack learning environment nle kuttler neurips20 nle and potentially for a controlled version of nle minihack samvelyan neurips21 is an openai gym environment from the same author group with a popular roguelike or dungeon explorertype game nethack nldnao collects 15m human game plays in a state only trajectories form with game play metadata nldaa collects 100k plays by the winning bot autoascend from a neurips21 competition in the stateactionscore trajectory form nle itself is updated v090 to be compatible with nld experiments using the dataset is included related dataset for similar benchmarks starcraft dota and minerl is discussed nldnao collects 15m human game plays in a state only trajectories form with game play metadata nldaa collects 100k plays by the winning bot autoascend from a neurips21 competition in the stateactionscore trajectory form nle itself seems to be updated v090 to be compatible with nld containing ttyrecdataset in its python modules nledataset it is essentially valuable that the work includes experiments for online rl offline rl imitation learning and learning from demonstrations which demonstrate the example usage and usefulness of the dataset unfortunately the process for reproducing experimental results shown in the paper is not clear from the main paper supplemental material or github 
httpsgithubcomdungeonsdatasubmissiondungeonsdataneurips2022treea3f01c425f5d75ce9174b76105ac34a377c2df30experimentcode docsepthis paper introduces the nethack learning dataset nld which is a large dataset of demonstrations from the nethack game it has 2 partitions nldnao that contains 10 billion stateonly trajectories and metadata from 15m human games scraped from online web servers and nldaa a collection of 3 billion trajectories with complete state acton and scores generated by a winning bot on the nethack challenge python scripts are provided to load these datasets efficiently experimental results are demonstrated on the dataset with rl methods like appo and dqn and imitation learning methods like bc and bco clone from observations the paper is wellwritten and easy to follow the human dataset is high quality as it is scraped from actual game plays on the web servers the scale is huge 15m human trajectories with almost 10b transitions this is much more than most other video game benchmark data and is provided under the opensource gpl license the nldaa dataset contains trajectories generated by a winning bot on nethack while not as good quality as human dataset it has groundtruth actions to train inverse dynamics model and do direct behavior cloning it is complementary to nldnao and i am glad that the authors included both the experiments use offtheshelf methods like appo dqn cql etc no novel algorithm is proposed but these methods are standard enough to provide good baseline results for the dataset dungeon and data great name my main concern is about the benchmarks contribution to advances in the broader policy learning community nethack is a very niche domain it is an asciibased game that does not make much sense to untrained eyes and does not have any meaningful highdimensional observation like 3d perception or other sensing modalities in addition the action space and world transition dynamics are quite simplistic i am not convinced that the potential future algorithms developed on nethack or nld will be generally applicable to other embodied agent domains as the authors pointed out even a purely hardcoded agent called autoascend is able to achieve nontrivial performance on the benchmark autoascend bot actually contributes the nldaa dataset with full stateactionscore trajectories i am not claiming that mastering nethack is easy but in contrast it is extremely difficult to hardcode a robot agent in habitat 1 or ai2thor 2 from pixels alone even for the simplest tasks while the authors discussed connection to robotics in l337339 i am still doubtful of nethack as an effective testbed for generalpurpose embodied agent algorithms that being said im still leaning towards acceptance for the largescale dataset introduced in this paper considering that much simpler gridworld environments without datasets have been accepted at top conferences 1 habitat a platform for embodied ai research savva et al 2019 2 ai2thor an interactive 3d environment for visual ai kolve et al 2019 docsepthe paper introduces nld a largescale dataset of demonstrations from nethack the dataset enables research in multiple areas such as imitation learning and learning from both offline data and online interactions empirical results indicate that significant research advances are needed to leverage largescale datasets to solve challenging decisionmaking problems fully 1the dataset is combined with nle which provides potentially large insights for the study conducted in mdp problem the proposed dataset enables 
computationallyaccessible research in multiple areas including imitation learning offline rl learning from sequences of only observations as well as combining learning from offline data with learning from online interactions 2the dataset has many properties of realworld domains such as partial observability stochastic dynamics sparse reward long trajectories rich environment diverse behaviors and a procedurally generated environment such properties allow the dataset to provide a more realistic evaluation environment thereby making the evaluation of rl algorithms more reliable 3nld can enable agents learn from demonstrations containing only observations learning from both static data and environment interaction 4the proposed nld is complete which is decomposed of three components nldnao nldaa and ttyrecdataset the significance of each component for dealing with decisionmaking problems is clearly explained in the paper also the details of the released data including the used raw features file format and scalability are described clearly 5the dataset have been opensourced and there is a detailed introduction for usage in the corresponding repository thus facilitating researchers to quickly get started with the development and research of nld 6nld strikes a better balance between scale ie a large number of diverse human demonstrations on a complex task and efficiency ie cheap to use and fast to run 7experimental results indicate that nld poses a substantial challenge to stateoftheart methods as the nle environment is highly stochastic and partially observed 1the authors should explain the relations between symbolic and rl in details corresponding to the sentence in section 2 symbolic bots decisively outperformed deep rl methods with the best performing symbolic bots surpassing stateoftheart deep rl methods by a factor of 5 2the paper should provide a quantitative metric to quantify data scale and efficiency of implementation in reinforcement learning between different datasets such metric could further demonstrate the significance of the nethack dataset also the authors need conduct experiments to compare different decisionmaking datasets in the area of randomness magnitude of actions and state spaces and partial observability in other words the paper needs to quantitatively emphasize the necessity and pioneering of nld for existing research 3section 5 does not fully explain the research significance of this dataset by only mentioning that there is room for improvement in symbolic ie naoaa docsepthis paper presents nld nldnao and nldaa a largescale dataset of nle environment nld is easy to use cheap but largescale pipeline code ready and is hard for offline rl community several promising directions can utilize nld including learning from only observations 1 the author provides detailed descriptions and analysis of nldaa and nldnao including the performance distribution and other attributes format metadata 2 the author effectively shows the computation cost of this dataset backed with solid numbers 38tb 229gb which is impressive 3 the author illustrates the difficulty of solving this dataset and posts several promising research directions i did not personally find significant weaknesses docsepthe paper proposes a dataset of trajectories gathered from the game of nethack it has humangenerated trajectories nldnao as well as ones generated by a symbolic agent that won the nethack challenge at neurips 2021 nldaa the paper describes the dataset its analysis in terms of the game coverage as well as 
experimental results for popular rl methods these include online rl as well as offline rl and learning from demonstrations using the dataset nldaa the results show the large gap between the current rl methods and the symbolic agent not to mention the human players minor you refer to table 4 multiple times was that supposed to be table 2 the dataset can be useful for research in offline rl but also possibly of interest to the broader rl community the efficient implementation is definitely a plus given the computational hunger of current rl methods as the paper indicates the symbolic method used to gather the data can be characterized as slightly above beginner level this data is static in the sense that it is gathered once and is not updated with larger parts of the state space explored thus at some point the dataset will become obsolete even if that day may be not too close it will come therefore it would be beneficial to have a sort of symbolic agent that can search the state space and update the dataset with additional trajectories obviously no significant portion of the entire state space can be kept offline but maybe a symbolic agent would be able to generate some portions on the fly ### Summary:
this paper presents the nethack learning dataset nld which has 3 parts i 15 million human trajectories recorded from the nao public nethack server ii 100000 trajectories from the symbolic bot winner of the nethack challenge 2021 iii code for users to load and use these trajectories which are stored in a compressed format to demonstrate the utility of nld the authors train and compare several algorithms spanning online rl offline rl imitation learning and learning from observations only weaknesses raised by the reviewers include 1 a niche domain nethack is a very niche domain it is an asciibased game that does not make much sense to untrained eyes and does not have any meaningful highdimensional observation like 3d perception or other sensing modalities 2 the lack of a quantitative metric to quantify data scale and efficiency of implementation 3 the data is static in the sense that it is gathered once and is not updated with larger parts of the state space explored some of these points were addressed in the rebuttal while the challenge of a static dataset is deferred to future work broadly there is agreement among the reviewers and the acs that this is a useful benchmark for the community the ac would request that the authors carefully integrate all the feedback in the updated manuscript as well as any leftover comments added as clarifications in the appendix