paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29–10k)
label: string (3 classes)
review_ids: sequence
review_writers: sequence
review_contents: sequence
review_ratings: sequence
review_confidences: sequence
review_reply_tos: sequence
iclr_2018_H1a37GWCZ
UNSUPERVISED SENTENCE EMBEDDING USING DOCUMENT STRUCTURE-BASED CONTEXT
We present a new unsupervised method for learning general-purpose sentence embeddings. Unlike existing methods which rely on local contexts, such as words inside the sentence or immediately neighboring sentences, our method selects, for each target sentence, influential sentences in the entire document based on the document structure. We identify a dependency structure of sentences using metadata or text styles. Furthermore, we propose a novel out-of-vocabulary word handling technique to model many domain-specific terms, which were mostly discarded by existing sentence embedding methods. We validate our model on several tasks, showing a 30% precision improvement in coreference resolution in a technical domain, and a 7.5% accuracy increase in paraphrase detection compared to baselines.
rejected-papers
The paper presents an interesting extension of the SkipThought idea by modeling sentence embeddings using several kinds of document-structure-related information. Out of the various evaluations presented, the coreference results are interesting -- but they fall somewhat short (as noted by Reviewer 2) because they don't compare with recent work by Kenton Lee et al. In summary, the idea is an interesting contribution to building sentence embeddings, but the experimental results could have been stronger.
train
[ "HJUoksOxG", "HkMKdz9gz", "BkJBMfslf", "SkhSHOa7z", "Sk9XSOaXf", "BkN-ruaXf", "r17sEOTmz", "ryrYVu6XM", "HJU4pLHzM", "B1mXQXi-M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents simple but useful ideas for improving sentence embedding by drawing from more context. The authors build on the skip thought model where a sentence is predicted conditioned on the previous sentence; they posit that one can obtain more information about a sentence from other \"governing\" sentences in the document such as the title of the document, sentences based on HTML, sentences from table of contents, etc. The way I understand it, previous sentence like in SkipThought provides more local and discourse context for a sentence whereas other governing sentences provide more semantic and global context.\n\nHere are the pros of this paper:\n1) Useful contribution in terms of using broader context for embedding a sentence.\n2) Novel and simple \"trick\" for generating OOV words by mapping them to \"local\" variables and generating those variables.\n3) Outperforms SkipThought in evals.\n\nCons:\n1) Coreference eval: No details are provided for how the data was annotated for the coreference task. This is crucial to understanding the reliability of the evaluation as this is a new domain for coreference. Also, the authors should make this dataset available for replicability. Also, why have the authors not used this embedding for eval on standard coreference datasets like OntoNotes. Please clarify.\n2) It is not clear to me how the model learns to generate specific OOV variables. Can the authors clarify how does the decoder learns to generate these words.\n\nClarifications:\n1) In section 6.1, what is the performance of skip-thought with the same OOV trick as this paper?\n2) What is the exact heuristic in \"Text Styles\" in section 3.1? Should be stated for replicability.", "1) This paper proposes a method for learning the sentence representations with sentences dependencies information. It is more like a dependency-based version skip-thought on the sentence level. The idea is interesting to me, but I think this paper still needs some improvements. The introduction and related work part are clear with strong motivations to me. But section 4 and 6 need a lot of details. \n\n2) My comments are as follows:\ni) this paper claims that this is a general sentence embedding method, however, from what has been described in section 3, I think this dependency is only defined in HTML format document. What if I only have pure text document without these HTML structure information? So I suggest the authors do not claim that this method is a \"general-purpose\" sentence embedding model.\n\nii) The authors do not have any descriptions for Figure 3. Equation 1 is also very confusing.\n\niii) The experiments are insufficient in terms of details. How is the loss calculated? How is the detection accuracy calculated?\n", "This paper extends the idea of forming an unsupervised representation of sentences used in the SkipThought approach by using a broader set of evidence for forming the representation of a sentence. Rather than simply encoding the preceding sentence and then generating the next sentence, the model suggests that a whole bunch of related \"sentences\" could be encoded, including document title, section title, footnotes, hyperlinked sentences. This is a valid good idea and indeed improves results. The other main new and potentially useful idea is a new idea for handling OOVs in this context where they are represented by positional placeholder variables. This also seems helpful. 
The paper is able to show markedly better results on paraphrase detection than SkipThought and some interesting and perhaps good results on domain-specific coreference resolution.\n\nOn the negative side, the model of the paper isn't very excitingly different. It's a fairly straightforward extension of the earlier SkipThought model to a situation where you have multiple generators of related text. There isn't a clear evaluation that shows the utility of the added OOV Handler, since the results with and without that handling aren't comparable. The OOV Handler is also related to positional encoding ideas that have been used in NMT but aren't referenced. And the coreference experiment isn't that clearly described nor necessarily that meaningful. Finally, the finding of dependencies between sentences for the multiple generators is done in a rule-based fashion, which is okay and works, but not super neural and exciting.\n\nOther comments:\n - p.3. Another related sentence you could possibly use is the first sentence of the paragraph, related to all other sentences? (This works if people write paragraphs with a \"topic sentence\" at the beginning.)\n - p.5. Notation seemed a bit non-standard. I thought most people use \\sigma for a sigmoid (makes sense, right?), whereas you use it for a softmax and use calligraphic S for a sigmoid....\n - p.5. Section 5 suggests the standard way to do OOVs is to average all word vectors. That's one well-known way, but hardly the only way. A trained UNK encoding and the use of things like character-level encoders are also quite common.\n - p.6. The basic idea of the OOV encoder seems a good one. In domain-specific contexts, you want to be able to refer to and re-use words that appear in related sentences, since they are likely to appear again and you want to be able to generate them. A weakness of this section, however, is that it makes no reference to related work whatsoever. It seems like there's quite a bit of related work. The idea of using a positional encoding so that you can generate rare words by position has previously been used in NMT, e.g. Luong et al. (Google Brain) (ACL 2015). More generally, a now quite common way to handle this problem is to use \"pointing\" or \"copying\", which appears in a number of papers (e.g., Vinyals et al. 2015) and might also have been used here and might be expected to work too. \n - p.7. Why such an old Wikipedia dump? Most people use a more recent one!\n - p.7. The paraphrase results seem good and prove the idea works. It's a shame they don't let you see the usefulness of the OOV model.\n - p.8. For various reasons, the coreference results seem less useful than they could have been, but they do show some value for the technique in the area of domain-specific coreference.\n\n", "- You mentioned the paraphrase evaluation does not show the usefulness of the OOV model. The answer is yes and no. The OOV Handler works in two different situations. The first is when we train the model, where we are given a raw HTML document from which we can build a dependency-based OOV mapping. The second is when we apply the model. In a task like coreference resolution, where you are given a document, you can build and leverage an OOV mapping, but the paraphrase identification dataset does not provide raw HTML documents on which we could annotate the dependencies. 
In this case, we are applying the model trained with OOV words (thus with more data, especially for a domain corpus), and the encoder learns that it has to carry some information about an OOV word in a given sentence, which gives a better sentence embedding. While the effect of this training can be found in the paraphrase detection experiment, we added the coreference resolution experiment to further analyze the use of OOV words.\n- To help understanding of the components (i.e., the OOV Handler and the dependencies) of our approach, we are conducting additional experiments, and we have partial results. If we provide our model only sequential dependencies, while still using the OOV Handler, the loss is slightly increased but it is much lower than that of Skip-Thought (updated in Table 2). The evaluation of OURS - OOV Handler requires re-training of the model due to the change of vocabulary. The training is still ongoing, but the intermediate training loss is much higher than that of OURS at the same number of training iterations (e.g., 24.90 vs 1.58 @ 36160 and 19.77 vs 1.01 @ 61820, each averaged over 832 sentences), and the reduction rate is also lower (21% vs. 36%). We think this shows the importance of OOV handling.\n\n- One difficulty of the evaluation was that there is no available task on documents with structure (they are all plain text of sentence sequences, or at most just with a title). Thus, we conducted the coreference resolution task on Wikipedia documents. We are planning to publicly release this evaluation data set.\n\n- Other minor comments, including the notation of sigmoid and softmax, are fixed. Also, there are several reasons we chose an old version of Wikipedia. First, we wanted to have enough new documents to test on. Second, we wanted to build a real-world system that processes raw HTML documents, and this Wikipedia dump is the last version provided in HTML format. The idea of using the first sentence of a paragraph seems valid, and we agree that this can be observed in Wikipedia articles; we will include it in future experiments.\n\n", "- Regarding the malware coreference examples, there are many different cases, including “the malware”, “the program”, “the worm”, “WannaCry ransomware” (cf. WannaCry attack), “Wannacry” (different capitalization/spacing), “WannaCrypt” (other nicknames), different malware names used by different antivirus companies (e.g., WORM_WCRY.A, Ransom.Wannacry), and “it”. In this domain test data, proper-noun string identity almost always indicates the same entity. Therefore, we did not include such cases in our experiments, and we only considered cases where the strings differ.\n- Regarding word embeddings of OOV words, you mentioned other approaches like character-based encoding. The main reason we did not consider such approaches is that the application area we target is specialty domains like cyber-security, where the semantic meaning of many terms may not come from character-level information (e.g., the embedding of WannaCry should be more similar to Locky or other ransomware than to “cry” or “wanna”). Character-based encoding is not very helpful while adding complexity to the model. However, we included Luong et al.'s method and some others in the paper.\n- We found the paper by Luong et al. (2015) you mentioned is indeed very related to ours. The main difference from Luong et al. (2015) is that they rely only on positions, whereas in our approach we have two dimensions to identify a specific unknown-word placeholder: position and dependency. 
We updated the draft to include this paper. “Pointing” (Vinyals et al. 2015) seems to be an interesting approach for handling a varying vocabulary size. But although their approach can handle a varying output vocabulary, encoding and training the attention mechanism over the whole English vocabulary, which has millions of words, is computationally infeasible (their experiments cover up to 500 symbols in the vocabulary), and they did not consider a hybrid of a fixed vocabulary and a varying (i.e., OOV) vocabulary together, which is a nontrivial task.\n\n", "Thank you very much for the very thoughtful and constructive comments. Here, we tried to clarify the unclear parts you pointed out and address all comments. We have updated the paper accordingly.\n\n- In particular, there were many questions about the coreference resolution experiments. Regarding the baseline, we found that the specific implementation we compared with is that of Clark & Manning EMNLP 2016, based on https://github.com/clarkkev/deep-coref, and we fixed the citation. We do not claim that our approach can outperform a dedicated coreference resolution method designed for a specific domain with a large training data set. Instead, our focus is more on semantic embedding of sentences, which can be used for many NLP tasks, and we show how our model can be used as an unsupervised coreference resolution tool for technical domains that may not have enough training data. We revised the draft to make this clear.\n- You mentioned the coreference candidate generation is unclear. We run a dependency parser and an entity recognition pipeline to annotate entities. We also find pronouns or noun phrases requiring coreference resolution using a dictionary. The entity-type conformity is checked using rules on pronouns (i.e., it, they, … can refer to any type; he/she can only refer to a person) or the head noun (e.g., the new ransomware -> Malware class, …). We updated the paper to clarify this.\n- You had a question about the coreference resolution performance per entity type. The main difference between the malware/vulnerability and person/organization cases is the amount of contextual information (or dependency). Since the documents are in the cyber-security domain, for people or organizations, their coreference is often syntactically obvious, but there is not much context. Our approach does not consider gender/number agreement and other syntactic features well. Another case we observed is when the entity is not mentioned in governing sentences. We expect to solve this problem by expanding the sequential dependency window or the dependency annotator. Therefore, our method, focusing on semantic information, does not perform well for entities with little context.\n- You questioned whether a domain-specific mention detection would significantly improve the accuracy of the Stanford Coref system. While a more rigorous experiment is needed to answer the question, we expect that this might not be critical. Poor mention detection would result in poor recall, but not necessarily poor precision. Their system uses a mention detection method described in Raghunathan et al. (2010), and it does find malware names and vulnerability names as noun phrases. Even though it does not know the entity type, they are considered candidate antecedents. But we still see low precision. Also, according to Clark and Manning (2016 EMNLP and ACL), they train their network using features such as string-matching features and distance features as well as word embeddings. 
They may have different weights in different domains, and even the word embeddings may not be available or useful in a domain corpus, as discussed in Pilehvar and Collier (2016). ", "Thank you for your thoughtful comments. We tried to address all your concerns and revised the paper accordingly.\n- You had a question about the generality of the approach. On one hand, to “train” the network, the model requires a dependency annotator for each data format (or documents need to be converted to HTML). On the other hand, \"embedding a given sentence\" does not require dependencies or an entire document, as we show in Section 6.2 (paraphrase detection given a pair of sentences without context). To perform document-level inference such as coreference resolution, we again need formatted documents.\n- We updated the paper to clarify the descriptions of Figure 3 and Equation 1. Also, we discuss more related papers in Section 2 and added explanations about the OOV handler in Section 5.\n- We computed the cross-entropy loss for the prediction test. The models predict the next sentence, word by word, and each word is represented as a vector. Each vector is used to compute the cross-entropy loss against the corresponding word in the correct next sentence. We clarified the measures and the experimental set-ups.", "- The coreference annotations were done by two authors of the paper by examining the HTML documents with web browsers. Only nouns/noun phrases and within-document coreferences are considered. We tried to annotate the coreferences exhaustively. We checked all the results of each method to see whether a wrong guess was indeed spurious or whether we had missed it. We are planning to release this evaluation data set publicly.\n- We clarified the procedure to build an OOV mapping in the paper. To explain a little more here, OOV variables are defined for each OOV word position for each dependency type (a schematic code sketch follows this review thread). For example, O_TitleMetadata(1) is the first OOV word in the title-metadata governing sentence, which might have a relatively high chance of being the topic word. This OOV variable is later treated mostly in the same way as other words in the decoder. That is, a decoder may output O_TitleMetadata(1), and this variable can be mapped back to the OOV word. In our use case, we do not map it back to a word, since we focus on embedding, not generating, a sentence.\n- To evaluate the effect of the OOV Handler, we are conducting additional experiments, and we have partial results. If we provide our model only sequential dependencies, while still using the OOV Handler, the loss is slightly increased but it is much lower than that of Skip-Thought (reflected in Table 2). The evaluation of OURS - OOV Handler requires re-training of the model due to the change of vocabulary. The training is still ongoing, but the intermediate training loss is much higher than that of OURS at the same number of training iterations (e.g., 24.90 vs 1.58 @ 36160, and 19.77 vs 1.01 @ 61820, each averaged over 832 sentences), and the reduction rate is also lower (21% vs. 36%). We think this shows the importance of OOV handling.\n- Regarding the text style heuristics, we updated the draft to explain them more clearly.", "We are compiling answers and conducting additional analyses to support them. 
We will get back to you soon!", "I was a bit rushed finishing reviews, so here is a longer version of my p.8 coreference experiment thoughts:\n - While the one cited paper is reasonably representative as a good recent neural coreference paper, it's not the latest, best work, since both Clark & Manning EMNLP 2016 (as opposed to the ACL 2016 paper cited here) and more strongly Kenton Lee et al. (EMNLP 2017) follow and outperform it.\n - Both of these two papers mainly use learned neural representations and only a handful of handcrafted features (for mention distance, speaker identity, etc.) but argue that they improve performance and hence are still useful. To simply say that you can do it \"without handcrafted features\" while showing no evidence that the same features would not have improved the performance of your system too isn't clear forward progress. They could run their systems without handcrafted features at the cost of a couple of points in performance too.\n - You don't say how you \"first identify a pronoun or an entity reference\". Is this in fact by using a parser and then using handwritten patterns/rules? \n - You don't say how you find candidate referents that conform to the pronoun type and entity type of the reference. Again, this sounds like handwritten patterns/rules.\n - Is the dataset you use as a test set available to others, or will it be?\n - You do show the useful result that a generic newswire supervised learning coref system (Clark and Manning 2016) performs quite poorly on a narrow technical domain (software malware and vulnerabilities), whereas an unsupervised similarity measure can do much better. This is interesting and potentially shows that this work is valuable!\n - However, looking at the results in Figure 4, your performance on Person and Organization entities is extremely poor, whereas performance of Clark and Manning (2016) is at least far better and moderately good, if not great. So, it doesn't really look like this is a coref system that you can use unsupervised and have it work well.\n - Overall, your system comes out well ahead because of its far superior performance on Malware and Vulnerabilities. But how much is the failure of Clark and Manning (2016) a failure of coreference, or is it really that their system fails to detect these entities (most malware has weird names). E.g., if your method for mention detection was used followed by Clark and Manning (2016), would Clark and Manning then do much better? It's impossible to tell from the results presented. Perhaps relevant in this context is a recent paper by Berkeley people and colleagues on entity detection on the dark web ( https://evidencebasedsecurity.org/forums/ ). They don't do coreference but their entity detector could have been paired with the Clark and Manning (2016) system. Also, their data is public and may be of interest.\n - No examples are given, but I suspect that a lot of the cases of malware coreference are just string identity. That is, they keep referring to something as \"W97M.Cloud.1\" many times. If so, again this is in principle trivial coreference, and the other system is probably mainly failing on mention detection. It would be great if you could give some statistics on what proportion of string identity coreference decisions there are, how many involve pronouns, etc.\n\nSo, overall, section 6.3 seems to fall short of being a well-done experiment." ]
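The dependency-typed OOV placeholder scheme described in the authors' responses above can be made concrete with a short Python sketch. This is a minimal illustration under our own assumptions: the function and variable names are hypothetical and not taken from the paper's code.

# Minimal sketch (our illustration, not the paper's code) of the
# dependency-typed OOV placeholders described above: each OOV token is
# keyed by (dependency type, position within the governing sentence).
def build_oov_mapping(governing_sentences, vocab):
    """governing_sentences: dict of dependency type -> tokenized sentence.
    vocab: set of in-vocabulary words. Returns token -> placeholder."""
    mapping = {}
    for dep_type, tokens in governing_sentences.items():
        position = 0
        for token in tokens:
            if token not in vocab and token not in mapping:
                position += 1
                # e.g. the first OOV word of the title-metadata sentence
                # becomes the placeholder "O_TitleMetadata(1)"
                mapping[token] = "O_%s(%d)" % (dep_type, position)
    return mapping

vocab = {"the", "ransomware", "attack", "spread"}
sentences = {"TitleMetadata": ["WannaCry", "ransomware", "attack"]}
print(build_oov_mapping(sentences, vocab))
# {'WannaCry': 'O_TitleMetadata(1)'}

The decoder can then emit such placeholders like ordinary vocabulary items; as the rebuttal notes, mapping them back to surface words is optional when only the embedding is needed.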
[ 7, 5, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1a37GWCZ", "iclr_2018_H1a37GWCZ", "iclr_2018_H1a37GWCZ", "B1mXQXi-M", "B1mXQXi-M", "B1mXQXi-M", "HkMKdz9gz", "HJUoksOxG", "B1mXQXi-M", "BkJBMfslf" ]
iclr_2018_Sk03Yi10Z
An Ensemble of Retrieval-Based and Generation-Based Human-Computer Conversation Systems.
Human-computer conversation systems have attracted much attention in Natural Language Processing. Conversation systems can be roughly divided into two categories: retrieval-based and generation-based systems. Retrieval systems search a large conversational repository for a user-issued utterance (namely a query) and return a reply that best matches the query. Generative approaches synthesize new replies. Both ways have certain advantages but suffer from their own disadvantages. We propose a novel ensemble of retrieval-based and generation-based conversation systems. The retrieved candidates, in addition to the original query, are fed to a reply generator via a neural network, so that the model is aware of more information. The generated reply together with the retrieved ones then participates in a re-ranking process to find the final reply to output. Experimental results show that such an ensemble system outperforms each single module by a large margin.
rejected-papers
This paper presents an ensemble method for conversation systems, where a retrieval-based system is ensembled with a generation-based system. The combination is done via a reranker. Evaluation is done on one dataset containing query-reply pairs, with both BLEU and human evaluations. The experimental results for the ensemble model are good. Although this presents some novel ideas and may be useful for chatbots (not for goal-oriented systems), the committee feels that the approach and the presented material do not have enough substance for publication at ICLR: it would be interesting to evaluate this system in a goal-oriented setting, and many prior papers have built (1-step) generation-based conversation systems -- this paper does not present any comparison with those papers. Addressing these issues may strengthen the paper for a future venue.
train
[ "SksrEW9eG", "rkQ2C8cxz", "S1EhNw2gz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThe paper proposes a new dialog model combining both retrieval-based and generation-based modules. Answers are produced in three phases: a retrieval-based model extracts candidate answers; a generator model, conditioned on retrieved answers, produces an additional candidate; a reranker outputs the best among all candidates.\n\nThe approach is interesting: the proposed ensemble can improve on both the retrieval module and the generation module, since it does not restrict modeling power (e.g. the generator is not forced to be consistent with the candidates). I am not aware of similar approaches for this task. One work that comes to mind regarding the blend of retrieval and generation is Memory Networks (e.g. https://arxiv.org/pdf/1606.03126.pdf and references): given a query, a set of relevant memories is extracted from a KB using an inverted index and the memories are fed into the generator. However, the extracted items in the current work are candidate answers which are used both to feed the generator and to participate in reranking.\n\nThe experimental section focuses on the task of building conversational systems. The performance measures used are 1) a human evaluation score with three volunteers and 2) BLUE scores. While these methods are not very satisfying, effective evaluation of such systems is a known difficulty. \n\nThe results show that the ensemble outperforms the individual modules, indicating that: the multi-seq2seq models have learned to use the new inputs as needed and that the ranker is correlated with the evaluation metrics.\n\nHowever, the results themselves do not look impressive to me: the subjective evaluation is close to the \"borderline\" score; in the examples provided, one is good, the other is borderline/bad, and the baseline always provides something very short. Does the LSTM work particularly poor on this dataset? Given that this is a novel dataset, I don't know what the state-of-the-art should be. Could you provide more insight? Have you considered adding a benchmark dataset (e.g. a QA dataset)?\n\nSpecific questions:\n\n1. The paper motivates conditioning on the candidates in two ways. First, that the candidates bring additional information which the decoder can use (e.g. read from the candidates locations, actions, etc.). Second, that the probability of universal replies must decrease due to the additional condition. I think the second argument depends on how the conditioning is performed: if the candidates are simply appended to the input, the model can learn to ignore them.\n2. The copy mechanism is a nice touch, encouraging the decoder to use the provided queries. Why not copy from the query too, e.g. with some answers reusing part of the query <\"Where are you going?\", \"I'm going to the park\">?\n3. How often does the model select the generated answer vs. the extracted answers? In both examples provided the selected answer is the one merging the candidate answers.\n\nMinor issues:\n- Section 3.2: using and the state\n- Section 3.2: more than one replies\n- last sentence on page 3: what are the \"following principles\"?", "The authors present a generation-based neural dialog response model that takes a list of retrieved responses from a search engine as input. This novel multi-seq2seq approach, which includes attention and a pointer network, increases reply diversity compared to purely retrieval or generation-based models. 
The authors also apply a reranking-based approach to ensembling, based on a gradient-boosted decision tree classifier.\nBut their multi-seq2seq model is not particularly well justified with evaluations and examples (as compared with the reranking/ensemble, which is essentially a standard approach), and it's unclear whether it helps more than other recent approaches to response diversity.\n\nSome additional points:\n1. For several of the GBDT features, the approach chosen is unusual or perhaps outdated. In particular, the choice of a word-level MT-based metric rather than an utterance-level one, and the choice of a bigram-based fluency metric rather than one based on a more complete language model, are puzzling and should be justified.\n2. The authors report primarily comparisons to ablations of their own model, not to other recent work in dialog systems.\n3. The human evaluation performance of a simple reranking ensemble between the authors' generation-based model and their retrieval-based model is significantly higher than multi-seq2seq, suggesting that multi-seq2seq may not be an especially powerful way to combine information from the two models.\n4. The authors present only very limited examples (in Table 4), and one of the two multi-seq2seq examples in that table is relatively nonsensical. When the original examples are non-English, papers should also include the original in addition to a translation.", "The approach involves multiple steps.\nAt a high level, the query is first used to retrieve the k best-matching response candidates. Then a concatenation of the query and the candidates is fed into a generative model to generate an additional artificial candidate.\nIn a final step, the k+1 candidates are re-ranked to produce the final response.\nEach of these steps involves careful engineering, and for each there are some minor novel components.\nYet not all of the steps are presented in complete technical detail.\nAlso, the training corpora and human labeling of the test data do not seem to be publicly available.\nConsequently, it would be hard to exactly reproduce the results of the paper.\nThe experimental validation is also relatively thin.\nWhile the paper reports both BLEU metrics and Fleiss' kappa from a small-scale human test, the results are based on a single split of a single corpus into training, validation and test data.\nWhile the results for the ensemble are reported to be higher than for the various components on almost all metrics, measures of spread/variance would allow the reader to better judge the degree and significance of improvement.\n\nMinor:\nThe paper should be read by a native speaker, as it involves a number of minor grammar issues and typos.\n" ]
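The three-stage retrieve-generate-rerank pipeline these reviews describe can be summarized in a schematic Python sketch. All function names here are stand-ins we introduce for illustration; they are not the paper's API.

# Schematic sketch of the ensemble described in the reviews above;
# retrieve/generate/score are hypothetical stand-ins for the paper's modules.
def ensemble_reply(query, retrieve, generate, score, k=2):
    candidates = retrieve(query, k)                 # k retrieved replies
    # the multi-seq2seq generator conditions on the query AND the candidates
    candidates.append(generate(query, candidates))
    # the reranker (a GBDT in the paper) picks the final reply
    return max(candidates, key=lambda reply: score(query, reply))

# Toy usage with trivial stand-ins:
print(ensemble_reply(
    "where are you going?",
    retrieve=lambda q, k: ["to the park", "home"],
    generate=lambda q, cands: "i'm going to the park",
    score=lambda q, r: len(r),
))  # -> "i'm going to the park"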
[ 5, 5, 6 ]
[ 3, 3, 3 ]
[ "iclr_2018_Sk03Yi10Z", "iclr_2018_Sk03Yi10Z", "iclr_2018_Sk03Yi10Z" ]
iclr_2018_rkaqxm-0b
Neural Compositional Denotational Semantics for Question Answering
Answering compositional questions requiring multi-step reasoning is challenging for current models. We introduce an end-to-end differentiable model for interpreting questions, which is inspired by formal approaches to semantics. Each span of text is represented by a denotation in a knowledge graph, together with a vector that captures ungrounded aspects of meaning. Learned composition modules recursively combine constituents, culminating in a grounding for the complete sentence which is an answer to the question. For example, to interpret ‘not green’, the model will represent ‘green’ as a set of entities, ‘not’ as a trainable ungrounded vector, and then use this vector to parametrize a composition function to perform a complement operation. For each sentence, we build a parse chart subsuming all possible parses, allowing the model to jointly learn both the composition operators and output structure by gradient descent. We show the model can learn to represent a variety of challenging semantic operators, such as quantifiers, negation, disjunctions and composed relations on a synthetic question answering task. The model also generalizes well to longer sentences than seen in its training data, in contrast to LSTM and RelNet baselines. We will release our code.
rejected-papers
This paper presents a neural compositional model for visual question answering. The overall idea may be exciting but the committee agrees with the evaluation of Reviewer 1: the experimental section is a bit thin and it only evaluates against an artificial dataset for visual QA that does not really need a knowledge base. It would have been better to evaluate on more traditional question answering settings where the answer can be retrieved from a knowledge base (WebQuestions, Free917, etc.), and then compare with state of the art on those.
train
[ "B1uoZsYlM", "SyvmULHVf", "BJ-RO0meG", "ryx2q7_eG", "S1J7L_TQf", "rJsg8_TXf", "HkBCBuamf", "H1qKB_6Xf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper describes an end to end differentiable model to answer questions based on a knowledge base. They learn the composition modules which combine representations for parts of the question to generate a representation of the whole question. \n\nMy major complaint is the evaluation on a synthetically generated data set. Given the method of generating the data, it was not a surprise that the method which leverages hierarchical structure can do better than other methods which do not leverage that. I will be convinced if evaluation can be done on a real data set. \n\nMinor complaints: \n\nThe paper does not compare to NMN, or a standard semantic parser. I understand that all other methods will use a predefined set of predicates, but its still worthwhile to see how much we loose when trying to learn predicates from scratch.\n\nThe paper mentions that they enumerate all parses. That is true only if the groundings are not considered part of the parse. They actually enumerate all parses based on types, and then find the right groundings for the best parse. This two step inference is an approximation, which should be mentioned somewhere.\n\nResponse to rebuttal: \n\nI agree that current data sets have minimal compositionality, and that \"if existing models cannot handle the synthetic data, they will not handle real data\". However, its not clear that your method will be better than the alternatives when you move to real data. Also, some work on CLEVR had some questions collected from humans, maybe you can try to evaluate on that. I am going to keep my rating the same. ", "Thanks for rebuttal! My concern is that:\n\n1. The underlying representations in the KB version of the dataset are already so clean that the model can't be claimed to be \"learning from scratch\" in any meaningful sense. At the very least, the problem of lining up a word with the particular one-hot vector that picks out a feature is no more interesting on the surface than the problem of lining up the word with a discrete semantic token.\n\n2. I absolutely agree that there are \"significant challenges in properly formalizing language in terms of logic\"; the problem is that these problems don't actually show up in this dataset!\n\nSo minimally, if you're going to use this dataset I think you really have to compare to a regular semantic parser (it would be fine to run UBL or Cornell-SPF out of the box). But it would be even better to use a dataset with real natural language even if you're going to stick to structured world representations.\n\nI'm leaving my score as-is for now, but I think this paper is close to ready.", "This paper presents a model for visual question answering that can learn both\nparameters and structure predictors for a modular neural network, without\nsupervised structures or assistance from a syntactic parser. Previous approaches\nfor question answering with module networks can (at best) make a hard choice\namong a small number of structures. By contrast, this approach computes a\nbottom-up approximation to the softmax over all possible tree-shaped network\nlayouts using a CKY-style dynamic program. On a slightly modified set of\nstructured scene representations from the CLEVR dataset, this approach\noutperforms two LSTM baselines with incomplete information, as well as an\nimplementation of Relation Networks.\n\nI think the core technical idea here is really exciting! 
But the experimental validation of the approach is a bit thin, and I'm not ready to accept the paper in its current form.\n\nRELATED WORK / POSITIONING\n\nThe title & first couple of paragraphs in the intro suggest that the denotational interpretation of the representations computed by the modules is one of the main contributions of this work. It's worth pointing out that the connection between formal semantics and these kinds of locally-normalized \"attentions\" to entities was already made in the cited NMN papers. Meanwhile, recent work by Johnson et al. and Perez et al. has found that explicitly attentional / denotational models are not necessarily helpful for the CLEVR dataset.\n\nIf the current paper really wants to make denotational semantics part of the core claim, I think it would help to talk about the representational implications in more detail---what kinds of things can and can't you model once you've committed to set-like bottlenecks between modules? Are there things we expect this approach to do better than something more free-form (a la Johnson)? Can you provide experimental evidence of this?\n\nAt the same time, one of the things that's really nice about the structure-selection part of this model is that it doesn't care what kind of messages the modules send to each other! It might be just as effective to focus on the dynamic programming aspect and not worry so much about the semantics of individual modules.\n\nMODELING\n\nFigure 2 is great. It would be nice to have a little bit of discussion about the motivation for these particular modeling implementations---some are basically the same as in Hu et al. (2017), but obviously the type system here is richer and it might be helpful to highlight some of the extra things it can do.\n\nThe phrase-type semantic potential seems underpowered relative to the rest of the model---is it really making decisions on the basis of 6 sparse features for every (span, type) pair, with no score for the identity of the rule (t_1, t_2 -> t)? What happens if you use biRNN representations of each anchored token, rather than the bare token alone? (This is standard in syntactic parsing these days.) If you tried richer things and found that they didn't help, you should show ablation experiments.\n\nEXPERIMENTS\n\nAs mentioned above, I think this is the only really disappointing piece of this paper. As far as I know, nobody else has actually worked with the structured KBs in CLEVR---the whole point of the dataset (and VQA, and the various other recent question answering datasets) is to get away from requiring structured knowledge bases. The present experiments involve both fake language data and fake, structured world representations, so it's not clear how much we should trust the proposed approach to generalize to real tasks.\n\nWe know that more traditional semantic parsing approaches with real logical forms are capable of getting excellent accuracy on structured QA tasks with a lot more complexity and less data than this one. I think fairness really requires a comparison to an approach for semantic parsing with denotations. \n\nBut more importantly, why not just run on images? Results on VQA, CLEVR, and NLVR (even if they're not all state of the art!) 
would make this paper much more convincing.", "This paper proposes training a question answering model from answers only and a KB, by learning latent trees that capture the syntax and learn the semantics of words, including referential terms like \"red\" and also compositional operators like \"not\".\n\nI think this model is elegant, beautiful and timely. The authors do a good job of explaining it clearly. I like the modules of composition, which seem to make very intuitive sense for the \"algebra\" that is required, and the parsing algorithm is clean. \n\nHowever, I think that the evaluation is lacking, and in some sense the model exposes the weakness of the dataset that it uses for evaluation.\n\nI have 2.5 major issues with the paper and a few minor comments: \n\nParsing:\n\n* The authors don't really say what the base case for \\Psi that scores tokens is (unless I missed it; if it is indeed missing, it really needs to be added) and only provide the recursive case. From that I understand that the only features they use are whether a certain word makes sense in a certain position of the rule application in the context of the question. While these features are based on Durrett et al.'s neural syntactic parser, it seems like a pretty weak signal to learn from. This makes me wonder: how does the parser learn whether one parse is better than another? Only based on this signal? It makes me suspicious that the distribution of language is not very ambiguous, and that as long as you can construct a tree in some context you can do it in almost any other context. This is probably due to the fact that the CLEVR dataset was generated mostly using templates and does not really consist of natural utterances produced by people. Of course, many people have published on CLEVR despite its language limitations, but I was a bit surprised that only these features are enough to solve the problem completely, and this makes me curious as to how hard it is to reverse-engineer the way the language was generated with a context-free mechanism that is similar to how the data was produced.\n\n* Related to that is that the decision for a score of a certain type t for a span (i,j) is the sum over all possible rule applications, rather than a max, which again means that there is no competition between different parse trees that result in the same type for a single span. Can the authors say something about what the parser learns? Does it learn to extract clear parse trees from the noise? What is the distribution of rules in those sums? Is there some rule that is usually preferred over others? It seems like there is a loss of information in the sum, and it is unclear what its effect is in the paper.\n\nEvaluation:\n\n* Related to that is indeed the fact that they use CLEVR only. There is now the Cornell NLVR dataset, which is more challenging from a language perspective, and it would be great to have an evaluation there as well. Also, the authors only compare to 3 baselines, where 2 don't even see the entire KB, so the only \"real\" baseline is Relation Net. The authors indeed state that it is state-of-the-art on CLEVR. \n\n* It is worth noting that Relation Net is reported to get 95.5 accuracy while the authors get 89.4. They use a subset, so this might be the reason, but I am not sure how they compared to Relation Net exactly. Did they re-tune parameters once they had the new dataset? 
This could make a difference in the final accuracy and cause an unfair advantage.\n\n* I would really appreciate more analysis of the trees that one gets. Are sub-trees interpretable? Can one trace the process of composition? It would be really nice if one could do that. The authors show a figure of a purported tree, but where does this tree come from? From the model? From the authors?\n\nScalability:\n* How much of a problem would it be to scale this? Will this work in larger domains? It seems they compute an attention score over every entity and also over a matrix that is quadratic in the number of entities. So it seems that if the number of entities is large, that could be very problematic. Once one moves to larger KBs, it might become hard to maintain full differentiability, which is one of the main selling points of the paper. \n\nMinor comments:\n* I think the phrase \"attention\" is a bit confusing - I thought of a distribution over entities at first. \n* The feature function is not written super clearly, I think - perhaps clarify in the text a bit more what it does.\n* I did not get how the denotation that is based on a specific rule application t_1 + t_2 --> t works. Is it by looking at the grounding that is the result of that rule application?\n* The authors say that the Neural Enquirer and Neural Symbolic Machines produce flat programs - that is not really true; the programs are just a linearized form of a tree, so there is nothing very flat about them in my opinion.\n\nOverall, I really enjoyed reading the paper, but I was left wondering whether the fact that it works so well mostly attests to the way the data was generated, and I am still wondering how easy it would be to make this work for more natural language or when the KB is large.\n\n\n", "We would like to thank all the reviewers for their thoughtful comments and suggestions. We’re glad that they think that “this model is elegant, beautiful and timely” and that the “core technical idea here is really exciting!”\n\nThe major concern raised by all the reviewers is the choice of evaluation dataset. We respectfully suggest that some of the comments are judging the evaluation with respect to claims that we are not making. In our evaluation, we aim to show that the model can simultaneously learn structure and interpretation to perform many-hop reasoning, and that it shows better compositional generalization than alternatives such as LSTMs and RelNets. While it is certainly true that using human language would pose different challenges (primarily due to greater diversity in the language), existing datasets are dominated by simpler questions that do not require the multistep reasoning we focus on. If existing models cannot handle the reasoning involved in the synthetic data we evaluate on, then there is no reason to think they could deal with the additional complexity of human language.\n", "We thank the reviewer for their helpful review.\nWe address the issue regarding the evaluation dataset in a separate official comment.\nWe would like to clarify that we do not follow a two-step procedure and instead compute groundings for each separate parse. The final answer / grounding at the root is the weighted average of groundings from all possible parses. 
The feedback from the correct answer at the root encourages the correct grounding and hence the correct parse.\n", "We thank the reviewer for their helpful review, which will allow us to improve a number of points in the paper.\n\nParsing:\n* The base case for \\Psi is the semantic type distribution for each word computed in Eq. 1. In the updated version we have made this explicitly clear.\nFor simplicity, we use a simple feature-based parsing model, although similar features can achieve very good performance on the Penn Treebank. We found that learning with these features was more stable than an RNN parsing model. \n* The score of a certain type t for a span (i,j) is indeed the sum over all possible rule applications, but there is still competition between different parse trees that result in the same type for a single span. This competition arises because the different parse trees result in different denotations (groundings) for this type over the span (see the chart sketch after these review threads). Feedback from the root of the parse tree encourages the correct grounding and hence the correct parse structure. \n\nEvaluation\n* We retrained the RelNet model on the new dataset, and carefully tuned the hyper-parameters for the RelNet model on validation data.\n* The trees computed by the model are completely interpretable. For each subtree, we can see exactly how the different compositions score, which allows us to completely trace the composition. The tree shown in Fig. 1 is the highest-scoring tree (mode) from the learned model.\n\nScalability\n* As you suggest, there would be challenges in scaling the approach to large knowledge graphs, and it would require further work to be efficient. As the number of entities grows to an intractable size, KNN search, beam search, feature hashing and parallelization techniques can be explored to make the model tractable. Such techniques are fairly commonly used in large-scale KG QA.\n\nMinor Comments\n* If accepted, we will make this clearer in the camera-ready version.\n* The denotation of a particular rule application t_1 + t_2 → t is indeed the resulting grounding from the module application.\n", "We thank the reviewer for their helpful review.\nWe address the issue regarding the evaluation dataset in a separate official comment.\n\nTo clarify, we do not see this work as a VQA model, but as a model for answering questions on knowledge graphs (hence why we don’t run on images). KG question answering is an important task in its own right, so we don’t see the use of a KG as a fake approximation of images. The choice of the CLEVR-style dataset may have been confusing here.\n\nAs you say, traditional semantic parsing approaches with hardcoded logical operators would likely work well on this data. However, we are interested in the extent to which these operators can be learnt from scratch with minimal prior knowledge. Also, there are very significant challenges in properly formalizing language in terms of logic, and our work offers a direction for circumventing these issues while still retaining many of the attractive properties of compositional semantics.\n\nIn terms of representational power, the use of additional ‘ungrounded’ vectors helps avoid the limitations of set-like bottlenecks, by giving the model another mechanism for passing information. As a major advantage compared to fully free-form sentence representations, our model offers better generalization to longer questions by having compositionality built in (and also gives a more interpretable output). 
Compared to Johnson et al., we can learn end-to-end without pre-annotated programs.\n\nRegarding the phrase semantic type potential, we identified a typographical error in the paper and have corrected it (Eq. 5). The feature function does indeed take into account the identity of the module. Parsing on this data is relatively straightforward, so we did not see additional gains from using RNN models.\n" ]
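The sum-over-rule-applications chart that the reviewer and authors discuss can be sketched in Python. This is a minimal illustration under our own simplifying assumptions (scalar weights stand in for denotation vectors, and all names are hypothetical); it is not the paper's implementation.

# CKY-style chart: the weight of type t over span (i, j) SUMS over all
# split points and rule applications (t1, t2 -> t), so every parse
# contributes (no max), matching the discussion above.
from collections import defaultdict

def build_chart(n, word_scores, rules, combine_score):
    """word_scores[i]: dict of type -> score for token i (the base case).
    rules: list of (t1, t2, t) productions.
    combine_score(t1, t2, t): scalar potential of applying the rule."""
    chart = defaultdict(float)  # (i, j, t) -> summed weight
    for i, scores in enumerate(word_scores):
        for t, s in scores.items():
            chart[(i, i + 1, t)] = s
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            for k in range(i + 1, j):                # split point
                for t1, t2, t in rules:
                    left, right = chart[(i, k, t1)], chart[(k, j, t2)]
                    if left and right:
                        chart[(i, j, t)] += left * right * combine_score(t1, t2, t)
    return chart

# Toy usage: two tokens of type E, one rule E + E -> E.
chart = build_chart(2, [{"E": 1.0}, {"E": 0.5}], [("E", "E", "E")],
                    lambda t1, t2, t: 2.0)
print(chart[(0, 2, "E")])  # 1.0 * 0.5 * 2.0 = 1.0

In the paper's actual model, each chart cell would additionally carry a denotation (grounding), and the root's answer is the weighted average of groundings over all parses, as the authors describe.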
[ 4, -1, 5, 7, -1, -1, -1, -1 ]
[ 4, -1, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_rkaqxm-0b", "H1qKB_6Xf", "iclr_2018_rkaqxm-0b", "iclr_2018_rkaqxm-0b", "iclr_2018_rkaqxm-0b", "B1uoZsYlM", "ryx2q7_eG", "BJ-RO0meG" ]
iclr_2018_r1kNDlbCb
Learning to Encode Text as Human-Readable Summaries using Generative Adversarial Networks
Auto-encoders compress input data into a latent-space representation and reconstruct the original data from the representation. This latent representation is not easily interpreted by humans. In this paper, we propose training an auto-encoder that encodes input text into human-readable sentences. The auto-encoder is composed of a generator and a reconstructor. The generator encodes the input text into a shorter word sequence, and the reconstructor recovers the generator input from the generator output. To make the generator output human-readable, a discriminator restricts the output of the generator to resemble human-written sentences. By taking the generator output as the summary of the input text, abstractive summarization is achieved without document-summary pairs as training data. Promising results are shown on both English and Chinese corpora.
rejected-papers
As expressed by most reviewers, the idea of the paper is interesting: using summarization as an intermediate representation for an auto-encoder. In addition, a GAN is used on the generator output to encourage the output to look like summaries; only unpaired summaries are needed. Even if the idea is interesting, from the committee's perspective, important baselines are missing in the experimental section: why would one choose this method if it is not competitive with other work proposed in this vein? One reviewer brings up the point that the method is significantly worse than a supervised baseline. Moreover, the authors mention the work of Miao and Blunsom, but could have used one of their experimental setups to show that, at least in the semi-supervised scenario, this work empirically performs as well as or better than that baseline.
train
[ "B1JR9zaVz", "SyrgB9UEz", "HkSpi4cgz", "S1ms6cqxM", "B1_R9Digf", "Bkfuk3L7f", "H1HWq58Qz", "HyJUmo8XG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thank you for reading the paper again and giving us comment. We will improve the writing of later sections. If we want to apply dual learning in this text summarization task, the training is not only on “source text -> summary -> source text”, but also on “summary -> source text -> summary”. In the “source text -> summary -> source text” path, reconstructor (summary -> source text) produces source text with teacher forcing because the source text is known. However, in the “summary -> source text -> summary” path, it’s difficult for reconstructor to produce source text from summaries without teacher-forcing (due to the unsupervised update) since source text is long. Hence, we do not consider this baseline in the first place. But if possible to modify the paper in the future, we will compare duel learning with our results on semi-supervised training.", "Thank you for revising the paper. It is easier to read now, though later sections still seem less edited than the beginning.\n\nFor the semisupervised experiments a more appropriate baseline would be a likelihood-based equivalent of your technique, e.g. the \"dual\" training by He et al. 2016 in \"Dual Learning for Machine Translation\".", "This paper proposes a model for generating long text strings given shorter text strings, and for inferring suitable short text strings given longer strings. Intuitively, the inference step acts as a sort of abstractive summarization. The general gist of this paper is to take the idea from \"Language as a Latent Variable\" by Miao et al., and then change it from a VAE to an adversarial autoencoder. The authors should cite \"Adversarial Autoencoders\" by Makzhani et al. (ICLR 2016).\n\nThe experiment details are a bit murky, and seem to involve many ad-hoc decisions regarding preprocessing and dataset management. The vocabulary is surprisingly small. The reconstruction cost is not precisely explained, though I assume it's a teacher-forced conditional log-likelihood (conditioned on the \"summary\" sequence). The description of baselines for REINFORCE is a bit strange -- e.g., annealing a constant in the baseline may affect variance of the gradient estimator, but the estimator is still unbiased and shouldn't significantly impact exploration. Similar issues are present in the \"Self-critical...\" paper by Rennie et al. though, so this point isn't a big deal.\n\nThe results look decent, but I would be more impressed if the authors could show some benefit relative to the supervised model, e.g. in a reasonable semisupervised setting. Overall, the paper covers an interesting topic but could use extra editing to clarify details of the model and training procedure, and could use some redesign of the experiments to minimize the number of arbitrary (or arbitrary-seeming) decisions.", "Summary: In this work, the authors propose a text reconstructing auto encoder which takes a sentence as the input sequence and an integrated text generator generates another version of the input text while a reconstructor determines how well this generated text reconstructs the original input sequence. The input to the discriminator (as real data) is a sentence that summarizes the ground truth sentences (rather than the ground truth sentences themselves). 
The experiments are conducted on two datasets of English and Chinese corpora.\n\nStrengths:\nThe proposed idea of generating text using summary sentences is new.\nThe model overview in Figure 1 is informative.\nThe experiments are conducted on English and Chinese corpora, and comparisons with competitive baselines are provided.\n\nWeaknesses:\nThe paper is poorly written, which makes it difficult to understand. The second paragraph in the introduction is quite cryptic. Even after reading the entire paper a couple of times, it is not clear how the summary text is obtained, e.g. do the authors ask annotators to read sentences and summarize them? If so, based on which criteria do the annotators summarize text, and how many annotators are there? Similarly, if so, this would mean that the authors use additional supervision compared to the other models. Please clarify how the summary text is obtained.\n\nIn footnote 1, the authors mention the term \"seq2seq2seq2\", which they do not explain anywhere in the text.\n\nNo experiments that generate raw text (without using summaries) are provided. It would be interesting to see whether the GAN learns to memorize the ground truth sentences or generates sentences with enough variation. \n\nOn the English Gigaword dataset the results consistently drop compared to WGAN. This behavior is observed for both the unsupervised setting and two versions of transfer learning settings. There are too few qualitative results: one positive qualitative result is provided in Figure 3 and one negative qualitative result is provided in Figure 4. Therefore, it is not easy for the reader to judge the behavior of the model well. \n\nThe choice of the evaluation metric is not well motivated. The standard measures in the literature also include METEOR, CIDEr and SPICE. It would be interesting to see how the proposed model performs on these additional criteria. Moreover, the results are not sufficiently discussed. \n\nAs a general remark, although the idea presented in this paper is interesting, both in terms of writing and evaluation this paper has not yet reached the maturity expected of an ICLR paper. Regarding the writing, the definite and indefinite articles are sometimes missing and sometimes overused; similarly, most of the time there is a singular/plural mismatch. This makes the paper very difficult to read. Often the reader needs to guess what is actually meant. Regarding the experiments, presenting results with multiple evaluation criteria and showing more qualitative results would improve the exposition.\n\nMinor comments:\nPage 5: real or false —> real or fake (true or false)\n\t the lower loss it get —> ?", "TL;DR of paper: Generating summaries by using summaries as an intermediate representation for autoencoding the document. An encoder reads in the document to condition the generator, which outputs a summary. The summary is then used to condition the decoder, which is trained to output the original document. An additional GAN loss is used on the generator output to encourage the output to look like summaries -- this procedure only requires unpaired summaries. The results are that this procedure improves upon the trivial baseline but still significantly underperforms supervised training.\n\nThis paper builds upon two recent trends: (a) cycle consistency, where f(g(x)) = x, which only requires unpaired data (i.e., CycleGAN), and (b) encoder-decoder models with a sequential latent representation (i.e., \"Language as a latent variable\" by Miao and Blunsom). 
A similar idea has also been explored by He et al. 2016 in \"Dual Learning for Machine Translation\". Both CycleGAN and He et al. 2016 are not cited. The key difference between this paper and He et al. 2016 is the use of GANs, so only unpaired summaries are needed.\n\nThe idea is a simple but useful extension of these previous works. The problem set-up of unpaired summarization is not particularly compelling, since summaries are typically found paired with their original documents. It would be more interesting to see how well it can be used for other textual domains such as translation, where a lot of unpaired data exists (some other submissions to ICLR tackle this problem). Unsurprisingly, the proposed method requires a lot of twiddling to make it work since GANs, REINFORCE, and pretraining are necessary.\n\nA key baseline that is missing is pretraining the generator as a language model over summaries. The pretraining baseline in the paper is over predicting the next sentence / reordering, but this is an unfair comparison since the next-sentence baseline never sees summaries over the course of training. Without this baseline, it is hard to tell whether GAN training is even useful. Another missing experiment is seeing whether joint supervised-GAN-reconstruction training can outperform purely supervised training. What is the performance of the joint training as the size of the supervised dataset is varied?\n\nThis paper has numerous grammatical and spelling errors throughout (worse, the same errors are copy-pasted everywhere). Please spend more time editing the paper.\n\n", "We really appreciate your comments and suggestions. The paper has been carefully revised. All the page numbers below refer to the revised version.\n\n1. We have cited “Adversarial Autoencoders” by Makhzani et al.\n\n2. Data Preprocessing:\nIn the Chinese Gigaword corpus, the arbitrary decisions regarding data preprocessing aim to filter out some bad training examples. We conducted all experiments, including the baseline experiments, on the same set of pre-processed data. Hence, we can still compare our model to the baseline models. In the English Gigaword corpus, we simply used the training set pre-processed by previous work (A Neural Attention Model for Abstractive Sentence Summarization by Rush et al. 2015) and did not do any further preprocessing. \n\n3. Vocabulary size:\nThe reason the vocabulary size in the Chinese corpus is extremely small (4K) is that the text unit we use is the Chinese character instead of the Chinese word. In the English corpus, the vocabulary size we used is 15K, which is in a reasonable range.\n\n4. Reconstruction cost: \nThe reconstruction cost is conditioned on the generated summary sequence and is teacher-forced with the source text. We added more description to clarify this. Please refer to the first 4 lines of P4.\n\n5. Semi-supervised training:\nIn semi-supervised training, we first pre-trained the generator with a small amount of labeled data. Then, we conducted teacher forcing with labeled data every few unsupervised training steps. We evaluated the performance of our model with regard to the number of labeled examples. It’s worth mentioning that in the English Gigaword corpus, with only 100K labeled examples, semi-supervised training even slightly outperforms supervised training with the full labeled data. Please refer to the results in Figure 6 (P10). Furthermore, we also discussed the performance of our proposed adversarial REINFORCE in Section 7.5 (P10) with regard to the number of labeled examples in semi-supervised learning. \n\n6. 
Issues of ad-hoc decisions:\nWe found that in the semi-supervised scenario, if we pre-train the generator with a small amount of labeled data, the ad-hoc decisions regarding generator pre-training are not necessary. However, in the completely unsupervised setting, we have not yet come up with a proper method to prevent ad-hoc decisions in generator pre-training.\n\n7. Clarification of details of the model and training procedure:\nWe have made some extra edits to clarify the details of the model and training. The details are provided in Section 6 (P6).", "Thank you for giving us some helpful suggestions. In reply to your comments, the paper has been carefully revised. All the page numbers below refer to the revised version. \n\nWe made the following modifications to the paper: \n1. We have cited the CycleGAN and He et al. 2016 papers mentioned in your comment.\n\n2. Problem setup: \nIt is true that in the news domain, it is relatively easy to find document-summary pairs because people usually consider the news titles as summaries. However, for domains like lecture recordings, we think collecting labeled data is not trivial. Therefore, it is worth studying unsupervised abstractive summarization. In this paper, we still conduct the experiments on the news domain because the ground truth is available for evaluation. In the future, we can extend to other domains in which collecting labeled data is challenging. \n\n3. Pre-training the generator as a language model over summaries:\nThe model architecture of the generator is a hybrid pointer network in which the decoder selects part of the words from the generator input text. Hence, it’s difficult to train the generator as a language model of summaries without input text. We came up with another method that solves this problem. Given a set of unpaired documents and summaries, we used an unsupervised approach to match each document with its most relevant summary. We represented each document and each summary as tf-idf (term frequency–inverse document frequency) vectors. Each document is matched to the summary whose vector has the largest cosine similarity with the document vector. \nWe further used the retrieved paired data to train the generator and regarded its performance as a baseline. With this method, the generator can be roughly initialized with a language model of summaries. The ROUGE scores obtained with this approach are shown in row (B-2) of Table 2 (P8). Then we further improved the generator pre-trained in this way with the proposed unsupervised approach. However, with the generator pre-trained by this method, we did not obtain results better than the ones in Table 2.\n\n4. Semi-supervised training:\nIn semi-supervised training, we first pre-trained the generator with a small amount of labeled data. Then, we conducted teacher forcing with labeled data every few unsupervised training steps. We evaluated the performance of our model with regard to the number of labeled examples. It’s worth mentioning that in the English Gigaword corpus, with only 100K labeled examples, semi-supervised training even slightly outperforms supervised training with the full labeled data. Please refer to the results in Figure 6 (P10) and Appendix B (P14). Furthermore, we also discussed the performance of our proposed adversarial REINFORCE in Section 7.5 (P10) with regard to the number of labeled examples in semi-supervised learning. \n\n5. Writing:\nBecause none of the authors are native English speakers, we hired a native English speaker with a Computer Science PhD 
to help us polish the English writing.", "Thank you for your thorough reading of the paper and for pointing out its defects. The paper has been carefully revised. All the page numbers below refer to the revised version.\n\nMajor revisions:\n1. Writing:\nWe acknowledge that there were some defects in the original version of the paper that made it difficult for readers to understand. We have revised the grammatical errors in the paper. Because none of the authors are native English speakers, we hired a native English speaker with a Computer Science PhD to help us polish the English writing.\n\n2. How to obtain summaries:\nThe documents used in the study are news articles. The titles of the documents are considered as the summaries. This is a typical setup in the study of summarization. We added footnote 1 with the above description (P2).\n\n3. Clarification of the core idea:\nWe are very sorry that the original version of the introduction was misleading. The purpose of the work is to generate summaries from an article, not to exploit summaries to better generate articles. First, instead of encoding a sentence into another version of the sentence, our text autoencoder encodes long text into short text, while the reconstructor tries to reconstruct the long text from the encoded short text. The discriminator regularizes the latent representations encoded by the encoder (the generator in our paper) to be human-readable summaries. The short text encoded by the generator can be considered a summary of the long text, and thus unsupervised text summarization is achieved. The discriminator can use any human-written sentences as real data. Hence, there is no need for human annotators.\nIn the real implementation, instead of using general human-written sentences, we use the sentences from the titles of the documents as real data for the discriminator for better performance. However, the titles do not have to be paired with the training documents (for example, in Section 7.3 (P8), we can use documents from Gigaword and titles from CNN/Daily Mail), so the training is unsupervised.\nWe have re-written the introduction, especially the second paragraph. We also added an overview figure to clearly describe the basic idea. Please refer to Fig. 1 (P2).\n\nFor specific points:\n1. “seq2seq2seq” in footnote: \nIn the typical seq2seq model, the input sequence is compressed into a vector and then decoded back into another sequence. In our model, the input long sequence is first compressed into a shorter sequence, and the model uses the short sequence to generate the long sequence. Hence, we called it a “seq2seq2seq” model. This footnote is removed from the revised version.\n\n2. Experiments about text generation:\nThe target of this work is to generate short text as a summary of the input document instead of generating raw text. The generator never sees the summaries of the documents, so it cannot memorize the summaries.\n\n3. Comparison to original WGAN:\nAs mentioned in your review, in English Gigaword, compared to WGAN, the performance of our proposed adversarial REINFORCE consistently drops both in unsupervised learning and transfer learning. However, after conducting semi-supervised training experiments, we found that with more labeled data available, adversarial REINFORCE is better than WGAN. We compared the performance of the two models with regard to the amount of available labeled data. Please find the results in Fig. 6 (P9). The full discussion of the two models is in Section 7.5 (P10). We also found that the self-critic in Section 5.2.2 is helpful. The results are shown in Table 3 (P10).\n\n4. 
More examples:\nTo help readers better judge the proposed model, besides Fig. 3 and 4, we have more results in the appendix. Please refer to Fig. 7 to 12 (P15-17).\n\n5. Evaluation metric:\nWe know that ROUGE is not a perfect evaluation metric for summarization, but ROUGE is widely used to evaluate generated summaries. In previous work (A Neural Attention Model for Abstractive Sentence Summarization, Rush et al. 2015; Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond, Nallapati et al. 2016; Abstractive Sentence Summarization with Attentive Recurrent Neural Networks, Chopra et al. 2016), ROUGE is the only major evaluation measure used to evaluate the quality of the summaries." ]
[ -1, -1, 5, 4, 6, -1, -1, -1 ]
[ -1, -1, 4, 4, 4, -1, -1, -1 ]
[ "SyrgB9UEz", "Bkfuk3L7f", "iclr_2018_r1kNDlbCb", "iclr_2018_r1kNDlbCb", "iclr_2018_r1kNDlbCb", "HkSpi4cgz", "B1_R9Digf", "S1ms6cqxM" ]
iclr_2018_Hy3MvSlRW
Adversarial reading networks for machine comprehension
Machine reading has recently shown remarkable progress thanks to differentiable reasoning models. In this context, End-to-End trainable Memory Networks (MemN2N) have demonstrated promising performance on simple natural language based reasoning tasks such as factual reasoning and basic deduction. However, the task of machine comprehension is currently bounded to a supervised setting and available question answering datasets. In this paper we explore the paradigm of adversarial learning and self-play for the task of machine reading comprehension. Inspired by the successful propositions in the domain of game learning, we present a novel approach of training for this task that is based on the definition of a coupled attention-based memory model. On one hand, a reader network is in charge of finding answers regarding a passage of text and a question. On the other hand, a narrator network is in charge of obfuscating spans of text in order to minimize the probability of success of the reader. We experimented with the model on several question-answering corpora. The proposed learning paradigm and associated models present encouraging results.
rejected-papers
The paper presents an adversarial learning framework for reading comprehension. Although the idea is interesting and presents an approach that ideally would make reading comprehension approaches more robust, the results are not sufficiently solid compared to other baselines (see reviewer 3's comments) to warrant acceptance. Comments from reviewer 2 are also noteworthy: they mention that adversarial perturbations to the context around an answer can alter the facts in the context, thus destroying the actual information present there, and the rebuttal does not seem to resolve this concern. Addressing these issues will strengthen the paper for a potential future venue.
train
[ "HynOT_5Jz", "BJggQbceG", "Sy0AiMnef" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper aims to improve the accuracy of reading model on question answering dataset by playing against an adversarial agent (which is called narrator by the authors) that \"obfuscates\" the document, i.e. changing words in the document. The authors mention that word dropout can be considered as its special case which randomly drops words without any prior. Then the authors claim that smartly choosing the words to drop can make a stronger adversarial agent, which in turn would improve the performance of the reader as well. Hence the adversarial agent is trained and is architecturally similar to the reader but just has a different last layer, which predicts the word that would make the reader fail if the word is obfuscated.\n\nI think the idea is interesting and novel. While there have been numerous GAN-like approaches for language understanding, very few, if any, have shown worthy results. So if this works, it could be an impactful achievement. \n\nHowever, I am concerned with the experimental results.\n\nFirst, CBT: NE and CN numbers are too low. Even a pure LSTM achieves (no attention, no memory) 44% and 45%, respectively (Yu et al., 2017). These are 9% and 6% higher than the reported numbers for adversarial GMemN2N. So it is very difficult to determine if the model is appropriate for the dataset in the first place, and whether the gain from the non-adversarial setting is due to the adversarial setup or not.\n\nSecond, Cambridge dialogs: the dataset's metric is not accuracy-based (while the paper reports accuracy), so I assume some preprocessing and altering have been done on the dataset. So there is no baseline to compare. Though I understand that the point of the paper is the improvement via the adversarial setting, it is hard to gauge how good the numbers are.\n\nThird, TripAdvisor: the dataset paper by Wang et al. (2010) is not evaluated on accuracy (rather on ranking, etc.). Did you also make changes to the dataset? Again, this makes the paper less strong because there is no baseline to compare.\n\nIn short, the only comparable dataset is CBT, which has too low accuracy compared to a very simple baseline.\nIn order to improve the paper, I recommend the authors to evaluate on more common datasets and/or use more appropriate reading models.\n\n---\n\nTypos:\npage 1 first para: \"One the first hand\" -> \"On the first hand\"\npage 1 first para: \"minimize to probability\" -> \"minimize the probability\"\npage 3 first para: \"compensate\" -> \"compensated\"\npage 3 last para: \"softmaxis\" -> \"softmax is\"\npage 4 sec 2.4: \"similar to the reader\" -> \"similarly to the reader\"\npage 4 sec 2.4: \"unknow\" -> \"unknown\"\npage 4 sec 3 first para: missing reference at \"a given dialog\"\npage 5 first para: \"Concretly\" -> \"Concretely\"\nTable 1: \"GMenN2N\" -> \"GMemN2N\"\nTable 1: what is difference between \"mean\" and \"average\"?\npage 8 last para: missing reference at \"Iterative Attentive Reader\"\npage 9 sec 6.2 last para: several citations missing, e.g. which paper is by \"Tesauro\"?\n\n\n[Yu et al. 2017] Adams Wei Yu, Hongrae Kim, and Quoc V. Le. Learning to Skim Text. ACL 2017\n\n", "Summary:\n\nThis paper proposes an adversarial learning framework for machine comprehension task. Specifically, authors consider a reader network which learns to answer the question by reading the passage and a narrator network which learns to obfuscate the passage so that the reader can fail in its task. 
The authors report results on 3 different reading comprehension datasets, and the proposed learning framework improves the performance of GMemN2N.\n\n\nMy Comments:\n\nThis paper is a direct application of adversarial learning to the task of reading comprehension. It is a reasonable idea and the authors indeed show that it works.\n\n1. The paper needs a lot of editing. Please check the minor comments.\n\n2. Why is the adversary called a narrator network? It is a bit confusing because the job of that network is to obfuscate the passage.\n\n3. Why do you motivate the learning method using self-play? This is just using the idea of adversarial learning (like a GAN) and it is not related to self-play.\n\n4. In section 2, first paragraph, the authors mention that the narrator prevents catastrophic forgetting. How is this happening? Can you elaborate more?\n\n5. The learning framework is not explained in a precise way. What do you mean by re-initializing and retraining the narrator? Isn’t it costly to reinitialize the network and retrain it for every turn? How many such epochs are done? You say that the test set also contains obfuscated documents. Is it only for the validation set? Can you please explain if you use obfuscation when you report the final test performance too? It would be clearer if you could provide complete pseudo-code of the learning procedure.\n\n6. How does the narrator choose which word to obfuscate? Do you run the narrator model with all possible obfuscations and pick the best choice?\n\n7. Why don’t you treat the number of hops as a hyper-parameter and choose it based on the validation set? I would like to see the results in Table 1 where you choose the number of hops for each of the three models based on the validation set.\n\n8. In figure 2, how are rounds constructed? Does the model see the same document again and again for 100 times, or does it see a random document each time, with documents sampled with replacement? This will be clear if you provide the pseudo-code for learning.\n\n9. I do not understand the authors’ justification for Figure 3. Is it the case that the model learns to attend to the last sentences for all the questions? Or does where it attends vary across examples?\n\n10. Are you willing to release the code for reproducing the results?\n\nMinor comments:\n\nPage 1, “exploit his own decision” should be “exploit its own decision”\nOn page 2, section 2.1, the sentence starting with “Indeed, a too low percentage …” needs to be fixed.\nPage 3, “forgetting is compensate” should be “forgetting is compensated”.\nPage 4, “for one sentences” needs to be fixed.\nPage 4, “unknow” should be “unknown”.\nPage 4, “??” needs to be fixed.\nPage 5, “for the two first datasets” needs to be fixed.\nTable 1, “GMenN2N” should be “GMemN2N”. In the caption, is it mean accuracy or maximum accuracy?\nPage 6, “dataset was achieves” needs to be fixed.\nPage 7, “document by obfuscated this word” needs to be fixed.\nPage 7, “overall aspect of the two first readers” needs to be fixed.\nPage 8, last para, references need to be fixed.\nPage 9, first sentence, please check grammar.\nSection 6.2, last sentence is irrelevant.\n", "The main idea of this paper is to automate the construction of adversarial reading comprehension problems in the spirit of Jia and Liang, EMNLP 2017. In that work, a \"distractor sentence\" is manually added to a passage to superficially, but not logically, support an incorrect answer. 
It was shown that these distractor sentences largely fool existing reading comprehension systems although they do not fool human readers.\n\nThis paper replaces the manual addition of a distractor sentence with a single-word replacement, where a \"narrator\" is trained adversarially to select a replacement to fool the question answering system. This idea seems interesting but very difficult to evaluate. An adversarial word replacement may in fact destroy the factual information needed to answer the question, and there is no control for this. The performance of the question answering system in the presence of this adversarial narrator is of unclear significance, and the empirical results in the paper are very difficult to interpret. No comparisons with previous work are given (and perhaps cannot be given).\n\nA better model would be the addition of a distractor sentence, as this preserves the information in the original passage. A language model could probably be used to generate a compelling distractor. But we want the corrupted passage to have the same correct answer as the uncorrupted passage, and this is difficult to guarantee. A trained \"narrator\" could learn to actually change the correct answer." ]
[ 4, 5, 5 ]
[ 5, 5, 4 ]
[ "iclr_2018_Hy3MvSlRW", "iclr_2018_Hy3MvSlRW", "iclr_2018_Hy3MvSlRW" ]
iclr_2018_r1QZ3zbAZ
Adversarial Examples for Natural Language Classification Problems
Modern machine learning algorithms are often susceptible to adversarial examples — maliciously crafted inputs that are undetectable by humans but that fool the algorithm into producing undesirable behavior. In this work, we show that adversarial examples exist in natural language classification: we formalize the notion of an adversarial example in this setting and describe algorithms that construct such examples. Adversarial perturbations can be crafted for a wide range of tasks — including spam filtering, fake news detection, and sentiment analysis — and affect different models — convolutional and recurrent neural networks as well as linear classifiers to a lesser degree. Constructing an adversarial example involves replacing 10-30% of words in a sentence with synonyms that don’t change its meaning. Up to 90% of input examples admit adversarial perturbations; furthermore, these perturbations retain a degree of transferability across models. Our findings demonstrate the existence of vulnerabilities in machine learning systems and hint at limitations in our understanding of classification algorithms.
rejected-papers
This paper presents a way to generate adversarial examples for text classification. The method is simple -- finding semantically similar words and replacing them in sentences while maintaining a high language model score. The committee identifies weaknesses in this paper that resonate with the reviews below -- reviewer 1 suggests that the authors should closely compare with the work of Papernot et al., and the response to that suggestion is not satisfactory. Addressing such concerns would make the paper stronger for a future venue.
train
[ "Byk2NS9xf", "HJKBdUKJf", "B1CfWm1WG", "Bk7QGoEQz", "SJx-zjVXz", "SkvobiEQz", "B1dv-jEmG", "SkuiQMmff", "BkMNGG7Gz", "BJNs1wMGG", "SJKx9Lzff", "B1BAtUMMG", "ByX3tUGGM", "S1E7t8fGz", "BJC9P7fMz", "SkkTnVJbz", "rJ7D9bpC-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "author", "author", "author", "author", "public", "public", "public" ]
[ "This paper proposes a method to generate adversarial examples for text classification problems. They do this by iteratively replacing words in a sentence with words that are close in its embedding space and which cause a change in the predicted class of the text. To preserve correct grammar, they only change words that don't significantly change the probability of the sentence under a language model.\n\nThe approach seems incremental and very similar to existing work such as Papernot et. al. The paper also states in the discussion in section 5.1 that they generate adversarial examples in state-of-the-art models, however, they ignore some state of the art models entirely such as Miyato et. al.\n\nThe experiments are solely missing comparisons to existing text adversarial generation approaches such as Papernot et. al and a comparison to adversarial training for text classification in Miyato et. al which might already mitigate this attack. The experimental section also fails to describe what kind of language model is used, (what kind of trigram LM is used? A traditional (non-neural) LM? Does it use backoff?).\n\nFinally, algorithm 1 does not seem to enforce the semantic constraints in Eq. 4 despite it being mentioned in the text. This can be seen in section 4.5 where the algorithm is described as choosing words that were far in word vector space. The last sentence in section 6 is also unfounded.\n\n\nNicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z.Berkay Celik, and Ananthram Swami\nPractical Black-Box Attacks against Machine Learning.\nProceedings of the 2017 ACM Asia Conference on Computer and Communications Security\n\nTakeru Miyato, Andrew M. Dai and Ian Goodfellow\nAdversarial Training Methods for Semi-Supervised Text Classification.\nInternational Conference on Learning Representation (ICLR), 2017\n\n* I increased the score in response to the additional experiments done with Miyato et. al. However, the lack of a more extensive comparison with Papernot et. al. is still needed. The venue for that paper might not be well known but it was submitted to arXiv computer science too and the paper seems very related to this work. It's hard to say if Papernot et. al produces more perceptible samples without doing a proper comparison. I find the lack of a quantitative comparison to any existing adversarial example technique problematic.", "Nice overview of adversarial techniques in natural language classification. The paper introduces the problem of adversarial perturbations, how they are constructed and demonstrate what effect they can have on a machine learning models. \n\nThe authors study several real-world adversarial examples, such as spam filtering, sentiment analysis and fake news and use these examples to test several popular classification models in context of adversarial perturbations. \n\nTheir results demonstrate the existence of adversarial perturbations in NLP and show that several different types of errors occur (syntactic, semantic, and factual). Studying each of these errors type can help defend and improve the classification algorithms via adversarial training.\n\nPros: Good analysis on real-world examples\nCons: I was expecting more actual solutions in addition to analysis", "The paper shows that neural networks are sensitive to adversarial perturbation for a set of NLP text classifications. 
They propose constructing (model-dependent) adversarial examples by optimizing a function J (that doesn't seem defined in the paper) subject to a constraint c(x, x') < \\gamma (i.e., that the original input and adversarial input should be similar).\n\nc is composed of two constraints:\n1. || v - v' ||_2 < \\gamma_1, where v and v' are bags of embeddings for each input \n2. |log P(x') - log P(x)| < \\gamma_2, where P is a language model\n\nThe authors then show for 3 classification problems (\"Trec07p\", \"Yelp\", and \"News\") and 4 models (Naive Bayes, LSTM, word CNNs, deep char-CNNs) that the models perform considerably worse on adversarial examples than on the test set. Furthermore, to test the validity of their adversarial examples, the authors show the following:\n1. Humans achieve somewhat similar accuracy on the original and adversarial examples (8 points higher on one dataset and 8 points lower on the other two)\n2. Humans rate the writing quality of both the original and adversarial examples to be similar\n3. The adversarial examples only somewhat transfer across models\n\nMy main questions/complaints/suggestions for the paper are:\n\n-Novelty/Methodology. The paper has mediocre novelty given other recent similar papers. \n\nOne question I have is whether the generated examples are actually close to the original examples. The authors do show some examples that do look good, but do not provide any systematic study (e.g. via human annotation).\n\nThis is a key challenge in NLP (as opposed to vision, where the inputs are continuous so it is easy to perturb them and be reasonably sure that the image hasn't changed much). In NLP, however, the words are discrete, and the authors measure the difference between an original example and the adversary only in continuous space, which may not actually be a good measure of how different they are.\n\nThey do have some constraint that the fraction of changed words cannot differ by more than delta, but delta = 0.5 in the experiments, which is really large! (i.e. 50% of the words could be different according to Algorithm 1)\n\n-Writing: the function J is never mathematically defined, nor is the function c (except that it is known to be composed of the semantic/syntactic similarity constraints).\n\nThe authors talk about \"syntactic\" similarity but then propose a language model constraint. I think a better word is \"fluency\" constraint. \n\nThe results in Table 3 and Table 6 seem different; shouldn't the diagonal of Table 6 line up with the results in Table 3?\n\n-Experimental methodology (more of a question since the authors are unclear): The authors write that \"all adversarial examples are generated and evaluated on the test set\".\n\nThere are many hyperparameters in the authors' proposed approach; are these also tuned on the test set? That is unfair to the base classifier. The adversarial model should be tuned on the validation set, and then the same model should be used to generate test set examples. (The authors can even show the validation adversarial accuracy to show how/if it deviates from the test accuracy)\n\n-Lack of related work in NLP (see the anonymous comment for some examples). Even the related work in NLP that is cited, e.g. Jia and Liang 2017, is obfuscated in the last page. 
The authors' introduction only refers to related works in vision/speech and ignores related NLP work.\n\nFurthermore, adversarial perturbation is related to domain transfer (since both involve shifts between the training and test distribution) and it is well known for instance that models that are trained on Wall Street Journal perform poorly on other domains. See SJ Pan and Q Yang, A Survey on transfer learning, 2010, for some example references.", "Happy holidays! Since the end of the discussion period is approaching, we wanted to check if you had the chance to read our response. We would like very much to address your concerns and have a discussion about our paper before the January 5 deadline.\n\nThank you!", "Happy holidays! Since the end of the discussion period is approaching, we wanted to check if you had the chance to read our response. We would like very much to address your concerns and have a discussion about our paper before the January 5 deadline.\n\nThank you!", "Happy holidays! Since the end of the discussion period is approaching, we wanted to check if you had the chance to read our response. We would like very much to address your concerns and have a discussion about our paper before the January 5 deadline.\n\nThank you!", "Thank you for your interest in our paper and for your effort to reproduce some of our results.\n\nFirst, we want to point out that some of the choices you made when reproducing our paper are not quite accurate. Our paper uses the post-processed word vectors by Mrksic et al., which are crucial to replace words by their synonyms only. We did not make use of WordNet. Also, it is surprising that you were not able to match our accuracy on clean data, since these models have been shown to achieve similar accuracies in their original papers. There are also several more inaccuracies.\n\nWe are happy to provide our source code as a file on an anonymous server. We are going to release this code with the camera-ready version of our paper, after we turn it into a form that is easy to read and execute.\n\nMore generally, we think that reproducing published results is very important. However, some aspects of our method have not been exactly reproduced, which yields slightly different numbers. Unfortunately, this casts some doubt on the correctness of our paper (while it's under review), and we cannot release our source code to confirm our reported results (e.g., because of anonymity issues). Therefore, we think it would be best to validate reproducibility after the paper is published.", "Thank you for pointing us to this paper. We were not aware of it, and we agree that it needs to be cited.\n\nHowever, the scope of this paper is very limited compared to our work.\n\n1. Most importantly, there is no notion of preserving the meaning of the original sentence.\n\nIn other words, Papernot et al. replace words in a sentence without ever looking at whether the new words are related to the originals. In our experience, this would most likely produce non-sensical sentences, and a human would recognize them as such. The paper also offers no evaluation of the constructed AEs besides the accuracy of the classifier, e.g. it doesn't evaluate their coherence, their similarity to the original, their human classification accuracy, and it does not even provide a list of example AEs.\n\n2. Significantly more limited scope of the experimental setup\n\nPapernot et al. focus on a specific model (recurrent neural networks) and a specific task (sentiment analysis). 
Our work compares RNNs with word-level CNNs, character-level CNNs, and Naive Bayes. We also look at multiple tasks: sentiment analysis, fake news detection, spam classification. Our algorithm explicitly attempts to preserve the meaning of the new sentences (and make the adversarial examples difficult to detect). We extensively evaluate our method on Mechanical Turk.\n\nOur algorithm greedily optimizes the objective (score of the wrong class); that of Papernot appears to optimize a linearization of that objective. We are happy to compare against their approach if you think it's important, but we don't see why this would be better than optimizing the actual objective function.\n\nOverall, we think that the paper by Papernot et al. does not prove that *imperceptible* adversarial examples can be constructed for text classification tasks. This is a crucial property of AEs. Our paper, on the other hand, demonstrates that it is possible.\n\n\n\nFinally, this paper was published at the \"Military Communications Conference\". We have never heard of this venue, and suspect that it might be closer to a peer-reviewed workshop in the machine learning community. The page count appears to be shorter, and the scope of the experiments seems to be considerably more narrow (e.g., human evaluation would be a must at ICML/NIPS/ICLR).\n\nAs with the three papers below, we think it's a bit unfair to count previous recent publications on arXiv, conference workshops, or non-standard conference proceedings against the novelty of our paper. We believe that these papers should be cited and discussed as parallel work, but it is a bit harsh to claim that our work is incremental.", "We are a group of students from McGill University who are reproducing the findings of your paper. \n\nWe implemented the Naïve Bayes Model and Word-Level CNN on the Real-Fake News dataset. \nWe generated adversarial examples following the individual word replacement model specified within the paper. However, we were not able to verify the hyperparameters used for lambda-1 (semantic similarity) and lambda-2 (syntactic similarity). We recognized that constraining for those factors was important, and so we elected to use WordNet’s synsets to find our replacement words.\n\nWe did two baseline models: Multinomial Naïve Bayes with sklearn and Naïve Bayes with nltk. Our clean test accuracy was 88.8% (sklearn) and 87.4% (nltk); in training, we shuffled the data and held aside 10% for testing. On the adversarial examples, our accuracy was 69.2% and 65.4%, respectively. The drop in accuracy is significantly less than that of the authors. \n\nWe also implemented the Word-Level CNN. The Kim paper referenced by the authors had several adaptations. We checked all methods and picked the most robust one. The basic neural network is composed of a single embedding layer, a temporal convolution layer followed by max-pooling, and a fully connected layer for classification. The convolutional layer followed by max-pooling is composed of filters of three different sizes, with 128 feature maps for each filter size. Filter sizes were chosen as [3, 3, 3], and [3, 4, 5] as reported by Kim. Regularization is done by performing Dropout on the penultimate layer of the network. The dropout rate is fixed at 0.50, and the Adam optimizer in TensorFlow is used. Finally, training is done using stochastic gradient descent over shuffled mini-batches of size 64. Unfortunately, our computing resources did not allow us to use the Real-Fake News Dataset. 
Instead, we used a smaller dataset for a sentiment analysis task: Positive/Negative Movie Reviews. In training, we shuffled the data and held aside 10% for testing. Our training accuracy was 97.3%, and our testing accuracy was 72.7%. \n\nFrom our reproduction, it’s evident that the execution of adversarial example generation highly influences the classifier’s ability to remain accurate. We followed the spirit of the authors’ adversarial example generation algorithm, but used a different implementation with WordNet (instead of word2vec or GloVe) to preserve semantic similarity.\n", "Apologies: from your comment I realised I cited the wrong Papernot et al. paper for previous work on adversarial text generation. I meant to cite:\n\nPapernot, N., McDaniel, P., Swami, A., & Harang, R. (2016, November). Crafting adversarial input sequences for recurrent neural networks. In Military Communications Conference (MILCOM 2016) (pp. 49-54). IEEE.\n", "Thank you for your generally positive review. 
However, since the score that you are giving us (6.0) is not very high, could you please elaborate on your concerns with this paper?\n\nWe would be very happy to improve the paper before the final decision period, but your only negative comment is that you were expecting \"more actual solutions in addition to analysis\". We don’t know how to interpret that.\n\nIf you are interested in ways of protecting against attacks, we tried using the method of Miyato et al., which results in a small increase in performance on adversarial examples.\n", "Thank you very much for your feedback. We see that you raise several issues in your review.\n\n1. We are not evaluating our method on state-of-the-art models\n\nHere, we respectfully disagree: this claim is incorrect.\n\nOur models (except the linear classifier) achieve accuracies of 94.9%-95.3% on the popular Yelp dataset (same as the papers whose models we used). The 2017 state-of-the-art is ~97% using a ResNet [1], and the 2016 state-of-the-art was 96%. On the widely used IMDB dataset we obtain accuracies of ~92-93%; the state-of-the-art is around 96%. (We didn't include IMDB results due to lack of space and similarity to Yelp). On spam detection, we also obtain nearly perfect accuracy. Finally, there is no standard fake news dataset, but we achieve high accuracy on the one that we use.\n\nFurthermore, our architectures are very modern and date from as recently as last year (see our citations).\n\n[1] Johnson and Zhang, http://www.aclweb.org/anthology/P17-1052\n\n2. We do not take into account the recent work of Miyato et al.\n\nFirst, note that Miyato et al. propose a method for adversarial training, which is very different from adversarial examples (what we study). Adversarial training is at the moment not part of the standard toolkit for classification algorithms, which is why we did not immediately compare to it.\n\nYou also mention that the method of Miyato et al. could be used as a defense. However, that too is not correct: our adversarial examples arise from large perturbations in embedding space (we replace an entire word); Miyato et al., on the other hand, perform adversarial training in embedding space, which consists in introducing very small perturbations. In the context of Naive Bayes, their method does not even apply (there are no word embeddings in NB).\n\nWe confirmed this empirically by testing the method of Miyato et al. on the CNN model. We observed only a small (10%) improvement in accuracy on AEs. We are also currently running additional experiments on every setup. We will report our final results here once they are done.\n\n\n3. There is no comparison to existing adversarial text generation approaches\n\nWe are more than happy to compare to any existing work. However, the Papernot et al. paper you cite has no mention of text classification at all. The underlying algorithm is gradient-based, and is not applicable to discrete inputs (relative to which we cannot differentiate the model). The Miyato paper you mention does not work well as a defense against our method (see above).\n\nAn anonymous commenter mentioned some relevant work; please see also our detailed response to their comment.\n\n4. Extra technical questions and clarifications \n\nThe language model we use is a tri-gram model. This is a detail that we forgot to mention and that we will add to the paper.\n\nWe certainly enforce Equation 4 in our algorithm. There is a typo in Algorithm 1 (it should read \"Equations 4, 5\" instead of \"Equation 5\"), which we will correct right away. 
We apologize for any confusion due to this typo.\n", "Thank you for your feedback! We identified several concerns in your review.\n\n1. Our work is not sufficiently novel.\n\nFirst, we believe that your claim that our paper has \"mediocre novelty\" is quite harsh, especially given that your review does not include any references to related papers.\n\nOur paper explores adversarial examples for natural language classification. If you think this has been done before, could you please provide references? We will be more than happy to compare.\n\nEarlier, an anonymous commenter mentioned 3 references; we discuss these below.\n\n2. We are not evaluating similarity between the adversarial examples and the originals.\n\nFirst, saying that we \"don't provide any systematic study via human annotation\" is incorrect: we measured both human accuracy and readability on every model/dataset combination (dozens of experiments in total).\n\nNext, we want to point out that the topic of similarity is more nuanced than it seems. Most often, the algorithm changes irrelevant parts of the input, e.g.:\n\nOn Wednesday, Obama raised taxes (fake) -> On Tuesday, Obama raised taxes. (real)\nWe ordered pasta and it was the worst we ever had (neg) -> We ordered chicken and it was the worst we ever had (pos)\n\nThese are still valid similarity-based adversarial examples: i.e., we fool the fake news detection system and succeed in spreading the false news that Obama is raising taxes. What is most important is that humans and machines consistently classify our examples into opposite classes and the examples sound natural to humans.\n\nHowever, we understand the validity of your concern and we thank you for suggesting this experiment. To address your concern as much as possible, we performed the experiment in question. \n\nWe quantified the similarity of adversarial examples via Mechanical Turk. We asked Turkers to rate the similarity of the adversarial examples to the originals on a scale of 1-5, with 1 being completely unrelated, and 5 being identical. Here are the results we compiled so far:\n\nDomain Score Number\nNews 1 56\nNews 2 49\nNews 3 138\nNews 4 141\nNews 5 116\n\nYelp 1 53\nYelp 2 40\nYelp 3 121\nYelp 4 180\nYelp 5 106\n\nOverall, we see that the majority of adversarial examples are similar to the originals.\n\n3. We measure the difference of adversarial examples only in continuous space.\n\nAgain, this is incorrect. We optimize a continuous objective; however, we measure and report only metrics that are derived from human experiments (accuracy and readability).\n\nAlthough we set the maximum fraction of replaced words to 50%, we very rarely reach that number (see examples in the paper). This is just an early stopping criterion. Similarity is enforced via Equations 4 and 5, and the constants there are indeed tight. This can be seen by looking at the similarity of our examples to the originals. We are happy to add an experiment where we vary the threshold, if you think this is important.\n\n4. Other technical issues\n\nThe objective J is the score of the target (adversarial) class, and we define it right below Equation 6. Sorry if this wasn't clear; we will make it more obvious.\n\nThe function c is defined right below Equation 3, and is simply a vector of constraints. 
In our algorithm, we instantiate c with two constraints: a syntactic and a semantic one.\n\nWe are going to think of a better name for the syntactic constraint (e.g., fluency, as you suggested).\n\nWe did not tune any hyper-parameters on the test set (we're not sure what might lead you to think that). We chose hyper-parameters on the training set (validation would have been slightly cleaner). We did not touch the test set, except for generating the final adversarial examples.\n\nWe are certainly not “obfuscating” the work of Jia and Liang. We spend a whole paragraph comparing our work to theirs in Section 3.1. In brief, they create AEs by adding irrelevant sentences; we create AEs by changing some words to synonyms. We will extend the existing discussion if you think it’s necessary.", "In the scope of the Reproducibility Challenge, the following is an executive summary of the findings on the paper \"Adversarial Examples for Natural Language Classification Problems\" currently under review for the ICLR 2018 Conference.\n\nThe paper provides the vast majority of the information required to reproduce the experiment. However, this comes at a great expense in terms of the required time, effort and computational power. The lack of source code, hyperparameter specification and clean datasets are the driving factors behind the increased reproduction difficulty.\n\nThe models described in the paper were implemented using Keras with a TensorFlow backend and were trained on Google Cloud Compute instances with 6 vCPUs, 24GB of RAM and either an NVIDIA K80 or an NVIDIA P100.\n\nThe datasets used in this experiment were publicly available in their raw form. Virtually no preprocessing was necessary to make the Yelp Polarity Dataset usable, and a minimal amount was required for the News dataset. In contrast, cleaning the spam dataset proved to be challenging, and it was ultimately abandoned in the final testing due to technical difficulties dealing with encoding issues. It is possible that the dataset was exposed to antivirus software that may have tampered with the contents of the dataset and rendered them unusable. Although the procedure describing the data splitting and the preprocessing was clear, reproducing the exact same split in this experiment was impossible without a fixed random seed or the pre-split datasets (as was the case for the Yelp Polarity Dataset).\n\nWithout the source code and a full specification of the hyperparameters used to train the classifiers, it was challenging to even obtain optimal results on the clean data. This was likely caused by the time constraints of the reproducibility challenge, which limited the quality of the hyperparameter search we could perform to achieve reasonable performance of the models. Our highest performing models for the Yelp Dataset achieve 86.79%, 79.64%, 70.21% and 56.12% for Naive Bayes, the LSTM, the WCNN and the VDCNN, respectively. On the News dataset, the results were more comparable to the cited results, with each classifier achieving 87.68%, 91.24%, 85.17% and 51.10%. Although the hyperparameter tuning may have played a part in the relatively poor performance of the models on the Yelp Dataset, training the models for longer on the larger Yelp Dataset may have yielded better results. The reimplementation and training required a significant amount of time and effort.\n\nThe greedy algorithm's pseudocode clearly conveyed the protocol that was to be followed to emulate the specified behaviour. 
However, when implementing the algorithm, there were a few instances where assumptions had to be made. Notably, the optimization objective J(x') was not clearly specified in the paper, although the line \"J(x') measures the extent to which x' is adversarial and may be a function of a target class y' != y, e.g. J(x') = f(x')\" suggests that the optimization objective is the softmax value of the classifiers. Furthermore, it was assumed, with a relatively high degree of certainty, that the constraint function c(x, x') was merely a formalism to indicate that breaching any of the constraints should lead to a rejection of the candidate example. Optimizing the adversarial examples against each classifier led to poor classifier results, as observed in the paper. However, the human evaluation of the generated examples was not reproduced given the limited available resources. \n\nAlthough the paper thoroughly specifies the architecture of the models and the details of the algorithm, reproducing the results of this paper would be greatly facilitated by providing source code along with the hyperparameter values of the classifiers. Furthermore, providing clean, split datasets reduces the effort required to preprocess the data, while ensuring that the reproducibility of the results would be more consistent. In conclusion, with more time, the generated results would likely have been more in line with the paper's published results, but the time allowed by the challenge was insufficient for a more thorough hyperparameter search. Finally, the lack of resources, which limited our ability to have humans evaluate the generated adversarial examples, further reduced the credibility of this reproducibility experiment, since the algorithm could be generating novel examples rather than true adversarial ones to fool the classifier.", "As part of the reproducibility challenge, our team of students would like to attempt to reproduce the results of your paper.\nIf possible, it would be incredibly helpful if you could provide parts of the code used in your creation of the adversarial examples.\n\nAlso, could you confirm if these are the datasets used for the paper:\nTrec07p: https://plg.uwaterloo.ca/~gvcormac/treccorpus07/\nYelp: https://www.yelp.com/dataset\nNews: https://github.com/GeorgeMcIntire/fake_real_news_dataset/\n\nAnd if this is the pre-trained word vector model that was used for word replacement in the LSTM method:\nword2vec: https://code.google.com/archive/p/word2vec/\nwhile this pre-trained model was used for the other 3 methods:\nGloVe: https://nlp.stanford.edu/projects/glove/\n\nThank you \n", "Interesting paper; however, it fails to cite related NLP papers. There is a vast amount of research on related topics, such as evading spam filters. Aside from that, adversarial examples for language have also been studied before. The following are some related papers:\n\nhttp://www.aclweb.org/anthology/W16-5603\nhttps://arxiv.org/pdf/1702.08138.pdf\nhttps://arxiv.org/pdf/1707.02812.pdf" ]
[ 4, 6, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1QZ3zbAZ", "iclr_2018_r1QZ3zbAZ", "iclr_2018_r1QZ3zbAZ", "HJKBdUKJf", "Byk2NS9xf", "B1CfWm1WG", "BkMNGG7Gz", "BJNs1wMGG", "iclr_2018_r1QZ3zbAZ", "ByX3tUGGM", "rJ7D9bpC-", "HJKBdUKJf", "Byk2NS9xf", "B1CfWm1WG", "iclr_2018_r1QZ3zbAZ", "iclr_2018_r1QZ3zbAZ", "iclr_2018_r1QZ3zbAZ" ]
iclr_2018_rybDdHe0Z
Sequence Transfer Learning for Neural Decoding
A fundamental challenge in designing brain-computer interfaces (BCIs) is decoding behavior from time-varying neural oscillations. In typical applications, decoders are constructed for individual subjects and with limited data, leading to restrictions on the types of models that can be utilized. Currently, the best performing decoders are typically linear models capable of utilizing rigid timing constraints with limited training data. Here we demonstrate the use of Long Short-Term Memory (LSTM) networks to take advantage of the temporal information present in sequential neural data collected from subjects implanted with electrocorticographic (ECoG) electrode arrays performing a finger flexion task. Our constructed models are capable of achieving accuracies that are comparable to existing techniques while also being robust to variation in sample data size. Moreover, we utilize the LSTM networks and an affine transformation layer to construct a novel architecture for transfer learning. We demonstrate that in scenarios where only the affine transform is learned for a new subject, it is possible to achieve results comparable to existing state-of-the-art techniques. The notable advantage is the increased stability of the model during training on novel subjects. Relaxing the constraint of only training the affine transformation, we establish our model as capable of exceeding the performance of current models across all training data sizes. Overall, this work demonstrates that LSTMs are a versatile model that can accurately capture temporal patterns in neural data and can provide a foundation for transfer learning in neural decoding.
rejected-papers
This paper tries to establish that LSTMs are suitable for modeling neural signals from the brain. However, the committee and most reviewers find that the results are inconclusive. Results are mixed across subjects. We think it would have been far more interesting to compare other types of sequence models for this task beyond the few simple baselines implemented here. It is also unclear what the LSTM learns beyond the other models presented in the paper.
train
[ "HJ_bsmPxG", "HJmBCpKeG", "S1D3Hb9eM", "SyQDCnMNz", "H1S6vPofG", "HkJYOvsGG", "BkrakDsfz", "H1AvJwszG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper describes an approach to use LSTM’s for finger classification based on ECOG. and a transfer learning extension of which two variations exists. From the presented results, the LSTM model is not an improvement over a basic linear model. The transfer learning models performs better than subject specific models on a subset of the subjects. Overall, I think the problem Is interesting but the technical description and the evaluation can be improved. I am not confident in the analysis of the model. Additionally, the citations are not always correct and some related work is not referenced at all. For the reasons above, I am not willing to recommend the paper for acceptance at his point.\n\nThe paper tackles a problem that is challenging and interesting. Unfortunately, the dataset size is limited. \nThis is common for brain data and makes evaluation much more difficult.\n The paper states that all hyper-parameters were optimized on 75% of subject B data.\nThe actual model training was done using cross-validation. \nSo far this approach seems more or less correct but in this case I would argue that subject B should not be considered for evaluation since its data is heavily used for hyper-parameter optimization and the results obtained on this subject are at risk of being biased.\nOmitting subject B from the analysis, each non-transfer learning method performs best on one of the remaining subjects.\nTherefore it is not clear that an LSTM model is an improvement. \nFor transfer learning (ignoring B again) only C and D are improved but it is unclear what the variance is.\nIn the BCI community there are many approaches that use transfer learning with linear models. I think that it would be interesting how linear model transfer learning would fare in this task. \n\nA second issue that might inflate the results is the fact that the data is shuffled randomly. While this is common practice for most machine learning tasks, it is dangerous when working with brain data due to changes in the signal over time. As a result, selecting random samples might inflate the accuracy compared to having a proper train and test set that are separated in time. Ideally the cross-validation should be done using contiguous folds. \n\nI am not quite sure whether it should be possible to have an accuracy above chance level half a second before movement onset? How long does motor preparation take? I am not familiar with this specific subject, but a quick search gave me a reaction time for sprinters of .15 seconds. Is it possible that cue processing activity was used to obtain the classification result? Please discuss this effect because I am do not understand why it should be possible to get above chance level accuracy half a second before movement onset. \n\nThere are also several technical aspects that are not clear to me. I am confident that I am unable to re-implement the proposed method and their baseline given the information provided.\n\nLDA baseline:\n—————————\nFor the LDA baseline, how is the varying sequence length treated? \nLedoit wolf analytic regularization is used, but it isn not referenced. If you use that method, cite the paper. \nThe claim that LDA works for structured experimental tasks but not in naturalistic scenarios and will not generalize when electrode count and trial duration increases is a statement that might be true. However, it is never empirically verified. Therefore this statement should not be in the paper. \n\nHMM baseline\n—————————\nHow are the 1 and the 2 state HMM used w.r.t. 
the 5 classes? It is unclear to me how they are used exactly. Is there a single HMM per class? Please be specific. \n\nLSTM Model\n—————\nWhat is the random and language model initialization scheme? I can only find the sequence auto-encoder in the Dai and Le paper. \n\n\nModel analysis\n——————————-\nIt is widely accepted in the neuroimaging community that linear weight vectors should not be interpreted directly. It is actually impossible to do this. Therefore this section should be completely re-done. Please read the following paper on this subject.\nHaufe, Stefan, et al. \"On the interpretation of weight vectors of linear models in multivariate neuroimaging.\" Neuroimage 87 (2014): 96-110.\n\nReferences\n———— \nLedoit-Wolf regularization is used but not cited. Fix this.\nThere is no citation for the random/language model initialization of the LSTM model. I have no clue how to do this without a proper citation.\nLe et al. (2011) are referenced for auto-encoders. This is definitely not the right citation. \nRumelhart, Hinton, & Williams, 1986a; Bourlard & Kamp, 1988; Hinton & Zemel, 1994 and Bengio, Lamblin, Popovici, & Larochelle, 2007; Ranzato, Poultney, Chopra, & LeCun, 2007 are probably all more relevant.\nPlease cite the relevant work on affine transformations for transfer learning, especially the work by Morioka et al., who also learn an input transform.\nMorioka, Hiroshi, et al. \"Learning a common dictionary for subject-transfer decoding with resting calibration.\" NeuroImage 111 (2015): 167-178.\n", "The ms applies an LSTM on ECoG data and studies transfer between subjects etc. \n\nThe data includes only a few samples per class. The validation procedure to obtain the model accuracy is a bit iffy. \nThe ms says: The test data contains 'at least 2 samples per class'. Data of the type analysed is highly dependent, so it is unclear whether this validation procedure avoids overoptimistic results. Currently, I do not see evidence for a stable training procedure in the ms. I would also be curious to see a comparison to a k-NN classifier using embedded data to gauge the problem difficulty. \nAlso, the paper does not really decide whether it is a neuroscience contribution or an ML one. If it were a neuroscience contribution, then it would be important to analyse and understand the LSTM representation and to put it into a biological context; fig 5B is a first step in this direction. \nIf it were an ML contribution, then there should be a comprehensive analysis showing that the proposed architecture using the 2 steps is indeed doing the right thing, i.e. that the method converges to the truth if more and more data is available. \nThere are also some initial experiments in fig 3A. Currently, I find the paper somewhat unsatisfactory and thus preliminary. ", "This work addresses brain state decoding (intent to move) based on intra-cranial \"electrocorticography (ECoG) grids\". ECoG signals are generally of much higher quality than more conventional EEG signals acquired on the scalp, hence it appears meaningful to invest significant effort to decode. \nPreprocessing is only described in a few lines in Section 2.1, and the feature space is unclear (number of variables etc.).\n\nLinear discriminants, \"1-state and 2-state\" hidden Markov models, and LSTMs are considered for classification (5 classes, unclear if prior odds are uniform). Data involves multiple subjects (4 selected from a larger pool). Total amount of data unclear. 
\"A validation set is not used due to the limited data size.\" The LSTM setup and training follows conventional wisdom.\n\"The model used for our analyses was constructed with 100 hidden units with no performance gain identified using larger or stacked networks.\"\nA simplistic but interesting transfer scheme is proposed amounting to an affine transform of features(??) - the complexity of this transform is unclear.\n\nWhile limited novelty is found in the methodology/engineering - novelty being mainly related to the affine transfer mechanism, results are disappointing. \nThe decoding performance of the LSTMs does not convincingly exceed that of the simple baselines. \n\nWhen analyzing the transfer mechanism only the LSTMs are investigated and it remains unclear how well trans works.\n\nThere is an interesting visualization (t-SNE) of the latent representations. But very limited discussion of what we learn from it, or how such visualization could be used to provide neuroscience insights.\n\nIn the discussion we find the claim: \"In this work, we have shown that LSTMs can model the variation within a neural sequence and are a good alternative to state-of-the-art decoders.\" I fail to see how it can be attractive to obtain similar performance with a model of 100x (?) the complexity\n\n\n\n", "I will response to all relevant comments here.\n\nw.r.t. Subject B\n- Here I did not change my opinion. \nThe hyper-parameters are optimized on the data that is used for evaluation. This is a basic machine learning error. Therefore the best thing would be to exclude the data from subject B since there is some doubt about the validity of those results.\n\nw.r.t. the LDA model\n- Thanks for the clarification. This approach is more complex than I expected.\n A different solution to having to train different models of different length could be to view it as a convolutional approach with max or mean pooling to obtain the actual output. Given that it is very easy to implement this, I would encourage the authors to try this.\nThis approach would also have the advantage that more data is available to train the entire model.\nAlso, this would enable the authors to test the affine transformation in combination with the linear model. \nIf the LSTM still performs better, this would make the paper a lot stronger. \n\nw.r.t. model analysis\n- Without a colour map next to the weights I cannot understand the plotted matrix. \n- The conclusion that zero weights do not contribute to the output is only valid if they are actually zero and not really small. The whole point of the Haufe paper is that a small weight might be important and processing information, while a large weight might be there primarily due to noise cancelling.\n- Since there is no information about whether the weights or zero or just small, I cannot conclude anything from the provided data.\n- That being said, I am not convinced that the paper needs Fig 5A. Making fig 5b larger would be more informative. \n\nw.r.t. language model init.\nIt is still not clear to me how exactly the model is pre-trained. The notion of language model does not exist here and this might be confusing me. Is it just an auto-encoder trained to predict the data one time-step ahead?\n\nw.r.t. relation to other TL approaches.\nI understand that the difference between EEG and ECOG is that ECOG is typically placed on different parts of the brain. What I fail to grasp is why an affine transformation makes sense here. 
Is there an intuition about how this can compensate for electrode placement?\n\nFinally, another suggestion. \nIf transfer learning works well, it would make sense to jointly train on all subjects, where each subject has its own affine transformation but the model after the transformation is shared across subjects.", "> “For the LDA baseline, how is the varying sequence length treated?”; “The claim that LDA works for structured experimental tasks but not in naturalistic scenarios and will not generalize when electrode count and trial duration increase is a statement that might be true. However, it is never empirically verified. Therefore this statement should not be in the paper.”\nFor each of the possible sequence lengths an LDA model is trained; thus, a family of LDA models is constructed and the model that is picked depends on the length of the sequence. This is the reasoning behind the claim that LDA would not generalize when trial duration increases, as it would be infeasible to construct an LDA model for every possible sequence length. \n\n> “How are the 1 and the 2 state HMM used w.r.t. the 5 classes?”\nThere is a single HMM per class; we have updated the manuscript to make this clear.\n\n> “What is the random and language model initialization scheme?”\nWith respect to the LSTM initialization schemes: by random we mean initializing the weights of the network randomly, say using Xavier initialization (Glorot and Bengio 2010); language model initialization is from the Dai and Le paper. We have updated the manuscript to make this clear.\n\n> Model analysis\nThank you for the insightful reference; reviewing the statements of Haufe et al and our results, we think our interpretations are still valid, but need to be reworded in the manuscript to prevent misinterpretation. While we are utilizing a backward model, we are not hoping to make conclusions about the learned weights in the affine mapping specifically concerning the underlying brain processes. We are addressing the fact that the LSTM model input features are electrodes with specific locations on Subject 1, and when we include known non-informative electrodes, the learned representation excludes them. That is, we know from the task design and neurophysiology that the occipital region in motor movement tasks should have little activation. Therefore, we would expect the affine mapping to learn zero weights for the signals from the occipital region, and we do see that in our model. We believe that this interpretation does not violate the points from Haufe et al.\n\n> References\nOur apologies for the missing references. We have included the citations for the Ledoit-Wolf lemma and the prior transfer learning work in EEG (Morioka et al.). In the discussion section we refer to the usage of auto-encoders as an unsupervised feature extractor; this is in contrast to our current features, which are based on neurophysiology, and hence the reference to Le et al 2011.", "Thank you for your detailed remarks - we hope to address all of your concerns below and in the updated manuscript. \n\n> “I would argue that subject B should not be considered for evaluation since its data is heavily used for hyper-parameter optimization and the results obtained on this subject are at risk of being biased.”\nIn an attempt to limit the amount of subject-specific tuning performed for the LSTMs, we limit the hyper-parameter tuning to only Subject B. 
Regarding the inclusion of Subject B, we limit the amount of fine-tuning that is done on the model to limit the potential bias when evaluating Subject B. Furthermore, we demonstrate that other subjects, using Subject B’s hyperparameters, perform comparably. While tuning hyperparameters will likely improve performance for each subject, that would require setting aside data for hyperparameter tuning in the already data-constrained situation. Clarifying the transfer learning results, the variance can be reported with standard error of the mean (SE) values where, for all reported accuracies, the SE is at most 0.02. \n\n> “it is not clear that an LSTM model is an improvement”\nAs we state in the manuscript, we demonstrate that the LSTM model achieves performance comparable to the other models. The primary advantage of utilizing LSTMs resides in the ability to learn the mapping for an affine transformation (please refer to our response to Reviewer2). \n\n> “In the BCI community there are many approaches that use transfer learning with linear models. I think that it would be interesting to see how linear model transfer learning would fare in this task.”\nIn exploring existing techniques for transfer learning in neural data, there are limited approaches that can be applied to ECoG data, and none are found in the existing literature. Typical techniques utilize EEG data, which has a significant amount of spatial averaging allowing for a more direct mapping between subjects. In ECoG, the array placements are unique for each subject and have greater spatial resolution that exploits underlying neural structures (i.e. using only sensorimotor cortex electrodes), but this makes it increasingly difficult to directly map between subjects. Regarding the models specifically, we may be able to adapt them for transfer learning; however, they would be limited by either the need for a hand-tuned affine transform, or would not represent time series. Specifically, exploring the transfer learning capabilities of the other proposed models, it is important to consider that the key advantage of LSTMs is that backpropagation allows for learning the affine transform. Modification of the other models would require hand-tuning a mapping layer, or constructing an ensemble of models to leverage a learned mapping to make TL possible. While it is possible to extend LDA, a key goal was to move away from models that operate only on fixed time contexts, such as LDA, to a time-series model.\n\n> “issue that might inflate the results is the fact that the data is shuffled randomly”\nRegarding the random shuffle, the experimental setup does not require analysis of contiguous data folds. While neural activity changes over time, the duration of the experiment is on the order of tens of minutes, not hours, which should limit the amount of variability present. Furthermore, between each trial, there is a refractory period that allows the baseline dynamics to be achieved. \n\n> “Accuracy above chance level half a second before movement onset”\nThis is possible because cue processing activity was used to obtain the classification result. As mentioned in the manuscript, the trials are segmented based on the cue. Hence, accuracy above chance is achieved by integrating over 300 ms of neural activity beginning from the display of the cue. Hotson et al. 2016 also show that it is possible to decode prior to movement onset.", "Thank you for your remarks - we hope to address all of your concerns below and in the updated manuscript. \n\n> “The data includes only a few samples per class. 
The validation procedure to obtain the model accuracy is a bit iffy. Currently, I do not see evidence for a stable training procedure in the ms.”\nThe availability of invasive neural recordings in humans is quite limited. Therefore, we selected a validation procedure that demonstrates the robustness of the model training across multiple runs and partitions of data. We show that, across the various groupings, consistent performance is achieved. Furthermore, we did not fine-tune the parameters for all subjects; rather, a single set of hyperparameters for the model was selected after being coarsely trained on a single subject. \n\n> “Comparison to a k-NN classifier using embedded data to gauge the problem difficulty”\nRegarding the comparison to a k-NN classifier (presumably on the 2-d embedding), the t-SNE embedding is obtained on the training data after the network was optimized on it. As such, the embedding looks separable because it was optimized on this data. Because t-SNE is nonparametric, the test data cannot be projected onto the same embedding space to facilitate the k-NN experiment. We use the t-SNE embedding to show that the learned parameters of the network separate classes for both subjects well and that the embeddings for the same class from the two subjects cluster together even though the network weights were not explicitly optimized for this.\n\n> “Also, the paper does not really decide whether it is a neuroscience contribution or an ML one.”\nThe primary contributions we hope to make clear through the paper are not restricted to neuroscience or machine learning, due to the nature of the experimentation and analysis. We hope to demonstrate the benefits and disadvantages of existing approaches, applications of new models that have not been applied to neural data, and to propose improvements to the model demonstrating benefits in performance, with potential for understanding what is being learned rooted in neuroscience principles.", "Thank you for your remarks - we hope to answer all of your concerns below or in the updated manuscript. \n\n> “the feature space is unclear (number of variables etc.)”\nBriefly clarifying the feature space, we use high frequency band (70 - 150 Hz) power extracted from the electrodes in the sensorimotor region. The number of features (equivalent to the number of electrodes, as a single average power is utilized) varies based on the subject but is between 6 and 8. \n\n> “Unclear if prior odds are uniform. Total amount of data unclear.”\nFor the models, the classes are balanced and the number of samples per class varies based on the subject, but is between 27 and 29. Thus the total number of samples is between 135 and 145, and a uniform prior is utilized.\n \n> “A simplistic but interesting transfer scheme is proposed, amounting to an affine transform of features (??) - the complexity of this transform is unclear.”\nRegarding the affine transform, it transforms the feature space from the new target subject to the feature space of the original subject. As the transform is affine, the number of parameters of the transform is e_2*(e_1+1), where e_1, e_2 are the number of electrodes in the original subject and the transferring subject, respectively.\n \n> “The decoding performance of the LSTMs does not convincingly exceed that of the simple baselines.”; “I fail to see how it can be attractive to obtain similar performance with a model of 100x (?) 
the complexity.”\nWith respect to the decoding performance of the LSTMs, we establish the model as an approach that provides comparable results even while having significantly more parameters to learn and being data-constrained. An advantage of utilizing such a technique is the scalability of the model (as seen in the speech recognition literature, Graves et al. 2013). The key advantage of the LSTM model, however, is the demonstrated superior performance utilizing the TL architecture, which is only possible due to the network structure and the ability to backpropagate errors. Furthermore, freezing the LSTM and training only the affine mapping provides comparable performance to existing techniques, which could have applications where subject-specific training data is limited.\n\n> “When analyzing the transfer mechanism only the LSTMs are investigated and it remains unclear how well the transfer works.”\nExploring the transfer learning capabilities of the other proposed models, it is important to consider that the key advantage of LSTMs is that backpropagation allows for learning the affine transform. Modification of the other models would require hand-tuning a mapping layer, or constructing an ensemble of models to leverage a learned mapping to make TL possible. While it is possible to extend LDA, a key goal was to move away from models that operate only on fixed time contexts, such as LDA, to a time-series model. We discuss the reasoning for this in the manuscript.\n\n > “There is an interesting visualization (t-SNE) of the latent representations, but very limited discussion of what we learn from it, or how such visualization could be used to provide neuroscience insights.”\nThe t-SNE representation is meant to demonstrate that a meaningful mapping between the two subjects exists, rather than some arbitrary mapping that gives good results (i.e. the results are interpretable). It is likely a representation of an underlying physiological basis rooted in the structure of the sensorimotor cortex. However, a more detailed examination with more subjects is necessary to be able to concretely say anything about the underlying physiology." ]
[ 4, 6, 3, -1, -1, -1, -1, -1 ]
[ 5, 5, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rybDdHe0Z", "iclr_2018_rybDdHe0Z", "iclr_2018_rybDdHe0Z", "H1S6vPofG", "HJ_bsmPxG", "HJ_bsmPxG", "HJmBCpKeG", "S1D3Hb9eM" ]
iclr_2018_HJ_X8GupW
Multi-label Learning for Large Text Corpora using Latent Variable Model with Provable Guarantees
Here we study the problem of learning labels for large text corpora where each document can be assigned a variable number of labels. The problem is trivial when the label dimensionality is small and can be easily solved by a series of one-vs-all classifiers. However, as the label dimensionality increases, the parameter space of such one-vs-all classifiers becomes extremely large and outstrips the memory. Here we propose a latent variable model to reduce the size of the parameter space, but still efficiently learn the labels. We learn the model using spectral learning and show how to extract the parameters using only three passes through the training dataset. Further, we analyse the sample complexity of our model using PAC learning theory and then demonstrate the performance of our algorithm on several benchmark datasets in comparison with existing algorithms.
rejected-papers
There is overall consensus about the paper's lack of novelty and clarity. Reviewer 1 has detailed comments that can be used to strengthen the paper. Reviewer 3 suggests that this paper is very close to Anandkumar et al 2012, and it is not clear where the novelty lies. Addressing these concerns of the reviewers will make the paper more acceptable to future venues.
train
[ "S1xUzwOgz", "B1ctU0uez", "B1ZszE9lG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies the problem of multi-label learning for text copora. The paper proposed a latent variable model for the documents and their labels, and used spectral algorithms to provably learn the parameters.\n\nThe model is fairly simplistic: the topic can be one of k topics (pure topic model), based on the topic, there is a probability distribution over documents, and a probabilistic distribution over labels. The model between document and topic is very similar to previous pure topic models (see more discussions below), and because it is a pure topic, the label is just modeled by a conditional distribution.\n\nThe paper tried to stress that the model is different from Anandkumar et al. because the use of \"expectations vs. probabilities\", but that is only different by a normalization factor. The model defined here is also very strange, especially Equation (2) is not really consistent with Equation (7). \n\nJust to elaborate: in equation (2), the probability of a document is related to the set of distinct words, so it does not distinguish between documents where a word appear multiple times or only once. This is different from the standard bag-of-words model where words are sampled independently and word counts do matter. However, in the calculation before Equation (7), it was trying to compute the probability that a pair of words are equal to v_i and v_j, and it assumed words w_1 and w_2 are independent and both of them satisfy the conditional distribution P[v_i|h = k], this is back to the standard bag-of-words model. To see why these models are different, if it is the model of (2), and we look at only distinct words, the diagonal of the matrix P[v_i,v_i] does not really make sense and certainly will not follow Equation (7). Equation (7) and also (9) only works in the standard bag-of-words model that is also used in Anandkumar et al. (the same equations were also proved).\n\nThe main novelty in this paper is that it uses the label as a third view of a multi-view model and make use of cross moments. The reviewer feels this alone is not enough contribution.", "The paper proposes to learn a latent variable model with spectral algorithm and apply it to multi-label learning. First of all, the connection to multi-label learning is very loose, and the majority of the paper deals with learning the latent variable model. Second, there is almost nothing new in the paper compared to Anandkumar et al 2012, except it uses probability as parameters but not expectations. This difference is trivial since they are only a normalization away. Third, the experiments shows the performance (AUC) compared with other algorithms is significantly worse although the memory consumption may be small.", "The paper addresses the problem of multi-label learning for text corpora and proposes to tackle the problem using tensor factorization methods. Some analysis and experimental results for the proposed algorithm are presented.\n\nQUALITY: I find the quality of the results in this paper rather low. The proposed probabilistic model is defined ambiguously. The authors then look at joint probability distributions of co-occurence of two and three words, which gives a matrix and a tensor, respectively. They propose to match these matrix and tensor to their sample estimates and refer to such procedure as the moment matching method, which it is not. They then apply a standard two step technique from the moment matching literature consisting of whitening and orthogonal tensor factorization. 
However, in their case this does not have much statistical meaning. Indeed, whitening of the covariance matrix is usually justified by the scaling unidentifiability of the problem. In their case, the mathematics works because of the orthogonal unidentifiability of the square root of a matrix. Furthermore, the proposed sample estimators do not actually estimate the densities they are dealing with (see, e.g., Eq. (16) and (17)). Their theoretical analysis seems like a straightforward extension of the analysis by Anandkumar, et al. (2012, 2014); however, I find it difficult to assess this analysis due to numerous ambiguities in the problem formulation and method development. This justifies my statement in the beginning of the paragraph.\n\nCLARITY: The paper is not well written and, therefore, is difficult to assess. Many important details are omitted, the formulation of the model is self-contradictory, the standard concepts and notations are sometimes abused, and some statements are wrong. I provide some examples in the detailed comments below.\n\nORIGINALITY AND SIGNIFICANCE: The idea to apply tensor factorization approaches to multi-label learning is novel to my knowledge and is a pro of the paper. However, I have trouble finding other pros in this submission because the clarity is quite low and in the present form there is no novelty in the proposed procedure. Moreover, the authors claim to work with densities, but end up estimating other quantities, which are not guaranteed to have the desirable form. They also emphasize the fact that there is the simplex constraint on the estimated parameters, but this constraint is completely ignored by the algorithm and, in general, won't be satisfied in practice. I think the authors should do some more work before this paper can be published.\n\n\n\nDETAILED COMMENTS: Since I am quite critical about the paper, I point out some examples of drawbacks or flaws of this paper:\n\n - The proposed model (Section 2) is not well defined. In particular, the description in Section 2 is not sufficient to understand the proposed model; the plate diagram in Figure 2 is not consistent with the text. It is not mentioned how at least some conditional distributions behave (e.g., tokens given labels or states). The diagram in Fig. 1 does not help since it isn't consistent with the text (e.g. the elements of labels or states are not conditionally independent). The model is very close to latent Dirichlet allocation by Blei, et al. (2003), but differences are not discussed.\n\n - The standard terminology is often abused. For example, the proposed approach is referred to as the method of moments when it is not. In Section 2.1, the authors aim to match joint distributions (not the moments) to their empirical approximations (which are also wrong; see below). The usage of tokens and documents is interchanged without any explanation.\n\n - The use of the whitening approach is not justified in their setting, which works with joint distributions of pairs and triples, and it has no statistical meaning. No explanation is provided. I would definitely not call this whitening.\n\n - In Section 2.2, the notation is not defined and is different from what is usually used in the literature. For example, Eq. (15) does not make much sense as is. One could guess from the context that they are talking about the eigenvectors of an orthogonal tensor as defined in, e.g. Anandkumar, et al. 
(2014).\n\n - In Section 3, the authors emphasize the fact that their parameters are constrained to the probability simplex, but this constraint is not ensured in the proposed algorithm (Alg. 1).\n\n - Importantly, the estimators of the matrix M_2 and tensor M_3 do not make much sense to me. For example, for estimating M_2 it would be reasonable to average over all word pairs, i.e. something like [M_2]_{ij} = 1/L \\sum_{w_k \\not = w_l} P(w_k = v_i, w_l = v_j), where L is the number of pairs. This is different from the expression in Eq. (16), which is just a rescaled non-central second moment. A similar issue holds for the order-3 estimator.\n\n - The factorization procedure does not ensure non-negativity of the obtained parameters and, therefore, the rescaled estimates are not guaranteed to belong to the probability simplex. I could not find any explanation of this issue.\n\n - I attribute the good plots in the experimental section, potentially, to the fact that the authors algorithmically do something different from what they aim to do, because the estimators do not estimate the desired entities (i.e. are not consistent). The procedure looks to me quite similar to the procedure for LDA, hence the reasonable results. However, the authors do not justify their proposed method." ]
[ 4, 3, 4 ]
[ 5, 4, 5 ]
[ "iclr_2018_HJ_X8GupW", "iclr_2018_HJ_X8GupW", "iclr_2018_HJ_X8GupW" ]
iclr_2018_r1AMITFaW
Dependent Bidirectional RNN with Extended-long Short-term Memory
In this work, we first conduct mathematical analysis on the memory, which is defined as a function that maps an element in a sequence to the current output, of three RNN cells; namely, the simple recurrent neural network (SRN), the long short-term memory (LSTM) and the gated recurrent unit (GRU). Based on the analysis, we propose a new design, called the extended-long short-term memory (ELSTM), to extend the memory length of a cell. Next, we present a multi-task RNN model that is robust to previous erroneous predictions, called the dependent bidirectional recurrent neural network (DBRNN), for the sequence-in-sequence-out (SISO) problem. Finally, the performance of the DBRNN model with the ELSTM cell is demonstrated by experimental results.
rejected-papers
The reviewers of the paper are not very enthusiastic about the new model proposed, nor are they very happy with the experiments presented. It is unclear from both the POS tagging and dependency parsing results where they stand with respect to state-of-the-art methods that do not use RNNs. We understand that the idea is to compare various RNN architectures, but it is surprising that the authors do not show any comparisons with other methods in the literature. The idea of truncating sequences beyond a certain length is also a really strange choice. Addressing the concerns of the reviewers will lead to a much stronger paper in the future.
test
[ "SJ2iJmyeM", "ByNyuNYeM", "HyFOESYxM", "ryj3O097f", "SyWcVWC-M", "HyTiNZRZf", "HyI84ZCWG", "r1bUXZAWz", "rkM2fRtZf", "SkILbCF-z", "SJqHV0tZM", "HJ-W4RYZM", "S1fN-RtZz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "The paper proposes a new recurrent cell and a new way to make predictions for sequence tagging. It starts with a theoretical analysis of memory capabilities in different RNN cells and goes on with experiments on POS tagging and dependency parsing. There are serious presentation issues in the paper, which make it hard to understand the ideas and claims.\n\nFirst, I was not able to understand the message of the theoretical analysis from Section 2 and could not see how it is different from similar derivations (i.e. using a linearized version of an RNN and eigenvalue decomposition) that can be found in many other papers, including (Bengio et al, 1994) and (Pascanu et al, 2013). Novelty aside, the analysis has presentation issues. SRN is introduced without a nonlinearity from the beginning, although normally it should have one. From the classical upper bound with a power of the largest singular value the paper concludes that “Clearly, the memory will explode if \\lambda_{max} > 1”, which is not true: the memory *may* explode, having an exponentially growing upper bound does not mean that it *will* explode. The notation chosen from LSTM is different from the standard in deep learning community and was very hard to understand (Y_t is used instead of h_t, and h_t is used instead of c_t). This notation also does not seem consistent with the rest of the paper, for example Equations 28 and 29 suggest that Y_t are discrete outputs and not vectors. \n\nThe novel cell SLSTM-I is meant to be different from LSTM by addition of “input weight vector c_i”, but is not explained where c_i come from. Are they trainable vectors, one for each time step? If yes, then how could such a cell be applied to sequence which are longer than the training ones?\n\nEquations 28, 29, 30 describe a very unusual kind of a Bidirectional Recurrent Network. To the best of my knowledge it is much more common to make one prediction based on future and past information, whereas the paper describes an approach in which first predictions are made separately based on the past and on the future. It is also very common to use several BiRNN layers, whereas the paper only uses one. As for the proposed DBRNN method, unfortunately, I was not able to understand it.\n\nI also have concerns regarding the experiments. Why is seq2seq without attention is used? On such small datasets attention is likely to make a big difference. What’s the point of reporting results of an LSTM without output nonlinearity (Table 5)?\n\nTo sum up, the paper needs a lot work on many fronts, but most importantly, presentation should be improved.\n", "\nThis paper introduces a different form of Memory cell for RNN which has more capabilities of long-term memorizing. Furthermore, it presents and efficient architecture for sequence-to-sequence mapping.\n\nWhile the claim of the paper sounds very ambitious and good, the paper has several flaws. First of all, the mathematical analysis is a bit problematic. Of course, it is known that Simple Recurrent Networks (SRN) have a vanishing gradient problem. However, the way you proof it is not correct, as you ignore the application of f() for calculating the output (which is routed to the input) and you use an upper bound to show a general behaviour.\nThe analysis of the Memory capabilities of LSTM is a bit simplified, however, it is okay. 
Note that various experiments by Schmidhuber's group, as well as Otte et al., have shown that LSTM can generalize and memorize over sequences of more than a million time steps, if the learning rate is small enough.\n\nThe extended memory which the authors call SLSTM-I has similar memory capabilities as LSTM. The other one (SLSTM-II) loses the capability of forgetting, as it seems. An analysis would be crucial in this paper to show the benefits mathematically. \n\nThe authors should have a look at \"Evolving memory cell structures for sequence learning\" by Justin Bayer, Daan Wierstra, Julian Togelius and Jürgen Schmidhuber, published in 2009. Note that the SLSTM belongs to the family of networks which could be generated by that paper as well.\n\nAlso \"Neural Architecture Search with Reinforcement Learning\" by Barret Zoph and Quoc V. Le would be interesting.\n\nIn your experiments it would be fair to compare to Cheng et al. 2016.\n\nI suggest the authors be more modest with the name of the memory cell as well as with the abstract (especially since in the POS experiment, SLSTM is not superior).", "This submission first proposes a new variant of LSTM by introducing a set of independent weights for each time step, to learn longer dependencies from the input. Then the submission proposes a dependent bidirectional structure that uses the output as input to the RNN cell to introduce dependency among the outputs.\n\nWhile LSTMs do have problems learning very long-term dependencies, the model proposed in this paper is very inefficient: the number of parameters depends on the length of the sequences. Also, there is no analysis of why adding these additional weights could help the model learn better long-term dependencies. In other words, why is this approach better than attention/self-attention? How does it handle very long sequences, and how does it deal with different lengths? Just ignore the additional weights?\n\nIn the second part, the authors argue that a standard seq2seq model is vulnerable to previous erroneous predictions. But I don't understand why the DBRNN can handle it. It is essentially just a multi-task learning objective: L = L_f + L_b + L_fb, where error signals backpropagate to different layers directly, which is not new.\n\nThe experimental results are weak. It compares with a Seq2Seq model without attention. The other baseline for POS tagging is from 1997. ", "Other revisions include:\n\n1. Clarification of the SRN model in section 2\n2. Elaboration of the GPU memory/computing speed efficiency of ELSTM at the end of section 3\n3. Change of notations as suggested by reviewer 1\n4. Added proof of the memory capability of ELSTM in equation 25\n5. Added proof of why DBRNN is robust to erroneous previous predictions in equations 32-34 ", "Dear Reviewer\n\nWe have revised our paper according to your feedback; we have made some important revisions:\n1. We have changed the cell name from super-long short-term memory (SLSTM) to extended-long short-term memory (ELSTM) as suggested.\n2. The mathematical proof of the memory capability of ELSTM and the robustness of DBRNN has been added\n3. An argument that the added number of parameters of ELSTM does not affect its practicability has been added\n4. Experiments for sequence-to-sequence with attention and Cheng's bi-attention model have been added; the results further confirm our cell and RNN model design.\n\nbest", "Dear Reviewer\n\nWe have revised our paper according to your feedback; we have made some important revisions:\n1. 
We have changed the cell name from super-long short-term memory (SLSTM) to extended-long short-term memory (ELSTM) as suggested by reviewer 2.\n2. The mathematical proof of the memory capability of ELSTM and the robustness of DBRNN has been added\n3. An argument that the added number of parameters of ELSTM does not affect its practicability has been added\n4. Experiments for sequence-to-sequence with attention and Cheng's bi-attention model have been added; the results further confirm our cell and RNN model design.\n\nbest", "Dear Reviewer\n\nWe have revised our paper according to your feedback; we have made some important revisions:\n1. We have changed the cell name from super-long short-term memory (SLSTM) to extended-long short-term memory (ELSTM) as suggested by reviewer 2.\n2. The mathematical proof of the memory capability of ELSTM and the robustness of DBRNN has been added\n3. An argument that the added number of parameters of ELSTM does not affect its practicability has been added\n4. Experiments for sequence-to-sequence with attention and Cheng's bi-attention model have been added; the results further confirm our cell and RNN model design.\n\nbest", "Dear readers, please check the latest version that is revised based on the valuable feedback from the reviewers. We have changed the cell name from super-long short-term memory (SLSTM) to extended-long short-term memory (ELSTM). We also include the results for the sequence-to-sequence with attention model and Cheng's bi-attention model. Both results confirm our cell and RNN model design.", "Thank you for the valuable input for this paper, especially the literature you point us to.\nTo address your concerns with this work, we would like to answer them as follows and reflect these answers in our next revision:\n\t\n1. The formulation of SRN is incorrect\nThank you for the careful examination. We should have mentioned that the SRN model we adopt is from Elman,1990 with a linear hidden state activation function. There are some variations of SRN, including Elman,1990 and Jordan,1997. Tensorflow also has its own variation of SRN. The SRN models that feed the previous output back to the input are Jordan,1990 and Tensorflow. For Elman's SRN model, the hidden state is fed back to the system instead.\nThe reason we chose Elman's SRN model is that it is mathematically tractable, and it can be analyzed using induction. For the other two models, however, due to the presence of the non-linear output activation f(), such analysis is impossible. In addition, all three SRN models are similar to each other performance-wise. \nSo the choice of Elman's model, in our assessment, is valid. It opens the door for the ensuing mathematical comparison between LSTM and SRN, which is, to our knowledge, the first in the literature.\n\n2. Showing an upper bound is inadequate\nIt is important to point out that in most of the RNN literature, the point of interest is the gradient vanishing/exploding problem, which happens in the training process. And on that front, Pascanu Razvan, 2013 already uses an upper bound to show the necessary conditions for gradient vanishing and exploding. \nThis work focuses more on the memory decay problem once the RNN model is trained, and we follow a similar proof procedure. In our SRN analysis, the upper bound indicates that once the largest singular value of the weight of the hidden state is less than one, the memory of SRN will decay at least exponentially. Such a conclusion is solid.\n\n3. 
Show the benefits of SLSTM (we are considering changing the name to make it sound more modest; this will appear soon after our next revision) mathematically\nThank you for reminding us of this crucial missing part! \nBefore we show mathematically how SLSTM has longer memory than LSTM, we need to first point out that in our experiment results section, SLSTM demonstrates a qualitative difference as compared to LSTM for complex NLP problems.\n As suggested by the other two reviewers, we also carried out experiments on sequence to sequence with attention (O.Vinyals, 2015); the results are as follows:\nPOS, I^T_t = [X^T_t, Y^T_{t-1}]:\n\t\t\t LSTM\tGRU\t\tSLSTM-I\t\tSLSTM-II\nseq2seq with attention\t 34.10%\t73.65%\t 80.90%\t\t54.53%\nDP, I^T_t = [X^T_t, Y^T_{t-1}]:\n\t\t\t LSTM\tGRU\t\tSLSTM-I\t\tSLSTM-II\nseq2seq with attention\t 31.47%\t53.70%\t 66.72%\t\t51.24%\nIt is interesting to see that seq2seq with attention with SLSTM-I also outperforms DBRNN. We will do our detailed analysis on these updated results. We also include Cheng's result for DP; the result is:\nDP\n\t\t\t GRU\nCheng's bi-attention:\t61.29%\nTo show that SLSTM retains longer memory, the closed form expression of the SLSTM-I output is:\n Y_t = \\sigma (W_o I_t) \\phi (\\sum^t_{k=1} c_k [\\prod^t_{j=k+1} \\sigma(W_f I_j)] \\sigma(W_i I_k) \\phi(W_{in} I_k) + b) (A)\nWe can pick c_k such that:\n|c_k \\prod^t_{j=k+1} \\sigma(W_f I_j)| > |\\prod^t_{j=k+1} \\sigma(W_f I_j)|\t(B)\nComparing Eq. (A) to Eq. (11) (page 3), we can conclude that SLSTM-I has longer memory than LSTM. The memory capability of SLSTM-II can be proven in a similar fashion.\nAs a matter of fact, c_k plays a similar role as the attention score in various attention models such as Vinyals et al. (2015). The impact of preceding elements on the current output can be adjusted (either increased or decreased) by c_k. \n\n4. Two papers for reference\nThank you for providing these papers, which we will cite as references.\nAfter careful reading, we do appreciate the generalization of LSTMs; however, our proposal of SLSTM differs from those architectures by introducing a scaling factor that can enable the cell to attend to a particular position of a sequence.\n\n5. Compare Cheng's results\nWe have shown Cheng's result on DP in point 3; it is outperformed by seq2seq with attention with SLSTM-I by more than 5% in accuracy.\n\n6. Be modest in claims\nThank you for pointing this out. We will definitely take this into consideration in our next round of revision. We do agree that there are some presentation issues that confuse readers, but we also feel that this paper introduces some noteworthy ideas that may benefit the NLP and RNN communities for their future research. But of course, we do not want to make overblown claims.", "3. Is SLSTM better than attention/self-attention?\nThank you for reminding us to compare SLSTM with other attention models.\nWe did the experiment for sequence to sequence with attention (O.Vinyals, 2015) using different cells. The result is: seq2seq with attention with SLSTM-I outperforms other cells by around 10%.\nPOS, I^T_t = [X^T_t, Y^T_{t-1}]:\n\t\t\t LSTM\tGRU\t\tSLSTM-I\t\tSLSTM-II\nseq2seq with attention\t 34.10%\t73.65%\t 80.90%\t\t54.53%\nDP, I^T_t = [X^T_t, Y^T_{t-1}]:\n\t\t\t LSTM\tGRU\t\tSLSTM-I\t\tSLSTM-II\nseq2seq with attention\t 31.47%\t53.70%\t 66.72%\t\t51.24%\nIt is interesting to see that seq2seq with attention with SLSTM-I also outperforms DBRNN. We will do our detailed analysis on these updated results. 
We also include Cheng's result for DP; the result is:\nDP\n\t\t\t GRU\nCheng's bi-attention:\t61.29%\nAlso, from the discussion in point 2, the scaling factor in SLSTMs does function like an attention score, making SLSTM inherently an attention model.\n\n4. Isn't DBRNN just another so-so multi-task learner?\nThe answer is yes and no.\nDBRNN learns three different tasks: the one-directional forward, one-directional backward and bi-directional final results. In that sense, yes, DBRNN is a multi-task model.\nHowever, we can show mathematically that, by doing this, DBRNN is less vulnerable to previous erroneous predictions than the sequence to sequence model, as follows:\nDenote p_t as the ground truth distribution of the output at time step t, \\hat{p}_t the final prediction from the model, \\hat{p}^f_t the forward prediction, and \\hat{p}^b_t the backward prediction. We have \\hat{p}_t = W^f \\hat{p}^f_t + W^b \\hat{p}^b_t (Eq. 36 on page 7; we just change the notation from Y to p as suggested by reviewer1). The p_t is a one-hot vector of the form p_t = \\mathbbm{I}(p_{tk} = k'), \\forall k \\in 1,...,K, where \\mathbbm{I} is the indicator function and k' is the ground truth label.\nThen the cross entropy of DBRNN is:\nl = -\\sum^K_{k=1} p_{tk} log( \\hat{p}_{tk} )\nwhere K is the total number of classes (size of vocabulary).\nIt can be further expressed as:\nl = -\\sum^K_{k=1} p_{tk} log( W^f_k \\hat{p}^f_{tk} + W^b_k \\hat{p}^b_{tk})\n = -log( W^f_{k'} \\hat{p}^f_{tk'} + W^b_{k'} \\hat{p}^b_{tk'})\nWe can pick W^f_{k'} and W^b_{k'} such that l < -\\sum^K_{k=1} p_{tk} log(\\hat{p}^f_{tk}) and l < -\\sum^K_{k=1} p_{tk} log(\\hat{p}^b_{tk}),\nwhich means DBRNN can have lower cross entropy than one-directional prediction by pooling expert opinions from these two predictions.\n\n 5. The experiment is weak\nWe will add the new results in point 3 to our experiment results section.\n", "6. What is DBRNN?\nAs summed up by reviewer3, DBRNN is a multi-task model with three learning objectives: the one-directional forward target sequence, the one-directional backward target sequence, and the bi-directional target sequence.\nOne can certainly make DBRNN deeper. In our experiments, we are more concerned with the fundamental model capabilities than with how many RNN layers the model can have. As reported in (Ilya Sutskever,2014), the sequence-to-sequence model gives good performance for machine translation with up to 4 layers. The optimal number of layers is itself an ongoing research topic (Razvan Pascanu, 2014). Such a topic is beyond the scope of this work.\nWhat makes DBRNN stand out from the sequence-to-sequence model is that it is mathematically robust to previous erroneous predictions, as shown below:\nDenote p_t as the ground truth distribution of the output at time step t, \\hat{p}_t the final prediction from the model, \\hat{p}^f_t the forward prediction, and \\hat{p}^b_t the backward prediction. We have \\hat{p}_t = W^f \\hat{p}^f_t + W^b \\hat{p}^b_t (Eq. 36 on page 7; we just change the notation from Y to p as suggested). The p_t is a one-hot vector of the form p_t = \\mathbbm{I}(p_{tk} = k'), \\forall k \\in 1,...,K, where \\mathbbm{I} is the indicator function and k' is the ground truth label. 
Then the cross entropy of DBRNN is:\nl = -\\sum^K_{k=1} p_{tk} log( \\hat{p}_{tk} )\nwhere K is the total number of classes (size of vocabulary).\nIt can be further expressed as:\nl = -\\sum^K_{k=1} p_{tk} log( W^f_k \\hat{p}^f_{tk} + W^b_k \\hat{p}^b_{tk})\n = -log( W^f_{k'} \\hat{p}^f_{tk'} + W^b_{k'} \\hat{p}^b_{tk'})\nWe can pick W^f_{k'} and W^b_{k'} such that l < -\\sum^K_{k=1} p_{tk} log(\\hat{p}^f_{tk}) and l < -\\sum^K_{k=1} p_{tk} log(\\hat{p}^b_{tk}), which means DBRNN can have lower cross entropy than one-directional prediction by pooling expert opinions from these two predictions.\nIn the experiment results section, DBRNN with GRU gives the best performance for POS; it also outperforms BRNN and sequence-to-sequence in all language tasks, which validates our claim.\n\n7. Need to compare to sequence-to-sequence with attention\nThank you for this suggestion. We carried out experiments for sequence-to-sequence with attention; the results show that our design SLSTM outperforms other cells by around 10%.\nPOS, I^T_t = [X^T_t, Y^T_{t-1}]:\n\t\t\t LSTM\tGRU\t\tSLSTM-I\t\tSLSTM-II\nseq2seq with attention\t 34.10%\t73.65%\t 80.90%\t\t54.53%\nDP, I^T_t = [X^T_t, Y^T_{t-1}]:\n\t\t\t LSTM\tGRU\t\tSLSTM-I\t\tSLSTM-II\nseq2seq with attention\t 31.47%\t53.70%\t 66.72%\t\t51.24%\nIt is interesting to see that seq2seq with attention with SLSTM-I also outperforms DBRNN. We will do our detailed analysis on these updated results. We also include Cheng's result for DP; the result is:\nDP\n\t\t\t GRU\nCheng's bi-attention:\t61.29%\nAs to why we also show the results when the input to the model does not include the previous output (I_t = X_t), the point is to show that our proposal is robust to different inputs, as shown in Fig. 4. It also substantiates our analysis of SLSTM.", "Dear Reviewer, we do apologize for the presentation style that confuses readers. We will definitely clarify the notations as you suggested, as well as some basic concepts used to develop our ideas.\nWe have also added new experiments as you suggested, and the results (which will be elaborated in the following feedback points) suggest that our design does have an edge compared to existing RNN cell designs.\nHopefully, after our feedback and coming revision, you can have a better idea of what this work is about.\nWe also feel that this paper introduces some noteworthy ideas that may benefit the NLP and RNN communities in their future research.\n\n1. How does the mathematical analysis part differ from other RNN works?\nUnlike many other RNN/NLP papers, which focus on the gradient vanishing/exploding problem which happens in the training process, this work primarily focuses on the \"memory\" behavior of an RNN once it is trained. That answers your question of how this paper is different from (Bengio et al, 1994) and (Pascanu et al, 2013). Here, we analyze the memory of different RNN cells by looking into their output as a function of the input, and derive the relationship between the current output and previous inputs.\n\n2. SRN formulation is problematic\nWe should have mentioned that the SRN model we adopted is from Elman,1990 with a linear hidden state activation function and a non-linear output activation function, so our SRN formulation is not a linearized version. There are some variations of SRN, including Elman,1990 and Jordan,1997. Tensorflow also has its own variation of SRN. The SRN models that feed the previous output back to the input are Jordan,1990 and Tensorflow. 
For Elman's SRN model, the hidden state is fed back to the system instead.\nThe reason we chose Elman's SRN model is that it is mathematically tractable, and it can be analyzed using induction. For the other two models, however, due to the presence of the non-linear output activation f(), such analysis is impossible. In addition, all three SRN models are similar to each other performance-wise. \nSo the choice of Elman's model, in our assessment, is valid. It opens the door for the ensuing mathematical comparison between LSTM and SRN, which is, to our knowledge, the first in the literature.\n\n3. Upper bound does not mandate memory explosion\nYes, we agree with your conclusion here. In our revised paper, we will strictly limit our discussion to the case where the largest singular value is less than one, in which case the memory of SRN will decay at least exponentially.\nNevertheless, the derivation of SRN memory does not impact our major contribution: the relative memory capabilities of LSTM and SRN.\n\n4. Notations\nThank you for pointing this out; we will change the notations as you suggested.\n\n5. Where does the \"c_i\" come from?\nYes, it is a trainable vector, one for each time step.\nTo use SLSTM, one needs to decide the maximum sequence length for implementation. Such practice is also used for some popular models like sequence to sequence (Ilya Sutskever,2014) and sequence to sequence with attention (O.Vinyals, 2015). Please refer to the tensorflow implementation for bucketing here:\nhttps://github.com/tensorflow/nmt\nOr here from line 1145:\t https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py\nIn terms of whether SLSTM is efficient in the number of parameters and computational complexity, here is our response to a similar question from reviewer 3:\n\"However, the memory overhead required for SLSTM is limited. In addition, SLSTM-II requires fewer parameters than LSTM for sequences of typical length. Please take a look at table 1 on page 5: to double the number of parameters compared to an ordinary LSTM, the length of a sentence needs to be 4 times the word embedding size and number of cells put together. That is, in the case of Ilya Sutskever,2014 with 1000 word embedding and 1000 cells, the sentence length needs to be 4x(1000+1000) = 8000! In practice, for most NLP problems whose input involves sentences, the length will typically be less than 100. As a matter of fact, in the tensorflow implementation of O.Vinyals, 2015 for machine translation, the maximum sentence length is truncated to 50. In our experiment, sequence to sequence with attention (O.Vinyals, 2015) for maximum sentence length 100 (for other model settings please refer to Table 2), the SLSTM-I parameters use 75M of memory, SLSTM-II uses 69.1M, LSTM uses 71.5M, and GRU uses 65.9M. Through GPU parallelization, the computational time for all four cells is almost identical, with 0.4 seconds per step time on a TITAN X 1080 GPU. 
\nSo the bottom line is: unless you are dealing with sequence lengths of more than hundreds of thousands, you should not worry about the memory (or the waste of it) or the computational complexity.\"", "Thank you for your feedback; you raise some critical points, which we would like to address in the comment section. We will also reflect our answers to your concerns in the next revision of our paper.\nWe do agree that there are some presentation issues that confuse readers, but we also feel that this paper introduces some noteworthy ideas that may benefit the NLP and RNN communities in their future research.\n\n1. Efficiency in the number of cell parameters\nIt is true that the number of parameters of our cell design depends on the length of the sequences. Before using SLSTM, one needs to decide the maximum sequence length for the implementation. Such practice is also used for some popular models like sequence to sequence (Sutskever et al., 2014) and sequence to sequence with attention (Vinyals et al., 2015). Please refer to the TensorFlow implementation of bucketing here:\nhttps://github.com/tensorflow/nmt\nOr here from line 1145:\nhttps://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/legacy_seq2seq/python/ops/seq2seq.py\nHowever, the memory overhead required for SLSTM is limited. In addition, SLSTM-II requires fewer parameters than LSTM for sequences of typical length. Please take a look at Table 1 on page 5: to double the number of parameters compared to an ordinary LSTM, the length of a sentence needs to be 4 times the word embedding size and the number of cells put together. That is, in the case of Sutskever et al., 2014, with 1000-dimensional word embeddings and 1000 cells, the sentence length needs to be 4x(1000+1000) = 8000! In practice, for most NLP problems whose input involves sentences, the length will typically be less than 100. As a matter of fact, in the TensorFlow implementation of Vinyals et al., 2015 for machine translation, the maximum sentence length is truncated to 50. In our experiment with sequence to sequence with attention (Vinyals et al., 2015) and maximum sentence length 100 (for other model settings please refer to Table 2), the SLSTM-I parameters use 75M of memory, SLSTM-II uses 69.1M, LSTM uses 71.5M, and GRU uses 65.9M. Through GPU parallelization, the computational time for all four cells is almost identical, at 0.4 seconds per step on a TITAN X 1080 GPU.\nSo the bottom line is: unless you are dealing with sequence lengths of more than hundreds of thousands, you should not worry about the memory (or the waste of it) or the computational complexity.\n\n2. Analysis of why SLSTM can have longer memory\nThank you for raising this critical point! Indeed, it can be proven in a similar fashion to the analysis in Section 2 for LSTM. Below is the proof, and we will definitely add this part to our next revision.\nThe closed-form expression of the SLSTM-I output is:\nY_t = \sigma(W_o I_t) \phi( \sum^t_{k=1} c_k [ \prod^t_{j=k+1} \sigma(W_f I_j) ] \sigma(W_i I_k) \phi(W_{in} I_k) + b )    (A)\nWe can pick c_k such that:\n|c_k \prod^t_{j=k+1} \sigma(W_f I_j)| > |\prod^t_{j=k+1} \sigma(W_f I_j)|    (B)\nComparing Eq. (A) to Eq. (11) (page 3), we can conclude that SLSTM-I has longer memory than LSTM. The memory capability of SLSTM-II can be proven in a similar fashion.\nAs a matter of fact, c_k plays a similar role to the attention score in various attention models such as Vinyals et al. (2015). 
The impact of preceding elements on the current output can be adjusted (either increased or decreased) by c_k." ]
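To make the expert-pooling argument in the DBRNN responses above concrete, here is a minimal NumPy sketch; all numbers are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

K = 5        # vocabulary size
k_true = 2   # index k' of the ground-truth token (one-hot target p_t)

# Hypothetical softmax outputs of the forward and backward predictors.
p_f = np.array([0.10, 0.15, 0.55, 0.10, 0.10])
p_b = np.array([0.05, 0.10, 0.70, 0.10, 0.05])

# Per-class pooling weights W^f_k, W^b_k; here a simple convex combination.
w_f, w_b = 0.4, 0.6
p_pool = w_f * p_f + w_b * p_b

# For a one-hot target, cross entropy reduces to -log p(k').
ce = lambda p: -np.log(p[k_true])
print(ce(p_f), ce(p_b), ce(p_pool))  # ~0.598, ~0.357, ~0.446

# Since -log is convex, the pooled loss never exceeds the weighted average
# of the one-directional losses; with learned per-class weights that are
# not constrained to sum to one, it can be driven below both.
```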
[ 3, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1AMITFaW", "iclr_2018_r1AMITFaW", "iclr_2018_r1AMITFaW", "r1bUXZAWz", "ByNyuNYeM", "SJ2iJmyeM", "HyFOESYxM", "iclr_2018_r1AMITFaW", "ByNyuNYeM", "S1fN-RtZz", "HJ-W4RYZM", "SJ2iJmyeM", "HyFOESYxM" ]
iclr_2018_HJ39YKiTb
Associative Conversation Model: Generating Visual Information from Textual Information
In this paper, we propose the Associative Conversation Model, which generates visual information from textual information and uses it for generating sentences, in order to utilize visual information in a dialogue system without image input. In research on Neural Machine Translation, there are studies that generate translated sentences using both images and sentences, and these studies show that visual information improves translation performance. However, sentence generation algorithms that use images cannot be applied to dialogue systems, since many text-based dialogue systems only accept text input. Our approach generates (associates) visual information from input text and generates response text using a context vector that fuses the associative visual information with the textual information of the sentence. A comparative experiment between our proposed model and a model without association showed that our proposed model generates useful sentences by associating visual information related to the sentences. Furthermore, an analysis of the visual associations showed that our proposed model generates (associates) visual information that is effective for sentence generation.
rejected-papers
None of the reviewers are enthusiastic about the paper, primarily due to the lack of proper evaluation. The authors' response to this criticism is also insufficient. The final results are mixed, which does not show very clearly that the presented associative model performs better than the sole seq2seq baseline that the authors use for comparison. We think that addressing these immediate concerns would improve the quality of this paper.
val
[ "SySoIWLgf", "SJ4THI9gG", "BkC87_Cgz", "SkQSCdCmG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "\n\nThe authors describe a method to be used in text dialogue systems. The contribution of the paper relies on the usage of visual information to enhance the performance of a dialogue system. An input phrase is expanded with visual information (visual context vectors), next visual and textual information is merged in a deep model that provides the final answer. \n\nAlthough the idea is interesting, I found the methodology fairly straightforward (only using known models in each step) and not having a particular component as contribution. What is more, the experimental evaluation is limited and purely qualitative: a few examples are provided. Also, I am not convinced on the evaluation protocol: the authors used captions from video+videos as input data, and used \"the next caption\" as the response of the dialogue system. I am not very familiar with this type of systems, but it is clear to me that the evaluation is biased and does not prove the working hypothesis of the authors. Finally, no comparison with related works is provided.\n\n\nEnglish needs to be polished \n", "**Strengths**\nIn general, the paper makes an important observation that even in textual dialog, it might often make sense to reason or “imagine” how visual instances look, and this can lead to better more grounded dialog. \n\n**Weakness**\nIn general, the paper has some major weaknesses in how the dataset has been constructed, details of the models provided and generally the novelty of the proposed model. While the model on its own is not very novel, the paper does make an interesting computational observation that it could help to reason about vision even in textual dialog, but the execution of the dataset curation is not satisfactory, making the computational contribution less interesting. \n\nMore specific details below:\n1. The paper does not write down an objective that they are optimizing for any of the three stages in the model, and it is unclear what is the objective especially for the video context prediction task -- the distribution over the space of images (or videos) for a given piece of text is likely multimodal and gaussian likelihoods might not be sufficient to model this properly. Not clear if the sequence to sequence models are used in teacher forcing model when training in Stage 1, or there is sampling going on. In general, the paper lacks rigor in writing down what it optimizes, and laying out details of the model clearly. \n\n2. The manner in which the dataset has been constructed is unsatisfying -- it assumes that two consecutive pieces of subtitles in news channels constitutes a dialog. This is very likely an incorrect and unsatisfying assumption which does not take into account narrative, context etc. Right now the dataset seems more like skip-thought vectors [A] which models the distribution over contextual sentences given a source sentence than any kind of dialog.\n\n3. The setup and ultimately the motivation in context of the setup is fairly artificial -- the dataset does have images corresponding to each “dialog” so it is unclear why the associative model is needed in this case. Further, it would have been useful to see quantitative evaluation of the proposed approach or statistics of the dataset to establish context for the dataset being a valid benchmark, and providing a baseline / numerical checkpoint for future works to compare to. Without any of these things, the work seems fairly incomplete.\n\nClarity:\n1. 
The captions of Figure 2 are unclear, and it is hard to understand what they are conveying.\n2. For a large part, the paper talks about how visual instances are not available for textual phrases and then proceeds to assume access to aligned text and visual data. It would be good to clarify from the start that the model does need paired videos and text, and to state exactly how much aligned data is needed.\n3. Already learned CNN (Page 4, Sec. 2.2.1): It would be good to mention which CNN was used.\n4. Page 4: “the textual and visual context vectors of the spider are generated, respectively”: It would be good to clarify that the textual and visual context vectors for the spider are attended to, as opposed to saying they are generated.\n\nReferences:\n\n[A]: Kiros, Ryan, Yukun Zhu, Ruslan R. Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. “Skip-Thought Vectors.” In Advances in Neural Information Processing Systems 28, edited by C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, 3294–3302. Curran Associates, Inc.", "The paper proposes to augment (traditional) text-based sentence generation/dialogue approaches by incorporating visual information. The idea is that associating visual information with input text, and using that associated visual information as additional input, will produce better output text than using only the original input text.\n\nThe basic idea is to collect a bunch of data consisting of both text and associated images or video. Here, this was done using Japanese news programs. The text+image/video is used to train a model that requires both as input and that encodes both as context vectors, which are then combined and decoded into output text. Next, the image inputs are eliminated, with the encoded image context vector instead being associatively predicted directly from the encoded text context vector (why not also use the input text to help predict the visual context?), which is still obtained from the text input, as before. The result is a model that can make use of the text-visual associations without needing visual stimuli. This is a nice idea.\n\nActually, based on the brief discussion in Section 2.2.2, it occurs to me that the model might not really be learning visual context vectors associatively, or that this doesn't really have meaning in some sense. Does it make sense to say that what it is really doing is just learning to associate other concepts/words with the input text, and that it is using the augmenting visual information in the training data to provide those associations? Is this worth talking about?\n\nUnfortunately, while the idea has merit, and I'd like to see it pursued, the paper suffers from a fatal lack of validation/evaluation, which is very curious, given the amount of data that was collected, the fact that the authors have both a training and a test set, and that there are several natural ways such an evaluation might be performed. The two examples of Fig 3 and the additional four examples in the appendix are nice for demonstrating some specific successes or weaknesses of the model, but they are in no way sufficient for evaluation of the system, to demonstrate its accuracy or value in general.\n\nPerhaps the most obvious thing that should be done is to report the model's accuracy for reproducing the news dialogue, that is, how accurately is the next sentence predicted by the baseline and ACM models over the training instances and over the test data? 
How does this compare with other state-of-the-art models for dialogue generation trained on this data (perhaps trained only on the textual part of the data in some cases)?\n\nSecond, some measure of accuracy for recall of the associative image context vector should be reported; for example, how close (by cosine similarity or some other appropriate measure) is the associatively recalled image context vector to the target image context vector? On average? Best case? Worst case? How often is this associative vector closer to a confounding image vector than to an appropriate one?\n\nA third natural kind of validation would be some form of study employing human subjects to test its quality as a generator of dialogue.\n\nOne thing to note: the example of learning to associate the snowy image with the text about university entrance exams demonstrates that the model is memorizing rather than generalizing. In general, this is a false association (that is, in general, there is no reason that snow should be associated with exams on the 14th and 15th—the month is not mentioned, which might justify such an association.)\n\nAnother thought: did you try not retraining the decoder and attention mechanisms for step 3? In theory, if step 2 is successful, the retraining should not be necessary. To the extent that it is necessary, step 2 has failed to accurately predict visual context from text. This seems like an interesting avenue to explore (and is obviously related to the second type of validation suggested above). Also, in addition to the baseline model, it seems like it would be good to compare a model that uses actual visual input and the model of step 1 against the model of step 3 (possibly both retrained and not retrained) to see the effect on the outputs generated—how well does each of these do at predicting the next sentence on both training and test sets?\n\nOther concerns:\n\n1. The paper is too long by almost a page in main content.\n\n2. The paper exhibits significant English grammar and usage issues and should be carefully proofed by a native speaker.\n\n3. There are lots of undefined variables in the Eqs. (s, W_s, W_c, b_s, e_t,i, etc.). Given the context and associated discussion, it is almost possible to sort out what all of them mean, but brief, careful definitions should be given for clarity. \n\n4. Using news broadcasts as a substitute for true dialogue data seems kind of problematic, though I see why it was done.\n", "Dear reviewers,\n\nWe really appreciate your constructive and helpful suggestions.\nWe attempted to address all the points raised by the reviewers as much as possible, and modified the following points.\n\n1. We added a section on the subjective evaluation experiment (Sec. 4.2).\n2. We added a description of what is being optimized in each step (Eq. 13, 14, 15).\n3. We added brief definitions of several undefined variables.\n4. We added the results of calculating the cosine similarity between the visual context vector obtained in step 1 and the associative visual context vector generated by the associative encoder, to show how close the associatively recalled image context vector is to the target image context vector (Fig. 3, 5, 6).\n5. We added the accuracy of each model when the test data were used (Tab. 1).\n6. We clarified that teacher forcing was used when training the model in Step 1.\n7. The caption of Figure 2 was described in more detail.\n8. 
We added a heat map of attention in Figure 4, in order to show how precisely the context vector is associated as visual information.\n\nBesides the points above, we replaced the image used in Fig. 1 with another for copyright reasons.\nAlso, we added source descriptions for some images obtained from TV programs.\n\nThe following are our responses to the points raised by the reviewers.\n\n-- The model might not really be learning visual context vectors associatively.\nAlthough the model could not fully acquire the precise and useful concepts we aimed for at the beginning, we think that the model has learned to associate texts with concrete information that can only be obtained by associating visual information, as shown in Section 4.4.\n\n-- Lack of data\nIt was impossible to collect larger amounts of data due to time constraints.\n\n-- What is the reason for using the news data?\nThe first reason is that there exists no appropriate corpus in which the texts are provided in the form of dialogues together with videos representing the content corresponding to those texts. The second reason is that the narration in news programs can be considered a form of dialogue if each sentence is taken as an utterance and the subsequent sentence as its response. We, however, think this procedure is problematic, and we will conduct another experiment using more appropriate data.\n\n-- Why not also use the input text to help predict the visual context?\nWe considered the input texts unnecessary for predicting the visual context vector, because the textual context vector should be able to reflect the information of the input text by itself.\n\n-- It may be good not to re-learn the decoder of step 3 and the attention mechanism.\nThank you for the valuable comment. We would definitely like to complete the corresponding experiment.\n" ]
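The cosine-similarity evaluation described in item 4 of the revision list above is easy to sketch; the snippet below (dimensions and vectors are hypothetical placeholders) scores how close an associatively predicted visual context vector is to its target and checks whether a confounding vector is closer, as one of the reviews suggested measuring.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
d = 256                                        # context-vector dimension
target = rng.normal(size=d)                    # visual context vector (Step 1)
predicted = target + 0.3 * rng.normal(size=d)  # associative prediction (Step 2)
confound = rng.normal(size=d)                  # vector of an unrelated image

sim_target = cosine(predicted, target)
sim_confound = cosine(predicted, confound)
print(sim_target, sim_confound, sim_target > sim_confound)
```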
[ 4, 3, 3, -1 ]
[ 5, 4, 5, -1 ]
[ "iclr_2018_HJ39YKiTb", "iclr_2018_HJ39YKiTb", "iclr_2018_HJ39YKiTb", "iclr_2018_HJ39YKiTb" ]
iclr_2018_HypkN9yRW
DDRprog: A CLEVR Differentiable Dynamic Reasoning Programmer
We present a generic dynamic architecture that employs a problem-specific differentiable forking mechanism to leverage discrete logical information about the problem data structure. We adapt and apply our model to CLEVR Visual Question Answering, giving rise to the DDRprog architecture; compared to previous approaches, our model achieves higher accuracy in half as many epochs with five times fewer learnable parameters. Our model directly models underlying question logic using a recurrent controller that jointly predicts and executes functional neural modules; it explicitly forks subprocesses to handle logical branching. While FiLM and other competitive models are static architectures with less supervision, we argue that inclusion of program labels enables learning of higher level logical operations -- our architecture achieves particularly high performance on questions requiring counting and integer comparison. We further demonstrate the generality of our approach through DDRstack -- an application of our method to reverse Polish notation expression evaluation in which the inclusion of a stack assumption allows our approach to generalize to long expressions, significantly outperforming an LSTM with ten times as many learnable parameters.
rejected-papers
The reviewers generally agree that the DDRprog method is both novel and interesting, and they also see merit in its outperformance of related methods in the empirical results. However, there were many complaints about the writing quality, the clarity of the exposition, and the unclear motivation of some of the work. The reviewers also noted insufficient comparisons and discussions regarding relevant prior art, including recursive NNs, Tree RNNs, IEP, etc. While the authors have made substantial revisions to the manuscript, with several additional pages of exposition, reviewers have not raised their scores or confidence in response.
train
[ "BJZGbH9lz", "Sk3LFlJZf", "Byd74WfbM", "HkibWuTQM", "SJ7lW_pmM", "By8feuaXz", "rJQ-xu6XM", "HkQ3ydpXM", "SJqSJOamM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "\nSummary: This paper leverages an explicit program format and proposes a stack based RNN to solve question answering. The paper shows state-of-the art performance on the CLEVR dataset.\n\nClarity:\n- The description of the model is vague: I have to looking into appendix on what are the Cell and Controller function.\n- The authors should also improve the intro and related work section. Currently there is a sudden jump between deep learning and the problem of interest. Need to expand the related work section to go over more literature on structured RNN.\n\nPros:\n- The model is fairly easy to understand and it achieves state-of-the-art performance on CLEVR.\n- The model fuses text and image features in a single model.\n\nCons:\n- This paper doesn’t mention recursive NN (Socher et al., 2011) and Tree RNN (Tai et al., 2015). I think they have fairly similar structure, at least conceptually, the stack RNN can be thought as a tree parser. And since the push/pop operations are static (based on the inputs), it’s no more different than encoding the question structure in the tree edges.\n- The IEP (Cells) module (Johnson et al., 2017) seems to do all the heavy-lifting in my opinion. That’s why the proposed method only uses 9M parameters. The task isn’t very challenging to learn because all the stack operations are already given. Table 1 should note clearly which methods use problem specific parsing information to train and which use raw text. Based on my understanding of FiLM at least, they use raw words instead of groundtruth parse trees. So it’s not very surprising that the proposed method can outperform FiLM (by a little bit).\n- I don’t fully agree with the title - the stack operations are not differentiable. So whatever network that outputs the stack operation cannot be jointly learned with gradients. This is based on the if-else statements I see in Algorithm 1.\n\nConclusion:\n- Since the novelty is limited and it requires explicit program supervision, and the performance is only on par with the state-of-the-art (FiLM), I am not convinced that this paper brings enough contribution to be accepted. Weak reject.\n\nReferences:\n- Socher, R., Lin, C., Ng, A.Y., Manning, C.D. Parsing Natural Scenes and Natural Language with Recursive Neural Networks. The 28th International Conference on Machine Learning (ICML 2011).\n- Tai, K.S., Socher, R., Manning C.D. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. The 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015).", "Summary:\nThe paper presents a generic dynamic architecture for CLEVR VQA and Reverse Polish notation problems. Experiments on CLEVR show that the proposed model DDRprog outperforms existing models, but it requires explicit program supervision. The proposed architecture for RPN, called DDRstack outperforms an LSTM baseline.\n\nStrengths:\n— For CLEVR VQA task, the proposed model outperforms the state-of-the-art with significantly less number of parameters.\n— For RPN task, the proposed model outperforms baseline LSTM model by a large margin.\n\nWeaknesses:\n— The paper doesn’t describe the model clearly. After reading the paper, it’s not clear to me what the components of the model are, what each of them take as input and produce as output, what these modules do and how they are combined. 
I would recommend restructuring the paper to clearly mention each of the components, describe them individually, and then explain how they are combined for both cases - DDRprog and DDRstack. \n— Is the “fork” module the main contribution of the paper? If so, at least this should be described in detail. So, if no fork module is required for a question, is the model architecture effectively the same as IEP?\n— Machine accuracy is already on par with human accuracy on CLEVR and very close to 100%. Why is this problem still important? \n— Given that the performance of the state of the art on the CLEVR dataset is already very high ( <5% error) and the performance numbers of the proposed model are not very far from the previous models, it is very important to report the variance in accuracies along with the mean accuracies, to determine whether or not the performance of the proposed model is statistically significantly better than the previous models.\n— In Figure 4, why are the LSTM32/128 curves different for Length 10 and Length 30 up to subproblem index 10? They are both trained on the same training data; only the test data is of a different length, and ideally both models should achieve similar accuracy for the first 10 subproblems (the same trend as DDRstack).\n— Why is DDRstack not compared to StackRNN?\n— Can the authors provide a training time comparison of their model and other/baseline models? That is more important than the number of epochs required in training.\n— There are only 3 curves (instead of 4) in Figure 3.\n— In a number of places, the authors refer to left and right program branches. What are they? These names have not been defined formally in the paper.\n\nOverall:\nI think the research work in the paper is interesting and significant, but given the current presentation and level of detail in the paper, I don’t think it will be helpful for the research community. By properly restructuring the paper and adding more details, the paper can be converted into a solid submission.", "Summary:\nThe paper proposes a novel model architecture for the visual question answering task on the CLEVR dataset. The main novelty of the proposed model lies in its problem-specific differentiable forking mechanism, which is designed to encode complex assumptions about the data structure in the given problem. The proposed model is also applied to the task of solving Reverse Polish Notation (RPN) expressions. On the CLEVR dataset, the proposed model beats the state of the art by 0.7% with ~5 times fewer learnable parameters and in about half as many epochs. For the RPN task, the proposed model beats an LSTM baseline by 0.07 in terms of L1 error with ~10 times fewer parameters. \n\t\nStrengths:\n1.\tThe proposed model is novel and interesting.\n2.\tThe performance of the proposed model on the “Count” questions in the CLEVR dataset is especially better than existing models and is worth noting.\n3.\tThe discussion on the tradeoff between tackling difficult problems and using knowledge of the program structure is engaging. \n\nWeaknesses:\n1.\tThe writing about the model architecture can be improved. As of now, it is not easy to follow.\n2.\tThe motivation behind the proposed model for the CLEVR task has been explained with an example of one type of question – “How many objects are red or spheres?”. It is not clear how the proposed model is better than existing models (in terms of model architecture) for other types of questions.\n3.\tThe motivation behind the proposed model for the RPN task is not strong enough. 
It is not clear why machine learning is needed for the RPN task. Is the idea that we do not want to use domain knowledge about which symbols correspond to operations vs. which correspond to numbers? Or is there more to it?\n4.\tThe proposed model needs to use information about the program structure. It would be good if the authors could comment on how the proposed model can be used to answer natural language questions about images, such as those in the VQA dataset (Antol et al., ICCV 2015).\n5.\tThe paper does not have any qualitative examples for either of the two tasks. Qualitative examples of successes and failures would be helpful to better position the proposed model against existing models.\n\nOverall: The experimental results look good; however, the proposed model needs to be better motivated. The writing, especially in the “DDR Architecture” section, needs improvement to make the paper easy to follow. A discussion on how the existing model can be adapted for natural language questions would be a good addition.\n", "\nThough you did not mention our RPN experiments, this task is crucial to the motivation of our work. Perhaps CLEVR is already too easy for modern architectures, but not every task is feasible without strong supervision. This is the purpose of our framework: generality and flexibility in incorporating complex annotation information. To this end, we present an expression evaluation task that completely breaks a standard LSTM. In contrast, our framework leverages a simple assumption to yield strong performance with a 10X smaller network.\n\nIn closing: DDRprog uses increased supervision to achieve state-of-the-art performance with a small network. We achieve comparable performance to FiLM across 3/5 subtasks and much stronger performance on Count and especially Compare Integer tasks. DDRprog is the only architecture to date that does not exhibit particularly poor performance on at least one subtask. We present DDRstack as a successful application of the same dynamic, increased-supervision framework to the RPN task, where a 10X larger LSTM fails to generalize. DDRprog and DDRstack represent applications of our general reasoning framework--a novel approach to learning discrete structural data with increased differentiability and generality over all prior approaches.", "\nWe are sorry to hear that you are not convinced of the merit of our work. Your concerns seem to stem from a lack of clarity. This was a common theme--all three reviewers emphasize a lack of clarity overall. We did our best to address this in our initial submission—to that end, we included a visual, tabular, and algorithmic representation of DDRprog with a full page description, and similar information for DDRstack. However, in the process of emphasizing algorithmic details, we neglected proper treatment of the extensive motivation for CLEVR over conventional VQA datasets and assumed far too much domain familiarity. The largest effort of our revision has been to increase accessibility on this front. We have reworked the intro and related works section dramatically to better capture the motivation behind CLEVR, the motivation behind our framework, and concretely how the details of our architectures combine to address all motives. We particularly emphasize the novelty of our approach as a general reasoning framework, which was not clear from our initial submission.\n\nWe have reviewed and reconsidered additional structured RNN literature. 
We now make mention of recursive NNs and Tree RNNs and detail the differences from our present work. The push/pop operations are not static in our CLEVR architecture—they are given as supervision at train time, but are learned and coupled to module prediction at test time. In our RPN architecture, the order of push/pop operations is static, but our module reuse scheme would be difficult to motivate from the static context of e.g. Tree RNN works—we use the RPN task to demonstrate the flexibility of our framework across differing degrees of supervision.\n\nYou are correct that the IEP/NMN cells do the heavy lifting—but this is precisely our intent. Our overall network structure is completely different from IEP despite reusing many of that work’s subnetworks out of convenience. In particular, our network not only has the ability to predict and execute modules one at a time (IEP must predict and compile a static program for the entire question), but it also observes the outputs of module executions and uses them to predict the next module. This is a significant contribution of our work—while IEP relies on a larger variant of the same module structure, we achieve a 2X reduction in relative error over IEP on all unary tasks with a much smaller model.\n\nWhile we are largely concerned with building a general framework for discrete, high-level reasoning rather than raw accuracy on CLEVR, our performance increment over FiLM is actually quite significant. In particular, FiLM suffers from a clear logical deficiency on Compare Integer questions—in this case, we achieve a roughly 4X relative improvement in accuracy, from 93.8 to 98.4 percent. Furthermore, our architecture is the only model proposed thus far that exhibits strong, consistent performance across all tasks—all other models exhibit inconsistently poor performance on at least one subtask relative to performance on the task as a whole. We have dramatically clarified this additional merit of our model in the Discussion section.\n\nYou are correct that the stack operations themselves are not differentiable. However, the prediction of these operations is learned differentiably: our architecture learns to fork subprocesses, and the push/pop behavior is part of the forking behavior. There is one significant non-differentiable aspect of our network: the pathway from the answer loss to the program cell loss is not directly learnable. However, there is an important indirect interaction between the losses, which is a key contribution of our architecture—unlike in IEP, we maintain a visual state that is directly used by the controller during module prediction, and this visual state is affected by the question answer gradients. This is the main difference between our model and IEP--the proof of the utility of this mechanism is our model’s performance gain vs. IEP despite a significantly smaller model size.\n\n", "\nFinally, we address the rationale behind several smaller objections:\n- On the topic of variance, the CLEVR test set contains 150K questions and is very consistent with the validation set according to the CLEVR authors--we ran the test set only once on our best model. 
If your concern stems from the 0.4% standard deviation reported by the authors of FiLM, it appears from their open-sourced release that they trained for a fixed number of iterations and also reported that their model did not necessarily converge.\n- There are in fact 4 curves in Figure 3, but the train and validation curves for our model overlap due to a lack of overfitting; we have added this to the caption.\n- Training time is non-standard for dynamic architectures because it is often not meaningful. Many dynamic architectures currently run more quickly on CPU than GPU simply because research in this area has far outstripped framework optimization for these models. By FLOPs, our model is more efficient than all architectures except RN. However, RN is extremely slow to train—the authors used 10 GPUs in a distributed setting, whereas all other models run on single cards. As a rough idea of the computational footprint, our research was conducted with two personally owned cards.\n- The left/right program branch notation was admittedly confusing and has been stricken from our work. We were originally referring to the fact that binary programs can be written vertically as a tree with two program branches; we have reworded this in our discussions for clarity.\n- We did discuss StackRNN, but they are not comparable architectures: StackRNN does not leverage additional annotations to learn the stack structure. While this may seem more general, the original work strongly suggests that StackRNN is limited to simpler problems. In contrast, our architecture makes strong structural assumptions to solve the much harder RPN task. This has been expanded upon and clarified in our revision.\n\nIn closing: DDRprog uses increased supervision to achieve state-of-the-art performance with a small network. We achieve comparable performance to FiLM across 3/5 subtasks and much stronger performance on Count and especially Compare Integer tasks. DDRprog is the only architecture to date that does not exhibit particularly poor performance on at least one subtask. We present DDRstack as a successful application of the same dynamic, increased-supervision framework to the RPN task, where a 10X larger LSTM fails to generalize. DDRprog and DDRstack represent applications of our general reasoning framework--a novel approach to learning discrete structural data with increased differentiability and generality over all prior approaches.", "We agree with your overall evaluation—from your commentary, it is clear that the better part of our work’s contribution is currently obscured by unclear presentation. This was a common theme--all three reviewers emphasize a lack of clarity overall. We did our best to address this in our initial submission—to that end, we included a visual, tabular, and algorithmic representation of DDRprog with a full page description, and similar information for DDRstack. However, in the process of emphasizing algorithmic details, we neglected proper treatment of the extensive motivation for CLEVR over conventional VQA datasets and assumed far too much domain familiarity.\n\nThe largest effort of our revision has been to clarify our presentation and increase accessibility. We have reworked the intro and related works section dramatically to better capture the motivation behind CLEVR, the motivation behind our framework, and concretely how the details of our architectures combine to address all motives. We also include a thorough appendix of architecture tables sufficient to reproduce our result. 
We will also open source the entire project pending publication.\n\nDiscounting the fork module, our architecture still represents a significant improvement over IEP. On unary programs that do not require a fork module, we attain a 2X improvement in relative error over IEP with a 4X smaller model. While we do include the fork architecture in our revision, the layer details are less relevant than the stack behavior that forking enables. In our revision, we particularly emphasize the novelty of our approach as a general reasoning framework, which was not clear from our initial submission. Furthermore, while IEP predicts programs dynamically, the actual execution is static. In contrast, our architecture is fully dynamic, predicting and executing modules on the fly. Previous module outputs are used to update an internal visual state, which is in turn used to predict the next module. This positions our architecture as a superset of IEP/NMN and NPI, capable of jointly learning functional modules and complex stack traces.\n\nYou are correct that CLEVR is effectively solved. There are two reasons that the task is still important. First, every competitive previous work exhibits a deficiency in at least one category of reasoning: all prior works obtain curiously poor performance on at least one important CLEVR subtask. As the first approach to attain close consistency across all tasks, our work solves this problem. Second, CLEVR remains the best proxy task for high-level visual reasoning because of its discrete program annotations. This is more in line with the purpose of our work as a general framework for differentiably leveraging logical annotations. However, we should note that our performance numbers do in fact represent a significant improvement over prior approaches: as mentioned above, we achieve a 2X relative improvement over IEP on Count, Exist, and Query tasks with a much smaller model and also maintain comparable results on the two Compare tasks. FiLM suffers from a clear logical deficiency on Compare Integer questions—in this case, we achieve a roughly 4X relative improvement in accuracy, from 93.8 to 98.4 percent. We also achieve a significant improvement, from 94.5 to 96.5 percent, on Count questions; performance is comparable on all other subtasks.\n\nThe discrepancy in Figure 4 is a critical point in our argument, addressed in depth in the RPN discussion section. The first 10 subproblems on the length 30 task correspond to a center crop of the data sequence: the LSTM is confused by the additional leading 20 numbers in the sequence. This indicates that the LSTM has not learned the stack structure and completely fails the task. This result is crucial to the motivation of our work. Perhaps CLEVR is already too easy for modern architectures, but not every task is feasible without strong supervision. This is the purpose of our framework: generality and flexibility in incorporating complex annotation information. To this end, we present an expression evaluation task that completely breaks a standard LSTM. In contrast, our framework leverages a simple assumption to yield strong performance with a 10X smaller network.\n\n\n\n", "Thank you for your commentary despite the admittedly unclear writing in our initial submission. We address each of your concerns below:\n\n1. We did our best to present clean visual, tabular, and algorithmic representations of DDRprog and similar information for DDRstack. 
However, we neglected proper treatment of the extensive motivation for CLEVR over conventional VQA datasets and assumed far too much domain familiarity. The largest effort of our revision has been to increase accessibility on this front. We have completely reframed the motivation behind CLEVR, the motivation behind our architecture, and concretely how the details of our architecture combine to address all motives.\n\n2. While IEP predicts programs dynamically, the actual execution is static. In contrast, our architecture is fully dynamic, predicting and executing modules on the fly. Previous module outputs are used to update an internal visual state, which is in turn used to predict the next module. This feedback loop enabled by our dynamic framework is the motivation behind DDRprog and also the reason for its success. This creates fundamentally different behavior from IEP, even on unary programs that do not require a fork module: on unary subtasks, we attain a 2X improvement in relative error over IEP with a 4X smaller model. The fork module itself allows our architecture to model generic trees: this positions our architecture as a superset of IEP/NMN and NPI, capable of jointly learning functional modules and complex stack traces. Our architecture is built specifically for high-level reasoning: it is the only model with consistently high performance across all subtasks. FiLM suffers from a clear logical deficiency on Compare Integer questions—in this case, we achieve a roughly 4X relative improvement in accuracy, from 93.8 to 98.4 percent. We also achieve a significant improvement, from 94.5 to 96.5 percent, on Count questions; performance is comparable on all other subtasks.\n\n3. The RPN task is itself the motivating example for our work. DDRstack does not simply outperform the LSTM baseline on RPN by 0.07: it succeeds where the LSTM fails entirely. The RPN task is a toy problem that turns out to be very hard without additional supervision: the point of the expression evaluation task is that it fundamentally involves a parse tree—a stack-based algorithm. The discrepancy between length 10 and length 30 performance in Figure 4 is critical here. The first 10 subproblems on the length 30 task correspond to a center crop of the data sequence (e.g. the first subproblem of “12345+*-/” is “45+”): the LSTM is confused by the additional leading 20 numbers in the sequence. This indicates that the LSTM has not learned the stack structure and completely fails the task. In contrast, DDRstack leverages a simple assumption to yield strong performance with a 10X smaller network. This is the purpose of our framework: generality and flexibility in incorporating complex annotation information. Perhaps CLEVR is already too easy and solvable by static architectures such as FiLM, but not every task is feasible without strong supervision.\n\n4. Following the line of reasoning above, the future direction of this work is more general reasoning over knowledge graphs. We prefer synthetic tasks for ease of annotation generation, but plan on transitioning to natural image tasks in the long term--perhaps not Antol et al.’s VQA dataset, but the natural language and image scene graph dataset Visual Genome.\n\n5. Good point! Appendix included.\n\nIn closing: DDRprog uses increased supervision to achieve state-of-the-art performance with a small network. We achieve comparable performance to FiLM across 3/5 subtasks and much stronger performance on Count and especially Compare Integer tasks. 
DDRprog is the only architecture to date that does not exhibit particularly poor performance on at least one subtask. We present DDRstack as a successful application of the same dynamic, increased-supervision framework to the RPN task, where a 10X larger LSTM fails to generalize. DDRprog and DDRstack represent applications of our general reasoning framework--a novel approach to learning discrete structural data with increased differentiability and generality over all prior approaches.\n", "We thank all reviewers for their commentary. We have largely restructured and streamlined the paper--particularly the introduction and architecture section--and have taken into account all reviewer-specific commentary in the new version. We hope that we have assuaged your concerns and adequately clarified points of confusion through significant improvement of the writing. Please see the reviewer-specific responses below." ]
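Since the responses above lean heavily on the stack structure of RPN evaluation, a small reference evaluator may help; this is the classic textbook algorithm applied to the authors' example string "12345+*-/", not the DDRstack model itself.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

# Minimal Reverse Polish Notation evaluator, illustrating the stack
# assumption that DDRstack encodes. Single-digit operands are assumed,
# matching the "12345+*-/" example quoted in the responses.
def eval_rpn(expr: str) -> float:
    stack = []
    for tok in expr:
        if tok.isdigit():
            stack.append(float(tok))
        else:
            b, a = stack.pop(), stack.pop()  # e.g. "45+" pops 5 then 4
            stack.append(OPS[tok](a, b))
    return stack[0]

print(eval_rpn("12345+*-/"))  # 1 / (2 - 3 * (4 + 5)) = -0.04
```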
[ 5, 5, 6, -1, -1, -1, -1, -1, -1 ]
[ 2, 2, 2, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HypkN9yRW", "iclr_2018_HypkN9yRW", "iclr_2018_HypkN9yRW", "SJ7lW_pmM", "BJZGbH9lz", "rJQ-xu6XM", "Sk3LFlJZf", "Byd74WfbM", "iclr_2018_HypkN9yRW" ]
iclr_2018_r1TA9ZbA-
Learning to search with MCTSnets
Planning problems are among the most important and well-studied problems in artificial intelligence. They are most typically solved by tree search algorithms that simulate ahead into the future, evaluate future states, and back-up those evaluations to the root of a search tree. Among these algorithms, Monte-Carlo tree search (MCTS) is one of the most general, powerful and widely used. A typical implementation of MCTS uses cleverly designed rules, optimised to the particular characteristics of the domain. These rules control where the simulation traverses, what to evaluate in the states that are reached, and how to back-up those evaluations. In this paper we instead learn where, what and how to search. Our architecture, which we call an MCTSnet, incorporates simulation-based search inside a neural network, by expanding, evaluating and backing-up a vector embedding. The parameters of the network are trained end-to-end using gradient-based optimisation. When applied to small searches in the well-known planning problem Sokoban, the learned search algorithm significantly outperformed MCTS baselines.
rejected-papers
All reviewers agree that the contribution of this paper, a new way of training neural nets to execute Monte-Carlo Tree Search, is an appealing idea. For the most part, the reviewers found the exposition to be fairly clear and the proposed architecture of good technical quality. Two of the reviewers point out the limitation of evaluating in only a single domain, 10x10 Sokoban with four boxes and four targets. Since the training methodology uses supervised training on approximate ground-truth trajectories derived from extensive plain MCTS trials, it seems unlikely that the trained DNN will be able to generalize to other geometries (beyond 10x10x4) that were not seen during training. Sokoban also has a low branching ratio, so these experiments do not provide any insight into how the methodology will scale at much higher branching ratios. Pros: Good technical quality, interesting novel idea, exposition is mostly clear. Good empirical results in one very limited domain. Cons: The single 10x10x4 Sokoban domain is too limited to derive any general conclusions. Point for improvement: The paper compares the performance of MCTSnet trials vs. plain MCTS trials based on the number of trials performed. This is not an appropriate comparison, because the NN trials will be much more heavyweight in terms of CPU time, and there is usually a time limit to cut off MCTS trials and execute an action. It would be much better to plot the performance of MCTSnet and plain MCTS vs. CPU time used.
val
[ "ByL3DP9gf", "HyC90Zoez", "HkTkZHjlM", "BySoOGDff", "ByptdGwfz", "r1bluMwMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors introduce an approach for adding learning to search capability to Monte Carlo tree search. The proposed method incorporates simulation-based search inside a neural network by expanding, evaluating and backing-up a vector-embedding. The key is to represent the internal state of the search by a memory vector at each node. The computation of the network proceeds just like a simulation of MCTS, but using a simulation policy based on the memory vector to initialize the memory vector at the leaf. The proposed method allows each component of MCTS to be rich and learnable, and allows the joint training of the evaluation network, backup network, and simulation policy in optimizing the MCTS network. The paper is thorough and well-explained. My only complaint is the evaluation is only done on one domain, Sokoban. More evaluations on diverse domains are called for. ", "This paper proposes a framework for learning to search, MCTSNet. The paper proposes an idea to integrate simulation-based planning into a neural network. By this integration, solving planning problems can be end-to-end training. The idea is to represent all operators such as backup, action selection and node initialisation by neural networks. The authors propose to train this using policy gradient in which data of optimal state-action pairs is generated by a standard MCTS with a large number of simulations.\n\nIn overall, it is a nice idea to use DNN to represent all update operators in MCTS. However the paper writing has many unclear points. In my point of view, the efficiency of the proposed framework is also questionable.\n\n\nHere, I have many major concerns about the proposed idea.\n\n- It looks like after training MCTSnet with a massive amount of data from another MCTS, MTCSnet algorithm as in Algorithm 2 will not do very much more planning yet. More simulations (M increases) will not make the statistics in 4(a) improved. Is it the reason why in experiments M is always small and increasing it does not make a huge improvement?. This is different from standard planning when a backup is handled with more simulations, the Q value function will have better statistics, and then get smaller regrets (see 4(b) in Algorithm 1). This supervising step is similar to one previous work [1] and not mentioned in the paper.\n\n\n- MCTSnet has not dealt well with large/continuous state spaces. Each generated $s$ will amount to one tree node, with its own statistics. If M is large, the tree maintained is huge too. It might be not correct, then I am curious how this technical aspect is handled by MCTSnet.\n\n- Other questions:\n\n + how the value network used in the MCTS in section 3.5 is constructed?\n\n + what does p(a|s,{\\mathbf z}), p({\\mathbf s}|{\\mathbf z}) mean?\n\n + is R_1 similar to R^1\n\n + is z_m in section 3.5 and z^m in section 3.6 different?\n\n + is the action distribution from the root memory p_{\\theta}(a|s)? \n\n- Other related work: \n\n\n[1] Xiaoxiao Guo et. al. Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning, NIPS 2014\n\n[2] Guez et. al. Bayes-Adaptive Simulation-based Search with Value Function Approximation, NIPS 2014\n", "This paper designs a deep learning architecture that mimics the structure of the well-known MCTS algorithm. From gold standard state-action pairs, it learns each component of this architecture in order to predict similar actions.\n\nI enjoyed reading this paper. 
The presentation is very clear, the design of the architecture is beautiful, and I was especially impressed with the related work discussion that went back to identify other game search and RL work that attempts to learn parts of the search algorithm. Nice job overall.\n\nThe main flaw of the paper is in its experiments. If I understand them correctly, the comparison is between a neural network that has been learned on 250,000 trajectories of 60 steps each, where each step is decided by a ground-truth close-to-optimal algorithm, say MCTS with 1000 rollouts (is this mentioned in the paper?). That makes for a staggering 15 billion rollouts of prior data that go into the MCTSNet model. This is compared to 25 rollouts of MCTS that make the decision for the baseline. I suspect that generating the training data and learning the model takes an enormous amount of CPU time, while 25 MCTS rollouts can probably be done in a second or two. I'm sure I'm misinterpreting some detail here, but how is this a fair comparison?\n\nWould it be fair to have a baseline that learns the MCTS coefficient on the training data? Or one that uses the value function that was learned with classic search? I find it difficult to understand the details of the experimental setup, and maybe some of these experiments are reported. Please clarify. Also: the colors are not distinguishable in grey print.\n\nHow would the technique scale with more MCTS iterations? I suspect that the O(N^2) complexity is very prohibitive and will not allow this to scale up?\n\nI'm a bit worried about the idea of learning to trade off exploration and exploitation. In the end you'll just allow for the minimal amount of exploration needed to solve the games you've already seen. This seems risky, and I suspect that UCB and more statistically principled approaches would be more robust in this regard?\n\nAre these Sokoban puzzles easy for classical AI techniques? I know that many of them can be solved by A* search with a decent heuristic. It would be fair to discuss this.\n\nThe last sentence of the conclusions is too far-reaching; there is really no evidence for that claim.", "Thank you for your review. We’ll look at more domains in future work. \n", "Thank you for your review. We’re sorry you found some of the writing unclear. Let us answer point by point:\n\n* About the planning:\n\nThe quality of the prediction does improve with more simulations up to M. And training with larger M does lead to better results, cf. Figure 5. \n\n[1] distills search into a regular CNN. There is no learning to search. The fact that we generate labels for our experimental setup using search is orthogonal to our contribution. (Please also take a look at our answers to the other reviews.)\n\nThe model-free baseline comparisons provide evidence that well-chosen simulations of the environment are necessary to obtain good performance at test time. The data is used to learn how to perform such planning at test time.\n\n* About continuous spaces:\n\nThis is irrelevant, as we are not studying continuous environments. MCTS also suffers from the exact same limitation, but there exist techniques to deal with continuous states/actions that would also apply to MCTSnets. We could also “neuralize” search algorithms that are more directly suited to continuous spaces, if that had been the aim of the paper.\n\n* “How is the value network used in the MCTS in Section 3.5 constructed?”\n\nThe value network is trained by regressing towards the outcome of rollouts of a pre-trained policy in Sokoban. 
We’ll add these details in the appendix.\n\n* “What do p(a|s,{\mathbf z}) and p({\mathbf s}|{\mathbf z}) mean?”\n\n p(a|s, z) is the output of the network, where z is a random variable representing the internal actions selected by the search. \n\n p({\mathbf s}|{\mathbf z}) doesn’t appear in the paper. Do you mean \pi(z|s)? This is the distribution of z based on the simulation policy \pi for a given root state s; i.e. the probability of a tree expansion given the root state s.\n\n* “Is R_1 the same as R^1?”\n\nYes, R^1 should be R_1. This is a typo; thanks for catching that. \n\n * “Are z_m in Section 3.5 and z^m in Section 3.6 different?”\n\nThat’s also a typo; they should be the same.\n\n* “Is the action distribution from the root memory p_{\theta}(a|s)?”\n\nThat’s the marginal distribution after integrating over all random choices z.\n\n* About other related work: \n\nNone of these works attempts to learn how to search. We don’t view them as particularly relevant.\n", "Thank you for your thoughtful review. Let us reply to each point separately.\n\n* About the experimental setup:\nIt is true that the network is trained from a ground-truth close-to-optimal MCTS that uses a lot of computation (1000 rollouts per search). But that is exactly the point! Our neural network can efficiently represent and learn a search strategy that would normally take 1000 rollouts of MCTS to compute, whereas a standard neural network (of equivalent capacity) fails to learn the search strategy even when given the same close-to-optimal MCTS training data. Also note that our MCTS baseline uses a value network trained with a comparable amount of data. Finally, when solving a level never seen before, MCTS and MCTSnet have access to the exact same amount of model information.\n\nA valid suggestion you make is to “learn[..] the coefficients of MCTS”. MCTSnets is exactly such a proposal, albeit with a more general architecture.\n\n* About the exploration/exploitation tradeoff:\n\nIn the context of search, we care about exploration vs. exploitation strategies inasmuch as they allow us to get good final action selection. UCB has proven empirically to be a good simulation strategy, but there is no guarantee it is optimal in that context. \n\nIt is in principle possible for our networks to implicitly re-discover UCB if it were indeed optimal for our problem. The advantage of learning is that we can tune the strategy to the domain at hand, instead of relying on a generic approach which might perform indifferently on the task of interest.\n\nWe’ve shown some evidence that we learn an effective simulation strategy in our experimental setup; furthermore, we find that the trees constructed by MCTSnet contain variable numbers of branches, and we believe this implies some form of exploration/exploitation tradeoff is learned by the algorithm.\n\n* About scaling:\n\nNote that O(n^2) is a worst-case complexity (it will typically be O(n log n)), and it is not specific to MCTSnets; regular MCTS has the same complexity.\nMCTSnets compare favorably to MCTS (using deep nets) at run-time: both run a forward pass of a large network for each simulation to evaluate leaf nodes. In addition, MCTSnet also runs simulation policy and backup networks, but these are much less expensive in comparison.\n\nIt is training and optimizing MCTSnets with very large numbers of simulations that is more challenging. 
But the hope, partially demonstrated here, is that learned search can do more with fewer simulations - so a very large number of simulations might not be required.\n\n* About sokoban puzzles:\n\nWe’re using Sokoban as a sandbox for our model. We are not attempting to compete with brute-force solvers using heuristics and other domain-specific knowledge on Sokoban - we are not putting forward our method as a Sokoban solver. Indeed, a well-tuned A* may do relatively well. \n\n* About last sentence in conclusion:\n\nWe can rephrase. But in the paper we do demonstrate that MCTSnet, with its custom rules, outperforms MCTS, with its handcrafted ones. In our mind, this justifies the suggestion that learning search rules may outperform hand-crafted ones. We don’t view the possibility as too controversial, and it has been suggested before (cf meta-control literature).\n" ]
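A quick check of the training-cost arithmetic quoted in the first review above, using the figures the reviewer states (250,000 trajectories, 60 steps per trajectory, and 1,000 MCTS rollouts per training step):

```latex
250{,}000 \times 60 \times 1{,}000 = 1.5 \times 10^{10} \ \text{rollouts},
```

i.e. the "staggering 15 billion rollouts" of prior data that the review contrasts with the 25 MCTS rollouts used per decision by the baseline at test time.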
[ 7, 4, 5, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_r1TA9ZbA-", "iclr_2018_r1TA9ZbA-", "iclr_2018_r1TA9ZbA-", "ByL3DP9gf", "HyC90Zoez", "HkTkZHjlM" ]
iclr_2018_BkPrDFgR-
Piecewise Linear Neural Networks verification: A comparative study
The success of Deep Learning and its potential use in many important safety-critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models to behave as black boxes and theoretical hardness results of the problem of proving their properties, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure. Unfortunately, most of these works test their algorithms on their own models and do not offer any comparison with other approaches. As a result, the advantages and downsides of the different algorithms are not well understood. Motivated by the need to accelerate progress in this very important area, we investigate the trade-offs of a number of different approaches based on Mixed Integer Programming and Satisfiability Modulo Theory, as well as a novel method based on the Branch-and-Bound framework. We also propose a new data set of benchmarks, in addition to a collection of previously released testcases that can be used to compare existing methods. Our analysis not only allowed a comparison to be made between different strategies; the comparison of results from different solvers also revealed implementation bugs in published methods. We expect that the availability of our benchmark and the analysis of the different approaches will allow researchers to invent and evaluate promising approaches for making progress on this important topic.
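For readers following the MIP discussion in the reviews and responses below, the textbook big-M encoding of a single ReLU unit $y = \max(0, x)$ with known pre-activation bounds $l \le x \le u$ (with $l < 0 < u$) is:

```latex
y \ge x, \qquad y \ge 0, \qquad y \le x - l(1 - a), \qquad y \le u\,a, \qquad a \in \{0, 1\},
```

where the binary variable $a$ indicates the phase of the unit ($a = 1$ forces $y = x$, $a = 0$ forces $y = 0$). The LP relaxation referred to in the reviews is obtained by relaxing $a \in \{0,1\}$ to $0 \le a \le 1$; when the bounds $l$ and $u$ are loose, this relaxation is weak, which is the criticism raised below about big-M constraints. This is the standard formulation, not necessarily the exact one used in the paper.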
rejected-papers
All three reviewers are in agreement that this paper is not ready for ICLR in its current state. Given the pros/cons, the committee feels the paper is not ready for acceptance in its current form.
train
[ "ByZNy3ggM", "H1c3wqQef", "rkq9KFDlz", "SJFIDzpbG", "B1EHxS_bG", "ryngtRP-G", "Sk39OCPWf", "B1ZzuCwbM", "S122D0PZM", "SJGAoJ--f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "The paper studies methods for verifying neural nets through their piecewise\nlinear structure. The authors survey different methods from the literature,\npropose a novel one, and evaluate them on a set of benchmarks.\n\nA major drawback of the evaluation of the different approaches is that\neverything was used with its default parameters. It is very unlikely that these\ndefaults are optimal across the different benchmarks. To get a better impression\nof what approaches perform well, their parameters should be tuned to the\nparticular benchmark. This may significantly change the conclusions drawn from\nthe experiments.\n\nFigures 4-7 are hard to interpret and do not convey a clear message. There is no\nclear trend in many of them and a lot of noise. It may be better to relate the\nstructure of the network to other measures of the hardness of a problem, e.g.\nthe phase transition. Again parameter tuning would potentially change all of\nthese figures significantly, as would e.g. a change in hardware. Given the kind\nof general trend the authors seem to want to show here, I feel that a more\ntheoretic measure of problem hardness would be more appropriate here.\n\nThe authors say of the proposed TwinStream dataset that it \"may not be\nrepresentative of real use-cases\". It seems odd to propose something that is\nentirely artificial.\n\nThe description of the empirical setup could be more detailed. Are the\nproperties that are being verified different properties, or the same property on\ndifferent networks?\n\nThe tables look ugly. It seems that the header \"data set\" should be \"approach\"\nor something similar.\n\nIn summary, I feel that while there are some issues with the paper, it presents\ninteresting results and can be accepted.", "Summary:\n\nThis paper:\n- provides a compehensive review of existing techniques for verifying properties of neural networks\n- introduces a simple branch-and-bound approach\n- provides fairly extensive experimental comparison of their method and 3 others (Reluplex, Planet, MIP) on 2 existing benchmarks and a new synthetic one\n\nRelevance: Although there isn't any learning going on, the paper is relevant to the conference.\n\nClarity: Writing is excellent, the content is well presented and the paper is enjoyable read.\n\nSoundness: As far as I can tell, the work is sound.\n\nNovelty: This is in my opinion the weakest point of the paper. There isn't really much novelty in the work. The branch&bound method is fairly standard, two benchmarks were already existing and the third one is synthetic with weights that are not even trained (so not clear how relevant it is). The main novel result is the experimental comparison, which does indeed show some surprising results (like the fact that BaB works so well).\n\nSignificance: There is some value in the experimental results, and it's great to see you were able to find bugs in existing methods. Unfortunately, there isn't much insight to be gained from them. I couldn't see any emerging trend/useful recommendations (like \"if your problem looks like X, then use algorithm B\"). This is unfortunately often the case when dealing with combinatorial search/optimization. ", "The paper compares some recently proposed method for validation of properties\nof piece-wise linear neural networks and claims to propose a novel method for\nthe same. 
Unfortunately, the proposed \"branch and bound method\" does not explain\nhow to implement the \"bound\" part (\"compute lower bound\") -- and has been used \nseveral times in the same application, incl.:\n\nRuediger Ehlers. Planet. https://github.com/progirep/planet,\nChih-Hong Cheng, Georg Nuhrenberg, and Harald Ruess. Maximum resilience of artificial neural networks. Automated Technology for Verification and Analysis\nAlessio Lomuscio and Lalit Maganti. An approach to reachability analysis for feed-forward relu neural networks. arXiv:1706.07351\n\nSpecifically, the authors say: \"In our experiments, we use the result of \nminimising the variable corresponding to the output of the network, subject \nto the constraints of the linear approximation introduced by Ehlers (2017a)\"\nwhich sounds a bit like using linear programming relaxations, which is what\nthe approaches using branch and bound cited above use. If that is the case,\nthe paper does not have any original contribution. If that is not the case,\nthe authors may have some contribution to make, but have not made it in this\npaper, as it does not explain the lower bound computation other than the one\nbased on LPs.\n\nGenerally, I find a jarring mis-fit between the motivation (deep learning\nfor driving, presumably involving millions or billions of parameters) and\nthe actual reach of the methods proposed (hundreds of parameters).\nThis reach is NOT inherent in integer programming, per se. Modern solvers\nroutinely solve instances with tens of millions of non-zeros in the constraint\nmatrix, but require a strong relaxation. The authors may hence consider\nimproving the LP relaxation, noting that the big-M constraint are notorious\nfor producing weak relaxations.", "We will perform proper tuning of the hyperparameters of the MIP solver using the Gurobi tuning tool specially adapted to our MIP solver and include those results.\n\n\nRegarding the phase transition, we assume that the reviewer is referring to the phase transition in satisfiability problems: Depending on the ratio of clauses to variables, for problem such as 3-SAT, the result go from most likely SAT to most likely unsat, with a phase transition in between those two regimes containing the hardest instances. We apologize if we misunderstood the point that the reviewer was making.\n\nIf this is what was suggested, it’s not certain that NN verification exhibits such a behaviour with regards to parameters such as depth / number of hidden units or number of inputs. It however seems possible that this would be the case with regards to the margin of the property to prove.\nIf the property to prove is True with a very high margin, this is going to be an easy proof as any branch-and-bound type method won’t need to do a lot of branching. If the property is False with a very high margin as well, it will be probably be easy to exhibit a counterexample as they would be many. On the other hand, if the margin is close to zero, very few counter-examples might exist (if the property is false -> margin is negative), making them hard to find or getting tight enough bounds will require a lot of branching (if the property is True -> margin is positive). We already see some evidence of this in Figure 7: making the margin go towards zero makes the problems harder, for all of the solvers. 
We will add some experiments with negative margins to see if the same conclusion can be drawn on the other side of the limit point / phase transition.\n\n\nIt’s not obvious, however, how general this measure of hardness will be. Multiplying all weights and biases of the last layer by 10 will similarly scale the margin, without significantly changing the hardness of the problem.\n", "Thank you for your reply.\n\nRegarding hyperparameters: anything that makes sense to be changed should be tuned. It does not need to be exhaustive tuning, and there is software available to do this (e.g. spearmint, irace, smac). It might not make a difference in many cases, but especially for MIP solvers parameter tuning can cause massive differences.\n\nRegarding the measure of hardness: what about the phase transition? It seems like this would be a more robust measure.", "We thank the reviewer for reading our paper and providing comments to improve our paper.\n\nHyperparameters:\nThe reviewer mentions drawbacks in the evaluation of the paper due to the lack of hyperparameter tuning for each benchmark. \nPlanet has very little: going through their source code we found AGGREGATED_CHANGE_LIMIT used at initialization of their linear approximation and an EPSILON(...) parameter for deciding that inequalities are strict. While we didn’t explicitly experiment with the AGGREGATED_CHANGE_LIMIT parameter of Planet, experimenting with doing repeated optimization of nodes without branching (as this hyperparameter controls) didn’t significantly improve results. We will observe the impact of this hyperparameter on some examples of the ACAS dataset and rerun the experiment if a significant difference is observed.\nReluplex has more, mostly dedicated to addressing numerical errors. Since the submission, we had some input from the authors of Reluplex to address the numerical errors observed on TwinStream and CollisionDetection and re-ran experiments with updated parameters under their recommendation. This led to minor speed improvements, which however didn’t change the story told by the results presented on each dataset. All those new results will be uploaded in the next version of the paper.\nMIP might have more hyperparameters if we consider all the hyperparameters that a solver like Gurobi gives access to (http://www.gurobi.com/documentation/7.5/refman/parameters.html). We indeed did not tune those parameters specifically, which might result in performance variance. Given the already extremely long runtime of verification experiments, it is hard to perform exhaustive tuning over this large space of parameters. We will use the Gurobi tuning tool (http://www.gurobi.com/documentation/7.5/refman/tuning_api.html) to find a set of hyperparameters appropriate for each dataset based on a few properties and rerun the experiments using these.\nThe branch-and-bound strategy doesn’t itself have any parameters, unless you consider the choice of elementary functions (pick_out, split, compute_lower_bounds, compute_upper_bounds) hyperparameters.\nWe would appreciate any additional comments on hyperparameters needing tuning that we might not be aware of.\n\nRegarding the artificiality of TwinStream:\nAs mentioned in the general comment, it was useful in helping us diagnose numerical accuracy problems of networks and the importance of reapproximation for deeper networks, but we agree that it might be limited to a toy problem useful for developing methods and would not be useful to draw larger conclusions from it.
We will run additional experiments on a new benchmark composed of networks with various architectures/margins, but based on a real learning task, and replace it.\n\nRegarding a more theoretical measure of problem hardness to better show differences.\nIt seems to me that given that the verification problem is NP-hard, any informative measure of hardness would necessarily be based on experimental results. Piecewise linear neural network verification being a problem that only recently started being studied, I’m not aware of any measure of theoretical hardness for specific problems. \n\nRegarding the description of the empirical setups.\nWe will add more details to the paper. To answer the reviewer's question, the ACAS dataset has 45 different networks and several properties are defined over each network. The appendix of https://arxiv.org/pdf/1702.01135.pdf lists the exact network/property pairs. The CollisionDetection dataset is a set of 500 properties on the same network. In TwinStream, each property is defined on a unique network.\n\nWe will improve the look of the tables and correct the wrong headers in the next submitted version. \n", "We thank the reviewer for the comments. We agree that the main novel contribution of our paper lies in the experimental comparison between various methods, which was lacking from the literature. Branch&Bound is indeed a fairly standard method when a global minimum is required, but we would like to point out that some insights were also brought up in the way lower bounds are computed (namely the importance of rebuilding the approximation at each step, Figure 3).\n\nRegarding surfacing trends, some conclusions can already be drawn. The CollisionDetection dataset seems to be easily solvable by every method, which might limit the significance of its results. ACAS represents more of a significant challenge (especially given that in the allotted time, certain properties aren’t solved by any method) and might be where one draws concrete conclusions. The additional TwinStream benchmark would hopefully have confirmed these observations, but it’s not clear whether the drastically different results are due to a lack of trends or simply to the artificiality of the benchmark revealing itself. While it was useful in helping us diagnose numerical accuracy problems of networks and the importance of reapproximation for deeper networks, that might be the extent of its usefulness. We will run additional experiments on a new benchmark composed of networks with various architectures/margins, but based on a real learning task, and replace it.\n", "Thank you for reading our paper and providing comments. Current state-of-the-art methods are at the moment limited to small/medium-sized networks that indeed are not representative of the largest networks used in, for example, computer vision. Our dataset of properties reflects this reality. One precondition if we want to make progress and hope to one day be able to verify larger models (such as those used in autonomous driving) is to be able to accurately assess the strengths and weaknesses of different methods. Benchmarks such as ours are a necessary step for this.\nBy establishing runtimes on a common set of test cases, and releasing the code to do so (including adapters to account for models with different capabilities), we hoped to highlight promising directions and facilitate research for others.
Even now, our comparison has already proved useful in discovering bugs in released implementations of previous methods.\n\nWe wish to highlight once again that the main contribution of our paper is in the experimental comparison and not the branch and bound method we described. The reviewer is correct in understanding that in our experiment, we used the linear programming relaxation of (Ehlers, 2017) (which is however not exactly the relaxation obtained by dropping the integrality constraint in the big-M IP formulation). Note that our use of this relaxation is different from the one made by Planet: rather than building a single linear approximation at the beginning, we rebuild it after each branching step, and we show that this makes a significant difference (see Figure 3), which might largely explain the performance difference between Planet and BaB on the large networks.\n\nPlease note that branch-and-bound as a method has not been used several times in the same application. Lomuscio & Maganti (2017) and Cheng et al. (2017b) simply formulate the problem as a MIP and rely on black-box solvers (respectively Gurobi and CPlex). While these solvers might employ branch-and-bound internally, it is not clear what bounding method they used. Furthermore, they are limited by the intermediate big-M formulation, which the reviewer accurately pointed out to be notorious for producing weak relaxations. The large difference in performance observed between these methods and our BaB also clearly seems to indicate that they are not operating in the same manner.\nPlanet is motivated as an SMT solver, which we agree can be cast as a specific form of branch and bound, where branching is restricted to be over decision variables (the phases of the hidden units of the network). Note however that under its formulation, Planet can only be used for satisfiability problems and will not be able to give any information about the margin by which a property is true (unless a costly binary search is employed), while a BaB approach can return such information (provided no early stopping is employed).\n", "We thank all the reviewers and the anonymous commenters for reading our paper and giving us valuable feedback.\n\nThe main comment is with regards to the lack of novelty. While we introduced a method based on the branch-and-bound framework that surprisingly performed better than other methods on the challenging ACAS dataset, the main contribution is the gathering of existing benchmarks into a common format and the establishment of an experimental comparison.\n\nWe take into account the shared criticism that the additional dataset we introduced with the intent of offering insights into the influence of various parameters might be too artificial. While it was useful for us to discover numerical instability problems and made us discover the impact of depth on the importance of improving approximation, the fact that it isn’t representative of real, trained networks limits its applicability and potentially explains why no clear trend could be drawn.
We will replace it with a more interesting dataset based on networks trained on a shared task, while still varying these parameters.\n\nWe will also address specific comments as replies to each review.\n", "\"Generally, I find a jarring mis-fit between the motivation (deep learning\nfor driving, presumably involving millions or billions of parameters) and\nthe actual reach of the methods proposed (hundreds of parameters).\nThis reach is NOT inherent in integer programming, per se. Modern solvers\nroutinely solve instances with tens of millions of non-zeros in the constraint\nmatrix, but require a strong relaxation. The authors may hence consider\nimproving the LP relaxation, noting that the big-M constraints are notorious\nfor producing weak relaxations.\"\n\n-- Not questioning the review, but does the reviewer feel SMT- or MILP-based approaches to verification are not meaningful? I ask this because of the phrase 'jarring misfit' between the goal of deep learning for driving and the 'reach of the methods proposed'.\n\n -- It is not the number of parameters, but rather the number of integer choices, that makes it hard. I do understand when the reviewer suggests that the bounds generated this way are generally loose (https://arxiv.org/pdf/1711.00851.pdf), and that might be a part of the problem. LPs with 10^9 parameters are easily handled these days. \n\n-- The disjunctions make life difficult for SMT solvers. \n\n-- The title reads 'comparative study'. I think the authors make it clear what the contribution might be. \n\n-- The architecture for ACAS for example has ~13k parameters, and 300 nodes. It's a decent-sized network. The field of verifying deep nets is a few years old and in its infancy. If the contributions in the field can push it another 2-3 orders of magnitude, we might see it reach verifying 'deep learning for driving' problems.\n\nPS. I have nothing to do with the authors, but am a random bystander trying to identify possible directions to make contributions. I haven't even fully read the paper; I'm just confused as to what this review is trying to convey." ]
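A compact sketch of the branch-and-bound loop debated in the responses above, with the bounding functions left as placeholders for an LP relaxation (e.g. the one of Ehlers (2017), rebuilt at every node as the authors emphasize). This is an illustrative reconstruction under assumed interfaces, not the paper's actual code:

```python
import heapq

def branch_and_bound(domain, lower_bound, upper_bound, split, eps=1e-3):
    """Minimize the network output over `domain` to within `eps`."""
    global_ub = upper_bound(domain)       # value at some feasible input point
    counter = 0                           # tie-breaker so the heap never compares domains
    frontier = [(lower_bound(domain), counter, domain)]
    while frontier:
        lb, _, dom = heapq.heappop(frontier)
        if global_ub - lb <= eps:         # every remaining subdomain has a larger bound
            return lb, global_ub
        for sub in split(dom):            # branch on the input domain
            sub_lb = lower_bound(sub)     # rebuild the relaxation at each node
            global_ub = min(global_ub, upper_bound(sub))
            if sub_lb < global_ub - eps:  # prune subdomains that cannot improve
                counter += 1
                heapq.heappush(frontier, (sub_lb, counter, sub))
    return global_ub, global_ub
```

Proving a property of the form "output > 0" then reduces to checking that the returned lower bound is positive; stopping as soon as the bounds cross zero is also possible, at the cost of the margin information mentioned in the responses.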
[ 6, 5, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BkPrDFgR-", "iclr_2018_BkPrDFgR-", "iclr_2018_BkPrDFgR-", "B1EHxS_bG", "ryngtRP-G", "ByZNy3ggM", "H1c3wqQef", "rkq9KFDlz", "iclr_2018_BkPrDFgR-", "rkq9KFDlz" ]
iclr_2018_B1nxTzbRZ
Forward Modeling for Partial Observation Strategy Games - A StarCraft Defogger
In this paper, we present a defogger, a model that learns to predict future hidden information from partial observations. We formulate this model in the context of forward modeling and leverage spatial and sequential constraints and correlations via convolutional neural networks and long short-term memory networks, respectively. We evaluate our approach on a large dataset of human games of StarCraft: Brood War, a real-time strategy video game. Our models consistently beat strong rule-based baselines and qualitatively produce sensible future game states.
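A minimal sketch of the kind of model the abstract describes: convolutions over a spatial grid of per-unit-type counts, an LSTM over time, and two output heads for unit presence and unit count. Layer sizes, the single-frame decoding, and the loss pairing are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class Defogger(nn.Module):
    def __init__(self, n_unit_types, hidden=256, grid=32):
        super().__init__()
        self.encoder = nn.Sequential(            # spatial features per frame
            nn.Conv2d(n_unit_types, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64 * grid * grid, hidden, batch_first=True)
        self.presence_head = nn.Linear(hidden, n_unit_types * grid * grid)
        self.count_head = nn.Linear(hidden, n_unit_types * grid * grid)

    def forward(self, obs):
        # obs: (batch, time, unit_types, grid, grid) partial observations
        B, T, U, H, W = obs.shape
        z = self.encoder(obs.reshape(B * T, U, H, W)).reshape(B, T, -1)
        h, _ = self.lstm(z)                      # integrate over time
        h_last = h[:, -1]                        # summary of the observed history
        presence_logits = self.presence_head(h_last).reshape(B, U, H, W)
        counts = self.count_head(h_last).reshape(B, U, H, W)
        return presence_logits, counts
```

Training would pair a binary cross-entropy loss on the presence logits with a regression loss (e.g. Huber) on the counts of a future, fully observed frame.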
rejected-papers
The reviewer scores are fairly close, and the comments in their reviews are likewise similar. All reviewers indicate that they find this to be an interesting learning domain. However, they also agree in assessing the proposed method as having limited novelty and significance. They also critiqued the empirical evaluation as being too specific to Starcraft and not comprehensive, without providing evidence that the defogger contributes to winning at StarCraft. The authors wrote a substantial rebuttal to the reviews, but it did not convince anyone to increase their scores.
train
[ "SJ-MqkDVf", "HJr90mOlM", "BknmMUteM", "Sy46KdXfM", "rkoTsDaQz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "I appreciate the authors responses to my review, and their emphasis on task definition, but my other main concern about the work (poor evaluation --- no actual gameplay using defogger vs no defogger) remains. Also, the authors do not mention any added discussion about how to generalize their \"defogging\" task to other applications, which seems critical to discuss thoroughly if the authors intend introducing the task of \"defogging\" to be their primary contribution.", "The paper considers a problem of predicting hidden information in a poMDP with an application to Starcraft.\nAuthors propose a number of baseline models as well as metrics to assess the quality of “defogging”.\n\nI find the problem of defogging quite interesting, even though it is a bit too Starcraft-specific some findings could perhaps be translated to other partially observed environments.\nAuthors use the dataset provided for Starcraft: Brood war by Lin et al, 2017.\n\nMy impression about the paper is that even though it touches a very interesting problem, it neither is written well nor it contains much of a novelty in terms of algorithms, methods or network architectures.\n\nDetailed comments:\n* Authors should at very least cite (Vinyals et al, 2017) and explain why the environment and the dataset released for Starcraft 2 is less suited than the one provided by Lin et al.\n* Problem statement in section 3.1 should certainly be improved. Authors introduce rather heavy notation which is then used in a confusing way. For example, what is the top index in $s_t^{3-p}$ supposed to mean? The notation is not much used after sec. 3.1, for example, figure 1 does not use it. \n* A related issue, is that the definition of metrics is very informal and, again, does not use the already defined notation. Including explicit formulas would be very helpful, because, for example, it looks like when reported in table 1 the metrics are spatially averaged, yet I could not find an explicit notion of that.\n* Authors seem to only consider deterministic defogging models. However, to me it seems that even in 15 game steps the uncertainty over the hidden state is quite high and thus any deterministic model has a very limited potential in prediction it. At least the concept of stochastic predictions should be discussed\n* The rule-based baselines are not described in detail. What does “using game rules to infer the existence of unit types” mean?\n* Another detail which I found missing is whether authors use just a screen, a mini-map or both. In the game of Starcraft, only screen contains information about unit-types, but it’s field of view is limited. Hence, it’s unclear to me whether a model should infer hidden information based on just a single screen + minimap observation (or a history of them) or due to how the dataset is constructed, all units are observed without spatial limitations of the screen. \n", "The authors introduce the task of \"defogging\", by which they mean attempting to infer the contents of areas in the game StarCraft hidden by \"the fog of war\".\n\nThe authors train a neural network to solve the defogging task, define several evaluation metrics, and argue that the neural network beats several naive baseline models. 
\n\nOn the positive side, the task is a nice example of reasoning about a complex hidden state space, which is an important problem moving forward in deep learning.\n\nOn the negative side, from what I can tell, the authors don't seem to have introduced any fundamentally new architectural choices in their neural network, so the contribution seems fairly specific to mastering StarCraft, but at the same time, the authors don't evaluate how much their defogger actually contributes to being able to win StarCraft games. All of their evaluation is based on the accuracy of defogging. \n\nGranted, being able to infer hidden states is of course an important problem, but the authors appear to mainly have applied existing techniques to a benchmark that has minimal practical significance outside of being able to win StarCraft competitions, meaning that, at least as the paper is currently framed, the critical evaluation metric would be showing that a defogger helps to win games. \n\nTwo ways I could imagine the contribution being improved are either highlighting and generalizing novel insights gleaned from the process of building the neural network that could help people build \"defoggers\" for other domains (and spelling out more explicitly what domains the authors expect their insights to generalize to), or doubling down on the StarCraft application specifically and showing that the defogger helps to win games. A minimal version of the second modification would be having a bot that has access to a defogger play against a bot that does not have access to one.\n \nAll that said, as a paper on an application of deep learning, the paper appears to be solid, and if the area chairs are looking for that sort of contribution, then the work seems acceptable.\n\nMinor points:\n- Is there a benefit to having a model that jointly predicts unit presence and count, rather than having two separate models (e.g., one that feeds into the next)? Could predicting presence or absence separately be a way to encourage sparsity, since absence of a unit is already representable as a count of zero? The choice to have one model seems especially peculiar given the authors say they couldn't get one set of weights that works for both their classification and regression tasks.\n- Notation: I believe the space U is never described in the main text. What components precisely does an element of U have?\n- The authors say they use gameplay from no later than 11 minutes in the game to avoid the difficulties of increasing variance. How long is a typical game? Is this a substantial fraction of the time of the games studied? If it is not, then perhaps the defogger would not help so much at winning.\n- The F1 performance increases are somewhat small. The L1 performance gains are bigger, but the authors only compare L1 on true positives. This means they might have very bad error on false positives. (The authors state they are favoring the baseline in this comparison, but it would be nice to have those numbers.)\n- I don't understand when the authors say the deep model has better memory than the baselines (which include a perfect memory baseline).", "# Summary\nThis paper introduces a new prediction problem where the model should predict the hidden opponent's state as well as the agent's state. This paper presents a neural network architecture which takes the map information and several other features and reconstructs the unit occupancy and count information in the map.
The results show that the proposed method performs better than several hand-designed baselines on two downstream prediction tasks in Starcraft.\n\n[Pros]\n- Interesting problem\n\n[Cons]\n- The proposed method is not very novel.\n- The evaluation is a bit limited to two specific downstream prediction tasks.\n\n# Novelty and Significance\n- The problem considered in this paper is interesting.\n- The proposed method is not very novel. \n- Overall, this paper is too specific to the Starcraft domain + particular downstream prediction tasks. It would be much stronger to show the benefit of the defogging objective on actual gameplay rather than prediction tasks. Alternatively, it could also be interesting to consider an RL problem where the agent should reveal the hidden state of the opponent as much/quickly as possible.\n\n# Quality\n- The experimental results are not very comprehensive. The proposed method is expected to perform better than hand-designed methods on downstream prediction tasks. It would be better to show an in-depth analysis of the learned model or show more results on different tasks (possibly RL tasks rather than prediction tasks).\n\n# Clarity\n- I did not fully understand the learning objective. Does the model try to reconstruct the state of the current time-step or the future? The learning objective is not clearly defined. In Section 4.1, the target x and y have time steps from t1 to t2. What is the range of t1 and t2? If the proposed model is doing future prediction, it would be important to show and discuss long-term prediction results.", "Thanks for reviewing our paper in detail.\nWe updated the paper with (much) better experimental results after fixing a bug, but no new experiments.\nWe would like to address the comments by the reviewers.\n\n- Lack of novelty:\nThis is a task paper, not a model paper. The main contribution is to define the task, baselines, and report on first results (better than baselines). We believe it is sufficient to present a new task. We believe it holds several advantages over forward modeling research on an image domain, such as the environment being closed and having less complexity than the real world.\n\n- Notation: \n - We clarified that U is the set of unit types.\n - s^{(3-p)} means observed state for the player 3 minus the value of p, with p taking values in {1, 2}, which means the opponent of the player who gets to see s^p. We refactored the notation to be more clear.\n\n- About the model:\n - We do have a model that jointly learns to predict presence/absence and unit count; it is the same model that is shared by two different \"heads\". About the specific \"set of weights\" footnote, it is for this line only, and it is most likely because of a limited hyperparameter sweep.\n - Both the losses are always at t+frame_skip, i.e. t+5/15/30 seconds, as mentioned in the last paragraph of section 4.2.\n - Using a stochastic defogging model is an interesting extension.\n - We use the full view, but as we pool the number of units per unit type at the input resolution, the input looks more like a minimap with as many channels as unit types.\n\n- About the dataset:\n - length of games: We only go up to 11 minutes, and a median game is approximately this long. A distribution of game length in the dataset can be found in Figure 2 of https://arxiv.org/abs/1708.02139 \n - (Vinyals et al.
2017) was not available when we conducted most of the research; this dataset is neither less nor more suited than the one provided by Lin et al., it just requires additional dataset pre-processing work.\n\n- About results:\n - Taking the L1 norm only over true positives is definitely advantageous to the baselines, without which (meaning, if we include false positives) they get approximately 20 times worse L1 norms, and then the numbers are harder to compare. With the true positive filter, the gap looks like 1000 vs. 2000 (a difference of ~1000), whereas without the filter, it looks like 5000 vs. 100000 (a difference of ~95000).\n - The \"much more effective memory model than the baselines\" formulation was wrongly worded; we corrected it. The difference between the perfect memory (PM, or PM+rules) and previous seen (PS) baselines is what matters: having a perfect memory maximizes the recall, but sometimes it is necessary to reset the memory when the location is seen again. We changed the wording to \"This supports the observation that our models make better use of their memory than the baselines, and are able to remember objects, but also ``forget'' them based on visibility of their previous positions and other correlates.\"" ]
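A sketch of the two L1 variants discussed in the response above: the error restricted to true-positive cells (the reported metric, which is favorable to the baselines) versus the error over all cells (which also penalizes false positives). Array shapes and the zero threshold are assumptions:

```python
import numpy as np

def l1_metrics(pred_counts, true_counts):
    """Both arrays hold per-cell, per-unit-type counts of the same shape."""
    tp_mask = (pred_counts > 0) & (true_counts > 0)  # true-positive cells
    abs_err = np.abs(pred_counts - true_counts)
    l1_true_pos = abs_err[tp_mask].sum()  # L1 over true positives only
    l1_all = abs_err.sum()                # per the response, ~20x worse for baselines
    return l1_true_pos, l1_all
```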
[ -1, 5, 4, 5, -1 ]
[ -1, 4, 1, 3, -1 ]
[ "BknmMUteM", "iclr_2018_B1nxTzbRZ", "iclr_2018_B1nxTzbRZ", "iclr_2018_B1nxTzbRZ", "iclr_2018_B1nxTzbRZ" ]
iclr_2018_H1LAqMbRW
Latent forward model for Real-time Strategy game planning with incomplete information
Model-free deep reinforcement learning approaches have shown superhuman performance in simulated environments (e.g., Atari games, Go, etc). During training, these approaches often implicitly construct a latent space that contains key information for decision making. In this paper, we learn a forward model on this latent space and apply it to model-based planning in a miniature Real-Time Strategy game with incomplete information (MiniRTS). We first show that the latent space constructed from existing actor-critic models contains relevant information of the game, and design a training procedure to learn forward models. We also show that our learned forward model can predict meaningful future states and is usable for latent-space Monte Carlo Tree Search (MCTS), in terms of win rates against rule-based agents.
rejected-papers
There was certainly some interest in this paper which investigates learning latent models of the environment for model-based planning, particularly articulated by Reviewer3. However, the bulk of reviewer remarks focused on negatives, such as: --The model-based approach is disappointing compared to the model-free approach. --The idea of learning a model based on the features from a model-free agent seems novel but lacks significance in that the results are not very compelling. --I feel the paper overstates the results in saying that the learned forward model is usable in MCTS. -- the paper in it’s current form is not written well and does not contain strong enough empirical results
train
[ "BJ-32VOxf", "B1qenWKxM", "HJh2yfcgz", "rko8LBpXG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "The paper proposes to use a pretrained model-free RL agent to extract the developed state representation and further re-use it for learning forward model of the environment and planning.\nThe idea of re-using a pretrained agent has both pros and cons. On one hand, it can be simpler than learning a model from scratch because that would also require a decent exploration policy to sample representative trajectories from the environment. On the other hand, the usefulness of the learned representation for planning is unclear. A model-free agent can (especially if trained with certain regularization) exclude a lot of information which is potentially useful for planning, but is it necessary for reactively taking actions.\nA reasonable experiment/baseline thus would be to train a model-free agent with a small reconstruction loss on top of the learned representation.
In addition to that, one could fine-tune the representation during forward model training. \nIt would be interesting to see if this can improve the results.\n\nI personally miss a more technical and detailed exposition of the ideas. For example, it is not described anywhere what loss is used for learning the model. MCTS is not described and a reader has to follow references and infer how exactly it is used in this particular application, which makes the paper not self-contained. \nAgain, due to the lack of equations, I don’t completely understand the last paragraph of 3.2; I suggest re-writing it (as well as some other parts) in a more explicit way.\nI also could not find the details on how figure 1 was produced. As I understand, MCTS was not used in this experiment. If so, how would one play with just a forward model?\n\nIt is a bit disappointing that the authors seem to consider only deterministic models, which clearly have very limited applicability. Is mini-RTS a deterministic environment? \nWould it be possible to include a non-deterministic baseline in the experimental comparison?\n\nExperimentally, the results are rather weak compared to pure model-free agents. Somewhat unsatisfyingly, longer-term prediction results in weaker gameplay. Doesn’t this support the argument about the need for stochastic prediction? \n\nTo me, the paper in its current form is not written well and does not contain strong enough empirical results, so I can’t recommend acceptance. \n\nMinor comments:\n* MatchA and PredictPi models are not introduced under such names\n* Figure 1 that introduces them contains typos. \n* Formatting of figure 8 needs to be fixed. This figure does not seem to be referred to anywhere in the text and the broken caption makes it hard to understand what is happening there.\n", "Summary:\n\nThis paper studies learning forward models on latent representations of the environment, and uses these for model-based planning (e.g. via MCTS) in partial-information real-time-strategy games. The testbed used is MiniRTS, a simulation environment for 1v1 RTS.\n\nForecasting the future suffers from buildup / propagation of prediction errors, hence the paper uses multi-step errors to stabilize learning.\n\nThe paper:\n1. Describes how to train strong agents that might have learned an informative latent representation of the observed state-space.\n2. Evaluates how informative the latent states are via state reconstruction.\n3. Trains variants of a forward model f on the hidden states of the various learned agents.\n4. Evaluates different f within MCTS for MiniRTS.\n\nPro:\n- This is a neat idea and addresses the important question of how to learn accurate models of the environment from data, and how to integrate them with model-free methods.\n- The experimental setting is very non-trivial and novel.\n\nCon:\n- The manuscript is unclear in many parts -- this should be greatly improved.\n1. The different forward models are not explained well (what are MatchPi, MatchA, PredN?). Which forward model is trained from which model-free agent?\n2. How is the forward model / value function used in MCTS? I assume it's similar to what AlphaGo does, but right now it's not clear at all how everything is put together.\n\n- The paper devotes a lot of space (sect 4.1) to details of learning and behavior of the model-free agents X. Yet it is unclear how this informs us about the quality of the learned forward models f.
It would be more informative to focus in the main text on the aspects that inform us about f, and put the training details in an appendix.\n\n- As there are many details on how the model-free agents are trained and the system has many moving parts, it is not clear what is important and what is not with respect to the eventual winrate comparisons of the MCTS models. Right now, it is not clear to me why MatchA / PredN differ so much in Fig 8.\n\n- The conclusion seems quite negative: the model-based methods fare *much* worse than the model-free agent. Is this because of the MCTS approach? Because f is not good? Because the latent h is not informative enough? This requires a much more thorough evaluation. \n\nOverall:\nI think this is an interesting direction of research, but the current manuscript does not provide a complete and clear analysis.\n\nDetailed:\n- What are the right prediction tasks that ensure the latent space captures enough of the forward model?\n- What is the error of the raw h-predictions? Only the state-reconstruction error is shown now.\n- Figure 6 / sect 4.2: which model-free agent is used? Also fig 6 is missing captions.\n- Figure 8: scrambled caption.\n- Does scheduled sampling / Dagger (Ross et al.) improve the long-term stability in this case?\n", "Summary: This paper proposes to use the latent representations learned by a model-free RL agent to learn a transition model for use in model-based RL (specifically MCTS). The paper introduces a strong model-free baseline (win rate ~80% in the MiniRTS environment) and shows that the latent space learned by this baseline does include relevant game information. They use the latent state representation to learn a model for planning, which performs slightly better than a random baseline (win rate ~25%).\n\nPros:\n- Improvement of the model-free method from previous work by incorporating information about previously observed states, demonstrating the importance of memory.\n- Interesting evaluation of which input features are important for the model-free algorithm, such as base HP ratio and the amount of resources available.\n\nCons:\n- The model-based approach is disappointing compared to the model-free approach.\n\nQuality and Clarity:\n\nThe paper in general is well-written and easy to follow and seems technically correct, though I found some of the figures and definitions confusing, specifically:\n\n- The terms for different forward models are not defined (e.g. MatchPi, MatchA, etc.). I can infer what they mean based on Figure 1 but it would be helpful to readers to define them explicitly.\n- In Figure 3b, it is not clear to me what the difference between the red and blue curves is.\n- In Figure 4, it would be helpful to label which color corresponds to the agent and which to the rule-based AI.\n- The caption in Figure 8 is malformatted.\n- In Figure 7, the baseline of \hat{h_t}=h_{t-2} seems strange---I would find it more useful for Figure 7 to compare to the performance if the model were not used (i.e. if \hat{h_t}=h_t) to see how much performance suffers as a result of model error.\n\nOriginality:\n\nI am unfamiliar with the MiniRTS environment, but given that it was only published in this year's NIPS (and that I couldn't find any other papers about it on Google Scholar) it seems that this is the first paper to compare model-free and model-based approaches in this domain. However, the model-free approach does not seem particularly novel in that it is just an extension of that from Tian et al. (2017) plus some additional features.
The idea of learning a model based on the features from a model-free agent seems novel but lacks significance in that the results are not very compelling (see below).\n\nSignificance:\n\nI feel the paper overstates the results in saying that the learned forward model is usable in MCTS. The implication in the abstract and introduction (at least as I interpreted it) is that the learned model would outperform a model-free method, but upon reading the rest of the paper I was disappointed to learn that in reality it drastically underperforms. The baseline used in the paper is a random baseline, which seems a bit unfair---a good baseline is usually an algorithm that is an obvious first choice, such as the model-free approach.", "We thank the reviewers for their insightful comments. \n\nOur paper points to an interesting direction that uses the latent space learned by model-free approaches as the latent space for dynamics models. In the MiniRTS game, we verified that the latent space that leads to strong performance of model-free methods is both compact and contains crucial information of the game situation, which could be interesting. We agree that the analysis can be done more thoroughly and the final performance (e.g., MCTS with learned dynamics model) is not that satisfactory, compared to model-free approaches. We will continue working on it in the future. \n\nDetails:\nWhat is MiniRTS?\n\nMiniRTS was recently proposed as part of the ELF platform [Tian et al (NIPS 2017)]. It is a miniature 2-player real-time strategy game with basic functionality (e.g., resource gathering, troop/facilities building, incomplete information (fog of war), multiple unit types, continuous motion of units, etc). \n\nThe symbols \"MatchPi\", \"MatchA\", etc., are now defined properly in the text (paper is updated). \n\nWe have fixed the broken captions of Fig. 8.\n\nR2:\n3. In Fig. 3b, the red curves show the average value on won games, while the blue curves show it on lost games.\n4. \"\hat{h_t} = h_t\" would be cheating since the baseline would have access to the most recent observation, which the forward model does not. Note that the forward model can only access information in the previous frames, say, 2 frames ago. Performance-wise, \"\hat{h_t} = h_t\" would yield higher performance than the learned forward model.\n\nR3:\n1. We have updated the paper to explain the different training paradigms (MatchPi etc). \n\n2. How the forward (or dynamics) model is used in MCTS: \nThe forward model is used to predict the future states given the current state. The predicted future state is thus used as the latent representation of child nodes, and so on. This is useful when the game has imperfect information and the game dynamics are unknown (as is the case for MiniRTS). In comparison, systems like AlphaGo know the complete information and the perfect game dynamics. Other than this difference, the MCTS algorithm is like what AlphaGo does: in each rollout it expands a leaf node to get its value and policy distribution, and uses the value to backpropagate the winrate estimation at each intermediate node. \n\n3. In Figure 6, the PrevSeen agent is used.\n\n4. We haven't tried scheduled sampling / Dagger (Ross et al.) yet. We acknowledge that this is an interesting direction to explore. \n\nR1:\n1. Fig. 1 is an illustrative figure about different ways of training forward models. Fig. 2 shows the training curves for model-free agents; no MCTS is involved. \n\n2. MiniRTS is indeed a deterministic environment.
This means that if all the initial states are fixed (including random seeds), then the game simulator will give exactly the same outcome. However, in the presence of Fog of War (each player cannot see the opponent's behavior if their own troops are not nearby), the environment from one player's point of view may not be deterministic. We acknowledge that modeling uncertainty could be a good direction to work on. " ]
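A sketch of the training setup at the center of this exchange: freeze the pretrained model-free agent's encoder, then fit a forward model f so that f(h_t, a_t) tracks h_{t+1}, with a multi-step rollout loss to limit the buildup of prediction errors. The function signatures and the MSE objective are illustrative assumptions, not the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def multi_step_loss(forward_model, encoder, states, actions, k=3):
    """states: k+1 observation tensors; actions: k action tensors."""
    with torch.no_grad():                 # the pretrained encoder stays frozen
        h = [encoder(s) for s in states]  # latent targets h_0 .. h_k
    h_pred, loss = h[0], 0.0
    for t in range(k):
        h_pred = forward_model(h_pred, actions[t])  # roll out in latent space
        loss = loss + F.mse_loss(h_pred, h[t + 1])  # match the true latent
    return loss / k
```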
[ 5, 4, 4, -1 ]
[ 4, 5, 3, -1 ]
[ "iclr_2018_H1LAqMbRW", "iclr_2018_H1LAqMbRW", "iclr_2018_H1LAqMbRW", "iclr_2018_H1LAqMbRW" ]
iclr_2018_r15kjpHa-
Reward Design in Cooperative Multi-agent Reinforcement Learning for Packet Routing
In cooperative multi-agent reinforcement learning (MARL), how to design a suitable reward signal to accelerate learning and stabilize convergence is a critical problem. The global reward signal assigns the same global reward to all agents without distinguishing their contributions, while the local reward signal provides different local rewards to each agent based solely on individual behavior. Both of these reward assignment approaches have shortcomings: the former might encourage lazy agents, while the latter might produce selfish agents. In this paper, we study the reward design problem in cooperative MARL based on packet routing environments. Firstly, we show that the above two reward signals are prone to produce suboptimal policies. Then, inspired by some observations and considerations, we design some mixed reward signals, which can be used off-the-shelf to learn better policies. Finally, we turn the mixed reward signals into their adaptive counterparts, which achieve the best results in our experiments. Other reward signals are also discussed in this paper. As reward design is a very fundamental problem in RL and especially in MARL, we hope that MARL researchers can rethink the rewards used in their systems.
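One way to read the mixed and adaptive signals described above is as a convex combination of each agent's local reward with the shared global reward, with a weight that changes over training. The annealing schedule below is an assumed illustration, not the paper's exact adaptive rule:

```python
def mixed_reward(local_rewards, global_reward, alpha=0.5):
    """Blend each agent's local signal with the shared global signal."""
    return [alpha * r_loc + (1.0 - alpha) * global_reward
            for r_loc in local_rewards]

def adaptive_alpha(step, total_steps, start=1.0, end=0.0):
    """Anneal from mostly-local (fast individual learning) toward
    mostly-global (cooperative) reward as training progresses."""
    frac = min(step / float(total_steps), 1.0)
    return start + frac * (end - start)
```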
rejected-papers
All reviewers are unanimous that the paper is below the threshold for acceptance. The authors have not provided rebuttals, merely perfunctory generic responses. I think the most important criticism is that the approach is "very ad-hoc." I would encourage the authors to consider more principled ways of automatically designing reward functions, such as Inverse Reinforcement Learning, in which you start with a good agent behavior policy and then estimate a reward function that the behavior policy maximizes.
train
[ "r1OoL_Yxz", "rkY2B6KgM", "S1uz175xf", "HJ9oOJxZG", "r1JeKkgWM", "SkzCuyxZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors suggest using a mixture of shared and individual rewards within a MARL environment to induce cooperation among independent agents. They show that on their specific application this can lead to a better overall global performance than purely sharing the global signal, or using just the independent rewards.\n\nThe paper is a little too focused on the packet routing example domain and fails to deliver much in terms of a general theory of reward design for cooperative behaviours beyond showing that mixed rewards can lead to improved results in their domain. They discuss what and how rewards, and this could be made more formal, as well as (at the very least) some guiding principles to follow when mixing rewards. It feels like there is a missing section between sections 2 and 3, where this methodological content could be described.\n\nThe rest of the paper has similar issues, with key intuition and concepts either missing entirely or under-represented. The technical content often assumes that the reader is familiar with certain terms, and it is difficult to see what meaningful conclusions can be drawn from the evaluation.\n\n\nOn a minor note, the use of the term cooperative in this paper could be better defined. In game theory, cooperative games are those in which agents share rewards. Non-cooperative (game theory) games are those where agents have general reward signals (not necessarily cooperature or adversarial). Conventionally (yes there is existing reward design/shaping literature for MARL) people have used the same terms in MARL. Perhaps the authors could define their approach as weakly cooperative, or emergent cooperation.\n\nThe related work could be better described. There are existing papers on MARL and the issues with cooperation among independent learners, and this could be referenced. This includes reward shaping and reward potential. I would also have expected to see brief mention of empowerment in this section too (the agent favouring states where it has the power to control outcomes in an information theoretic sense), as an underyling principle for intrinsic reward. However, more importantly, the authors really needed to do more to synthesize this into an overall picture of what principles are at play and what ideas/methods exist that have tried to exploit some of these principles.\n\nDetailed comments:\n • [p2] the authors say \"We set the meta reward signals as 1 - max(U l ).\", before they define what U_l is.\n • [p2] we have \"As many applications in the real world can be modeled using similar\nmethods, we expect that other fields can also benefit from this work.\" This statement is too vague, and the authors could do more to identify which application areas might benefit.\n • [p3, first para] \"However, the reward design studies for MARL is so limited.\" Drop the word 'so'. Also, I would argue that there have been quite a few (non-deep) discussions about reward design in MARL, cooperative, non-cooperative and competitive domains. \n • [p3, sec 2.2] \"This makes the diligent agents confuse about...\" should be \"confused\", and I would advise against anthropomorphism at least when the meaning is obscured.\n • [p3, sec 3] \"After having considered several other options, we finally choose the Packet Routing Domain as our experimental environments.\" Not sure what useful information is being conveyed here.\n • [sec 3] THe domain could be better described with intuition and formal descriptions, e.g. 
link utilization ratio, etc, before.\n • [p6] \"Importantly, the proposed blR seems to have similar capacity with dlR,\" The discussion here is all in terms of the reward acronyms with very little call on intuition or other such assistance to the reader.\n • [p7] \"We firstly try gR without any thinking\" The language could be better here.", "The paper provides an empirical study of different reward schemes for cooperative multi-agent reinforcement learning. A number of alternative reward schemes are proposed, partly based on existing literature. These reward schemes are evaluated empirically in a packet routing problem. \n\nThe approach taken by this paper is very ad-hoc. It is not clear to me that this paper offers any general insights or methodologies for reward design for MARL. The only conclusion that can be drawn from this paper is which reward performs best on these specific problem instances (and even this is hard to conclude from the paper). \n\nIn general, it seems strange to propose the packet routing problems as benchmark environments for reward design. From the descriptions in the paper these environments seem relatively complex and make it difficult to study the actual learning dynamics. The results shown provide global performance but do not allow one to study specific properties.\n\nThe paper is also quite hard to read. It is littered with non-intuitive abbreviations. The methods and experiments are poorly explained. It claims to study rewards for multi-agent reinforcement learning, but never properly details the learning setting that is considered or how this affects the choice of rewards. Experiments are mostly described in terms of method A outperforms method B. No effort is made to investigate the cause of these results or to design experiments that would offer deeper insights. The graphs are not properly labelled, poorly described in general and are almost impossible to interpret. The main results are presented simply as a large table of raw performance numbers. This paper does not seem to offer any major fundamental or applied contributions. \n", "The authors study the problem of distributed routing in a network, where the goal is to minimize the maximal load (i.e. the load of the link with the highest utilization). The authors advocate using multi-agent reinforcement learning. The main idea put forward by the authors is that by designing artificial rewards (to guide the agents), one can achieve faster exploration, in order to reduce convergence time.\n\nWhile the authors put forward several interesting ideas, there are some shortcomings to the present version of the paper, including:\n- The design objective seems flawed from the networking point of view: while minimizing the maximal load of a link is certainly a good starting point (to avoid unstable queues) one typically wants to minimize delay (or maximize flow throughput). Indeed, it is possible to have a larger maximal load while reducing delay in many cases.\n- Furthermore, the authors do not provide a baseline against which to compare the outcome of the learning algorithms they propose: for instance, how does their approach compare to simple policies (those are commonplace in networking) such as MaxWeight, Backpressure and so on?\n- The authors argue that using multi-agent learning is more desirable than single agent (i.e. with a single reward signal which is common to all agents). However, is multi-agent learning guaranteed to converge in such a setting?
If some versions of the problem (for some particular reward signal) are not guaranteed to converge, it is difficult to understand whether \"convergence\" is slow due to inefficient exploration, or simply because convergence cannot occur in the first place.\n- The learning algorithms used are not clearly explained: the authors simply state that they use \"ACCNet\" (from some unpublished prior work), but to readers unfamiliar with this algorithm, it is difficult to judge the contents of the paper. \n- In the numerical experiments, what is the \"convergence rate\"? Is it the ratio between the mean reward of the learnt policy and that of the optimal policy? For how many time steps are the learning algorithms run before evaluating their outcome? What is the meaning of the various input parameters of ACCNet, and is the performance sensitive to those parameters?", "The review is very pertinent. Thanks.\n\nThe paper doesn't give a general principle for reward design. It only tests and verifies that the adaptive rewards are better (they can achieve a higher convergence rate and a lower max link utilization ratio) than the mixed rewards as well as the global and local rewards. Those rewards are tricks to some extent. \n\nThe paper indeed needs further revision. Thanks again for the useful comments.", "The review is very pertinent. Thanks.\n\nThe paper doesn't give a general principle for reward design. It only tests and verifies that the adaptive rewards are better (they can achieve a higher convergence rate and a lower max link utilization ratio) than the mixed rewards as well as the global and local rewards. Those rewards are tricks to some extent. \n\nThe paper indeed needs further revision. Thanks again for the useful comments.", "The review is very pertinent. Thanks.\n\nThe paper doesn't give a general principle for reward design. It only tests and verifies that the adaptive rewards are better (they can achieve a higher convergence rate and a lower max link utilization ratio) than the mixed rewards as well as the global and local rewards. Those rewards are tricks to some extent. \n\nThe paper indeed needs further revision. Thanks again for the useful comments." ]
[ 5, 2, 5, -1, -1, -1 ]
[ 3, 4, 2, -1, -1, -1 ]
[ "iclr_2018_r15kjpHa-", "iclr_2018_r15kjpHa-", "iclr_2018_r15kjpHa-", "S1uz175xf", "r1OoL_Yxz", "rkY2B6KgM" ]
iclr_2018_SJvrXqvaZ
Adversary A3C for Robust Reinforcement Learning
Asynchronous Advantage Actor Critic (A3C) is an effective Reinforcement Learning (RL) algorithm for a wide range of tasks, such as Atari games and robot control. The agent learns a policy and value function through trial-and-error interactions with the environment until converging to an optimal policy. Robustness and stability are critical in RL; however, neural networks can be vulnerable to noise from unexpected sources and may fail to withstand even slight disturbances. We note that agents trained with A3C in a mild environment are not able to handle more challenging environments. Learning from adversarial examples, we propose an algorithm called Adversary Robust A3C (AR-A3C) to improve the agent’s performance in noisy environments. In this algorithm, an adversarial agent is introduced into the learning process to make the learned agent more robust against adversarial disturbances, thereby making it more adaptive to noisy environments. Both simulations and real-world experiments are carried out to illustrate the stability of the proposed algorithm. The AR-A3C algorithm outperforms A3C in both clean and noisy environments.
rejected-papers
Reviewers are unanimous in scoring this paper below the acceptance threshold. The authors did not submit any rebuttals to the reviews. Pros: The paper is generally clear. The hardware results are valuable. Cons: Limited simulation results. The proposed method is not really novel. Insufficient empirical validation of the approach.
test
[ "S14kDbqlG", "r1tT8Ncez", "HJzp02rMM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Positive:\n- Interesting approach\n- Hardware validation (the RL field needs more of this!)\n\nNegative:\n- Figure 2: what is the reward here? The one from Section 5.1?\n- No comparisons to other methods: Single pendulum swing-up is a very easy task that has been solved with various methods (mostly in a cart-pole setup). Please compare to existing methods such as PILCO, basic Q-learning, classical methods... \n- I'm not sure what's going on with the grammar in Section 5.3 (\"like crazy\", \"super hot\"...). This section also seems irrelevant (move to an appendix/supplementary or remove).\n- You should plot a typical control curve for the motors (requested torques). This might explain your heat problem (I'm guessing the motor is effectively controlled by a bang-bang controller).\n- Why did you pick this task? It's fine to only validate on a single task in hardware, but why not include additional simulation results (e.g. double pendulum)?", "The authors propose an extension of adversarial reinforcement learning to A3C. The proposed technique is of modest contribution and the experimental results do not provide sufficient validation of the approach. \n\nThe authors propose extending A3C to produce more robust policies by training a zero-sum game with two agents: a protagonist and an antagonist. The protagonist is attempting to achieve the given task while the antagonist's goal is for the task to fail. \n\nThe contribution of this work, AR-A3C, is extending adversarial reinforcement learning, namely robust RL (RRL) and robust adversarial RL (RARL), to A3C. In the context of this prior work the novelty is extending the family of adversarial RL methods. However, the proposed method is still within the same family methods as demonstrated by RARL.\n\nThe authors state that AR-A3C requires half as many rollouts as compared to RARL. However, no empirical comparison between the two methods is performed. The paper only performs analysis against the A3C and no other adversarial baseline and on only one environment: cartpole. While they show transfer to the real world cartpole with this technique, there is not sufficient analysis to satisfactorily demonstrate the benefits of the proposed technique. \n\nThe paper reads well. There are a few notational issues in the paper that should be addressed. The authors mislabel the value function V as the action value, or Q function. The action value function is action dependent where the value function is not. As a much more minor issue, the authors introduce y as the discount factor, which deviates from the standard notation of \\gamma without any obvious reason to do so.\n\nDouble blind was likely compromised with the youtube video, which was linked to a real name account instead of an anonymous account.\n\nOverall, the proposed technique is of modest contribution and the experimental results do not provide sufficient validation of the approach. ", "Clarity \nThe paper is clear in general. \n\nOriginality\nThe novelty of the method is limited. The proposed method is a simple extension of L. Pinto et al. by replacing TRPO with A3C. No evidence is provided to show the proposed method is competitive with the original TRPO version. \n\nSignificance\n- The empirical results on the hardware are valuable. \n- The simulated results are very limited. The neural networks used in the simulation have only one hidden layer. The method is tested on the Pendulum domain. \n\nPros:\n- Real hardware results are provided. \n\nCons:\n- Limited simulation results. 
\n- Lacking technical novelty. \n" ]
[ 4, 4, 4 ]
[ 4, 4, 4 ]
[ "iclr_2018_SJvrXqvaZ", "iclr_2018_SJvrXqvaZ", "iclr_2018_SJvrXqvaZ" ]
iclr_2018_rJIN_4lA-
Maintaining cooperation in complex social dilemmas using deep reinforcement learning
Social dilemmas are situations where individuals face a temptation to increase their payoffs at a cost to total welfare. Building artificially intelligent agents that achieve good outcomes in these situations is important because many real world interactions include a tension between selfish interests and the welfare of others. We show how to modify modern reinforcement learning methods to construct agents that act in ways that are simple to understand, nice (begin by cooperating), provokable (try to avoid being exploited), and forgiving (try to return to mutual cooperation). We show both theoretically and experimentally that such agents can maintain cooperation in Markov social dilemmas. Our construction does not require training methods beyond a modification of self-play, thus if an environment is such that good strategies can be constructed in the zero-sum case (eg. Atari) then we can construct agents that solve social dilemmas in this environment.
rejected-papers
The reviewers found numerous issues in the paper, including unclear problem definitions, lack of motivation, no support for the stated desiderata, clarity issues, points in the discussion that appear to be technically incorrect, a restrictive setting, sloppy definitions, and uninteresting experiments. Unfortunately, the reviews noted few positive aspects. The authors wrote substantial rebuttals, including an extended exchange with Reviewer 2, but this did not change any scores. Given the current state of the paper, the committee feels it falls short of acceptance in its current form.
train
[ "HkdM2LS4z", "Sya_dLrVz", "Sk2SPUSNz", "BkHI88rEM", "Skso_BrVz", "ryocOrBVf", "By8cjAVNM", "HJb7qCN4M", "H1CLw0V4G", "rkhkuoNgf", "B1_TQ-clG", "Bk_1Ws3xf", "SkKkV2qMM", "ByIQVhqGG", "By2P8nqGz", "ByKVU35fG", "rJDtLncGf", "rJg0Nh5zf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "I feel like things are becoming more convoluted as we go along. Surely, agents \n\n\"remain on the equilibrium path because of what they anticipate would happen if\nthey were to deviate\" -- Binmore (1992)\n\nBut this is a statement that holds for general games, I don't see how this helps define a \"social dilemma\"? Are you not just trying to say \"always cooperating is not an equilibrium\" ?\n\n\nAnd if this is the case, the response to the second \"bullet\" (>>>) is a bit confusing. Where I had interpreted social dilemmas as a broad class including non-exchangeable problems, the response now seems to say that (in other literatures) \"social dilemmas\" exclude the coordination problem. However, it is not excluded by your own definition, only by the additional assumption, if I understand correctly.\n\nIn any case, all this of course is a matter of definitions. Bottom line is that the results in this paper are only for a much narrower class than what I had imagined when reading the title.", "I feel like we could iterate on this quite a bit further, but in the end I feel that all this should have been clear in the submitted paper which I assessed. This rebuttal phase is for clarifying minor misconceptions. For performing iterations of in-depth analysis of what is really going on and clarifying the message, I usually get my name on a paper.", "\"We point out that all RL with function approximation has this same limitation (if your function approximator is bad, it won’t work), \"\n\n-> yes, but we don't say that that \"converges to the optimal value function up to function approximation\". This is precisely why I think the statement is somewhat misleading.\n\n\n\"In practice, we don’t necessarily need (Pi_D, Pi_D) to be an equilibrium, we just need it to be difficult for the other player to find the better strategy. \"\n\n-> I understand the sentiment, but defining 'difficult' here is a bit tricky. In any case \"equilibrium\" means \"impossible\" not \"difficult\".", "My bad, I had not seen that sentence, I had started reading after the line break. This now makes sense.", "\n>>> “The rollouts procedure seems very complex, and frankly, I cannot understand much of it. (why this procedure? why is it unbiased?) I think this might actually be an important contribution, but clearly it needs much better treatment.”\n \nWe apologize for any lack of clarity, we are happy to add more of this discussion to the appendix. \n \nLet pi1, pi2, be policies for players 1 and 2 respectively, and let s be an initial state. The job of the rollouts is to compute V1(s, pi1, pi2) which is the expected sum of discounted rewards for player 1 for starting in s and both behaving according to pi1, pi2. \n\nRollouts are just the straightforward application of Monte Carlo to compute this expectation, by repeatedly sampling trajectories (starting from s, behaving according to pi1, pi2) and computing their expected discounted sum of rewards. If we do this K times and then average those batches and if K is large, we will approximate V1(s, pi1, pi2) arbitrarily well. \n\nOur only additional approximation is to compute the discounted reward for finite-length trajectories. Given that there is discounting the potential bias shrinks exponentially (after t periods the bias is at most delta^t / (1-delta) * r_max) where r_max is the biggest possible reward from one period. 
Therefore, this MC estimate converges to the correct value in the limit of large batch size and trajectory length.\n \nThe way this is used in the paper is: let a be an action that can be taken today by player 1. We want to find the advantage of the one-shot deviation of choosing a today and then playing pi1 from tomorrow onward, relative to just playing pi1 all the time. Call this modified deviation policy d1.\n \nWe can approximate V1(s, d1, pi2) the same way as above and subtract it from V1(s, pi1, pi2) to get the advantage.\n \n>>> “If you agree that there are many problems with function approximation, I don't understand why the formulation \"is a Markov equilibrium (up to function approximation).\" is not adapted. It just seems quite misleading... as in general it simply will not be an equilibrium.”\n \nWe meant “up to function approximation” in the sense of “will converge to the equilibrium in the limit of low function approximation error”. Perhaps we should be more precise in our language here. \n \nWe point out that all RL with function approximation has this same limitation (if your function approximator is bad, it won’t work), so it is the assumption in all function-approximation RL work. These methods have nevertheless been quite successful.\n \nWe do this because, for the computation of the D phase length, we need to compute the payoffs lost to the other player from the D phase. \n \nIn order to do this, we bound this number using the payoffs of joint defection. \n \nIn order for this bound to work, we need that the other player can’t exceed their (Pi_D, Pi_D) payoffs by choosing some other clever policy to follow during the D phase (ie. that (Pi_D, Pi_D) is a finite-time equilibrium). \n \nThis isn’t as strong as it seems. In practice, we don’t necessarily need (Pi_D, Pi_D) to be an equilibrium; we just need it to be difficult for the other player to find a better strategy. \n \nIf the other agent has the same computational capacity as our agent, we can be relatively sure that if we couldn’t find a much better response, they probably won’t be able to either. This is because Pi_D was computed with self-play, so if there were an obvious better response we wouldn’t have stopped at our current one during training. However, this notion is difficult to formalize. \n \nWe are happy to add this discussion to the paper.\n \n>>> “This side-steps my question: it seems that the presented experiments somehow had variance that was so large that they were not representative? Clearly this is important to clear up: if the paper reported non-significant results, that is a reason to further question the other results too.”\n \nCould you clarify which question you feel was not addressed? Assuming you are referring to “why the tables in Figure 1 are not symmetric”, the answer as stated in our reply is: the tables in Figure 1 show the payoff of the row player against the column player – thus there is no reason to expect them to be symmetric (eg. the box for (D,C) corresponds to the payoff that D gets when C is their partner, which is not the same payoff that C gets when D is their partner).\n \nIn addition, we can check that there is no variance issue by computing the standard errors of the mean in our tournament payoffs. They are on the order of ~1 point in Coins, where score differences between strategies are on the order of 40 points. The PPD results are similarly extremely statistically significant. We are happy to add the standard errors to the figures in the appendix. 
\n \n", "We thank the reviewer for the constructive discussion. We believe this has fundamentally improved the clarity of the paper. Our replies to the referee's concerns are in-line below.\n \n>>> “Sorry, I cannot understand this sentence. I am not sure what is meant with \"after every state\". I also am not certain how cooperation along the path of play is influenced by what actions are taken off the path off play. The path of play is the path of play because those alternatives paths will lead to lower payoffs (independent of whether those are 'cooperative' on 'noncooperative' actions), right?”\n \nYou’re right, we could have made this sentence clearer. We mean that a social dilemma is a game where there is no equilibrium between two ***strategies*** that cooperate in every possible state (i.e. unconditional cooperators). E.g. in the repeated PD (where the state is, eg. the history of play) always cooperating can be exploited by always defecting.\n\nHowever, there may exist strategies (such as grim trigger) that don't unconditionally cooperate (eg. Grim Trigger defects if you defect). However, if you look at the realized trajectory of play against each other, they always cooperate along the path. This is what we mean by “maintaining cooperation”.\n \nWe argue this is the key property of a social dilemma: **always cooperating** leaves one open to being cheated. We solve the social dilemma by constructing a policy which maintains cooperation on the path by off-path threats. The key difficulty is how to detect that a defection has been made (value rather than action space) and how to compute the proper “threat” (use rollouts).\n \n>>> “Alright, but I do think this is a very severe assumption, and the paper ought to be very up front about it. As is, the paper claims to MAINTAIN COOPERATION IN COMPLEX SOCIAL DILEMMAS, but truth is that it does not seem to do this for many settings, such as deciding to which movie (romance or comedy) we should go to?”\n \nWe agree that our method only attacks one aspect of sociality: creating the incentives to not cheat. It does not fix the coordination problem. We tried to be quite clear on this in the paper.\n\nHowever, while we agree the coordination problem is very important (and exchangeability is a crucial assumption) we also would like to point out that it’s not as strong as it looks. Exchangeability mostly affects games where we require coordination within a single timestep (eg. the 2 action, 2 player, matrix, Romance or Comedy game). \n \nFor example, Pong requires coordination about where on the screen to bounce the ball back and forth. However, strategies of the form “if the other player hits it to any vertical location, hit it gently back in a straight line from that location” form exchangeable cooperative strategies (this is because it takes more than 1 timestep for the ball to get across the screen).\n \nWe note that in other literatures (eg. evolutionary biology, behavioral economics) cooperation refers specifically only to the instance of social dilemmas, not to the coordination problem. See eg. the well known review by Nowak *Science* 2006 which defines cooperation as: “A cooperator is someone who pays a cost, c, for another individual to receive a benefit, b. A defector has no cost and does not deal out benefits.”\n ", "I think these clarifications are helpful, but I don't think that they are made sufficiently clear in the updated paper. (I had a really hard time decyphering these statements). 
I would advise actually making \"value space as opposed to action space\" the primary hypothesis of a revised paper, since this seems to get to the core of the novelty. ", "If you agree that there are many problems with function approximation, I don't understand why the formulation \"is a Markov equilibrium (up to function approximation).\" is not adapted. It just seems quite misleading... as in general it simply will not be an equilibrium.\n\nThe rollouts procedure seems very complex, and frankly, I cannot understand much of it. (why this procedure? why is it unbiased?) I think this might actually be an important contribution, but clearly it needs much better treatment.\n\n\"From the comments given by the review team we see [...]\"\n\nThis side-steps my question: it seems that the presented experiments somehow had variance that was so large that they were not representative? Clearly this is important to clear up: if the paper reported non-significant results, that is a reason to further question the other results too.", "\"Thus we define a social dilemma to be one where cooperation after EVERY STATE is impossible in equilibrium – that is, if there is cooperation along the path of play it must be because OFF THE PATH of play cooperation stops.\"\n\nSorry, I cannot understand this sentence. I am not sure what is meant by \"after every state\". I also am not certain how cooperation along the path of play is influenced by what actions are taken off the path of play. The path of play is the path of play because those alternative paths will lead to lower payoffs (independent of whether those are 'cooperative' or 'noncooperative' actions), right?\n\n\"The main assumption used is the exchangeability assumption that all strategies form an equivalence class in that any two pairs of cooperative strategies (C1, C1), (C2, C2) are compatible with each other in the sense that (C1, C2) generates the same stream of payoffs. [...] there is no good zero-shot solution to those issues in the literature either.\"\n\nAlright, but I do think this is a very severe assumption, and the paper ought to be very up front about it. As is, the paper claims to MAINTAIN COOPERATION IN COMPLEX SOCIAL DILEMMAS, but the truth is that it does not seem to do this for many settings, such as deciding which movie (romance or comedy) we should go to?", "This paper addresses multiagent learning problems in which there is a social dilemma: settings where there are no 'cooperative policies' that form an equilibrium. The paper proposes a way of dealing with these problems via amTFT, a variation of the well-known tit-for-tat strategy, and presents some empirical results.\n\nMy main problem with this paper is clarity, and I am afraid that not everything might be technically correct. Let me just list my main concerns below.\n\nThe definition of social dilemma is unclear:\n\"A social dilemma is a game where there are no cooperative policies which form equilibria. In other words, if one player commits to play a cooperative policy at every state, there is a way for their partner to exploit them and earn higher rewards at their expense.\"\nDoes this mean to say \"there are no cooperative *Markov* policies\"? It seems to me that the paper precisely intends to show that by resorting to history-dependent policies (such as both using amTFT), there is a cooperative equilibrium. 
\n\nI don't understand:\n\"Note that in a social dilemma there may be policies which achieve the payoffs of cooperative policies because they cooperate on the trajectory of play and prevent exploitation by threatening non-cooperation on states which are never reached by the trajectory. If such policies exist, we call the social dilemma solvable.\"\nIs this now talking about non-Markov policies? If not, there seems to be a contradiction?\n\nThe work focuses on TFT-like policies, motivated by \n\"if one can commit to them, create incentives for a partner to behave cooperatively\"\nhowever, it seems that, as made clear below definition 4, we can only create such incentives for sufficiently powerful agents that remember and learn from their failures to cooperate in the past?\n\nWhy is the method called \"approximate Markov\"? As soon as one introduces history dependence, the Markov property ceases to hold?\n\nOn page 4, I have problems following the text due to inconsistent use of notation: subscripts and superscripts seem random, it is not clear which symbols denote strategy profiles (rather than individual strategies), there seem to be mix-ups between 'i' and '1' / '2', there is sudden use of \\hat{}, and other undefined symbols (Q_CC?).\n\nFor all practical purposes, it seems that the assumptions made imply uniqueness of the cooperative joint strategy. I fully appreciate that the coordination question is difficult and important, so if the proposed method is not compatible with dealing with that important question, that strikes me as a large drawback.\n\nI have problems understanding how it is possible to guarantee \"If they start in a D phase, they eventually return to a C phase.\" without making more assumptions on the domain. The clear example is the typical 'heaven or hell' type of problem: what if, after one defection, we are trapped in the 'hell' state where no cooperation is even possible? \n\n\"If policies converge with this training then πˆ is a Markov equilibrium (up to function approximation).\" There are two problems here:\n1) A problem is that very typically things will not converge... E.g., \nWunder, Michael, Michael L. Littman, and Monica Babes. \"Classes of multiagent q-learning dynamics with epsilon-greedy exploration.\" Proceedings of the 27th International Conference on Machine Learning (ICML-10). 2010.\n2) \"Up to function approximation\" could be arbitrarily large?\n\n\nAnother significant problem seems to be with this statement:\n\"while in the cooperative reward schedule the standard RL convergence guarantees apply. The latter is because cooperative training is equivalent to one super-agent controlling both players and trying to optimize for a single scalar reward.\" The training of individual learners is quite different from \"joint action learners\" [Claus & Boutilier 98], and this in turn is different from a 'super-agent' which would also control the exploration. In the absence of the super-agent, I believe that the only guarantee is that one will, in the limit, converge to a Nash equilibrium, which might be arbitrarily far from the optimal joint policy. And this only holds for the tabular case. See the discussion in \nA concise introduction to multiagent systems and distributed artificial intelligence. N Vlassis. 
Synthesis Lectures on Artificial Intelligence and Machine Learning 1 (1), 1-71.\n\nAlso, the approach used in the experiments, \"Cooperative (self play with both agents receiving sum of rewards) training for both games\", would be insufficient for many settings where a cooperative joint policy would be asymmetric.\n\nThe entire approach hinges on using rollouts (the commented lines in Algo. 1). However, it is not at all clear to me how this works. The one paragraph is insufficient to get across these crucial parts of the proposed approach.\n\nIt is not clear why the tables in Figure 1 are not symmetric; this strikes me as extremely problematic. It is not clear what the colors encode either.\n\nIt also seems that \"grim\" is better against all, except against amTFT; why should we not use that? In general, the explanation of this closely related paper by De Cote & Littman (which was published at UAI'08) is insufficient. It is not quite clear to me what the proposed approach offers over the previous method.\n\n", "This paper studies learning to play two-player general-sum games with state (Markov games). The idea is to learn to cooperate (think prisoner's dilemma) but in more complex domains. Generally, in the repeated prisoner's dilemma, one can punish one's opponent for noncooperation. In this paper, they design an approach to learn to cooperate in a more complex game, like a hybrid Pong-meets-prisoner's-dilemma game. This is fun but I did not find it particularly surprising from a game-theoretic or from a deep learning point of view. \n\nFrom a game-theoretic point of view, the paper begins with somewhat sloppy definitions followed by a theorem that is not very surprising. It is basically a straightforward generalization of the idea of punishing, which is common in \"folk theorems\" from game theory, to give a particular equilibrium for cooperating in Markov games. Many Markov games do not have a cooperative equilibrium, so this paper restricts attention to those that do. Even in games where there is a cooperative solution that maximizes the total welfare, it is not clear why players would choose to do so. When the game is symmetric, this might be \"the natural\" solution, but in general it is far from clear why all players would want to maximize the total payoff. \n\nThe paper follows with some fun experiments implementing these new game theory notions. Unfortunately, since the game theory was not particularly well-motivated, I did not find the overall story compelling. It is perhaps interesting that one can make deep learning learn to cooperate, but one could have illustrated the game theory equally well with other techniques.\n\nIn contrast, the paper \"Coco-Q: Learning in Stochastic Games with Side Payments\" by Sodomka et al. is an example where they took a well-motivated game-theoretic cooperative solution concept and explored how to implement it with reinforcement learning. I would think that generalizing such solution concepts to stochastic games and/or deep learning might be more interesting.\n\nIt should also be noted that I was asked to review another ICLR submission entitled \"CONSEQUENTIALIST CONDITIONAL COOPERATION IN SOCIAL DILEMMAS WITH IMPERFECT INFORMATION\", which amazingly introduced the same \"Pong Player’s Dilemma\" game as in this paper. 
\n\nNotice the following suspiciously similar paragraphs from the two papers:\n\nFrom \"MAINTAINING COOPERATION IN COMPLEX SOCIAL DILEMMAS USING DEEP REINFORCEMENT LEARNING\":\nWe also look at an environment where strategies must be learned from raw pixels. We use the method of Tampuu et al. (2017) to alter the reward structure of Atari Pong so that whenever an agent scores a point they receive a reward of 1 and the other player receives −2. We refer to this game as the Pong Player’s Dilemma (PPD). In the PPD the only (jointly) winning move is not to play. However, a fully cooperative agent can be exploited by a defector.\n\nFrom \"CONSEQUENTIALIST CONDITIONAL COOPERATION IN SOCIAL DILEMMAS WITH IMPERFECT INFORMATION\":\nTo demonstrate this we follow the method of Tampuu et al. (2017) to construct a version of Atari Pong which makes the game into a social dilemma. In what we call the Pong Player’s Dilemma (PPD) when an agent scores they gain a reward of 1 but the partner receives a reward of −2. Thus, in the PPD the only (jointly) winning move is not to play, but selfish agents are again tempted to defect and try to score points even though this decreases total social reward. We see that CCC is a successful, robust, and simple strategy in this game.", "About the first point, it does not present a clear problem definition. The paper continues stating what it should do (e.g. \"our agents only live once at test time and must maintain cooperation by behaving intelligently within the confines of a single game rather than threats across games.\") without any support for these desiderata. It then continues explaining how to achieve these desiderata, but at this point it is impossible to follow a coherent argument without understanding why the authors are making these strong assumptions about the problem they are trying to solve. Without this problem description and a good motivation, it is impossible to assess why such desiderata (which look awkward to me) are important. The paper continues defining some joint behavior (e.g. cooperative policies), but then constructs arguments for individual policy deviations, including elements like \\pi_A and \\Pi_2^{A_k} where, as you see, A is used sometimes as subindex and sometimes as superindex. I could not follow this part, as such elements lack definition. D_k is also not defined. \n\nExperiments are uninteresting and show the same results as many other RL algorithms that have been proposed in the past. No comparison with such other approaches is presented; they are not even acknowledged. The paper should include a related work section that explains such similar approaches and their differences from this approach. The paper should continue the experimental section by making explicit comparisons with such related work.\n\n**Detailed suggestions**\n- On page 2 you say \"This methodology cannot be directly applied to our problem\" without first defining what the problem is.\n- When the authors talk about the agent, it is unclear which agent they refer to\n- \\delta undefined\n- You say that in the selfish reward schedule each agent i treats the other agent just as a part of their environment. However, you need to make some assumption about its behavior (e.g. adversarial, cooperative, etc.) and this is disregarded. ", "We thank the reviewer for their comments. We believe that many of the reviewer's issues are actually addressed in the paper already, though we were unclear in our presentation. 
\n\nWe have made major revisions to the motivating text and the presentation of the main results in the newly uploaded version.\n \n>>> The reviewer argues there is a lack of a clear problem definition\nWe apologize if the problem definition is unclear; we have edited the text to be clearer.\n\nThe goal of the paper is to begin with a question discussed by *The Evolution of Cooperation* (Axelrod, 1984): suppose that we are going to enter into a repeated Prisoner's Dilemma with an unknown partner; how should we behave?\n\nAxelrod (and much follow-up work) comes up with strategies which seek to work well against mixed populations where some individuals are cooperators, some are pure defectors, but most are conditional cooperators (often this is justified by the idea that this is a good approximation of the distribution of people). This literature seeks to construct strategies (eg. Tit-for-Tat, or Win-Stay-Lose-Shift/Pavlov) which\n1) cooperate with cooperators\n2) aren't exploited by defectors\n3) incentivize conditional cooperators to cooperate\n4) are simple to explain\n\nThese are fine desiderata for what it means to \"solve\" a social dilemma. However, a weakness of this literature is that it mostly works with simple 2-player repeated Prisoner's Dilemma games.\n\nOur goal is to expand the Axelrod ideas from the repeated PD case (where there are 2 actions that are clearly labeled) to some perfect-information Markov game G which is not repeated (we only play G once), has a social dilemma structure, and is too complex to be solved in tabular form (so requires deep RL).\n\nOur question is related to, but actually quite different from, the literatures on:\n\n* The folk theorem in game theory (Fudenberg & Maskin 1986, Fudenberg, Levine & Maskin 1996) – this literature asks: “given a repeated game G, does an efficient equilibrium exist?”\n* The work on the “computational folk theorem” (De Cote & Littman 2008, Littman & Stone 2005) – this literature asks: “can I compute the efficient equilibrium strategies in a repeated game or Markov game?”\n* Alternative solution concepts (eg. Sodomka et al. 2013) – this literature asks: “can we define solution concepts beyond Nash, and under what conditions will learning converge to them?”\n* The learning in (Markov) games literature (Fudenberg & Levine 1998, Sandholm & Crites 1996, Leibo et al. 2017) – this literature asks: “which equilibrium will learners converge to as a function of game parameters/learning rules?”\n* The shaping in learning in games literature (Babes et al. 2008) – this literature asks: “if I can change the reward functions of agents, can I guide them to a good equilibrium?”\n* Friend-or-Foe learning (Littman 2001) – this paper asks: “what kind of learning rule should I use in positive-sum games?” This is quite related to our work, though it again requires multiple plays of G with the same partner rather than self-play training and then a SINGLE play of G.\n* How do humans behave in these kinds of situations? (eg. Rand et al. 2012, Kleinman-Weiner 2016)\n\nAgain, our situation is that we have access to the game and we can do whatever we want at training time, but at test time we play G once and we want to achieve good performance in the Axelrod sense: sometimes we face pure cooperators, sometimes pure defectors, but mostly we face conditional cooperators. 
As we can see, this question is related to but not the same as the literatures above (though they all provide valuable tools and context).\n\nNote also that we are not looking for equilibria in the game: in the PD, tit-for-tat is not an equilibrium strategy (the best response is to always cooperate); however, it is a very good commitment strategy if we seek to design an agent.\n\nWe can see from the reviews that the relationship between our work and prior work was unclear from the text; we have edited the introduction and main text significantly to address these comments.\n", ">>> The reviewer would like to see more baselines\nGiven the discussion above, we argue that there are 2 potential baselines to amTFT. Both of these are already studied in the paper:\n\n**Baseline 1: Apply standard self-play at training time, save that strategy, use it at play time**\nWe find that self-play finds the defect policies, and thus, while it can exploit pure cooperators and not be exploited by pure defectors, it isn’t able to realize the gains of cooperation when the partner is a conditional cooperator (eg. amTFT).\n\n**Baseline 2: De Cote & Littman (2008).**\nNote that the De Cote & Littman algorithm works ACROSS multiple iterations of a repeated Markov game (playing a Markov game G multiple times) rather than WITHIN a single game (which is what our agent faces). However, we can amend the De Cote & Littman algorithm as follows: compute a cooperative (C) policy (De Cote & Littman actually compute the equitable policies, but in our games they are identical) and compute the defect policy; if our partner chooses an ACTION that is inconsistent with the C policy, use the D policy forever after.\n\nWe show that this approach does not work well because working in ACTION space is not very robust to function approximation or to the existence of multiple ways to cooperate.\n\namTFT uses a very similar rule but works in value space rather than action space, which makes it robust to multiple policies that have the same or similar values. We see in our experiments that this is an important property.\n\nIf the reviewer has other baselines in mind that we have missed, we are happy to compare our approach to them.\n\n****Other Responses****\n>> The paper continues defining some joint behavior (e.g. cooperative policies), but then constructs arguments for individual policy deviations, including elements like \\pi_A and \\Pi_2^{A_k} where, as you see, A is used sometimes as subindex and sometimes as superindex. I could not follow this part, as such elements lack definition. D_k is also not defined.\n\nWe apologize for the flipping of indices; we thought that we had caught all of the sub/super flips but some managed to get away from us. We have fixed many of the flips.\n\nWe do note that D_k is defined on page 4: “we first introduce the notation of a compound policy π_{X_k Z}, which is a policy that behaves according to X for k turns and then Z afterwards.”\n\n>>> Experiments are uninteresting and show the same results as many other RL algorithms that have been proposed in the past. No comparison with such other approaches is presented; they are not even acknowledged.\n\nWe discuss above why we believe that our work does indeed consider, discuss, properly cite, and compare to prior work on this problem. 
\n\nIf there is work that the reviewer believes we have left out, we are happy to discuss it in the paper.\n\n>> “\\delta undefined”\nDelta is defined in Definition 2: “We assume agents discount the future with rate *δ*, which we subsume into the value function.”\n\n>> You say that in the selfish reward schedule each agent i treats the other agent just as a part of their environment. However, you need to make some assumption about its behavior (e.g. adversarial, cooperative, etc.) and this is disregarded.\n\nWe apologize if this is unclear. The “selfish reward” schedule is simply standard self-play where each agent treats the other agent as stationary (this is exactly the assumption made in other learning rules, eg. fictitious play). While this assumption is incorrect in finite time, it is correct in the limit if agents converge to a Nash equilibrium. We are not trying to study this assumption (it is beyond the scope of this paper); rather, we use it because it is what is done in standard self-play/standard learning in games (see eg. Fudenberg & Levine 1998 for more discussion).", "**>> I have problems understanding how it is possible to guarantee \"If they start in a D phase, they eventually return to a C phase.\" without making more assumptions on the domain. The clear example is the typical 'heaven or hell' type of problem: what if, after one defection, we are trapped in the 'hell' state where no cooperation is even possible?**\n\nThe referee is correct that there exist domains where a single deviation by a player can simply never be made up in the D phase. Note that Theorem 1 specifically rules out this scenario: it says that for *any* state, the gain in value from cooperating from it forever (vs. defecting forever) is bigger than any one-shot deviation possible in the game. Thus, any debit a partner earns can eventually be made up by playing D for only k periods and then playing C forever.\n\nThis doesn't mean we rule out all types of “heaven or hell” scenarios. For example: suppose that cooperation earns a payoff of 10 every turn unless someone has defected once, in which case it earns 5; defection earns the defector 100 points and causes the other to lose 200; and mutual defection is worth -101. However, in this case, after a defection by a partner, amTFT will return to cooperation after 1 turn of mutual defection, but payoffs will be permanently lower.\n \n\n**>>> R3 discusses many issues with convergence guarantees of the deep RL methods.**\nWe agree that a weakness of any deep RL approach is that it is often hard to make statements about convergence guarantees / issues with function approximation. \n\nOne way to see whether amTFT is exploitable is to directly train an RL agent to try to exploit the amTFT agent. We see that in Coins learners fail to learn to exploit it (we had issues doing this in the PPD due to the instability of training Atari policies with low discount rates). Nevertheless, the Coins result gives us confidence that this at least works empirically in some simple environments. Importantly, this method also gives us a possible way to stress-test amTFT in any practical application.\n \nWe are happy to add this discussion to the main text as a direction for future work.\n \n>> The entire approach hinges on using rollouts (the commented lines in Algo. 1). However, it is not at all clear to me how this works. The one paragraph is insufficient to get across these crucial parts of the proposed approach. 
\n\nThe rollouts work as follows:\n\n1) The amTFT agent has policy pairs (C,C), (D,D) saved from training.\n2) At time t, suppose the partner takes a' when the amTFT agent expected a (according to C(s)). \n3) The amTFT agent simulates 2B replicas of the game for M turns. \n4) In B of the replicas their partner starts with a’ and continues with C – the “true path”.\n5) In B of the replicas their partner starts with a and continues with C – the “counterfactual path”.\n6) The amTFT agent takes the difference in the average total reward to the partner between the two paths and uses that as the per-period debit.\n\nIn the limit of large M and B this is an unbiased estimator of the partner's Q function.\n\nThere is also the option to append the continuation value V(s) to the end of the rollout; we elide it. Note that in games where an action today can only affect payoffs up to M periods from now, it suffices to use rollouts of length M and elide the continuation value.\n \nWe have changed the text to make this clearer.\n \n>> It is not clear why the tables in Figure 1 are not symmetric; this strikes me as extremely problematic. It is not clear what the colors encode either. \nThe tables in Figure 1 show the payoff of the row player against the column player – thus there is no reason to expect them to be symmetric (eg. the box for (D,C) corresponds to the payoff that D gets when C is their partner, which is not the same payoff that C gets when D is their partner).\n\nFrom the comments given by the review team we see that those figures were not the best way to present our main results. Rather, we have specifically measured the exploitability of a strategy as well as whether it incentivizes cooperation from a partner, and have added those numbers as our main results.\n", "We thank the reviewer for pointing out several important issues. We believe these are mostly issues of clarity in exposition/notation. We have edited the text to address these issues.\n\n**>> The definition of social dilemma is unclear: \"A social dilemma is a game where there are no cooperative policies which form equilibria....\" Does this mean to say \"there are no cooperative *Markov* policies\"?**\n \nThe referee is correct: this should mean that there are no Markov policies. Note that when we refer to cooperative policies we specifically refer to ones which cooperate at ALL states. \n\nThus we define a social dilemma to be one where cooperation after EVERY STATE is impossible in equilibrium – that is, if there is cooperation along the path of play it must be because OFF THE PATH of play cooperation stops.\n \nThis is identical to the logic in the standard repeated Prisoner’s Dilemma, where policies which always cooperate are not equilibria; rather, in order to maintain cooperation along the path of play there must be defection off the path of play (eg. Grim Trigger). \n \nWe have edited the text to make this point clearer.\n \n**>> Why is the method called \"approximate Markov\"? As soon as one introduces history dependence, the Markov property ceases to hold? 
**\n \nWe call the method approximate Markov because we use function approximation (approximate) and because amTFT only uses Markov policies from the original game (only using the augmented memory to switch between them).\n \nWe have made this clearer in the paper.\n \n**>> On page 4, I have problems following the text due to inconsistent use of notation: subscripts and superscripts seem random, it is not clear which symbols denote strategy profiles (rather than individual strategies), there seem to be mix-ups between 'i' and '1' / '2', there is sudden use of \\hat{}, and other undefined symbols (Q_CC?).**\n\nWe apologize if the mixup between sub/superscripts caused any confusion; we have fixed these typos. In addition, we now clarify the hat/no-hat notation - as in statistics, we use the no-hat symbol to refer to a \"real\" policy whereas \\hat{} objects refer to approximations (eg. the output of the deep RL training).\n \nWe note that the Q function is introduced in Definition 2 but the notation Q_CC is introduced in Section 4 (“we call the converged policies under the selfish reward schedule \\hat{π}_i^D and the associated Q function approximations \\hat{Q}_i^{DD}”). We apologize for this confusion and will edit the Definition 2 notation to match the Section 4 notation.\n \n**>> For all practical purposes, it seems that the assumptions made imply uniqueness of the cooperative joint strategy. I fully appreciate that the coordination question is difficult and important, so if the proposed method is not compatible with dealing with that important question, that strikes me as a large drawback.**\n \nThe main assumption used is the exchangeability assumption that all strategies form an equivalence class in that any two pairs of cooperative strategies (C1, C1), (C2, C2) are compatible with each other in the sense that (C1, C2) generates the same stream of payoffs. \n\nThis is much weaker than a uniqueness assumption. Indeed, working in value space is one of the innovations of amTFT. \n\nAs an example of this, consider the Pong Player’s Dilemma. The exchangeability assumption allows both players to do whatever they want as long as they “softly” hit the ball over to the other player in some way. \n\nFor example, our partner can move the paddle around however they like while the ball is in flight, and, importantly, it allows for a partner (eg. a human) who sometimes hits the ball slightly too fast (but not so fast that our agent can't get to it). In both of these situations a strategy like Grim Trigger (a direct application of De Cote & Littman 2008) will assume the partner is not cooperating and defect.\n\nWe agree that there are situations where this assumption fails (for example, if we need to make simultaneous decisions that may or may not be compatible with one another, as in, eg., coordination games), but there is no good zero-shot solution to those issues in the literature either.", "**>> It also seems that \"grim\" is better against all, except against amTFT; why should we not use that? In general, the explanation of this closely related paper by De Cote & Littman (which was published at UAI'08) is insufficient. It is not quite clear to me what the proposed approach offers over the previous method.**\n\nThe main result of the experiments is that Grim, in practice, almost always behaves like the defect policy (due to its reliance on “wrong action” rather than “wrong value”), and thus gives the same rewards to both players as defect.\n\nWhat is wrong with playing a policy of pure defection? 
Indeed, it does achieve high payoffs against either pure cooperation or pure defection. One problem is that it incentivizes the partner to defect rather than cooperate, which can be seen in the table. A second problem is that it fails to realize the gains of cooperation with conditional cooperators. This can't be shown directly in the table (as we cannot enumerate all possible conditionally cooperative strategies), but as a necessary condition it even fails to cooperate with itself (or amTFT). \n\nAll of this can be extracted from the current figure, but we agree that it needs to be highlighted better, and we have added an additional table that measures these desiderata explicitly. We thank the reviewer for pointing this out. \n\n\nIn addition, we note that amTFT is different from De Cote & Littman in 4 ways:\n\n * amTFT is usable within a single game rather than across multiple iterations of the same game \n * amTFT uses self-play and deep RL rather than the tabular computation in De Cote & Littman, and thus can be applied to more complex games\n * amTFT returns to cooperation following a defection rather than applying a Grim Trigger strategy, which stops cooperating after the wrong action is taken\n * The biggest difference: amTFT uses value space as the trigger as opposed to action space. As we see in our experiments, this is important in Markov games where there are multiple value-equivalent cooperating strategies that differ on actions (eg. move left then move down vs. move down then move left in Coins). \n\n \nWe have edited the text to make these differences clearer.", "We thank R1 for their thorough comments. We have made several changes to the presentation of the paper to address them.\n \n**>>> “The paper follows with some fun experiments implementing these new game theory notions. Unfortunately, since the game theory was not particularly well-motivated, I did not find the overall story compelling…”**\n \nResponse: \nThis may stem from our lack of clarity in our problem definition (see reply to R3 above). The point here is not to “get RL to cooperate.” Rather, we are interested in expanding the ideas proposed by Axelrod (1984) to Markov games. \n\nPlease see our reply to R3 above discussing why our work is related to but also quite different from what is done in other work on cooperative games. \n \n>> Similar paragraphs in 2 papers\nWe are also the authors of the other paper. \n\nCould the reviewer please clarify the issue here? Is it that the game is re-used without attribution, or is it that we use similar text to describe it? \n\nWe are happy to make it clear that this (the amTFT paper) is the first one to use the PPD as an environment and that the CCC paper uses it for robustness checks.\n\nThe important differences here are as follows:\n \namTFT (this paper) is a strategy that can only be implemented in perfectly observed Markov games. The CCC paper looks at IMPERFECTLY observed games, where amTFT cannot be used. Note that there are other major differences in the guarantees between the strategies (eg. CCC only has guarantees in the infinite-time limit).\n \nSince any MDP can be trivially written as a POMDP, it follows that the CCC strategy introduced in the other paper can also be used whenever amTFT can be used. \n\nDoes this mean that amTFT is completely dominated by CCC? The answer is no: in the CCC paper the PPD is used as an example to show that the CCC algorithm works well in some places (standard PPD) and not others (risky PPD). 
\n \n \n****Other Comments****\n>>> Even in games where there is a cooperative solution that maximizes the total welfare, it is not clear why players would choose to do so. When the game is symmetric, this might be \"the natural\" solution but in general it is far from clear why all players would want to maximize the total payoff.\n\nWe agree with the reviewer on this point. One can view this as a discussion about whether a particular equilibrium is a focal point or not. It is well known that in symmetric games (including bargaining and coordination games) people view the symmetric sum of payoffs to be a natural focal point, while in asymmetric versions of the problem they do not (see eg. the chapter on bargaining in the Kagel & Roth Handbook of Experimental Economics, or more recent work on inequality in public goods games, eg. Hauser, Kraft-Todd, Rand, Nowak & Norton 2016).\n \nFiguring out which kinds of payoff distributions are “reasonable” focal points, especially for playing with humans, is an important direction for future research but beyond the scope of this paper (the question is not even settled in behavioral science, as there are many debates on whether people are averse to inequality itself, unequal treatment, or perhaps something else).\n\nNote that amTFT can be adapted to any focal point that can be expressed in terms of payoffs (for example, pure inequity aversion can be expressed as U_1(payoff1, payoff2) = payoff1 - A*|payoff1 - payoff2|; see eg. Charness & Rabin (2002) for a generic utility function that can express many social goals). \n\nThe way one can adapt amTFT to these focal points is to train the D policies as we do in the paper, but now train the C policies using this modified reward at each time step (and use the amTFT switching rule at test time). A full discussion of when particular focal points can be implemented in particular games is beyond the scope of the paper.\n\nWe have made this point clear in both the introduction and conclusion of the paper.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ "ryocOrBVf", "Skso_BrVz", "Skso_BrVz", "Skso_BrVz", "ryocOrBVf", "By8cjAVNM", "rJDtLncGf", "By2P8nqGz", "ByKVU35fG", "iclr_2018_rJIN_4lA-", "iclr_2018_rJIN_4lA-", "iclr_2018_rJIN_4lA-", "Bk_1Ws3xf", "SkKkV2qMM", "ByKVU35fG", "rkhkuoNgf", "By2P8nqGz", "B1_TQ-clG" ]
iclr_2018_B1EGg7ZCb
Autonomous Vehicle Fleet Coordination With Deep Reinforcement Learning
Autonomous vehicles are becoming more common in city transportation, and companies will increasingly need to teach these vehicles smart city-fleet coordination. Currently, simulation-based modeling along with hand-coded rules dictates the decision making of these autonomous vehicles. We believe that complex intelligent behavior can be learned by these agents through Reinforcement Learning. In this paper, we discuss our work on solving this problem by adapting the Deep Q-Learning (DQN) model to the multi-agent setting. Our approach applies deep reinforcement learning by combining convolutional neural networks with DQN to teach agents to fulfill customer demand in an environment that is partially observable to them. We also demonstrate how to utilize transfer learning to teach agents to balance multiple objectives, such as navigating to a charging station when their energy level is low. The two evaluations presented show that we are successfully able to teach agents cooperation policies while balancing multiple objectives.
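The abstract pairs a convolutional Q-network with partial observability, which suggests each taxi sees only a local window of the grid. The sketch below illustrates that idea under assumed conventions (a channel-first grid encoding and zero padding at the map edges); it is a plausible reading, not the paper's implementation.

```python
import numpy as np

def local_observation(grid, position, radius=3):
    """Egocentric crop of a grid world for a partially observable agent.

    grid: (C, H, W) array of feature channels (e.g., taxis, customers,
          charging stations); position: the agent's (row, col) cell.
    Cells outside the map are zero-padded so the window has a fixed
    size, suitable as input to a convolutional Q-network.
    """
    c, h, w = grid.shape
    window = np.zeros((c, 2 * radius + 1, 2 * radius + 1), dtype=grid.dtype)
    r0, c0 = position
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, col = r0 + dr, c0 + dc
            if 0 <= r < h and 0 <= col < w:
                window[:, dr + radius, dc + radius] = grid[:, r, col]
    return window
```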
rejected-papers
The reviewers agree that the manuscript is below the acceptance threshold at ICLR. Many points of criticism were evident in the reviewer comments, including a small artificial test domain, no new methods introduced, poor writing in some places, and a dubious need for DeepRL in this domain. The reviews offered a number of constructive comments to improve the paper, and we hope these will provide useful guidance for the authors to rewrite and resubmit to a future venue.
train
[ "Hy73csVeG", "HyqzL3Ogz", "rJJxcdqeM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n\nThis paper proposes to use deep reinforcement learning to solve a multiagent coordination task. In particular, the paper introduces a benchmark domain to model fleet coordination problems as might be encountered in taxi companies. \n\nThe paper does not really introduce new methods, and as such, this paper should be seen more as an application paper. I think that such a paper could have merits if it would really push the boundary of the feasible, but I do not think that is really the case with this paper: the task still seems quite simplistic, and the empirical evaluation is not convincing (limited analysis, weak baselines). As such, I do not really see any real grounds for acceptance.\n\nFinally, there are also many other weaknesses. The paper is quite poorly written in places, has poor formatting (citations are incorrect and half a bibtex entry is inlined), and is highly inadequate in its treatment of related work. For instance, there are many related papers on:\n\n-taxi fleet management (e.g., work by Pradeep Varakantham)\n \n-coordination in multi-robot systems for spatially distributed tasks (e.g., Gerkey and much work since)\n\n-scaling up multiagent reinforcement learning and multiagent MDPs (Guestrin et al 2002, Kok & Vlassis 2006, etc.)\n\n-dealing with partial observability (work on decentralized POMDPs by Peshkin et al, 2000, Bernstein, Amato, etc.)\n\n-multiagent deep RL has been very active last 1-2 years. E.g., see other papers by Foerster, Sukhbataar, Omidshafiei\n\n\nOverall, I see this as a paper which with improvements could make a nice workshop contribution, but not as a paper to be published at a top-tier venue.\n\n", "In this paper, the authors define a simulated, multi-agent “taxi pickup” task in a GridWorld environment. In the task, there are multiple taxi agents that a model must learn to control. “Customers” randomly appear throughout the task and the taxi agents receive reward for moving to the same square as a customer. Since there are multiple customer and taxi agents, there is a multi-agent coordination problem. Further, the taxi agents have “batteries”, which starts at a positive number, ticks down by one on each time step and a large negative reward is given if this number reaches zero. The battery can be “recharged” by moving to a “charge” tile.\n\nCooperative multi-agent problem solving is an important problem in machine learning, artificial intelligence, and cognitive science. This paper defines and examines an interesting cooperative problem: Assignment and control of agents to move to certain squares under “physical” constraints. The authors propose a centralized solution to the problem by adapting the Deep Q-learning Network model. I do not know whether using a centralized network where each agent has a window of observations is a novel algorithm. The manuscript itself makes it difficult to assess (more on this later). If it were novel, it would be an incremental development. They assess their solution quantitatively, demonstrating their model performs better than first, a simple heuristic model (I believe de-centralized Dijkstra’s for each agent, but there is not enough description in the manuscript to know for sure), and then, two other baselines that I could not figure out from the manuscript (I believe it was Dijkstra’s with two added rules for when to recharge).\n\nAlthough the manuscript has many positive aspects to it, I do not believe it should be accepted for the following reasons. 
First, the manuscript is poorly written, to the point where it has inhibited my ability to assess it. Second, given its contribution, the manuscript is better suited for a conference specific to multi-agent decision-making. There are a few reasons for this. 1) I was not convinced that deep Q-learning was necessary to solve this problem. The manuscript would be much stronger if the authors compared their method to a more sophisticated baseline, for example having each agent be a simple Q-learner with no centralization or “deepness”. This would solve another issue, which is the weakness of their baseline measure. There are many multi-agent techniques that can be applied to the problem that would have served as a better baseline. 2) Although the problem itself is interesting, it is a bit too applied and specific to the particular task they studied to be appropriate for a conference with interests as broad as ICLR's. It also is a bit simplistic (I had expected the agents to at least need to learn to move the customer to some square rather than get reward and move to the next job from just getting to the customer’s square). Can you apply this method to other multi-agent problems? How would it compare to other methods on those problems? \n\nI encourage the authors to develop the problem and method further, as well as the analysis and evaluation. \n", "The main contribution of the paper seems to be the application to this problem, plus minor algorithmic/problem-setting contributions that consist in considering partial observability and balancing multiple objectives. On one hand, fleet management is an interesting and important problem. On the other hand, although the experiments are well designed and illustrative, the approach is only tested in a small 7x7 grid with 2 agents and in a 10x10 grid with 4 agents. In spirit, these simulations are similar to those in the original paper by M. Egorov. Since the main contribution is to use an existing algorithm to tackle a practical application, it would be more interesting to tweak the approach until it is able to tackle a more realistic scenario (mainly larger scale, but also more realistic dynamics with traffic models, real data, etc.).\n\nSimulation results compare MADQN with Dijkstra's algorithm as a baseline, which offers a myopic solution where each agent picks up the closest customer. Again, since the main contribution is to solve a specific problem, it would be worthwhile to compare with a more extensive benchmark, including state-of-the-art algorithms used for this problem (e.g., heuristics and metaheuristics). \n\nThe paper is clear and well written. There are several minor typos and formatting errors (e.g., at the end of Sec. 3.3, the authors mention Figure 3, which seems to be missing; also, references [Egorov, Maxim] and [Palmer, Gregory] are badly formatted). \n\n\n-- Comments and questions to the authors:\n\n1. In the introduction, please, could you add references to what is called \"traditional solutions\"?\n\n2. Regarding the partial observability, each agent knows the location of all agents, including itself, and the location of all obstacles and charging locations; but it only knows the location of customers that are in its vision range.
This assumption seems reasonable if a central station broadcasts all agents' positions and customers are only allowed to stop vehicles in the street, without ever contacting the central station; otherwise, if agents order vehicles in advance (e.g., by calling or using an app), the central station should be able to communicate customers' locations too. On the other hand, if no communication with the central station is allowed, then the positions of other agents may also be only partially observable. In other words, the proposed partial observability assumption requires some further motivation. Moreover, in Sec. 4.3, it is said that agents can see around them +10 spaces away; however, experiments are run in 7x7 and 10x10 grid worlds, meaning that the agents are able to observe the grid completely.\n\n3. The fact that partial observability helped to alleviate the credit-assignment noise caused by the missing customer penalty might be an artefact of the setting. For instance, since the reward has been designed arbitrarily, it could have been defined as giving a penalty for those missing customers that are at some distance from an agent.\n\n4. Please, could you explain the last sentence of Sec. 4.3 that says \"The drawback here is that the agents will not be able to generalize to other unseen maps that may have very different geographies.\" In particular, how is this sentence related to partial observability?" ]
[ 3, 3, 4 ]
[ 5, 3, 4 ]
[ "iclr_2018_B1EGg7ZCb", "iclr_2018_B1EGg7ZCb", "iclr_2018_B1EGg7ZCb" ]
iclr_2018_rye7IMbAZ
Explicit Induction Bias for Transfer Learning with Convolutional Networks
In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch. When using fine-tuning, the underlying assumption is that the pre-trained model extracts generic features, which are at least partially relevant for solving the target task, but would be difficult to extract from the limited amount of data available on the target task. However, besides the initialization with the pre-trained model and the early stopping, there is no mechanism in fine-tuning for retaining the features learned on the source task. In this paper, we investigate several regularization schemes that explicitly promote the similarity of the final solution with the initial model. We eventually recommend a simple L2 penalty using the pre-trained model as a reference, and we show that this approach behaves much better than the standard scheme using weight decay on a partially frozen network.
rejected-papers
This paper addresses the question of how to regularize when starting from a pre-trained convolutional network in the context of transfer learning. The authors propose to regularize toward the parameters of the pre-trained model and study multiple regularizers of this type. The experiments are thorough and convincing enough. This regularizer has been used quite a bit for shallow models (e.g. SVMs, as the authors mention, but also more general MaxEnt models). There is also at least some work on regularization toward a pre-trained model in the context of domain adaptation with deep neural networks (e.g. for speaker adaptation in speech recognition). The only remaining novelty is the transfer learning context. This is not a sufficiently different setting to merit a new paper on the topic.
train
[ "ryD53e9xG", "Hku7RS6lf", "BJQD_I_eM", "S1A91_pmM", "BkoNYvaXf", "SylgFP6Xz", "BkbKdDT7G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This work addresses the scenario of fine-tuning a pre-trained network for new data/tasks and empirically studies various regularization techniques. Overall, the evaluation concludes with recommending that all layers of a network whose weights are directly transferred during fine-tuning should be regularized against the initial net with an L2 penalty during further training. \n\nRelationship to prior work:\nRegularizing a target model against a source model is not a new idea. The authors miss key connections to A-SVM [1] and PMT-SVM [2] -- two proposed transfer learning models applied to SVM weights, but otherwise very much the same as the proposed solution in this paper. Though the study here may offer new insights for deep nets, it is critical to mention prior work which also does analysis of these regularization techniques. \n\nSignificance:\nAs the majority of visual recognition problems are currently solved using variants of fine-tuning, if the findings reported in this paper generalize, then it could present a simple new regularization which improves the training of new models. The change is both conceptually simple and easy to implement so could be quickly integrated by many people.\n\nClarity and Questions:\nThe purpose of the paper is clear, however, some questions remain unanswered. \n1) How is the regularization weight of 0.01 chosen? This is likely a critical parameter. In an experimental paper, I would expect to see a plot of performance for at least one experiment as this regularization weighting parameter is varied. \n2) How does the use of L2 regularization on the last layer effect the regularization choice of other layers? What happens if you use no regularization on the last layer? L1 regularization?\n3) Figure 1 is difficult to read. Please at least label the test sets on each sub-graph.\n4) There seems to be some issue with the freezing experiment in Figure 2. Why does performance of L2 regularization improve as you freeze more and more layers, but is outperformed by un-freezing all. \n5) Figure 3 and the discussion of linear dependence with the original model in general seems does not add much to the paper. It is clear that regularizing against the source model weights instead of 0 should result in final weights that are more similar to the initial source weights. I would rather the authors use this space to provide a deeper analysis of why this property should help performance. \n6) Initializing with a source model offers a strong starting point so full from scratch learning isn’t necessary -- meaning fewer examples are needed for the continued learning (fine-tuning) phase. In a similar line of reasoning, does regularizing against the source further reduce the number of labeled points needed for fine-tuning? Can you recover L2 fine-tuning performance with fewer examples when you use L2-SP?\n\n[1] J. Yang, R. Yan, and A. Hauptmann. Adapting svm classifiers to data with shifted distributions. In ICDM Workshops, 2007.\n[2] Y. Aytar and A. Zisserman. Tabula rasa: Model transfer for object category detection. In Proc. ICCV, 2011.\n\n------------------\nPost rebuttal\n------------------\nThe changes made to the paper draft as well as the answers to the questions posed above have convinced me to upgrade my recommendation to a weak accept. The experiments are now clear and thorough enough to provide a convincing argument for using this regularization in deep nets. Since it is simple and well validated it should be easily adopted. 
\n", "The paper addresses the problem of transfer learning in deep networks. A pretrained network on a large dataset exists, what is the best way to retrain the model on a new small dataset? \nIt argues that the standard regularization done in conventional fine-tuning procedures is not optimal, since it tries to get the parameters close to zero, thereby forgetting the information learnt on the larger dataset.\nIt proposes to have a regularization term that penalizes divergence from initialization (pretrained network) as opposed to from zero-vector. It tries different norms (L2, L1, group Lasso) as well as Fisher information matrix to avoid interfering with important nodes, and shows the effectiveness of these alternatives over the standard practice of “weight decay”.\n\nAlthough the novelty of the paper is limited and have been shown for transfer learning with SVM classifiers prior to resurgence of deep learning, the reviewer is unable to find a prior work doing same regularization in deep networks. Number of datasets and experiments are moderately high, results are consistently better than standard fine-tuning and fine-tuning is a very common tool for ML practitioners in various application fields, so, I think there is benefit for transfer learning audience to be exposed to the experiments of this paper.\n\n---- Post rebuttal\nI think the paper has merits to be published. As I note above, it's taking a similar idea with transfer learning of SVM models from a decade ago to deep learning and fine-tuning. It's simple with no technical novelty but shows consistent improvement and has wide relevance. \n", "The paper proposes an analysis on different adaptive regularization techniques for deep transfer learning. \nSpecifically it focuses on the use of an L2-SP condition that constraints the new parameters to be close to the\nones previously learned when solving a source task. \n\n+ The paper is easy to read and well organized\n+ The advantage of the proposed regularization against the more standard L2 regularization is clearly visible \nfrom the experiments\n\n- The idea per se is not new: there is a list of shallow learning methods for transfer learning based \non the same L2 regularization choice\n[Cross-Domain Video Concept Detection using Adaptive SVMs, ACM Multimedia 2007]\n[Learning categories from few examples with multi model knowledge transfer, PAMI 2014]\n[From n to n+ 1: Multiclass transfer incremental learning, CVPR 2013]\nI believe this literature should be discussed in the related work section\n\n- It is true that the L2-SP-Fisher regularization was designed for life-long learning cases with a \nfixed task, however, this solution seems to work quite well in the proposed experimental settings. \nFrom my understanding L2-SP-Fisher can be considered the best competitor of L2-SP so I think\nthe paper should dedicate more space to the analysis of their difference and similarities both\nfrom the theoretical and experimental point of view. For instance:\n-- adding the L2-SP-Fisher results in table 2\n-- repeating the experiments of figure 2 and figure 3 with L2-SP-Fisher\n\n\n\n\n", "Once again, we would like to thank the reviewers for their comments and feedback. We answered to each reviewer individually and submitted a revision of the paper. 
Here is a brief summary of the changes:\n\n1 Introduction: Reformulation and clarification of sentences.\n\n2 Related Work: We have added discussions about the regularization methods used in SVM models, specifically A-SVM and PMT-SVM.\n\n4 Experiments:\n4.2: A new figure showing the sensitivity of regularization hyper-parameters has been added.\n4.3.1: The results of L2-SP-Fisher have been added in Table 2.\n4.3.2: We have added a table showing the performance drops on the source tasks with fine-tuned models based on L2, L2-SP and L2-SP-Fisher regularizers.\n4.3.3: Results of the experiment with frozen layers have been updated and the same experiment has been repeated on Caltech 256. \n4.4: We have added the shrinkage estimation as another theoretical explanation. \nWe have also rephrased the sentences in this section.\n\n", "Thank you for your feedback. First, we would like to inform you that we improved our results by using the original version of ResNet. The comparisons and conclusions are qualitatively identical to the previous version.\n\nBelow are our answers to your questions.\n\n(0) Relationship to prior work:\nThanks for the references on SVM that are now added in the related work section. Although our regularizers are technically very similar to A-SVM and PMT-SVM, they have a rather different role: they act on the representation learned by the deep networks, which is equivalent to the kernel of an SVM. The only weights that are not preserved by our approach are the weights in the last layer, which are the only weights that are regularized by A-SVM and PMT-SVM.\n\nOur aim in this paper is to promote regularizers that are technically similar to the ones used for SVM, but that are largely ignored in fine-tuning for transfer learning. This paper demonstrates that the regularization matters in transfer learning and that a simple change can improve the performance for the target task. \n\n(1)(2) The regularization hyper-parameters are chosen in the same way as other hyper-parameters, by cross validation. We have added a new figure (Figure 1) in Section 4.2 to show the sensitivity of the two regularization hyper-parameters. This figure also responds to question (2). \nAs for other regularization approaches on the last layer, we did not observe any advantage of L1. The paper focuses on the demonstration that we can improve the performance by using a pre-trained model as reference to regularize the parameters. \n\n(3) Thank you for pointing this out. Sub-graph labels in Figure 2 (Figure 1 in the previous version) have been added.\n\n(4) Thanks for your observation about the figure with frozen layers. We corrected it by using the set of hyper-parameter values that were tested for getting the results of Table 2 (fixing no layers) throughout the experiments with frozen layers. The updated results, now in Figure 3, correspond to your expectations. \nWe reproduced this experiment on Caltech 256 and found a similar pattern.\n\n(5) With Figure 4 (Figure 3 in the previous version), we verify the effect of -SP approaches on parameters by measuring the linear dependence of activations. Activation similarities are easier to interpret than parameter similarities and provide a view of the network that is closer to the functional perspective we are actually pursuing.
In addition, it shows that there is some connection between preserving the parameters and preserving the activations.\nFrom this figure, we can also notice that compared with L2, L2-SP always has an R2 coefficient above 0.6, which means that L2-SP is able to retain most of the pre-trained model.\nOn the other hand, we have added the results of L2-SP-Fisher in this Figure. Although there is no noticeable difference between L2-SP and L2-SP-Fisher, we can observe extremely high R2 in the first layer, which indicates large values of the Fisher matrix and the importance of the first layer.\n\n(6) Totally right. Another way to observe the advantage of L2-SP is that we need fewer training examples to achieve the same performance as L2.", "Thank you for your feedback. First, we would like to inform you that we improved our results by using the original version of ResNet. The comparisons and conclusions are qualitatively identical to the previous version.\n\n(1) Thanks for the references that are now added in the related work section (J. Yang et al, T. Tommasi et al.). Although our regularizers are technically very similar to A-SVM and PMT-SVM, they have a rather different role: they act on the representation learned by the deep networks, which is equivalent to the kernel of an SVM. The only weights that are not preserved by our approach are the weights in the last layer, which are the only weights that are regularized by A-SVM and PMT-SVM.\nOur aim in this paper is to promote regularizers that are technically similar to the ones used for SVM, but that are largely ignored in fine-tuning for transfer learning. This paper demonstrates that the regularization matters in transfer learning and that a simple change can improve the performance for the target task. \n\n(2) L2-SP-Fisher / L2-SP\n-- Thank you for your suggestion. We have added the L2-SP-Fisher results in Table 2. \n-- Figure 3 (Figure 2 in the previous version), the figure with frozen layers: in fact, the results with L2-SP-Fisher do not present significant differences from the ones using L2-SP. Figure 4 (Figure 3 in the previous version), the experiment on linear dependence: we have added L2-SP-Fisher results. There is no noticeable difference between L2-SP and L2-SP-Fisher, but we can observe extremely high R2 in the first layer, which indicates large values of the Fisher matrix and the importance of the first layer.\n-- We have also tested the performance of fine-tuned models on source tasks, just as in lifelong learning problems. The -SP approaches did much better than L2. We have also observed that L2-SP-Fisher can always do better than L2-SP. This comparison has also been added to the latest version. ", "Thank you for your feedback. First, we would like to inform you that we improved our results by using the original version of ResNet. The comparisons and conclusions are qualitatively identical to the previous version.\n\nAs for the novelty of the paper, we agree that there is no technical advance. However, if one agrees that the scheme we propose is very intuitive and obvious to implement, that it significantly improves accuracy, and that nobody uses it, we believe that we have a (simple) message to convey to the community.\n" ]
[ 6, 7, 6, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_rye7IMbAZ", "iclr_2018_rye7IMbAZ", "iclr_2018_rye7IMbAZ", "iclr_2018_rye7IMbAZ", "ryD53e9xG", "BJQD_I_eM", "Hku7RS6lf" ]
iclr_2018_rkZzY-lCb
Feat2Vec: Dense Vector Representation for Data with Arbitrary Features
Methods that calculate dense vector representations for features in unstructured data—such as words in a document—have proven to be very successful for knowledge representation. We study how to estimate dense representations when multiple feature types exist within a dataset, for supervised learning where explicit labels are available, as well as for unsupervised learning where there are no labels. Feat2Vec calculates embeddings for data with multiple feature types, enforcing that all different feature types exist in a common space. In the supervised case, we show that our method has advantages over recently proposed methods, such as enabling higher prediction accuracy and providing a way to avoid the cold-start problem. In the unsupervised case, our experiments suggest that Feat2Vec significantly outperforms existing algorithms that do not leverage the structure of the data. We believe that we are the first to propose a method for learning unsupervised embeddings that leverage the structure of multiple feature types.
rejected-papers
The paper presents an approach for learning continuous-valued vector representations combining multiple input feature sets of different types, in both unsupervised and supervised settings. The revised paper is a merger of the original submission and another ICLR submission. This meta-review takes into account all of the comments on both submissions and revisions. The merged paper is an improvement over the two separate ones. However, the contribution over previous work is still a bit unclear. It still does not sufficiently compare to, or discuss in the context of, other recent work on combining multiple feature groups. The experiments are also quite limited. The idea is introduced as extremely general, but the experiments focus on a small number of specific tasks, some of them non-standard.
train
[ "HJoZquUSM", "rJygCYHHM", "r1_2VGLlz", "HJfRKPFeM", "ByQ1mb0xM", "H1cuqDpXf", "SJSx5wTmz", "Bk_CuPamf", "rkNuOP6XG" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thank you for your insightful comments.\n\nI. NOVELTY\nAfter reviewing your two references, we believe that our novelty claims still stand:\n\n1) Regarding the \"exponential family embeddings,\" our claim refers to general-purpose embeddings, which we define as “embeddings of an unsupervised method that can be used for a variety of auxiliary prediction tasks.” Therefore, our novelty claim is about unsupervised learning of embedding models with features, while the paper that you link to is a supervised approach. \n\n2) The \"structured factorization\" work that you point out is a way to introduce structured sparsity, and could be used in tandem with our method. We define the structure in the loss function (not as regularization), as a novel way to combine features to get embeddings at different levels of granularity. \n\nWhile structured PCA requires groups of features to disappear simultaneously, the features that remain in the model are jointly projected to a common space. The \"embeddings\" that structured PCA discovers are a mixture of the remaining features. On the other hand, our approach can find an embedding for *each* value of the different *group* of features. Thus, PCA finds different vectors that are useful for a fundamentally different problem.\n\nWe were unaware of these works and agree they should be cited in a published version of our work. We will include these references in a future revision. \n\nII. EVALUATION\nOur evaluation for Unsupervised Feat2Vec differs from the standard evaluations for word embeddings since we are not dealing with language data - for example, word analogy is not applicable for the IMDB dataset. The evaluations used for supervised methods such as exponential family embeddings are also not applicable, as in our case there is no specific prediction task the embeddings are tuned for. Since building unsupervised embeddings for arbitrary feature types is not a well-studied problem, we are unaware of any standard way to evaluate them.\n\nThanks again for reading our work. I hope this response addresses your reservations.\n", "The paper makes claims about being the \"first algorithm that is able to calculate general-purpose embeddings that are not tuned for a single specific prediction task for arbitrary features\". I don't think this is true:\n\nThis paper \"Exponential family embeddings\" (https://arxiv.org/pdf/1608.00778.pdf) was in NIPS 2016 and presents a principled approach to handling various feature types.\n\n-Leveraging structure/groups in the data. I think this was popular a few years ago in the matrix factorization / dictionary learning / sparse learning community e.g. the references in:\nhttps://www.di.ens.fr/~fbach/talk_sparse_pca_DL_online.pdf\n\nbut the authors don't seem to mention any of this work. \n\nThus, I find the novelty of the paper limited.\n\n\n(2) Evaluation. I am not persuaded that some of the experiments are standard evaluation, particularly Section 4.2 \"General purpose embeddings\". For instance they take a movie dataset (IMDB) and compare the similarity of movie directors to those of actors who were cast in the same film. I don't think that is standard.\n\nPerhaps the authors could consider some of the data/tasks used in the following papers to evaluate the nature of their embeddings compared to word2vec.\n\n1. https://arxiv.org/abs/1411.4166\n\n2. https://nlp.stanford.edu/pubs/glove.pdf\n\n3. https://arxiv.org/pdf/1608.00778.pdf", "Summary:\nThis paper proposes an approach to learn embeddings for structured datasets i.e. 
datasets which have a heterogeneous set of features, as opposed to just words or just pixels. The paper proposes an approach called Feat2vec that relies on Structured Deep-In Factorization Machines -- a paper that is concurrently under review at ICLR2018, which I haven't read in depth. The paper compares against a Word2vec baseline that pools all the heterogeneous content and learns just one set of embeddings. Results are shown on IMDB movies and a proprietary education platform dataset. In both tasks, Feat2vec leads to a significant reduction in error compared to Word2vec.\n\nComments:\n\nThe paper is well written and addresses an important problem of learning word embeddings when there is inherent structure in the feature space. It is a very practically relevant problem. The novelty of the proposed approach seems limited in light of the related paper that is concurrently under review at ICLR2018, on which this paper heavily relies. Perhaps the authors should consider combining the two papers into one complete paper? The structured deep-in factorization machines allow higher-level interactions in embedding learning, which allows the authors to learn embeddings for a heterogeneous set of features. The sampling approaches proposed seem like pretty straightforward adaptations of existing methods and not novel enough.\n", "SUMMARY.\n\nThe paper presents an extension of word2vec for structured features.\nThe authors introduce a new compatibility function between features and, as in the skipgram approach, they propose a variation of negative sampling to deal with structured features.\nThe learned representation of features is tested on a recommendation-like task.\n\n\n----------\n\nOVERALL JUDGMENT\nThe paper is not clear and thus I am not sure what I can learn from it.\nFrom what is written in the paper I have trouble understanding the definition of the model the authors propose and also an actual NLP task where the representation induced by the model can be useful.\nFor this reason, I would suggest the authors make clear, with a more formal notation and the use of examples, what the model is supposed to achieve.\n\n----------\n\nDETAILED COMMENTS\nWhen the authors refer to word2vec it is not clear if they are referring to the skipgram or the cbow algorithm; please make it clear.\nBottom of page one: \"a positive example is 'semantic'\"; please use another expression to describe observable examples, 'semantic' does not make sense in this context.\nLevy and Goldberg (2014) do not say anything about factorization machines; could the authors clarify this point?\nEquation (4), what do i and j stand for? What does \\beta represent? Is it the embedding vector? How is this formula related to skipgram or cbow?\nThe introduction of structured deep-in factorization machine should be more clear, with examples that give the intuition on the rationale of the model.\nThe experimental section is rather poor: first, the authors only compare themselves with word2vec (cbow); it is not clear what the reader should learn from the results the authors got.\nFinally, the most striking flaw of this paper is the lack of references to previous works on word embeddings and feature representation; I would suggest the authors check and compare themselves with previous work on this topic.", "This paper provides a clean way of learning embeddings for structured features that can be discrete -- indicating presence / absence of a certain quality. Further, these features can be structured, i.e., a set of them are of the same 'type'.
Unlike word2vec, there is no hard constraint that similar objects must have similar representations, and so the learnt embeddings reflect the likelihood of the observed features. Therefore, this can be used as a multi-label classifier by using two feature types -- the input and the set of categories. This proposed scheme is evaluated on two datasets -- movies and education -- in a retrieval setting. \n\nI would like to see an evaluation of these features in a classification setting to further demonstrate the utility of these embeddings as compared to directly embedding the discrete features and then performing a K-way classification. For example, I am aware that http://manikvarma.org/downloads/XC/XMLRepository.html contains some interesting datasets which have a large number of discrete features and classes. ", "Dear Chair,\n\nThe two main criticisms of the paper by the reviewers were (i) lack of references and (ii) that the results of the study were not significant enough to justify the two publications that we were aiming for. For this reason, we added roughly 3x more citations to the paper to better situate the contributions in the literature. Additionally, we merged this submission with our other concurrent ICLR manuscript.\n\nWe are hoping that we can get an opportunity to share our results with the ICLR community. In the unsupervised setting, our work is the first one to enable leveraging arbitrary feature types (a more general approach than exists in the literature). In the supervised scenario, we provide evidence that our general method can have better performance than ad-hoc networks that work for a single purpose. \n\nThanks,\nAuthors\n", "Thank you for your helpful comments. Because another reviewer suggested merging our two ICLR submissions, we underwent a major revision of the paper and now have two main contributions -- that is, we can calculate embeddings in a supervised setting (labels are available), and in an unsupervised setting (labels are not available). \n\nYou stated two main criticisms of the paper:\n* References. You mentioned that the most striking flaw of the paper is the lack of references. We added roughly three times more citations (we increased references from ~12 to ~36). We believe that the paper is now much better situated in the literature.\n* Evaluation. To our knowledge we are the first ones to propose learning unsupervised embeddings for multiple feature types. The Word2Vec algorithms are other unsupervised embedding methods (though W2V only works with words), and that is why we compare with them. \nBecause of the major revision of the paper, we believe we improved the empirical result section significantly. We added 2 additional datasets (total of 4), and added 4 baselines altogether (CBOW W2V, Matrix Factorization, Collaborative Topic Regression and DeepCoNN)\n\n\nOther detailed comments:\nWe removed the reference to Levy & Goldberg (but the general point is that factorization machines are a general case of matrix factorization)\nWe rewrote the introduction to make our contributions more salient, and we believe that it is now clearer what the model achieves. We streamlined the notation. Additionally, we clarified the language surrounding Word2Vec. \n\nWe hope that these major revisions address your reservations.\n", "Thank you for the constructive comments. Your main criticism of the paper was that the contribution of our work was not significant enough to justify the two publications that we were aiming for.
Following your suggestion, we have combined the two papers and added the relevant parts of the other paper (we only extended our submission with the results that would be relevant to the combined version).\n\nWhile the original paper only addressed unsupervised learning of embeddings, the revised manuscript also addresses supervised learning of embeddings. We demonstrate that our general supervised method can have better performance than recently published single-purpose methods (DeepCoNN and Collaborative Topic Regression) on two publicly available datasets, Yelp and CiteULike. We also explain in more detail how Feat2Vec extends Factorization Machines. \n\nWe hope that this major revision addresses your reservations.\n", "Thank you for your informative comments on our paper. We have added experiments for supervised Feat2Vec, which include a multi-label prediction task on a public dataset (CiteULike) benchmarked against other state-of-the-art methods. We hope that this experiment at least partially addresses your desire to see Feat2Vec in a K-way classification task. We would also like to point you to the ranking task of classifying the director of a film based on its cast members. The 2.43% Top-1 Precision can be imagined as the performance of the unsupervised F2V embedding algorithm on a K-way classification task (as compared to Word2Vec’s CBOW algorithm). \n" ]
[ -1, -1, 7, 2, 7, -1, -1, -1, -1 ]
[ -1, -1, 3, 2, 5, -1, -1, -1, -1 ]
[ "rJygCYHHM", "iclr_2018_rkZzY-lCb", "iclr_2018_rkZzY-lCb", "iclr_2018_rkZzY-lCb", "iclr_2018_rkZzY-lCb", "iclr_2018_rkZzY-lCb", "HJfRKPFeM", "r1_2VGLlz", "ByQ1mb0xM" ]
iclr_2018_H18uzzWAZ
Correcting Nuisance Variation using Wasserstein Distance
Profiling cellular phenotypes from microscopic imaging can provide meaningful biological information resulting from various factors affecting the cells. One motivating application is drug development: morphological cell features can be captured from images, from which similarities between different drugs applied at different dosages can be quantified. The general approach is to find a function mapping the images to an embedding space of manageable dimensionality whose geometry captures relevant features of the input images. An important known issue for such methods is separating relevant biological signal from nuisance variation. For example, the embedding vectors tend to be more correlated for cells that were cultured and imaged during the same week than for cells from a different week, despite having identical drug compounds applied in both cases. In this case, the particular batch in which a set of experiments was conducted constitutes the domain of the data; an ideal set of image embeddings should contain only the relevant biological information (e.g. drug effects). We develop a general framework for adjusting the image embeddings in order to `forget' domain-specific information while preserving relevant biological information. To do this, we minimize a loss function based on distances between marginal distributions (such as the Wasserstein distance) of embeddings across domains for each replicated treatment. For the dataset presented, the replicated treatment is the negative control. We find that for our transformed embeddings (1) the underlying geometric structure is not only preserved but the embeddings also carry improved biological signal, and (2) less domain-specific information is present.
rejected-papers
This is a nice but very narrow study of domain invariance in a microscopic imaging application. Since the problem is very general, the paper should include much more substantial context, e.g. discussion of various alternative methods (e.g. the ones cited in Sun et al. 2017). In order to contribute to the broader ICLR community, ideally the paper would also include application to more than just the one task.
val
[ "Hkyq_kqgz", "HJk2HZqxM", "SkvdG35xz", "ByRaH8pXM", "BJoM5IpXG", "Bk5rFIamG", "rkIzDL67M", "HyAuXL6Qf", "S1HeGU6Xf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "The paper discusses a method for adjusting image embeddings in order tease apart technical variation from biological signal. A loss function based on the Wasserstein distance is used. \nThe paper is interesting but could certainly do with more explanations. \n\nComments:\n1. It is difficult for the reader to understand a) why Wasserstein is used and b) how exactly the nuisance variation is reduced.\nA dedicated section on motivation is missing.\n\n2. Does the Deep Metric network always return a '64-dim' vector? \nHave you checked your model using different length vectors?\n\n3. Label the y-axis in Fig 2.\n\n4. The fact that you have early-stopping as opposed to a principled regularizer also requires further substantiation. ", "The authors present a method that aims to remove domain-specific information while preserving the relevant biological information between biological data measured in different experiments or \"batches\". A network is trained to learn the transformations that minimize the Wasserstein distance between distributions. The wasserstein distance is also called the \"earth mover distance\" and is traditionally formulated as the cost it takes for an optimal transport plan to move one distribution to another. In this paper they have a neural network compute the wasserstein distance using a different formulation that was used in Arjovsky et al. 2017, finds a lipschitz function f, which shows the maximal difference when evaluated on samples from the two distributions. Here these functions are formulated as affine transforms of the data with parameters theta that are computed by a neural network. Results are examined mainly by looking at the first two PCA components of the data. \n\n\nThe paper presents an interesting idea and is fairly well written. However I have a few concerns:\n1. Most of the ideas presented in the paper rely on works by Arjovsky et al. (2017), Gulrajani et al. (2017), and Gulrajani et al. (2017). Some selections, which are presented in the papers are not explained, for example, the gradient penalty, the choice of \\lambda and the choice of points for gradient computation.\n2. The experimental results are not fully convincing, they simply compare the first two PC components on this Broad Bioimage benchmark collection. This section could be improved by demonstrating the approach on more datasets.\n3. There is a lack comparison to other methods such as Shaham et al. (2017). Why is using earth mover distance better than MMD based distance? They only compare it to a method named CORAL and to Typical\nVariation Normalization (TVN). What about comparison to other batch normalization methods in biology such as SEURAT? \n4. Why is the affine transform assumption valid in biology? There can definitely be non-linear effects that are different between batches, such as ion detection efficiency differences. \n5. Only early stopping seems to constrain their model to be near identity. Doesn't this also prevent optimal results ? How does this compare to the near-identity constraints in resnets in Shaham et al. ?\n\n", "This contribution deal with nuisance factors afflicting biological cell images with a domain adaptation approach: the embedding vectors generated from cell images show spurious correlation. The authors define a Wasserstein Distance Network to find a suitable affine transformation that reduces the nuisance factor. 
The evaluation on a real dataset yields correct results, this approach is quite general and could be applied to different problems.\n\nThe contribution of this approach could be better highlighted. The early stopping criteria tend to favor suboptimal solution, indeed relying on the Cramer distance is possible improvement.\n\nAs a side note, the k-NN MOA is central to for the evaluation of the proposed approach. A possible improvement is to try other means for the embedding instead of the Euclidean one.\n\n", "We appreciate the comments and suggestions of all the reviewers, which we agreed pointed out important ways in which our work could be improved. We have amended our manuscript with more thorough descriptions and explanations, and added new results. Specific responses to the reviewer is given below:\n\n1. Most of the ideas presented in the paper rely on works by Arjovsky et al. (2017), Gulrajani et al. (2017), and Gulrajani et al. (2017). Some selections, which are presented in the papers are not explained, for example, the gradient penalty, the choice of \\lambda (now replaced by \\gamma) and the choice of points for gradient computation.\n\nWe would like to point out that while we do rely on the methods of Arjovsky et al. (2017), Gulrajani et al. (2017) for estimating the Wasserstein distance, the application of these methods for correcting nuisance variation is novel and independent of these methods. This aspect of our work is a novel way of removing nuisance variation, a significant problem in high-throughput biological experiments. Our approach is based on minimizing the sum of pairwise Wasserstein distances of a transformed set of coordinates. This method is inspired by, but distinct from, finding the Wasserstein barycenter. We hope that our approach demonstrates a novel and general framework for removing nuisance variation.\n\nWe have added more thorough explanations of our approach for approximating the Wasserstein distance, including the choice of \\lambda and the choice of points for the gradient computation. \n\n2. The experimental results are not fully convincing, they simply compare the first two PC components on this Broad Bioimage benchmark collection. This section could be improved by demonstrating the approach on more datasets.\n\nWhile we use the first two PC components to illustrate the effect of our transformation, we rely on other quantitative metrics for evaluating the performance of our framework. Specifically, we use domain classification accuracy to assess the extent to which nuisance variation has been removed (discussed in section 3.2.3 and shown in table 2). We also included the k-NN MOA assignment metric to evaluate the quality of the transformed embeddings (discussed in section 3.2.1 and shown in table 1). \n\nIn addition, in the revised manuscript we added another quantitative metric, the average silhouette index of the MOAs to better evaluate the effects of our transformation (see section 3.2.2 and table 3).\n\nIn our original manuscript the k-NN MOA metric did not show significant differences among the evaluated methods when using cross validation leaving out half the compounds at a time. This occurred because there were not enough compounds remaining in each test/evaluation cross validation folds. However, in our revised manuscript our new analysis using a leave-one-compound-out cross validation showed that the framework can be used to attain a significant improvement in the k-NN MOA metric. 
Leave-one-compound-out cross validation has been used in other studies [Godinez, William J., et al. “A Multi-Scale Convolutional Neural Network for Phenotyping High-Content Cellular Images.” Bioinformatics (2017)]. This method represents a more realistic setting in which a compound with unknown MOA takes the role of the held-out compound.\n\nThe reasons we base our results specifically on the BBBC021 dataset in this paper are:\n\nThis is an open dataset that has been used as a standard. We would like to produce a direct comparison with the existing literature. These include:\n\n1. Ljosa, Vebjorn, et al. \"Comparison of methods for image-based profiling of cellular morphological responses to small-molecule treatment.\" Journal of biomolecular screening 18.10 (2013): 1321-1329.\n2. Ando, D. Michael, Cory McLean, and Marc Berndl. \"Improving Phenotypic Measurements in High-Content Imaging Screens.\" bioRxiv (2017): 161422.\n3. Pawlowski, Nick, et al. \"Automating Morphological Profiling with Generic Deep Convolutional Networks.\" bioRxiv (2016): 085118.\n4. Singh, S., et al. \"Pipeline for illumination correction of images for high‐throughput microscopy.\" Journal of microscopy 256.3 (2014): 231-236.\n\nWe have tested our procedure on another dataset with promising results, but unfortunately we are unable to release it at this time.\n\nIn addition, we have added the dosage response plots before and after the transformation. These plots show qualitatively that our transformation preserves the dosage-response structure.\n\nPoints 3-5 will be addressed in a separate comment due to the character limit.", "We appreciate the comments and suggestions of all the reviewers, which we agree pointed out important ways in which our work could be improved. We have amended our manuscript with more thorough descriptions and explanations, and added new results. Specific responses to the reviewer are given below:\n\n1. It is difficult for the reader to understand a) why Wasserstein is used and b) how exactly the nuisance variation is reduced. A dedicated section on motivation is missing.\n\nWe have expanded a section to include a qualitative description of the Wasserstein distance and a motivating concept for our work, the Wasserstein barycenter. The idea of the Wasserstein barycenter is two-fold. First, we want to match the distributions after transformation. Second, we want the perturbation to be as small as possible. These two ideas are reflected in our method. Although other metrics may also be used, the Wasserstein distance and similar related metrics capture relevant geometric information about the distributions. We have extended our description of the Wasserstein distance and its application in our paper.\n\n2. Does the Deep Metric network always return a '64-dim' vector? Have you checked your model using different length vectors?\n\nThe deep neural network we used always returns a 64-dimensional vector by design, although we do not expect the results to be sensitive to the dimensionality, as long as there are sufficiently many data points compared to the inherent dimension of the data. Additionally, we have experimented with our network on synthetic low-dimensional data as well, obtaining the expected results.\n\n3. Label the y-axis in Fig 2.\n\nDone.\n\n4. The fact that you have early-stopping as opposed to a principled regularizer also requires further substantiation. \n\nWe initially used early stopping mostly to show our framework works as a proof-of-concept.
Early stopping provides a simpler framework under which fewer parameters need to be optimized (i.e. the early stopping time instead of the separate regularization weights used to limit the transformation). In our revised manuscript we have included experiments carried out with a regularizer. Our results showed that the results from early stopping were comparable to, and even better than, those from the experiments we carried out with a regularizer, which did not have their hyperparameters fine-tuned. We agree that a principled regularizer may yield better results once all of its hyperparameters have been optimized. However, tuning the hyperparameters is a potentially difficult problem outside the scope of this work.", "5. Only early stopping seems to constrain their model to be near identity. Doesn't this also prevent optimal results? How does this compare to the near-identity constraints in resnets in Shaham et al.?\n\nWe used early stopping to show our framework works as a proof-of-concept, and we agree it is not guaranteed to yield optimal results. We agree that a principled regularizer may in principle yield better results, because of (i) the larger space of penalty hyperparameters and (ii) the potentially non-direct optimization path in the case of a penalty term. In our revised manuscript we have included experiments carried out with a regularizer, where the transformation was constrained to be close to the identity, similarly to Shaham et al. We have found that early stopping was comparable to a penalty term (and actually performed better). However, we have not done an extensive search in the space of penalty hyperparameters. We expect that even once the penalty hyperparameters are fine-tuned, the results will be comparable between the early stopping and regularizer cases, but not much better. This is because the perturbation was quite small, and therefore we expect that as learning progresses the transformed embeddings move in approximately straight paths. We have added the above comments to our manuscript (section 3.4.3).", "3. There is a lack of comparison to other methods such as Shaham et al. (2017). Why is using the earth mover distance better than an MMD-based distance? They only compare it to a method named CORAL and to Typical Variation Normalization (TVN). What about comparison to other batch normalization methods in biology such as SEURAT? \n\nWe added a more thorough motivation in our manuscript for the usage of the earth mover distance.\n\nThe MMD distance is closely related to the earth mover distance. Specifically, the earth mover distance between two distributions \\nu_r and \\nu_g is equal to \n\n\\sup_{\\|f\\|_L \\le 1} E_{x\\sim \\nu_r} f(x) - E_{x\\sim \\nu_g} f(x).\n\nAbove, f belongs to the space of Lipschitz functions with constant 1 (i.e. f is a contraction).\n\nTo compute the MMD, we suppose first that we have a kernel function k: \\chi \\times \\chi \\to \\mathbb{R} with an associated reproducing kernel Hilbert space \\mathcal{H}, and the condition that f is Lipschitz-1 is replaced by the condition that the \\mathcal{H}-norm of f is bounded by 1. Under this condition, Shaham et al. (2017) provide a sample estimate for the MMD distance, based on the supposed kernel.
In their work, the kernel is constructed as the sum of three Gaussian kernels with variances \\sigma_i chosen to be m/2, m, and 2m, where m is the median of the average distance from a point in the target sample to its nearest 25 neighbors.\n\nIn our framework, we do not have a `source' and `target’ distribution: instead, we match an arbitrary number of distributions indexed by both treatment and domain. Therefore, to apply the MMD sample estimate of Shaham et al. (2017), we would have to continuously update the chosen variances \\sigma_i. We think that applying this sample estimate in our framework may be fruitful and give similar results to our method, but because of the added complexity of fitting this cost function into our framework we believe this experiment is outside the scope of this paper.\n\nThe alignment method in SEURAT we believe the reviewer is referring to is described here:\nhttps://www.biorxiv.org/content/biorxiv/early/2017/07/18/164889.full.pdf\n\nThe SEURAT method we think the reviewer is referring to is based on applying canonical-correlation analysis between two datasets, followed by `dynamic time warping’ to correct for changes in density. As with the application in Shaham et al. (2017), the method is specific to aligning two datasets. In our paper, we are interested in the case of many domains, so it is not clear to us how to directly compare our results with those in SEURAT.\n\n4. Why is the affine transform assumption valid in biology? There can definitely be non-linear effects that are different between batches, such as ion detection efficiency differences.\n\nFor the dataset of interest, we applied a universal transformation to the embeddings such that embeddings for negative controls are standardized to have 0 mean and unit variance in all coordinates. We observed that increasing dosages of each compound had embeddings shifted in roughly the same direction by increasing amounts, and the variances along the largest principal axes also increased in a manner consistent with the embeddings undergoing an affine transform. In this setting, we elected to model the impact of batch effects by affine transforms, the intuition being that we can think of batch effects as resembling small, random, drug-like perturbations resulting from unobserved covariates. This result can be motivated if we assume that the perturbations of applying a treatment generally have small effects on the embeddings. We do not expect this assumption to hold generally. The manuscript has been updated to reflect this motivation.\n\nWe hope in the future to test relaxing this assumption, or instead even to fine-tune the original deep neural network to control nuisance variation.", "We appreciate the comments and suggestions of all the reviewers, which we agree pointed out important ways in which our work could be improved. We have amended our manuscript with more thorough descriptions and explanations, and added new results. Specific responses to the reviewer are given below:\n\n1. The contribution of this approach could be better highlighted. The early stopping criterion tends to favor suboptimal solutions; indeed, relying on the Cramer distance is a possible improvement.\n\nWe remark that our main goal was to introduce a general flexible framework, and using the Wasserstein distance was a demonstration of a specific choice that can yield substantial improvement.
In our approach, we do not separate `target' and `source' distributions; we may include many domains, and can have different treatments across the various domains. We highlight that other distances may be used in our framework, such as the Cramer distance or the MMD distance. This may be preferable since the Cramer distance has unbiased sample gradients (Bellemare et al. 2017). Using the Cramer distance could reduce the number of steps required to adjust the Wasserstein distance approximation for each step of training the embedding transformation.\n\nWe have added a discussion about early stopping versus a penalty term in our manuscript (section 3.4.3).\n\nTo address the issue of potentially overfitting the stopping time, we have included a cross validation procedure based on holding out a single compound at a time. We found that the optimal stopping time was consistent regardless of the choice of the held-out compound. We discuss this in more detail in section 3.3.1.\n\n2. As a side note, the k-NN MOA is central to the evaluation of the proposed approach. A possible improvement is to try other means for the embedding instead of the Euclidean one.\n\nWe remark that one of the main reasons we used the k-NN MOA metric is to compare our work with previous approaches in the existing literature, and the reviewer is correct to point out there may be better ways to improve this metric, both for validating the quality of embeddings and for making MOA predictions for unknown compounds. In our dataset, we also expect that the embeddings for each treatment are sufficiently localized (as vectors, they are close to each other in the sense of having similar length and angle), so that the choice of centroid type would not alter them much. In the case when the embeddings are not sufficiently localized, one alternative would be to use an estimator for the Fréchet mean [Salehian, Hesamoddin, et al. \"An efficient recursive estimator of the Fréchet mean on a hypersphere with applications to medical image analysis.\" Mathematical Foundations of Computational Anatomy (2015)].\n", "We summarize the main updates we made in our manuscript. For more details please also see our responses to the reviewers.\n1. In Section 2.2 we expanded our description and motivation for our framework.\n2. In Section 2.3.1 we added motivation for our choice of transformation.\n3. We revised Section 2.3.2 to include more details about how a regularizer may be used in our framework. In the same section we also included more detailed explanations, including for the gradient penalty, the choice of lambda, and the choice of points for gradient computation.\n4. In Section 3.2.1 we clarified how the k-NN metric was computed.\n5. We added a new metric (the silhouette metric), described in Section 3.2.2.\n6. We modified our cross-validation procedure. Instead of using two folds over the compounds, we now use a leave-one-compound-out procedure. This is described in Section 3.3.1. The new method detects improvements in our metrics more sensitively.\n7. We estimated the standard error in some of the metrics using a bootstrapping procedure, described in Section 3.3.2.\n8. In addition to early stopping, we tried using a penalty term for several hyperparameters. The resulting differences are discussed in Section 3.4.3.\n9. We tried additional experiments which we describe in Section 3.5.\n10. We added new potential improvements and modifications in Section 4.1.\n11. 
We added the learning curves for various values of penalty hyperparameters in Figure 4 in Appendix A.\n12. We added plots showing the dosage response for different transformations in Figure 5 in Appendix B.\n13. We added a heatmap of the similarity matrix of the embedding space in Figure 6 in Appendix C." ]
[ 5, 4, 7, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H18uzzWAZ", "iclr_2018_H18uzzWAZ", "iclr_2018_H18uzzWAZ", "HJk2HZqxM", "Hkyq_kqgz", "HJk2HZqxM", "HJk2HZqxM", "SkvdG35xz", "iclr_2018_H18uzzWAZ" ]
iclr_2018_BJvVbCJCb
Neural Clustering By Predicting And Copying Noise
We propose a neural clustering model that jointly learns both latent features and how they cluster. Unlike similar methods, our model does not require a predefined number of clusters. Using a supervised approach, we agglomerate latent features towards randomly sampled targets within the same space whilst progressively removing the targets until we are left with only targets which represent cluster centroids. To show the behavior of our model across different modalities, we apply it to both text and image data and achieve very competitive results on MNIST. Finally, we also provide results against baseline models for fashion-MNIST, the 20 newsgroups dataset, and a Twitter dataset we ourselves create.
rejected-papers
The paper proposes an approach to jointly learning a data clustering and latent representation. The main selling point is that the number of clusters need not be pre-specified. However, there are other hyperparameters and it is not clear why trading # clusters for other hyperparameters is a win. The empirical results are not strong enough to overcome these concerns.
train
[ "ryMv4SZ7M", "rkX7hRmeM", "Hk2HKdrlz", "H1flxytxf", "SyR-e8uWG", "rJmxgLuZG", "r1l0JIdWz", "SyUwqSoJf", "HJyr5Hokf", "SkrcjjVJM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "We have made some changes and additions to the paper during this rebuttal/discussion period. Our main changes are to add further experiments to demonstrate the robustness of the NATAC training method, and to add more baselines to our text-based experiments. In full, we have:\n\n* Added other clustering methods into the MNIST comparison table.\n* Updated the 20news NATAC results - we have found some slightly better performing hyperparameters.\n* Included NATAC-k and AE-k Results for both the 20 Newsgroups and Twitter Datasets.\n* Included a Comparison table with some other clustering algorithms for 20 Newsgoups, we perform competitively although our model converges on significantly more clusters.\n* Added an experiment to empirically show that the NATAC training method is fairly stable wrt final NMI and converged number of clusters. We only had the time to train multiple runs of models on 20 Newsgroups - so we were unable to officially comment on the other datasets.\n* Added an experiment to show how the end performance of a model changes with increasing amounts of pre-training.\n* Added an experiment to show that changes in the value of lambda (increase/decrease by a power of 10) do not greatly affect the end performance of the model.\n* Altered NATAC training so it is a bit more intuitive to understand.\n* Added more discussion about the NAT training framework.\n* Made some small edits in the introduction and conclusion.\n* Redone the Twitter dataset experiments. We used slightly different hyper-parameters which converged to fewer clusters.", "This paper presents an algorithm for clustering using DNNs. The algorithm essentially alternates over two steps: a step that trains the DNN to predict random targets, and another step that reassigns the targets based on the overall matching with the DNN outputs. The second step also shrinks the number of targets over time to achieve clustering. Intuitively, the randomness in target may achieve certain regularization effect.\n\nMy concerns:\n1. There is no analysis on what the regularization effect is. What advantage does the proposed algorithm offer to an user that a more deterministic algorithm cannot?\n2. The delete-and-copy step also introduces randomness, and since the algorithm removes targets over time, it is not clear if the algorithm consistently optimizes one objective throughout. Without a consistent objective function, the algorithm seems somewhat heuristic.\n3. Due to the randomness from multiple operations, the experiments need to be run multiple times, and see if the output clustering is sensitive to it. If it turns out the algorithm is quite robust to the randomness, it is then an interesting question why this is the case.\n4. Does the Hungarian algorithm used for matching scales to much larger datasets?\n5. While the algorithm empirically improve over k-means, I believe at this point combinations of DNN with classical clustering algorithms already exist and comparisons with such stronger baselines are missing. The authors have listed a few related algorithms in the last paragraph on page 1. I think the following one is also relevant:\n-- Law et al. Deep spectral clustering learning. ICML 2015.\n\n", "This ms presents a new clustering method which combines deep autoencoder and a recent unsupervised representation learning approach (NAT; Bojanowski and Joujin 2017). The proposed method can jointly learn latent features and the cluster assignments. 
Then the method is tested on several image and text data sets.\n\nI have the following concerns:\n\n1) The paper is not self-contained. The review of NAT is too brief and makes it too hard to understand the remainder of the paper. Because NAT is a fundamental starting point of the work, it would be nice to elaborate on the NAT method to make it more understandable.\n\n2) Predicting the noise has no guarantee that the data items are better clustered in the latent space. Especially, projecting the data points to a uniform sphere can badly blur the cluster boundaries.\n\n3) How should we set the parameter lambda? Is it data dependent?\n\n4) The experimental results are a bit less satisfactory:\na) It is known that unsupervised clustering methods can achieve 0.97 accuracy for MNIST. See for example [Ref1, Ref2, Ref3].\nb) Figure 3 is not satisfactory. Actually t-SNE on raw MNIST pixels is not bad at all. See https://sites.google.com/site/neighborembedding/mnist\nc) For the 20 Newsgroups dataset, NATAC achieves 0.384 NMI. By contrast, the DCD method in [Ref3] can achieve 0.54.\n\n5) It is not clear how to set the number of clusters. More explanations are appreciated.\n\n[Ref1] Zhirong Yang, Tele Hao, Onur Dikmen, Xi Chen, Erkki Oja. Clustering by Nonnegative Matrix Factorization Using Graph Random Walk. In NIPS 2012.\n[Ref2] Xavier Bresson, Thomas Laurent, David Uminsky, James von Brecht. Multiclass Total Variation Clustering. In NIPS 2013.\n[Ref3] Zhirong Yang, Jukka Corander and Erkki Oja. Low-Rank Doubly Stochastic Matrix Decomposition for Cluster Analysis. Journal of Machine Learning Research, 17(187): 1-25, 2016.", "This paper proposes a neural clustering model following the \"Noise as Target\" technique. Combined with a reconstruction objective and a \"delete-and-copy\" trick, it is able to cluster the data points into different groups and is shown to give competitive results on different benchmarks.\n\nIt is nice that the authors tried to extend the \"noise as target\" idea to the clustering problem, and proposed the simple \"delete-and-copy\" technique to group different data points into clusters. Even though a little ad hoc, it seems promising based on the experimental results. However, it is unclear to me why it is necessary to have the optimal matching here and why the simple nearest target would not work. After all, the cluster membership is found based on the nearest target in the test stage. \n\nAlso, the authors should provide a more detailed description regarding the scheduling of the alpha and lambda values during training, and how sensitive the final clustering performance is to it. The authors cited the lack of a requirement for \"a predefined number of clusters\" as one of the contributions, but the tuning of alpha seems more concerning.\n\nI like that the authors experimented with different benchmarks, but the lack of comparisons with existing deep clustering techniques is definitely a weakness. The only baseline comparison provided is k-means clustering, but the comparisons were somewhat unfair. For all the text datasets, there were no comparisons with k-means on the features learned from the auto-encoders or clusterings learned with a similar number of clusters. The comparisons for the Twitter dataset even set character-level against word-level models. 
It would be more convincing to show the superiority of the proposed method over existing ones on the same ground.\n\nSome other issues regarding quantitative results:\n- In Table 1, there are 152 clusters for the 10-d latent space after convergence, but there are 61 clusters for the 10-d latent space in Table 2 for the same MNIST dataset. Are they based on different alpha and lambda values? \n- Why does NATAC perform much better than NATAC-k? Would NATAC-k need a different number of clusters than the one from NATAC? The number of centroids learned from NATAC may not be good for k-means clustering.\n- It seems like the performance of AE-k increases with the dimensionality of the latent space for Fashion-MNIST. Would AE-k beat NATAC with a different dimensionality of latent space and k?", "Thank you for the insightful comments!\n\nRegarding the points you have made:\n\n1&2: In NATAC, the targets are uniformly, randomly sampled from a unit sphere (one for each example in the dataset). With the large size of the datasets in our experiments (between 20 and 70 thousand examples), the randomly sampled targets should very closely approximate a uniform distribution on the sphere. In the warm-up stage of training, we do not utilize the delete-and-copy mechanism, meaning that the initial objective is to both autoencode examples _and_ uniformly map the latent representations onto a unit sphere. Therefore, the latent representations serve as a lossy compression of the input data whilst being incentivised to be spread uniformly over a unit sphere. \n\nAlthough one of the objectives in the warm-up stage of training is to have uniformly distributed latent representations, there will always be inconsistencies: as the reconstruction loss ‘encourages’ similar examples to have similar latent representations, the distribution of the latent representations will be denser in some regions and sparser in others. \n\nIn the transition and clustering stages of training, we then gradually perturb the distribution of the targets to more closely match the imperfect distribution of the latent representations (we use the heuristic delete-and-copy mechanism), and also allow targets to agglomerate.\n\nThe randomness comes from two sources:\nOne is the initial random assignment of examples in the dataset X to the latent targets Y, together with training in mini-batches. As mentioned before, the warm-up stage of training is responsible for finding a good assignment of the targets to the input examples, and the targets are a very close approximation of a uniform distribution of points on a sphere.\n\nThe delete-and-copy function can be seen as a way of gradually re-aligning the distribution of the targets more closely to the distribution of the latent representations made by the encoder. The randomness comes from the fact that we choose which targets to delete at random, rather than deterministically, during training. However, the delete-and-copy mechanism is only stochastic during the transition phase of training - after which we delete-and-copy with a probability of 1 (alpha = 1).\n\nIndeed, our delete-and-copy method is a randomized algorithm. Intuitively, what it tries to achieve is to gradually remove targets from the less dense areas (and clone targets in the denser areas); in turn, this allows the latent representation more freedom (by making the constraints easier to meet), so it is easier to reconstruct the example from its latent representation. 
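(To make this concrete, a minimal numpy sketch of one stochastic delete-and-copy step; the names are our own choosing, and this is simplified relative to our actual implementation:)

    import numpy as np

    def delete_and_copy_step(latents, targets, alpha, rng):
        # For each example: with probability alpha, delete its assigned target and
        # replace it with a copy of the batch target nearest to its latent vector.
        new_targets = targets.copy()
        for i in range(latents.shape[0]):
            if rng.random() < alpha:
                dists = np.linalg.norm(targets - latents[i], axis=1)
                # The nearest target may be the example's own, in which case
                # nothing changes for this example.
                new_targets[i] = targets[np.argmin(dists)]
        return new_targets

Here rng is e.g. np.random.default_rng(); with alpha = 0 the step is a no-op (warm-up stage), and with alpha = 1 it is applied to every example (clustering stage).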
\n\nThe random effect of shifting targets may help avoid overfitting (memorizing certain locations in latent space ‘off-by-heart’). In a way it is reminiscent of VAEs, where instead the latent representation is perturbed before it is passed to the decoder. We will include a brief discussion about this intuition in the paper - thank you for raising this!\n\nWe do not know whether it is possible to achieve similar results with a deterministic perturbation algorithm. However, our delete-and-copy method has outperformed all of the deterministic heuristics we have tried (e.g. simple rules like removing one target from the least populated area and cloning one target in the most dense area). It is a very interesting question for future research to see whether more elaborate heuristics, and in particular deterministic ones, can yield results better than we’ve obtained using our simple randomized delete-and-copy rule. \n\nWe are not aware of other neural methods that do not require a set number of clusters (with the one exception being another paper submitted to this ICLR), so we were unable to comment on the difference between this method and more deterministic methods. \n\n\n3: We found the outcomes of training models with similar hyperparameters to be fairly similar. We plan on showing the variability of training the best performing model on 20 Newsgroups (as these models are quick to train) to empirically show this in the paper.\n\n4: The Hungarian method itself runs in O(N^3) complexity - significantly more efficient than a brute-force search (O(N!)). This would mean computing the optimal assignments over the whole dataset would be expensive. However, we train using mini-batches, meaning that the Hungarian algorithm is only computed on a batch of data, not the whole dataset. This means that the runtime of a single forward/backward pass of the model does not change wrt the size of the dataset, as the batch size remains constant. \n\n5: Thank you for bringing this paper to our attention, we will certainly mention this in our revision. Unfortunately, the paper does not report NMI on the datasets we do, so we are unable to compare performance with our method.\n", "Thank you for the helpful comments!\n\nRegarding the points that you have made:\n\n1: The NAT training framework aims to match each latent representation to a unique target in the latent space. By doing this, the model learns a mapping to latent space in which the distribution of the latent representations of the dataset very closely matches the distribution of the noise-targets. This is done by jointly learning the encoder function (parameterized by a neural network) and also learning the assignment of examples to their best-fitting targets in latent space. \n\nAs we do not know the ideal assignment at the beginning of training, we randomly assign each example a noise-target at the start. During training, we progressively re-assign targets to different examples in the dataset so as to minimise the total distance between latent representations and their corresponding targets (we call these optimal assignments). To find the optimal assignments, we can compute the distance from every latent representation to each target in the dataset and use the Hungarian algorithm to find the optimal assignment of latent representations to targets. 
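(For concreteness, a minimal sketch of this assignment step using SciPy's Hungarian-style solver; the names are ours:)

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def optimal_assignment(latents, targets):
        # Pairwise Euclidean distances form the cost matrix of a bipartite matching.
        cost = np.linalg.norm(latents[:, None, :] - targets[None, :, :], axis=-1)
        row_ind, col_ind = linear_sum_assignment(cost)  # minimises the total distance
        return col_ind  # col_ind[i] is the index of the target matched to latents[i]
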
However, finding the optimal assignment for an entire dataset of latent representations and noise-targets would be very expensive (indeed, the Hungarian algorithm has O(N^3) complexity), so instead we train using randomly selected batches from the dataset (we use a batch size of 100). For a batch of latent representations and targets, calculating the optimal assignments is feasible, and it also means we can train NAT models similarly to other deep learning models (i.e. mini-batch SGD).\n\nWe agree that the section discussing the NAT training framework is quite brief. We decided to remove a lot of discussion to reduce the paper down to 8 pages. We plan to include more discussion on the NAT framework in an upcoming revision.\n\n2: The NATAC model does rely on some heuristics, which means we do not have analytic guarantees for this method. We set the latent space of our model to be the surface of a d-dimensional sphere (similarly to the work of Bojanowski and Joulin). Although this might be less expressive than an un-normalized latent space, we found that placing both the noise targets and the latent representations on the manifold is empirically much more effective for training (see Appendix C.1). \n\n3: The values of lambda and alpha do change during training. There is some discussion in the appendix about how exactly we set these values. In these experiments, we used some trial-and-error to find fitting values. We aim to include some experiments showcasing how the clustering algorithm behaves with different values for lambda (and alpha). Expect an update to the paper including this soon.\n\n\n4: Thank you for bringing these papers to our attention. We will certainly include these in our revision. We believe a key contribution of the paper is that our method does not need a prior number of clusters - as real-world use cases for clustering usually have no prior knowledge of the true number of clusters in the data. However, it is clear that several methods of unsupervised clustering (which do require a given number of clusters) outperform our method on MNIST, which we will mention in our revision. \nRegarding 4 c) - Note that the evaluation of NATAC in our paper is different to that in the DCD paper: the NMI scores we report for the experiments are taken from the test set of 20 Newsgroups after training on the train set (using the ‘bydate’ version of the dataset). The DCD paper cites 20K training examples for its train set - suggesting that the clustering was performed using both the train and test sets, with NMI reported on the whole dataset. If that is the case, we will report NMI values for our method trained in this way, and compare the results to those mentioned in the paper.\n\n\n5: During training, the model successively agglomerates examples to the same centroid by deleting an example’s assigned target and instead assigning the example a copy of another target (using the delete-and-copy mechanism). At the same time, the model is also trying to optimize the auxiliary objective, by having as little reconstruction error as possible. \nThis means that, at some point, the model does not delete any more centroids during training, as agglomerating any more points would incur a huge reconstruction loss penalty. Therefore the model converges onto the number of clusters during training. 
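(As an illustrative sketch only, with names and the exact placement of the lambda weighting being our assumptions rather than the precise objective in the paper, the kind of combined loss being traded off here could look like:)

    import numpy as np

    def combined_loss(x, x_hat, z, assigned_target, lam):
        # Reconstruction error of the autoencoder plus a NAT-style alignment term
        # pulling each latent z towards its assigned noise target.
        reconstruction = np.mean((x - x_hat) ** 2)
        alignment = np.mean((z - assigned_target) ** 2)
        return reconstruction + lam * alignment
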
We discuss convergence of the model in section 2.3 (Implementation Details).\n", "Thank you for the insightful comments!\n\nRegarding the need for the Hungarian algorithm in our model:\nWe only use the Hungarian algorithm during the first two stages of training, i.e. during the warm-up and transition stages. Subsequently, assignment between targets and latent representations is done purely by assigning a target to its nearest latent representation. \n\nThe aim early on in training is to (1) pre-train the encoder and decoder networks and (2) have the latent representations distributed (close to) uniformly across the latent space. We ensure this by using the NAT objective, which makes the model minimise distances between latent representations and their targets.\n\n\nIn contrast to the above, were we to assign targets to their nearest latent representations from the very beginning, we would risk the model collapsing in on itself, as the encoder function would not have learned a stable mapping to latent space. We can corroborate this empirically through the ample runs we made early on whilst working on this paper. \n\n\nWith regard to your comments on the tuning of the alpha and lambda values:\nTo give an indication of the robustness of our method, we plan to include experiments highlighting how much the lambda (and alpha) values affect training on MNIST. In short, we do not tune the value of alpha very much in our experiments (0 at the beginning, then a gradual increase to 1 after some epochs). However, drastically different values, for example alpha set to 1 throughout training, do lead to poor results.\n\nWith regards to the text datasets:\nWe wholeheartedly agree that the baselines used in the text-based experiments are quite weak. Our intention was not to prove that our method is optimal for text clustering; rather, we wanted to show that the technique generalizes across modalities.\n\n\nGiven your feedback, we plan on including results from models of an identical architecture trained as vanilla autoencoders (AE-k) and from k-means using the learned representations of the NATAC model. Additionally, we plan to train a model on the whole of 20-newsgroups and compare the NMI to clustering algorithms that require a predefined number of clusters. Stay tuned, the results should be in within a week.\n\nYou rightfully point out the discrepancy between the number of clusters for the model in Table 1 and Table 2. We use the same hyperparameters for both of these experiments; however, the model in Table 2 is trained on the whole of MNIST, whereas the model in Table 1 is trained on the train and validation sets only (to allow for evaluation on the test set). We plan to include some experiments which show the variability of our training method (converged number of clusters, NMI score, similarity to clusters in other models). These are the next priority after including the updated text-dataset results.\n\nThe question of whether the same number of clusters would be optimal for NATAC-k is an interesting one. Indeed, seeing as many of the clusters in the NATAC models contain very few examples (the ‘dead centroids’), it would be a little unfair to compare using k-means with the same number of clusters. 
We are currently discussing what a more sensible baseline might be.\n", "We will certainly compare the results of the paper, along with other related clustering papers in the ICLR review, to our approach.", "Since submission, several papers have come to our attention which we would like to include and discuss:\n\n* Learning Discrete Representations via Information Maximizing Self-Augmented Training\n* Deep Continuous Clustering (ICLR 2018 submission)\n* SpectralNet: Spectral Clustering Using Deep Neural Networks (ICLR 2018 submission)\n* Learning Latent Representations In Neural Networks For Unsupervised Clustering Through Pseudo Supervision And Graph Based Activity Regularization (ICLR 2018 submission)\n\nAdditionally, some of the above report higher NMI scores than our model (although they require a set number of clusters). We will adapt the paper accordingly.", "I believe that you should also cite “Learning Discrete Representations via Information Maximizing Self-Augmented Training” (ICML 2017) http://proceedings.mlr.press/v70/hu17b.html.\nThis paper is closely related to your work and is also about unsupervised clustering using deep neural networks.\nAs far as I know, the proposed method, IMSAT, is the current state-of-the-art method in deep clustering (November 2017). Could you compare your results against their results?" ]
[ -1, 5, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJvVbCJCb", "iclr_2018_BJvVbCJCb", "iclr_2018_BJvVbCJCb", "iclr_2018_BJvVbCJCb", "rkX7hRmeM", "Hk2HKdrlz", "H1flxytxf", "SkrcjjVJM", "iclr_2018_BJvVbCJCb", "iclr_2018_BJvVbCJCb" ]
iclr_2018_S191YzbRZ
Prototype Matching Networks for Large-Scale Multi-label Genomic Sequence Classification
One of the fundamental tasks in understanding genomics is the problem of predicting Transcription Factor Binding Sites (TFBSs). With hundreds of Transcription Factors (TFs) as labels, genomic-sequence based TFBS prediction is a challenging multi-label classification task. There are two major biological mechanisms for TF binding: (1) sequence-specific binding patterns on genomes known as “motifs” and (2) interactions among TFs known as co-binding effects. In this paper, we propose a novel deep architecture, the Prototype Matching Network (PMN), to mimic the TF binding mechanisms. Our PMN model automatically extracts prototypes (“motif”-like features) for each TF through a novel prototype-matching loss. Borrowing ideas from few-shot matching models, we use the notion of a support set of prototypes and an LSTM to learn how TFs interact and bind to genomic sequences. On a reference TFBS dataset with 2.1 million genomic sequences, PMN significantly outperforms baselines and validates our design choices empirically. To our knowledge, this is the first deep learning architecture that introduces prototype learning and considers TF-TF interactions for large-scale TFBS prediction. Not only is the proposed architecture accurate, but it also models the underlying biology.
rejected-papers
This paper proposes an approach for predicting transcription factor (TF) binding sites and TF-TF interaction. The approach is interesting and may ultimately be valuable for the intended application. But in its current state, the paper has insufficient technical novelty (e.g. relative to matching networks of Vinyals 2016), insufficient comparisons with prior work, and unclear benefit of the approach. The reviewers also had some concerns about clarity.
val
[ "ryoWUP5lz", "HkM_FfLxM", "H1yqn7qlM", "ByJ37P67M", "rJn6VDaQf", "HJa3NwT7z", "r1Tj4DTQG", "By6F4v6Qz", "ryJY4Dp7f", "ByMwEP67z", "rk-UVPTmM", "Hk2EVPTQG", "H11bNPamG", "SJjyVwpXz", "SJJsmwp7z", "SkGY7vTXz", "HkmwQwpXf", "rJZSXvpXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "This work proposes an approach for transcription factor binding site prediction using a multi-label classification formulation. It is a very interesting problem and application and the approach is interesting. \n\nNovelty:\nThe method is quite similar to matching networks (Vinyals, 2016) with a few changes in the matching approach. As such, in order to establish its broader applicability there should be additional evaluation on other benchmark datasets. The MNIST performance comparison is inadequate and there are other papers that do better on it. \nThey should clearly list what the contributions are w.r.t to the work by Vinyals et al 2016.\nThey should also cite works that learn embeddings in a multi-label setting such as StarSpace.\n\nImpact:\nIn its current form the paper seems to be most relevant to the computational biology / TFBS community. However, there is no comparison to the exact networks used in the prior works DeepBind/DeepSea/DanQ/Basset/DeepLift or bidirectional LSTMs. Further there is no comparison to existing one-shot learning techniques either. This greatly limits the impact of the work.\n\nFor biological impact, a comparison to any of the motif learning approaches that are popular in the biology/comp-bio community will help (for instance, HOMER, FIMO).\n\nCons:\nThe authors claim they can learn TF-TF interactions and it is one of the main biological contributions, but there is no evidence of why (beyond very preliminary evaluation using the Trrust database). Their examples are 200-bp long which does not mean that all TFs binding in that window are involved in cooperative binding. The prototype loss is too simplistic to capture co-binding tendencies and the combinationLSTM is not well motivated. One interesting source of information they could tap into for TF-TF interactions is CAP-SELEX (Jolma et al, Nature 2015).\n\nOne of the main drawbacks is the lack of interpretability of their model where approaches like DanQ/DeepLift etc benefit. The PWM-like filters in some of the prior works help understand what type of sequence properties contribute to binding events. Can their model lead to an understanding of this sort?\n\nEvaluation:\nThe empirical evaluation itself is not very strong as there are only modest improvements over simple baselines. Further there are no error-bars etc to indicate the variance in their performance numbers.\nIt will be useful to have a TF-level performance split-up to get an idea of which TFs benefit most.\n\nClarity:\nThe paper can benefit from more clarity in the technical aspects. It is hard to follow for anyone not already familiar with matching networks. The objective function, parameters need to be clearly introduced in one place. For instance, what is y_i in their multi-label framework?\nVarious choices are not well motivated; for instance cosine similarity, the value of hyperparameter epsilon.\nThe prototype vectors are not motif-like at all -- can the authors motivate this aspect better?\n\nUpdate: I have updated my rating based on the author rebuttal", "The authors of this manuscript proposed a model called PMN based on previous works for the classification of transcription factor binding. Overall, this manuscript is not well written. Clarification is needed in the method and data sections. The model itself is an incremental work, but the application is novel. My specific concerns are given below.\n\n1. It is unclear how the prototype of a TF is learned. Detailed explanation is necessary. \n\n2. 
Why did the authors only allow a TF to have only one prototype? A TF can have multiple distinct motifs.\n\n3. Why were peaks with p-value >= 1 defined as positive? Were negative classes considered in the computational experiments?\n\n4. What's the relationship between the LSTM component in the proposed method and sparse coding?\n\n5. The manuscript contains lots of low-end issues, such as:\n5.1. Inconsistency in the format when referring to equations (eq. equation, Equation, attention LSTM, attentionLSTM, t and T etc);\n5.2. Some \"0\"s are missing in Table 3;\n5.3. L2 should be L_2 norm; \n5.4. euclidean -> Euclidean; pvalue -> p-value;\n5.5. Some author name and year citations in the manuscript should be put in brackets;\n5.6. The ENCODE paper should be cited properly (\"Consortium et al., 2012\" is weird!);\n5.7. The references should be carefully reformatted; for example, some words in the references should be in uppercase (e.g. DNA, JASPAR, CNN etc.), some items are duplicated, ...\n\nComments for the revised manuscript: I decide to keep my decision as it is. My major and minor concerns are not fully well addressed in the revised paper. ", "Summary\nThis paper proposes a prototype matching network (PMN) to model transcription factor (TF) binding motifs and TF-TF interactions for the large-scale transcription factor binding site prediction task. They utilize the idea of having a support set of prototypes (motif-like features) and an LSTM from the few-shot learning framework to develop this prototype matching network. The input is genomic sequences from 14% of the human genome; each sequence in the dataset is bound by at least one TF. First, a Convolutional Neural Network with three convolutional layers is trained to predict single/multiple TF binding. The output of the last hidden layer before the sigmoid transformation is used as the LSTM input. A weighted sum of similarity scores (sigmoid of cosine similarity, similar to the attention mechanism of LSTMs) along with prototype vectors is used to update the read vector. The final output is a sigmoid of the final hidden state concatenated with the read vector. The loss function used is the difference of a cross-entropy loss function and a lambda-weighted prototype loss function. The latter is the mean square error between the output label and the similarity score. The authors compare the PMN with different lambda values with a CNN with single/multi-label outputs and see marginal improvement in auROC, auPR and Recall at 50% FDR with the PMN. To test that the PMN finds biologically relevant TF interactions, the authors perform hierarchical clustering on the prototypes of 86 TFs, compare the clusters found to the known co-regulators from the TRRUST database, and find 6 significant clusters. \n\n\nPros:\n1. The authors utilize the idea of prototypes and few-shot learning for the task of TF binding and cooperation. \n\n2. Attention LSTMs are used to model label interactions. \n\nJust as a CNN can be related to discriminative training of a PSSM or PWM, the above points demonstrate nicely how ideas/concepts from the recent developments in DL can be adopted from, related to (and possibly improve on) similar generative modeling approaches used in the past for learning cooperative TF binding.\n\nCons:\n\n1. The authors do not compare their model’s performance to the previously published TF binding prediction algorithms (DeepBind, DeepSEA). \n2. The authors miss important context and make some inaccurate statements: TFs do not just “control if a gene is expressed or not” (p.1). 
It’s not true that previous DL works did not consider co-binding. Works such as DeepSEA combined many filters which can capture cooperative binding to define which sequence is “regulated”. It is true that this work or DeepBind did not construct a structure over those as learned by an LSTM. The authors do point out a model that does add an LSTM (Quang and Xie) but then do not compare to it and make a vague claim about it modeling interactions between features but not labels (p. 6 top). Comparing to it and directly to DeepSEA/DeepBind seems crucial to claim improvements on previous works. Furthermore, the authors acknowledge the existence of vast literature on this specific problem but completely discard it as having a “loose connection to our TFBS formulation”. In reality though, many works in that area are highly relevant and should be discussed in the context of what the authors are trying to achieve. For example, numerous works by Prof. Saurabh Sinha have focused specifically on joint TF modeling (e.g. Kazemian NAR 2011, He Plos One 2009, Ivan Gen Bio 2008, MORPH Plos Comp Bio 2007). In general, trying to lay claims about significant contributions to a problem, as stated here by the authors, while completely disregarding previous work simply because it’s not in a DL framework (which the authors are clearly more familiar with) can easily alienate reviewers and readers alike. \n\n3. The learning setup seems problematic:\n3a. The model may overfit for the genomic sequences that contain TF binding sites, as it has never seen genomic sequences without TF binding sites (the genomic sequences that don’t have ChIP peaks are discarded from the dataset). Performance for genome-wide scans should definitely include those to assess accuracy.\n3b. The train/validation/test sets are defined by chromosome. There does not seem to be any screening for sequence similarity (e.g. repetitive sequences, paralogs). This may inflate performance, especially for more complicated models which may be able to “memorize” sequences better. \n4. The paper claims to have 4 major contributions. The details of the second claim, that the prototype matching loss learns motif-like features, are not explained anywhere in the paper. If we look at the actual loss function, equation (12), it penalizes the difference between the label and the similarity score but the prototypes are not updated. The fourth claim about the biological relevance of the network is not sufficiently explored. The authors show that it learns co-bindings already known in the literature, which is a good sanity check but does not offer any new biological insight. The actual motifs or the structure of their relations are not shown or explored.\n5. The PMN offers only a marginal improvement over the CNN networks. \n\n", "We have added a table to compare our method to the similar work in Vinyals et al. 2016 and Snell et al. 2017. The main contribution of our work over the previous two is that we extend those methods to a large-scale and multi-label task. We are currently working on adding more benchmark datasets in the large-scale and multi-label settings so that we can further validate our method. We have also added a short review of the recent StarSpace paper in the previous multi-label works section. ", "We use a lookup table for representing TFs’ prototypes. This means for each TF we learn an embedding vector representing this TF’s pattern. The lookup table is learned and updated via gradient descent on each update to minimize the prototype-matching loss we proposed in the paper. 
We thank the reviewer for noting this problem, and we have updated the manuscript to better explain the lookup table.", "It is true that a TF may have multiple motifs. However, it is assumed that the secondary motifs are in fact primary motifs of other TFs (Wang et al. 2012). In addition, as pointed out by Snell et al. 2017, multiple prototypes were proposed in Mensink et al. and Rippel et al. However, both methods require a separate partitioning phase that is decoupled from the weight updates, which complicates the model. We found that adding additional prototypes made training more difficult, and did not improve the accuracy.", "We realize that we did not explain the dataset construction very well. We have updated our explanation. \n", "The LSTM component is not connected to the sparse coding method. The proposed LSTM aims to model the dependency among labels. ", "We thank the reviewer for pointing out the low-end issues. We have since fixed these in the manuscript. ", "We thank the reviewer for pointing out that there should be a better comparison against the prior deep learning works in TFBS prediction. Our baseline CNN was the same architecture (with slightly different hyperparameters) as DeepBind, DeepSEA and Basset. DeepBind and Basset are for single tasks (one label per model), but we realize that we should run our model on the DeepSEA dataset, which is multi-label for TFs, Histone Modifications and DNA accessibility. However, our model was designed to model the dependencies among TF binding, which may be skewed in the DeepSEA dataset which also has HM and DNase outputs.", "The DanQ method (Quang and Xie) applies a bidirectional LSTM on top of the CNN outputs, which finds dependencies among motifs at the sequence level (i.e., among different sequence positions). Our method is concerned with finding dependencies among TFs at the output level (i.e., among different labels), but we will compare to that method to show that modelling TF interactions is stronger than modelling motif interactions. DanQ does not report actual AUC values, but rather the improvements over DeepSEA, so we need to implement our own CNN+LSTM model similar to DanQ for future work.\n\nWe would like to thank the reviewer for noting related works which we did not cite. Our primary goal was to compare against state-of-the-art deep learning methods which do not incorporate co-binding, but we realize that we should be citing related non-deep-learning works. ", "3a. We thank the reviewer for pointing out a valid concern in our dataset. We felt that for this experiment, running on only windows with at least one ChIP-seq peak was sufficient, especially due to runtime constraints of including more windows, but we will include completely negative windows in future experiments. \n\n3b. We would like to thank the reviewer for noting another important concern with our dataset, in that there is no screening for sequence similarity. However, since this method of dividing the splits up by chromosomes was done in previous datasets (DeepSEA, ENCODE DREAM), we adopted the same methodology.\n", "We realize that we did not explain the idea of prototypes very well in the original manuscript. While we said that the prototypes learn motif-like features, they actually learn high-level abstract representations summarizing the patterns of each TF through an embedding vector. Since we use a lookup table for the prototypes, they are in fact updated via gradient descent on each update. 
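(A minimal PyTorch-style sketch of what we mean by a learned lookup table; the sizes and names are illustrative, not our exact code:)

    import torch
    import torch.nn as nn

    num_tfs, proto_dim = 86, 512   # 86 TFs as in our experiments; 512 is an assumed width
    prototypes = nn.Embedding(num_tfs, proto_dim)  # one learnable prototype vector per TF

    tf_ids = torch.arange(num_tfs)
    P = prototypes(tf_ids)         # (num_tfs, proto_dim) matrix; its rows receive
                                   # gradients from the prototype-matching loss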
\n\nIt is true that we did not add any new biological insight, but our goal for this paper was rather to design a prediction model. In future work, we plan to find new insights which can be used by biologists. ", "The PMN model offers only a marginal improvement over CNNs. However, we believe that our architecture models the biology better, which could lead to new insights. This is similar to the marginal improvements of DanQ over DeepSEA, where the DanQ model was a better fit for the biology.", "We thank the reviewer for pointing out that there should be a better comparison against the prior deep learning works in TFBS prediction. Our baseline CNN was the same architecture (with slightly different hyperparameters) as DeepBind, DeepSEA and Basset, but we realize that we should compare against those exact methods. The DanQ method applies a bidirectional LSTM on top of the CNN outputs, which finds dependencies among motifs (i.e., among different sequence positions). Our method is concerned with finding dependencies among TFs (i.e., among different labels), but we should compare to that method. If the task is single-label few-shot learning (as in Vinyals et al. and Snell et al.) then our method is very similar to the previous ones, with the one major difference of learning the support set as opposed to using the images directly or a mean of the images. \n\nWe decided not to compare against traditional motif learning methods such as HOMER and FIMO because it has been shown that the CNN filters can find related motifs (e.g. Alipanahi et al. 2015, Kelley et al. 2015). Since the first step of our method is a CNN, we assume that these filters learn these first-order motifs. We plan to validate this using the same method as Alipanahi et al. in future work. In addition, Alipanahi et al. showed that a basic 1-layer CNN model can outperform the baseline MEME-ChIP approach (Machanick & Bailey 2011), which uses the traditional position weight matrix motif approach. Thus, we also did not compare against MEME-ChIP for accuracy. ", "It is true that not all TFs in a window are involved in cooperative binding. However, we believe that our model handles this by not updating the matching score after iterating over the other TFs using the combinationLSTM. Although we have not experimentally verified this, we believe that the prototype loss does capture co-binding tendencies, because using the loss function after all the hops performs better than using it without iterative hops.\n\nWe would like to thank the reviewer for pointing out the CAP-SELEX paper, which is an extremely good resource for validating our method. We plan to do this in future work.\n\nWe would like to thank the reviewer for noting the lack of interpretability of our model, which is something that we realize is an important factor in computational biology. As previously noted, we believe that our CNN extracts similar motif features as in DeepBind, DanQ, and Basset. However, we will validate this in future work.", "We agree that more robust evaluations should be added to convince readers of our method. We did include the pairwise t-test among TFs, which showed that our method significantly outperformed baselines. However, showing TF-level performance and variance in metrics will greatly help. ", "We thank the reviewer for noting unclear technical aspects. y_i is the ground truth label (0 or 1) for TF i. 
y should be denoted in bold for a vector, which we have changed.\n\nWe chose cosine similarity because we wanted a distance measure which mapped the similarity between 0 and 1 (since it’s multi-label). We tried squared Euclidean with a margin loss, but it did not perform as well. We realize that we did not explain this well in the original draft. \n\nWe chose epsilon=20 because we wanted a large enough epsilon so that the sigmoid output could span the full range between 0 and 1. Since the max value of cosine similarity is 1, without scaling this would result in sigmoid(1) = ~0.73. Thus we chose epsilon=20 so that sigmoid(1*20) = ~1.\n\nThe prototype vectors are not in fact like traditional motifs, but rather high-level hidden representations of the TFs themselves. The CNN filters extract the individual motifs, but the prototypes are higher-level summary representations of each TF. " ]
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S191YzbRZ", "iclr_2018_S191YzbRZ", "iclr_2018_S191YzbRZ", "ryoWUP5lz", "HkM_FfLxM", "HkM_FfLxM", "HkM_FfLxM", "HkM_FfLxM", "HkM_FfLxM", "H1yqn7qlM", "H1yqn7qlM", "H1yqn7qlM", "H1yqn7qlM", "H1yqn7qlM", "ryoWUP5lz", "ryoWUP5lz", "ryoWUP5lz", "ryoWUP5lz" ]
iclr_2018_SJzmJEq6W
Learning non-linear transform with discriminative and minimum information loss priors
This paper proposes a novel approach for learning discriminative and sparse representations. It consists of utilizing two different models. A predefined number of non-linear transform models are used in the learning stage, and one sparsifying transform model is used at test time. The non-linear transform models have discriminative and minimum information loss priors. A novel measure related to the discriminative prior is proposed and defined on the support intersection for the transform representations. The minimum information loss prior is expressed as a constraint on the conditioning and the expected coherence of the transform matrix. An equivalence between the non-linear models and the sparsifying model is shown only when the measure that is used to define the discriminative prior goes to zero. An approximation of the measure used in the discriminative prior is addressed, connecting it to a similarity concentration. To quantify the discriminative properties of the transform representation, we introduce another measure and present its bounds. Reflecting the discriminative quality of the transform representation, we name it the discrimination power. To support and validate the theoretical analysis, a practical learning algorithm is presented. We evaluate the advantages and the potential of the proposed algorithm through computer simulations. Favorable performance is shown in terms of execution time and the quality of the representation, measured by the discrimination power and the recognition accuracy, in comparison with state-of-the-art methods of the same category.
rejected-papers
This paper proposes an approach for learning a sparsifying transform via a set of nonlinear transforms at learning time. The presentation needs a lot of work. The original paper was 17 pages long and very difficult to understand. The revised paper is 12 pages long, which is still too long for the content. The paper needs to better distinguish between the major and minor points. It is still too difficult to judge the contribution.
train
[ "BkTGJ-9EM", "rJiAXzOVM", "Sy8Kdltgz", "HykAaKDgf", "BJy3xb9xM", "Hy5qs4_ZM", "ry7q9yQZf", "r1kudkmbz", "H1mVDJ7Wf" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "To all reviewers, we would like to extend the appreciation for taking the necessary time, involvement and effort in reading our initial and rebutted paper version, express our gratitude for all the taken considerations, raised comments and concerns about all aspects of this paper, contributing towards increasing the quality of the manuscript.", "We would like to extend the appreciation for taking the necessary time, involvement and effort in reading our initial and rebutted paper version, together with all the considerations, raised comments and concerns about all the aspects of this paper. \n\nIf possible we would kindly like to ask the reviewer if he could comment on the reasons related to the current manuscript version that lead to changing his reviews score.", "This paper proposes a method of learning sparse dictionary learning by introducing new types of priors. Specifically, they designed a novel idea of defining a metric to measure discriminative properties along with the quality of presentations.\nIt is also presented the power of the proposed method in comparison with the existing methods in the literature.\n\nOverall, the paper deals with an important issue in dictionary learning and proposes a novel idea of utilizing a set of priors. \n\nTo this reviewer’s understanding, the thresholding parameter $\\tau_{c}$ is specific for a class $c$ only, thus different classes have different $\\tau$ vectors. If so, Eq. (6) for approximation of the measure $D(\\cdot)$ is not clear how the similarity measure between ${\\bf y}_{c,k}$ and ${\\bf y}_{c1,k1}$, \\ie, $\\left\\|{\\bf y}_{c,k}^{+}\\odot{\\bf y}_{c1,k1}^{+}\\right\\|_{1}+\\left\\|{\\bf y}_{c,k}^{+}\\odot{\\bf y}_{c1,k1}^{+}\\right\\|_{1}$ and $\\left\\|{\\bf y}_{c,k}\\odot{\\bf y}_{c1,k1}\\right\\|_{2}^{2}$, works to approximate it. It would be appreciated to give more detailed description on it and geometric illustration, if possible.\n\nThere are many typos and grammatical errors, which distract from reading and understanding the manuscript.", "Summary:\nThe paper proposes a model to estimate a non-linear transform of data with labels, trying to increase the discriminative power of the transformation while preserving information. \n\nQuality:\nThe quality is potentially good but I misunderstood too many things (see below) to emit a confident judgement.\n\nClarity:\nClarity is poor. The paper is overall difficult to follow (at least for me) for different reasons. First, with 17 pages + references + appendix, the length of the paper is way above the « strongly suggested limit of 8 pages ». Second, there are a number of small typos and the citations are not well formatted (use \\citep instead of \\citet). Third, and more importantly, several concepts are only vaguely formulated (or wrong), and I must say I certainly misunderstood some parts of the manuscript.\nA few examples of things that should be clarified:\n- p4: « per class c there exists an unknown nonlinear function […] that separates the data samples from different classes in the transformed domain ». What does « separate » mean here? There exists as many functions as there are classes, do you mean that each function separates all classes? Or that when you apply each class function to the elements of its own class, you get a separation between the classes? 
(which would not be very useful)\n- p5: what do you mean exactly by « non-linear thresholding function »?\n- p5: « The goal of learning a nonlinear transform (2) is to estimate… »: not sure what you mean by « accurate approximation » in this sentence.\n- p5, equation 3: if I understand correctly, you not only want to estimate a single vector of thresholds for all classes, but also want it to be constant. Is there a reason for the second constraint?\n- p5, after equation 4, you define different « priors ». The one on z_k seems to be Gaussian; however in equation 4, z_k seems to be the difference between the input and the output of the nonlinear threshold operator. If this is correct, then the nonlinear threshold operator is just the addition of Gaussian random noise, which is really not a thresholding operator. I suppose I misunderstood something here; some clarification is probably needed.\n- p5, before equation 5, you define a conditional distribution of \\tau_c given y_{c,k}; however, if you define a prior on \\tau_c given each point (i.e., each $k$), how do you define the law of \\tau_c given all points?\n- p5, equation 6: I don't understand what the word « approximation » refers to, and how equation 6 is derived. \n- p6, equation 7: missing exponent in the Gaussian distribution\n- p6-7: to combine equations 7-8-9 and obtain 10, I suppose in equation 7 the distributions should be conditioned on A (at least the first one), and in equation 9 I suppose the second line should be removed and the third line should be conditioned on A; otherwise more explanations are needed.\n- Lemma 1 is just unreadable. Please at least split the equations over several lines.\n\nOriginality:\nAs far as I can tell, the approach is quite original and the results proved are new.\n\nSignificance:\nThe method provides a new way to learn discriminative features; as such it is a variant of several existing methods, which could have some impact if clarity is improved and a public code is provided.", "Overview:\nThis paper proposes a method for learning representations using a “non-linear transform”. Specifically, the approach is based on the form: Y =~ AX, where X is the original data, A is a projection matrix, and Y is the resulting representation. Using some assumptions, and priors/regularizers on Y and A, a joint objective is derived (eq. 10), and an alternating optimization algorithm is proposed (eq. 11 and 14). Both the objective and the algorithm use approximations due to the hardness of the problem. Theoretical and empirical results on the quality and properties of the representation are presented.\nDisclaimer: this is somewhat outside my area of expertise, so this is a rather high-level review. I have not thoroughly checked proofs and claims.\n\nComments:\n- I found the presentation quality to be rather poor, making it hard to fully understand and evaluate the approach. In particular, the motivation and approach are not clear (sec. 1.2), making it hard to understand the proposed method. There is no explicit formulation; instead there are references to other models (e.g., the sparsifying transform model) and illustrative figures (fig. 1 and 2). Those are useful following a formal definition, but cannot replace it. The separation between positive and negative elements of the representation is not motivated and is explained only in a footnote, although it seems central to the proposed approach.\n- The paper is 17 pages long (24 pages with the appendix), so I had to skim through some parts. 
Due to the extensive scope, perhaps a journal submission would be more appropriate.\n\nMinors:\n- Vu & Monga 2016b and 2016c are the same.\n- p. 1: meaner => manner\n- p. 1: refereed => referred\n- p. 1: “a structural constraints”; p. 2: “a low rank constraints”, “a pairwise constraints”; p. 4: “a similarity concentrations”, “a numerical experiments”, and others...\n- p. 2, 7: therms => terms\n- p. 2: y_{c_1,k_2} => y_{c_1,k_1}?\n- p. 3, 4: “a the”\n- p. 5: “an parametric”\n- p. 8: ether => either\n- Other typos… the paper needs proofreading.\n", "\nWe have uploaded a revised version of our paper where we have carefully considered and integrated all the comments of the reviewers.\n\n\nIn summary, we introduce the following clarifications and modifications that improve the presentation:\n\na) The abstract was made more appealing and consistent.\n\nb) The motivation for the use of the non-linear transform was clarified by adding additional explanations as suggested (please see paragraph 1 in subsection 1.2). Additionally, the table about the most used notations was removed.\n\nc) The introduction of section 2 was changed and simplified to an overview of the proposed concept. The type of the used non-linear transforms and the general modeling concept from the introduction were moved to a separate subsection, 2.1 The parametric non-linear transform modeling.\n\nd) Subsection 2.1 was changed to subsection 2.2 and divided into two subsections, one for the learning model and one for the testing model (an update w.r.t. the old version is that one more paragraph was added to explain the relation between the non-linear transform model used in learning and the sparsifying transform model used at test time, as well as the main reason behind this particular use of the two different models).\n\ne) A minor modification w.r.t. the clarity of presentation was added to subsection 2.3 (in the initial version, subsection 2.2).\n\nf) The section \"Sensitivity analysis\" was moved to the Appendix to give a more uniform structure and enhance readability. Subsection 3.2 was moved and placed as subsection 2.4. The typos were corrected and the comments of the reviewers were integrated with additional clarifications whenever possible.\n\n\nWe would like to thank all the reviewers for their considerations, constructive attitude and raised comments, which led to the qualitative improvement of our manuscript.", "We would like to thank the reviewer for the time spent on the detailed and careful reading of our paper and for providing their valuable comments.\n\nWe agree with the reviewer that the presentation quality should be improved and we will do our best to improve it. The concepts in section 1.2 paragraph one, the introduction of section 2 and section 2.1 will be simplified and better clarified. \n\nConsidering the typos, the reformatting of the citations and the requested clarifications, we will definitely address them accordingly.\n\nConsidering the length of the paper, we can integrate the comment of the reviewer and add the sensitivity analysis section to the appendix. However, since the results of the analysis are used to bound the proposed discrimination power measure, we decided not to remove them. 
In addition, they also give an information-geometric perspective which, to the best of our knowledge, is the first analysis of this kind for non-linear models and the similarity concentration measure without the need for strict regularity conditions, i.e., smoothness of the manifolds.\n\n\nConsidering the comment \"p4\" and the first 3 \"p5\" comments:\n\nWe will clarify here and in the updated version that non-linear transform models having one common ${\bf A}$ and a different thresholding parameter ${\boldsymbol{\tau}_c}$ per class $c$ are used only during learning, whereas at test time only a sparsifying transform model is used. In that sense, we assumed that there are as many non-linear transforms as there are classes, and when we apply the non-linear transforms to the samples of the corresponding classes, we have separation between the transformed data samples from different classes. It is important to highlight that when the proposed similarity concentration is zero, the discriminative prior has no influence. Only then does the non-linear transform model reduce to a sparsifying transform model, and the single vector of thresholds is a constant.\n\nConsidering the comment \"p5 After equation 4 ...\":\n\nWe use a transform representation defined as ${\bf y}_{c,k}=\mathcal{T}^{\mathcal{P}_c}({\bf x}_{c,k})$ to impose a discriminative constraint on the transform representation. As shown by equation (16), the estimation of the transform representation has a closed form solution considering a thresholding non-linear operation; therefore, it has a non-linearity. Moreover, in this case ${\bf A}{\bf x}_{c,k}$ is only seen as a linear approximation to this non-linearity. Knowing something in advance about the difference ${\bf y}_{c,k}-{\bf A}{\bf x}_{c,k}$ can be used in our model. However, since we do not have any prior knowledge in advance, we assume that it is Gaussian-like distributed.\n\n\nConsidering the comment \"p5 before equation 5, you define ...\":\n\nWe assumed that we have a joint probability $p(\boldsymbol{\tau}_1, \boldsymbol{\tau}_2,...,\boldsymbol{\tau}_C, {\bf y}_{c,k})=p(\boldsymbol{\tau}_1, \boldsymbol{\tau}_2,...,\boldsymbol{\tau}_C|{\bf y}_{c,k})p({\bf y}_{c,k}) \propto \exp(-\frac{\min_{1 \leq c \leq C} D(\boldsymbol{\tau}_c; {\bf y}_{c,k})}{\beta_2})\exp(-\frac{\Vert {\bf y}_{c,k} \Vert_1}{\beta_0})$. If we further assume that $p(\boldsymbol{\tau}_1, \boldsymbol{\tau}_2, ..., \boldsymbol{\tau}_C)=\prod_{c=1}^C p(\boldsymbol{\tau}_c)$ and that the class label is known, then we say that\n\n$p(\boldsymbol{\tau}_c| {\bf y}_{c,k}) \propto \exp(-\frac{D(\boldsymbol{\tau}_c; {\bf y}_{c,k})}{\beta_2})$.\n\n\nConsidering the comment \"p5 Equation 6: I don't ...\": we note that there was a typo (a summation is missing). That is, \n$D({\bf y}_{c,k}; \boldsymbol{\tau}_{c})$ is not equal to $D^\mathcal{P}_{\ell_1} ({\bf X}) + S^\mathcal{P}_{\ell_2} ({\bf X})$. 
Since a summation in front of $D({\bf y}_{c,k}; \boldsymbol{\tau}_{c})$ in equation (6) was missing, the correct expression for the first line in equation (6) is:\n\n$\sum_c D({\bf y}_{c,k}; \boldsymbol{\tau}_{c}) = D^\mathcal{P}_{\ell_1} ({\bf X}) + S^\mathcal{P}_{\ell_2} ({\bf X})$\n\n\nConsidering the comments on p6-7 and Lemma 1: these, and all the former points of clarification raised by the reviewer, will be taken into consideration; we are working on the improvements, which will be added to the revised version.\n\nWe share the reviewer's concern about the necessity of providing public code to boost the impact of the work. However, while we emphasize that the pseudo-code provided in the paper is very clear and easily reproducible, our code will be made publicly accessible shortly after the announcement of the official decision. We simply do not publish it now in order to maintain the anonymity of this review procedure. Moreover, addressing the reviewer's concern about the significance and impact, we would highly appreciate any concrete suggestions for improvement and remain open to suggestions from the reviewers.", "We would like to thank the reviewer for the time spent on the detailed, careful reading of our paper, for providing their comments and for the positive evaluation.\n\nConsidering the comment about how the similarity measure works to approximate the measure in the prior, we note that there was a typo (a summation is missing). That is, $D({\bf y}_{c,k}; \boldsymbol{\tau}_{c})$ is not equal to $D^\mathcal{P}_{\ell_1} ({\bf X}) + S^\mathcal{P}_{\ell_2} ({\bf X})$. Since a summation in front of $D({\bf y}_{c,k}; \boldsymbol{\tau}_{c})$ in equation (6) was missing, the correct expression for the first line in equation (6) is:\n\n$\sum_c D({\bf y}_{c,k}; \boldsymbol{\tau}_{c}) = D^\mathcal{P}_{\ell_1} ({\bf X}) + S^\mathcal{P}_{\ell_2} ({\bf X})$\n\nConsidering the geometrical illustration of the prior: we had one but, due to the length of the paper, decided not to include it. However, since it was mentioned, we will also take this into consideration.\n\nWe will consider all the typos and grammatical errors and do our best to increase the presentation quality. ", "We would like to thank the reviewer for the time spent on reading our paper and providing their comments.\n\n\nTo the best of our knowledge, this is the first attempt at extending the sparsifying transform model to a non-linear transform model. Moreover, we will clarify, simplify and highlight here, and in the revised version, that a non-linear transform model is addressed only during learning, whereas at test time only a sparsifying transform model is used. This is explained by the fact that if the proposed similarity concentration (which approximates the measure used in the discriminative prior) is zero, then the discriminative prior is non-informative, meaning that the non-linear transform model reduces to a sparsifying transform model and the prior has no influence on the estimation of the representation. 
This can easily be seen from the closed form solution of the transform representation in equation 16: if ${\bf g}={\bf 0}$, then the transform representation is only a sparse representation, i.e., a thresholded version of ${\bf A}{\bf x}_{c,k}$.\n\nYes, we agree that the motivation behind the approach is very important; therefore, in the current version we had\n\nsection 1.2\n\nBy the end of the first sentence in section 1.2, we meant that we do not address an inverse problem, where, if the dimensionality of the dictionary (transform matrix) or the data is high, the solution of the inverse problem has a high computational complexity. Considering the estimation of the transform representation, we rather address a direct problem that, as pointed out in section 2.2 (before equation 15), represents a low complexity constrained projection problem and has a closed form solution (equation 16). By the end of the second sentence in section 1.2, we refer to the fact that the non-linear transform model allows more freedom in modeling and imposing constraints on the transform representation (in fact, it allows other non-linearities to be modeled; i.e., we can very easily model a ReLU as a transform representation).\n\n\nConsidering the motivation for the proposed central prior and the approximation used for the measure defined in the prior, we first note that we have tried, to the best of our knowledge, in sec 1.1 to lay out the open issues and the disadvantages in the specifics of the constraints used in the state-of-the-art discriminative dictionary learning methods. Upon that, we have devoted the second paragraph of section 1.2 to the general advantages and the advantages w.r.t. the state-of-the-art. More precisely, we had:\n\n\nsection 1.2 paragraph 2\n\n\nAdditionally, the advantages of using this prior w.r.t. the discriminative priors in the state-of-the-art methods were given in\n\n\nsection 2.1 last paragraph.\n\nWe will try to clarify, restructure and highlight these points in our revised version.\n\nConcerning all the remaining comments and the typos, we will correct them accordingly.\n\nConsidering the length of the paper, we note that it is possible for the paper to stand without the sensitivity analysis; moreover, we can integrate the comment of the reviewer and move this section to the appendix. Nevertheless, we considered the sensitivity analysis beneficial since it is related to the notion of the quality of the representation, i.e., the results of the analysis are used to bound the proposed discrimination power measure. In addition, they also give an information-geometric perspective which, also to the best of our knowledge, is the first analysis of this kind for non-linear models and the similarity concentration measure without the need for strict regularity conditions, i.e., smoothness of the manifolds.\n\nSince we target representation learning, we considered ICLR the best place to present our work. We will try to reduce the paper length within the allowed limits of change w.r.t. the initial version and do our best to sharpen the quality of the presentation." ]
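To make the closed-form, thresholded representation discussed in the responses above concrete, here is a minimal NumPy sketch; the soft-thresholding operator and all variable names are assumptions for illustration only, since the exact operator of the paper's equation 16 is not reproduced in this thread.

    import numpy as np

    def transform_representation(A, x, tau):
        # Linear part A x, followed by an element-wise soft threshold at level tau.
        z = A @ x
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    A = np.random.default_rng(0).normal(size=(8, 5))
    x = np.ones(5)
    # Entries of A x whose magnitude falls below tau vanish, giving a sparse vector.
    print(transform_representation(A, x, tau=1.0))

This matches the limiting behavior the authors describe: with the discriminative term switched off, only a sparse (thresholded) version of A x remains.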
[ -1, -1, 5, 4, 5, -1, -1, -1, -1 ]
[ -1, -1, 2, 2, 1, -1, -1, -1, -1 ]
[ "iclr_2018_SJzmJEq6W", "Sy8Kdltgz", "iclr_2018_SJzmJEq6W", "iclr_2018_SJzmJEq6W", "iclr_2018_SJzmJEq6W", "iclr_2018_SJzmJEq6W", "HykAaKDgf", "Sy8Kdltgz", "BJy3xb9xM" ]
iclr_2018_r1tJKuyRZ
The Set Autoencoder: Unsupervised Representation Learning for Sets
We propose the set autoencoder, a model for unsupervised representation learning for sets of elements. It is closely related to sequence-to-sequence models, which learn fixed-sized latent representations for sequences, and have been applied to a number of challenging supervised sequence tasks such as machine translation, as well as unsupervised representation learning for sequences. In contrast to sequences, sets are permutation invariant. The proposed set autoencoder considers this fact, both with respect to the input and the output of the model. On the input side, we adapt a recently-introduced recurrent neural architecture using a content-based attention mechanism. On the output side, we use a stable marriage algorithm to align predictions to labels in the learning phase. We train the model on synthetic data sets of point clouds and show that the learned representations change smoothly with translations in the inputs, preserve distances in the inputs, and that the set size is represented directly. We apply the model to supervised tasks on the point clouds using the fixed-size latent representation. For a number of difficult classification problems, the results are better than those of a model that does not consider the permutation invariance. Especially for small training sets, the set-aware model benefits from unsupervised pretraining.
rejected-papers
The paper proposes an autoencoder for sets, an interesting and timely problem. The encoder here is based on prior related work (Vinyals et al. 2016) while the decoder uses a loss based on finding a matching between the input and output set elements. Experiments on multiple data sets are given, but none are realistic. The reviewers have also pointed out a number of experimental comparisons that would improve the contribution of the paper, such as considering multiple matching algorithms and more baselines. In the end the idea is reasonable and results are encouraging, but too preliminary at this point.
train
[ "rk7TfpBlG", "B1EnXjFxG", "Hk-Qowclf", "rkyAcO7NM", "r1WktiLGG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "This paper mostly extends the Vinyals et al. (2015) paper (\"Order Matters\") on how to represent sets as input and/or output of a deep architecture.\n\nAs far as I understood, the set encoder is the same as the one in \"Order Matters\". If not, it would be useful to underline the differences.\n\nThe decoder, on the other hand, is different and relies on a loss that is based on a heuristic to find the current best order (based on an ordering, or mapping W, found using the Gale-Shapley algorithm). Does this mean that Algorithm 1 needs to be run for every training (and test) example? If so, it is important to note the effective complexity of running it.\n\nThe experimental section is interesting, but in the end a bit disappointing: although a new artificial dataset is proposed to evaluate sets, it is unclear how different the findings are from those in the \"Order Matters\" paper:\n- the first set of results (in Section 4.1) confirms that the set encoder is important (which was also in the other paper I believe)\n- the second set of results (Section 4.2) shows that in some cases, an auto-encoder is also useful: this is mostly the case when the supervised data is small compared to the availability of a much larger amount of unsupervised data (of sets). This is interesting (and novel compared to the \"Order Matters\" paper) but corresponds to known findings from most previous work on semi-supervised learning: pre-training is only useful when only very little supervised data exists, and quickly becomes irrelevant. This is not specific to sets.\n\nFinally, it would have been very interesting to see experiments on real data concerned with sets.\n\n------------------\nI have read the response to the reviewers but haven't seen any reason to\nchange my score. In particular, the authors have not answered my questions\nabout differences with the prior art, and have not provided results on\nreal data.\n\n", "Summary\nThis paper proposes an autoencoder for sets. An input set is encoded into a\nfixed-length representation using an attention mechanism (previously proposed by\n[1]). The decoder generates the output sequentially and the generated sequence\nis matched to the best-matching ordering of the target output set.\nExperiments are done on synthetic datasets to demonstrate properties of the\nlearned representation.\n\nPros\n- Experiments show that the autoencoder helps improve classification accuracy\n for small training set sizes on the shape classification task.\n- The analysis of how the decoder generates data is insightful.\n\nCons\n- The experiments are on toy datasets only. Given the availability of point\n cloud data sets, for example, KITTI which has a widely used benchmark for\npoint cloud based object detection, it would make the paper stronger if this\nmodel was benchmarked against published baselines.\n\n- The autoencoder does not seem to help much on the regression tasks where even\n for the smaller training set size setting, directly using the encoder to solve\nthe task often works best. Even finetuning is unable to recover from the\npretrained weights. Therefore, it seems that the decoder (which is the novel\naspect of this work) is perhaps not working well, or is not well suited to the\nregression tasks being considered.\n\n- The classification task, for which the learned representations work well\n empirically, seems to be geared towards representing object shape. It doesn't\nreally require remembering each point. 
On the other hand, the regression tasks\nthat could require remembering the points don't seem to benefit much from the\nautoencoder pretraining. This suggests that while the model is able to represent\noverall shape, it has a hard time remembering individual elements of the set.\nThis seems like a drawback, since a general \"set auto-encoder\" should be able\nto perform a wide variety of tasks on the input set which could require remembering\nthe set's elements.\n\nQuality\nThis paper describes the proposed model quite well and provides encouraging\npreliminary results.\n\nClarity\nThe paper is easy to understand.\n\nOriginality\nThe novelty in the model is using a matching algorithm to find the best ordering\nof the target output set to match with the sequentially generated decoder\noutput. However, the paper makes a choice of one ranking-based matching scheme\nand does not compare to other alternatives.\n\nSignificance\nThis paper proposes a way of learning representations of sets which will be of\nbroad interest across the machine learning community. These models are likely to\nbecome more relevant with the increasing prevalence of point cloud data.\n\nReferences\n[1] Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to\nsequence for sets. arXiv preprint arXiv:1511.06391.", "Summary:\n\nThis paper proposes an encoder-decoder framework for learning latent representations of sets of elements. The model utilizes the neural attention mechanism for set inputs proposed in (Vinyals et al., ICLR 2016) to encode a set into a fixed-length latent representation, and then employs an LSTM decoder to reconstruct the original set of elements, in which a stable matching algorithm is used to match decoder outputs to input elements. Experimental results on synthetic datasets show that the model learns meaningful representations and effectively handles permutation invariance.\n\nMajor Concerns:\n\n1. Although the employed Gale-Shapley algorithm facilitates permutation-invariant set reconstruction, it has O(n^2) computational complexity during each back-propagation iteration, which might prevent it from scaling to sets of fairly big sizes. \n\n2. The experiments are evaluated only on synthetic datasets, and applications of the set autoencoder to real-world applications or scientific problems would make this work more interesting and significant.\n\n3. The main contribution of this work is the adoption of the stable matching algorithm in the decoder. A strong set autoencoder baseline would be one where the encoder employs the neural attention mechanism proposed in (Vinyals et al., ICLR 2016), but the decoder just uses a standard LSTM as in a seq2seq framework. Comparisons to this baseline would reveal the contribution of the stable matching procedure in the whole framework of the set autoencoder for learning representations. \n\nMinor issues:\n\nOn page 5, above Section 4, d_j -> o_j ?\n\nthe footnote on page 5: we not consider -> we do not consider?\n\non page 6 and 7, 6.000, 1.000 and 10.000 training examples -> 6000, 1000 and 10,000 training examples", "Thank you for your comment. \n\nRegarding your question about prior art: Yes, the encoder is conceptually identical to the one proposed in the \"Order Matters\" paper. We wrote \"similar to\" in the initial version since some of the architectural details are not completely disclosed in the \"Order Matters\" paper (e.g. the \"small neural network\" for f^inp, which probably uses non-linearities, whereas our f^inp is linear). 
But structurally, the encoder is identical.", "We thank the reviewers for the insightful and encouraging remarks. We comment on a number of these remarks below, and have updated some of the corresponding points in the paper.\n\n== Major concerns of one or multiple reviewers ==\n\n* O(n^2) complexity of Gale-Shapley.\nIt is true that this complexity could, in practice, restrict the applicability of the proposed algorithm to smaller sets. However, there is a range of problems where small set sizes are relevant, e.g. when an agent interacts with an environment where one or multiple instances of an object can be present (as opposed to point cloud representations of objects).\n\nWe have included the above remark in the paper.\n\n* Synthetic data set vs. real-world data set\nWe completely agree that the paper will be much stronger once we include results on a real-world data set. However, in the limited time available, we were not able to do so just yet.\n\n* Proposal by AnonReviewer3: use the same encoder, but a plain LSTM decoder as benchmark (to show whether the Gale-Shapley-augmented decoder works better), i.e., use the first $n$ outputs $o_i, i \in \{1,\dots,n\}$ directly.\n\nThis is an interesting idea that we will have to try out. However, the current assumption is that its behavior will probably be worse: unlike the Seq-AE, it will not be able to store ordering information in the permutation-invariant embedding, but will penalize misaligned points in the output heavily.\n\n* Remarks about applicability to different problem types\n\nWe agree with the reviewers' comments about the applicability of the model (and its limitations). The purpose of using a range of problems with different properties was precisely to test this. We currently think that future work could either try to make the model more generally applicable to several of these problem classes, or specialize it for a specific type of problem (however, we think that this would be beyond the scope of this paper, especially when taking the page limit into account).\n\n== Minor issues raised by one or multiple reviewers ==\n* We fixed multiple smaller issues (typos/formatting) in the latest version.\n" ]
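For readers who want to see the matching step that the reviews and responses above debate, here is a minimal sketch of a proposer-optimal Gale-Shapley matching between decoder outputs and target set elements; the distance-based preference lists and all names are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def stable_match(outputs, targets):
        # Pairwise distances: row o = decoder output o, column t = target element t.
        d = np.linalg.norm(outputs[:, None, :] - targets[None, :, :], axis=-1)
        n = d.shape[0]
        out_pref = np.argsort(d, axis=1)                      # each output ranks targets by distance
        tgt_rank = np.argsort(np.argsort(d, axis=0), axis=0)  # tgt_rank[o, t]: rank of output o for target t
        match = [-1] * n                                      # target index -> matched output index
        nxt = [0] * n                                         # next target each output will propose to
        free = list(range(n))                                 # outputs still unmatched
        while free:
            o = free.pop()
            t = out_pref[o, nxt[o]]
            nxt[o] += 1
            if match[t] == -1:
                match[t] = o                                  # target was free: accept
            elif tgt_rank[o, t] < tgt_rank[match[t], t]:
                free.append(match[t])                         # target trades up; old partner is free again
                match[t] = o
            else:
                free.append(o)                                # rejected: will propose to its next choice
        return {match[t]: t for t in range(n)}                # output index -> target index

    rng = np.random.default_rng(0)
    pts = rng.normal(size=(5, 2))
    print(stable_match(pts + 1e-3 * rng.normal(size=(5, 2)), pts))  # near-identity matching

Each of the n outputs proposes at most n times, which is exactly the O(n^2) per-example cost raised as a scalability concern in the reviews.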
[ 4, 5, 5, -1, -1 ]
[ 5, 4, 4, -1, -1 ]
[ "iclr_2018_r1tJKuyRZ", "iclr_2018_r1tJKuyRZ", "iclr_2018_r1tJKuyRZ", "rk7TfpBlG", "iclr_2018_r1tJKuyRZ" ]
iclr_2018_B1EVwkqTW
Make SVM great again with Siamese kernel for few-shot learning
While deep neural networks have shown outstanding results in a wide range of applications, learning from a very limited number of examples is still a challenging task. Despite the difficulties of few-shot learning, metric-learning techniques have shown the potential of neural networks for this task. While these methods perform well, their results are still not fully satisfactory. In this work, the idea of metric learning is extended with the working mechanism of Support Vector Machines (SVMs), which are well known for their generalization capabilities on small datasets. Furthermore, this paper presents an end-to-end learning framework for training adaptive kernel SVMs, which eliminates the problem of choosing a correct kernel and good features for SVMs. Next, the one-shot learning problem is redefined for audio signals. The model was then tested on a vision task (using the Omniglot dataset) and a speech task (using the TIMIT dataset) as well. On the Omniglot dataset, the algorithm improved accuracy from 98.1% to 98.5% on the one-shot classification task and from 98.9% to 99.3% on the few-shot classification task.
rejected-papers
This paper proposes to pre-train a feature embedding, using Siamese networks, for use with few-shot learning for SVMs. The idea is not very novel since there is a fairly large body of work in the general setting of pre-trained features + simple predictor. In addition, the experimental results could be stronger -- there are stronger results in the literature (not cited), and better data sets for testing few-shot learning.
train
[ "B1jQdMSeG", "BkatreVxM", "SybmxPplz", "r1a99OpWz", "B1VE6DTZG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "After reading the rebuttal:\n\nThis paper does have encouraging results. But as mentioned earlier, it still lacks systematic comparisons with existing (and strongest) baselines, and perhaps a better understanding of the differences between approaches and the pros and cons. The writing also needs to be improved. So I think the paper is not ready for publication and my opinion remains.\n===========================================================\n\nThis paper presents an algorithm for few shot learning. The idea is to first learn a representation of the data using the siamese networks architecture, which predicts whether a pair of samples is similar (e.g., from the same class) or not using an SVM hinge loss, and then finetune the classifier using a few labeled examples (with possibly a different set of labels). I think the idea of representation learning using a somewhat artificial task makes sense in this setting. \n\nI have several concerns for this submission.\n1. I am not very familiar with the literature of few shot learning. I think a very related approach that learns the representation using pretty much the same information is the contrastive loss:\n-- Hermann and Blunsom. Multilingual Distributed Representations without Word Alignment. ICLR 2014.\nThe intuition is similar: similar pairs shall have higher similarity in the learned representation than dissimilar pairs, by a large margin. This approach is useful even when there is only weak supervision to provide the \"similarity/dissimilarity\" information. I wonder how this approach compares with the proposed method.\n\n2. The experiments are conducted on the small datasets OMNIGLOT and TIMIT. I do not understand why the compared methods are not consistently used in both experiments. Also, the experiment of speaker classification on TIMIT (where the inputs are audio segments with different durations and sampling frequency) is a quite nonstandard task; I do not have a sense of how challenging it is. It is not clear why CNN transfer learning (the authors did not give details about how it works) performs even worse than the non-deep baseline, yet the proposed method achieves very high accuracy. It would be nice to understand/visualize what information has been extracted in the representation learning phase. \n\n3. Relatively minor: The writing of this paper is readable, but could be improved. It sometimes uses vague/nonstandard terminology (\"parameterless\") and statements. The term \"siamese kernel\" is not very informative: yes, you are learning new representations of data using DNNs, but this feature mapping does not have the properties of an RKHS; also you are not solving the SVM dual problem as one typically does for kernel SVMs. In my opinion the introduction of SVM can be shortened, and more focus can be put on related deep learning methods and few shot learning.", "Make SVM great again with Siamese kernel for few-shot learning \n\n** PAPER SUMMARY **\n\nThe author proposes to combine siamese networks with an SVM for pair classification. The proposed approach is evaluated on few shot learning tasks, on omniglot and timit. \n\n\n** REVIEW SUMMARY **\n\nThe paper is readable but it could be more fluent. It lacks a few references and important technical aspects are not discussed. It contains a few errors. The empirical contribution seems inflated on omniglot as the authors omit other papers reporting better results. 
Overall, the contribution is modest at best.\n\n** DETAILED REVIEW **\n\nOn mistakes, it is wrong to say that an SVM is a parameterless classifier. It is wrong to cite (Boser et al 92) for the soft-margin SVM. I think slack variables come from (Cortes et al 95). \"consistent\" has a specific definition in machine learning https://en.wikipedia.org/wiki/Consistent_estimator , you must use a different word in 3.2. You mention that a non-linear SVM needs a similarity measure; it actually needs a positive definite kernel, which has a specific definition, https://en.wikipedia.org/wiki/Positive-definite_kernel .\n\nOn incompleteness, it is not obvious how the classifier is used at test time. Could you explain how classes are predicted given a test problem? The setup of the experiments on TIMIT is extremely unclear. What are the classes you are interested in? How many classes and examples do the testing problems have? \n\nOn clarity, I do not understand why you talk again about non-linear SVMs in the last paragraph of 3.2, since you mention at the end of page 4 that you will only rely on linear SVMs for computational reasons. You need to mention explicitly somewhere that (w,\theta) are optimized jointly. The sentence \"this paper investigates only the one versus rest approach\" is confusing, as you have only two classes from the SVM perspective, i.e. pairs (x1,x2) where both examples come from the same class and pairs (x1,x2) where they come from different classes. So you use a binary SVM, not one versus rest. You need to find a better justification for using the L2-SVM than \"L2-SVM loss variant is considered to be the best by the author of the paper\"; did you try classical SVMs and find them performing worse? Also, could you motivate your choice of the L1 norm as opposed to L2 in Eq 3?\n\nOn empirical evaluation, I already mentioned that it is impossible to understand what the classification problem on TIMIT is. I suspect it might be speaker identification. So I will focus on the omniglot experiments. \n\nFew-Shot Learning Through an Information Retrieval Lens, Eleni Triantafillou, Richard Zemel, Raquel Urtasun, NIPS 2017 [arxiv July'17]\n\nand the references therein give a few more recent baselines than your table. Some of the results are better than your approach. I am not sure why you do not evaluate on mini-imagenet as well, as most work on few shot learning generally does. This dataset offers a clearer experimental setup than your TIMIT setting and has abundant published baseline results. Also, most work typically uses omniglot as a proof of concept and considers mini-imagenet as a more challenging set. ", "Summary: \nThe paper proposes to pre-train a deep neural network to learn a similarity function and use the features obtained by this pre-trained network as input to an SVM model. The SVM is trained for the final classification task at hand using the last layer features of the deep network. The motivation behind all this is to learn the input features to the SVM as opposed to hand-crafting them, and use the generalization ability of the SVM to do well on tasks which have only a handful of training examples. The authors apply their technique to two datasets, namely, the Omniglot dataset and the TIMIT dataset and show that their model does a reasonable job on these two tasks. \n\nWhile the paper is reasonably clearly written and easy to read I have a number of objections to it. \n\nFirst, I did not see any novel idea presented in this paper. 
Lots of people have tried pre-training a neural network on auxiliary task(s) and using the features from it as input to the final SVM classifier. People have also specifically tried to train a siamese network and use its features as input to the SVM. These works go way back to the years 2005 - 2007, when deep learning was not called deep learning. Unless I have missed something completely, I did not see any novel idea proposed in this paper. \n\nSecond, the experiments are quite underwhelming and do not fully support the superiority claims of the proposed approach. For example, the authors compare their model against rather weak baselines. While the approach (as has been shown in the past) is very reasonable, I would have liked the experiments to be more thorough, with comparison to the state of the art models for the two datasets. \n", "Thanks for the review. \n1. We ran experiments with the contrastive loss and cross-entropy as well. We get the best results if the network is trained with a loss function that is the same as the loss of the classifier used (if it exists).\nIn the OMNIGLOT setup, the same network was used to emphasize the importance of the loss function.\n\n2. To the best of our knowledge, CNN transfer learning does not enjoy the advantage of pairwise data generation, so less data causes a lack of generalization.\n\n3. We agree with your point. This work focuses on linear SVMs, but we tried to give an outlook on the problem in Section 3.3, where we claimed a kernel can be used and the dual problem optimized. This approach requires further research.\n\n\n", "Thanks for the review. In this paper, I wanted to point out that neural networks are not only useful for feature learning, but for kernel learning as well, with proper techniques. Only in the case of a linear SVM is there a shortcut: use the features. \nCan you provide a pointer to the mentioned prior work from 2005-2007?" ]
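As a side note on the L1- versus L2-SVM loss question raised in the detailed review above, here is a minimal NumPy sketch of the two hinge variants for pair labels y in {-1, +1} and scores f; the names are placeholders for illustration, not the paper's notation.

    import numpy as np

    def hinge_l1(f, y):
        # Classic soft-margin hinge loss: max(0, 1 - y f).
        return np.maximum(0.0, 1.0 - y * f)

    def hinge_l2(f, y):
        # L2-SVM (squared hinge): max(0, 1 - y f)^2.
        return np.maximum(0.0, 1.0 - y * f) ** 2

    f = np.array([2.0, 0.5, -0.3])
    y = np.array([1.0, 1.0, 1.0])
    # The squared variant penalizes small margin violations less and large ones more,
    # and is differentiable at the margin, which can matter for gradient-based training.
    print(hinge_l1(f, y), hinge_l2(f, y))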
[ 5, 3, 4, -1, -1 ]
[ 4, 4, 5, -1, -1 ]
[ "iclr_2018_B1EVwkqTW", "iclr_2018_B1EVwkqTW", "iclr_2018_B1EVwkqTW", "B1jQdMSeG", "SybmxPplz" ]
iclr_2018_BkVf1AeAZ
Label Embedding Network: Learning Label Representation for Soft Training of Deep Networks
We propose a method, called Label Embedding Network, which can learn label representation (label embedding) during the training process of deep networks. With the proposed method, the label embedding is adaptively and automatically learned through back propagation. The original one-hot represented loss function is converted into a new loss function with soft distributions, such that the originally unrelated labels have continuous interactions with each other during the training process. As a result, the trained model can achieve substantially higher accuracy with faster convergence speed. Experimental results based on competitive tasks demonstrate the effectiveness of the proposed method, and the learned label embedding is reasonable and interpretable. The proposed method achieves comparable or even better results than the state-of-the-art systems.
rejected-papers
This paper proposes an approach for jointly learning a label embedding and prediction network, as a way of taking advantage of relationships between labels. This general idea is well-motivated, but the specifics of the proposed approach are not motivated or described well. More discussion of relationship with prior work (e.g. other ways of "softening" the softmax) is needed. The authors claim to have state-of-the-art results, but reviewers point out that much better results exist.
train
[ "Hk7pW6HlM", "SyZf4f5gM", "r1zEZ9ief", "By15nh6eG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "The paper proposes to add an embedding layer for labels that constrains normal classifiers in order to find label representations that are semantically consistent. The approach is then evaluated on various image and text tasks.\n\nThe description of the model is laborious and hard to follow. Figure 1 helps but is only referred to at the end of the description (at the end of section 2.1), which instead explains each step without the big picture and loses the reader with confusing notation. For instance, it only became clear at the end of the section that E was learned.\n\nOne of the motivations behind the model is to force label representations to be in a semantic space (where two labels with similar meanings would be nearby). The assumption given in the introduction is that softmax would not yield such a representation, but nowhere in the paper is this assumption verified. I believe that using cross-entropy with softmax should also push semantically similar labels to be nearby in the weight space entering the softmax. This should at least be verified and compared appropriately.\n\nAnother motivation of the paper is that targets are given as 1s or 0s while soft targets should work better. I believe this is true, but there is a lot of prior work on these, such as adding a temperature to the softmax, or using distillation, etc. None of these are discussed appropriately in the paper.\n\nSection 2.2 describes a way to compress the label embedding representation, but it is not clear if this is actually used in the experiments. h is never discussed after section 2.2.\n\nExperiments on known datasets are interesting, but none of the results are competitive with current state-of-the-art results (SOTA), despite what is said in Appendix D. For instance, one can find SOTA results for CIFAR100 around 16% and for CIFAR10 around 3%. Similarly, one can find SOTA results for IWSLT2015 around 28 BLEU. It can be fine to not be SOTA as long as it is acknowledged and discussed appropriately.\n", "This paper proposes a label embedding network method that learns label embeddings during the training process of deep networks. \nPros: Good empirical results.\nCons: There is not much technical contribution. The proposed approach is neither well motivated, nor well presented/justified. The presentation of the paper needs to be improved. \n\n1. Part of the motivation on page 1 does not make sense. In particular, for paragraph 3, if the classification task is just to separate A from B, then (1,0) separation should be better than (0.8, 0.2). \n\n2. Label embedding learning has been investigated in many previous works. The authors however ignored all the existing works on this topic, but enforce label embedding vectors as similarities between labels in Section 2.1 without clear motivation and justification. This assumption is not very natural — though label embeddings can capture semantic information and label correlations, it is not necessary that the label embedding matrix should be m x m and each entry should represent the similarity between a pair of labels. The paper needs to provide a clear rationale/justification for the assumptions made, while clarifying the difference (and reason) from the works in the literature. \n\n3. The proposed model is not well explained. \n(1) Using the objective in eq.(14), how are the embeddings E learned? \n(2) The authors state “In back propagation, the gradient from z2 is kept from propagating to h”. This makes the learning process quite arbitrary under the objective in eq.(14). 
\n(3) The label embeddings are not directly used for the classification (H(y, z’_1)), but rather as an auxiliary part of the objective. How are the test labels decided?\n", "The paper proposes a method which jointly learns the label embedding (in the form of class similarity) and a classification model. While the motivation of the paper makes sense, the model is not properly justified, and I learned very little after reading the paper.\n\nThere are 5 terms in the proposed objective function. There are also several other parameters associated with them: for example, the label temperature of z_2’’ and the parameter alpha in the second-to-last term, etc.\n\nFor all the experiments, the same set of parameters is used, and it is claimed that “the method is robust in our experiment and simply works without fine tuning”. While I agree that a robust and fine-tuning-free model is ideal, 1) this has to be justified by experiment, and 2) showing the experiment with different parameters will help us understand the role each component plays. This is perhaps more important than improving the baseline method by a few points, especially given that the goal of this work is not to beat the state-of-the-art.", "The authors do not mention a similar recent paper:\nhttps://arxiv.org/abs/1609.06693\n" ]
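To illustrate the softmax-temperature prior work that the first reviewer above points to (e.g., distillation-style soft targets), here is a minimal NumPy sketch; the logit and temperature values are illustrative only.

    import numpy as np

    def softmax(z, T=1.0):
        # Temperature-scaled softmax; larger T spreads probability mass across labels.
        z = np.asarray(z, dtype=float) / T
        z = z - z.max()              # subtract the max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    z = np.array([4.0, 1.0, 0.0])
    print(softmax(z, T=1.0))   # close to one-hot
    print(softmax(z, T=4.0))   # softer distribution: related labels receive mass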
[ 4, 4, 3, -1 ]
[ 5, 3, 4, -1 ]
[ "iclr_2018_BkVf1AeAZ", "iclr_2018_BkVf1AeAZ", "iclr_2018_BkVf1AeAZ", "iclr_2018_BkVf1AeAZ" ]
iclr_2018_HJsk5-Z0W
Structured Deep Factorization Machine: Towards General-Purpose Architectures
In spite of their great success, traditional factorization algorithms typically do not support features (e.g., Matrix Factorization), or their complexity scales quadratically with the number of features (e.g., Factorization Machine). On the other hand, neural methods allow large feature sets, but are often designed for a specific application. We propose novel deep factorization methods that allow efficient and flexible feature representation. For example, we enable describing items with natural language with complexity linear in the vocabulary size—this enables prediction for unseen items and avoids the cold start problem. We show that our architecture can generalize some previously published single-purpose neural architectures. Our experiments suggest improved training times and accuracy compared to shallow methods.
rejected-papers
This paper has been withdrawn by the authors.
train
[ "rkSyVFDeG", "Bk0lEg6eG", "Sk6iGkvbG", "SJrX6Hpmz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "This paper proposes to improve the time complexity of factorization machines. Unfortunately, the paper's claim that the FM's time complexity is quadratic in the feature size is wrong. Specifically, the dot product can be computed as (which is linear in the feature size)\n\n(\sum x_i \beta_i)^T (\sum x_i \beta_i) - \sum_i x_i^2 \beta_i^T \beta_i\n\nThe projection of feature groups into one embedded space proposed in the paper can be viewed as another form of representing the same model when the number of groups equals one. When the number of feature groups does not equal one, it corresponds to field-aware factorization machines (FFM).", "The authors introduce a novel model for collaborative filtering. The proposed model combines some of the strengths of factorization machines and of polynomial regression. Another way to understand this model is that it's a feed forward neural network with a specific connection structure (i.e., not fully connected).\n\nThe paper is well written overall and relatively easy to understand. The study seems fairly thorough (both vanilla and cold-start experiments are reported).\n\nOverall the paper feels a little bit incomplete. This is particularly apparent in the empirical study. Given the somewhat limited novelty of the model, the potential impact of this work relies on more convincing experimental results. Here are some suggestions about how to achieve that: \n\n1) Methodically report results for MF, FM, CTR (when meaningful), other strong baselines (maybe SLIM?) and all your methods for all datasets.\n\n2) Report results on well-known CF datasets. Movielens comes to mind.\n\n3) Shed some light on some of the poor CTR results (last paragraph of Section 4.2.2)\n\n4) Explore the models and shed some light on where the gains are coming from.\n\n\nMinor: \n\n- How do you deal with unobserved preferences in the implicit case?\n\n- I found the idea of Figure 1 very good but in its current form I didn't find it particularly insightful (these \"clouds\" are hard to interpret).\n\n- It may also be worth adding this reference when discussing neural factorization:\nhttp://www.cs.toronto.edu/~mvolkovs/nips2017_deepcf.pdf\n", "This paper presents a method for matrix factorization using DNNs. The suggestion is to make the factorization machine (eqn 1) deep, by grouping the features meaningfully (eqn 5), extracting nonlinear features from original inputs (deep-in, eqn 8), and adding additional nonlinearity after computing pairwise interactions (deep-out, eqn 7). From the methodology point of view, such extensions are relatively straightforward. As an example, from the experimental results, it seems the grouping of features is done mostly with domain knowledge (e.g., months of the year) and not learned automatically. The authors claim the proposed method can circumvent the cold-start problem, and presented some experimental results on recommendation systems with text features.\n\nWhile the application problems look quite interesting, in my opinion, the paper needs to make the context and contribution clearer. In particular, there is a huge literature on collaborative filtering, and I believe there is by now sufficient work on collaborative filtering with input features (and possibly dealing with the cold-start problem). I think this paper does not connect very well with that literature. When reading it, at times I felt the main purpose of this paper is to solve the application problems presented in the experimental results, instead of proposing a general framework. 
I suggest the authors demonstrate their method on some well-known datasets (e.g., MovieLens, Netflix), to give the readers an idea of whether the proposed method is indeed advantageous over more classical methods, or whether the success of this paper is mostly due to clever processing of text features using DNNs.\n\nSome detailed comments:\n1. eqn 4 does not indicate any rank-r factors. \n2. some statements do not seem straightforward/justified to me: \n -- the paper uses the word \"inference\" several times without definition\n -- \"if we were interested in interpreting the parameters, we could constrain w to be non-negative ... \". Is this easy to do, and can the authors demonstrate this in their experiments and show interpretable examples?\n -- \"Note that if the dot product is replaced with a neural function, fast inference for cold-start ...\". \n3. the experimental setup seems quite unusual to me: \"since we only observe positive labels, for such tasks in the test set we sample a labels according to the label frequency\". This seems very problematic if most of the entries are not observed. Why can you not use the typical evaluation procedure for collaborative filtering, where you hide some known entries during model training, and evaluate on these entries during test? ", "Dear reviewers,\n\nThank you for your very insightful comments. We are withdrawing this paper and using some of these results to supplement a different paper. \n\nWe are not clicking the \"withdraw\" button yet, as this submission would be de-anonymized immediately.\n\nThanks,\nAuthor" ]
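The linear-time factorization machine identity that the first reviewer above invokes can be checked numerically; here is a minimal NumPy sketch (the names n, k, V, x are illustrative, and the 1/2 factor accounts for counting each feature pair once).

    import numpy as np

    n, k = 200, 16                     # number of features and embedding size (illustrative)
    rng = np.random.default_rng(0)
    V = rng.normal(size=(n, k))        # per-feature embedding vectors (the beta_i above)
    x = rng.normal(size=n)             # feature values

    # Naive O(n^2 k): sum over all feature pairs of <v_i, v_j> x_i x_j.
    naive = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            naive += (V[i] @ V[j]) * x[i] * x[j]

    # Linear O(n k): 0.5 * [ ||sum_i x_i v_i||^2 - sum_i x_i^2 ||v_i||^2 ].
    s = V.T @ x
    fast = 0.5 * (s @ s - np.sum((V ** 2).T @ (x ** 2)))

    assert np.isclose(naive, fast)     # both evaluate the same pairwise interaction term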
[ 3, 4, 4, -1 ]
[ 5, 5, 3, -1 ]
[ "iclr_2018_HJsk5-Z0W", "iclr_2018_HJsk5-Z0W", "iclr_2018_HJsk5-Z0W", "iclr_2018_HJsk5-Z0W" ]
iclr_2018_SyGT_6yCZ
Simple Fast Convolutional Feature Learning
The quality of the features used in visual recognition is of fundamental importance for the overall system. For a long time, low-level hand-designed feature algorithms such as SIFT and HOG have obtained the best results on image recognition. Visual features have recently been extracted from trained convolutional neural networks. Despite the high-quality results, one of the main drawbacks of this approach, when compared with hand-designed features, is the training time required during the learning process. In this paper, we propose a simple and fast way to train supervised convolutional models for feature extraction while still maintaining their high quality. This methodology is evaluated on different datasets and compared with state-of-the-art approaches.
rejected-papers
The paper addresses the training time of CNNs, in the common setting where a CNN is trained on one domain and then used to extract features for another domain. The paper proposes to speed up the CNN training step via a particular proposed training schedule with a reduced number of epochs. Training time of the pre-trained CNN is not a huge concern, since this is only done once, but optimizing training schedules is a valid and interesting topic of study. However, the approach here does not seem novel; it is typical to adjust training schedules according to the desired tradeoff between training time and performance. The experimental validation is also thin, and the writing needs improvement.
val
[ "SyWq1JvlM", "rk0P3FdeM", "rJUMQfpeM", "rJXa_V5Zz", "r1L6779WM", "ByZqDFKWG", "HkTny32gM", "Bkou3Wjez" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "public" ]
[ "This paper deals with early stopping but the contributions are limited. This work would better fit a workshop as a preliminary result; furthermore, it is too short. A short review follows, section by section.\n\nIntro: The name SFC is misleading as the method consists of stopping the training early with an optimized learning schedule scheme. Furthermore, the work is not compared to the appropriate baselines.\n\nProposal: The first motivation is not clear. The training time of the feature extractor has never been a problem for transfer learning tasks, for example: once it is trained, you can reuse the architecture in a wide range of tasks. Besides, the training time of a CNN on CIFAR10 or even ImageNet is now quite small (for reasonable architectures), which allows fast benchmarking.\nThe second motivation, w.r.t. IB, seems interesting but this should be empirically motivated (e.g., figures) in subsection 2.1, and this is not done.\n\nSection 3 is quite long and could be compressed to improve the relevance of this experimental section. All the accuracies (unsup dict, unsup, etc.) on CIFAR10/CIFAR100 are reported from the paper (Oyallon & Mallat, 2015), ignoring 2-3 years of research that has led to new numerical results. Furthermore, this supervised technique is only compared to unsupervised or predefined methods, which is not fair, and the training time of the Scattering Transform is not reported, for example. \n\nFinally, extracting features is mainly useful on ImageNet (for realistic images) and this is not reported here.\n\nI believe re-thinking new learning rate schedules is interesting; however, I recommend the rejection of this paper.", "This paper proposes a fast way to learn convolutional features that can later be used with any classifier. The acceleration of the training comes from a reduced number of training epochs and a specific decay schedule for the learning rate. \nIn the evaluation the features are used with support vector machines (SVMs) and extreme learning machines on the MNIST and CIFAR10/100 datasets.\n\nPros:\nThe paper compares different classifiers on three datasets.\n\nCons:\n- Considering an adaptive schedule of the learning decay is common practice in modern machine learning. Showing that by varying the learning rate the authors can reduce the number of training epochs and still obtain good performance is not a contribution, and it is actually implemented in most of the recent deep learning libraries, like Keras or PyTorch.\n- It is not clear why, once a CNN has been trained, one should want to change the last layer and use an SVM or other classifiers.\n- There are many spelling errors.\n- Comparing CNN-based methods with hand-crafted features as in Fig. 1 and Tab. 3 is not interesting anymore. It is well known that CNN features are much better if enough data is available.\n", "I am not sure how to interpret this paper. The paper seems to be very thin technically, unless I missed some important details. Two proposals in the paper are:\n\n(1) Using a learning rate decay scheme that is fixed relative to the number of epochs used in training, and \n(2) Extracting the penultimate layer output as features to train a conventional classifier such as an SVM.\n\nI don't understand why (1) differs from other approaches, in the sense that one cannot simply reduce the number of epochs without hurting performance. And for (2), it is a relatively standard approach in utilizing CNN features. 
Essentially, if I understand correctly, this paper is proposing to prematurely stop training and use the intermediate features to train a conventional classifier (which is not that far from the softmax classifier that CNNs usually use). I fail to see how this would lead to superior performance compared to conventional CNNs.", "Dear Reviewer,\n\nIn the following lines, we try to clarify your doubts.\n\nYou wrote: \"I don't understand why (1) differs from other approaches, in the sense that one cannot simply reduce the number of epochs without hurting performance.\"\n\nTransfer learning and domain adaptation are essential in machine learning. Indeed, in some situations, we have a small image dataset which cannot be used to train a Convolutional Neural Network (CNN) thoroughly. In such cases, we can either use hand-designed features or extract features from a CNN pretrained on a large dataset. \n\nDespite being a standard approach today, extracting features from a CNN to perform transfer learning or domain adaptation has a significant drawback when compared with using hand-designed features: the training time required. Indeed, hand-designed features do not need to be trained and are thus immediately available. Hence, despite usually providing higher quality features (when a large dataset is available), extracting features from a CNN takes much more time than using directly available engineered features. \n\nTherefore, this work aims to show that we can significantly mitigate this drawback by showing that it is possible to dramatically reduce the training time required to pretrain a CNN without significantly affecting the quality of the generated features.\n\nHence, we propose a method that is very efficient in considerably reducing the training time needed to extract features from a CNN, with minor impact on the quality of the generated features. \n\nTrading a significant training time reduction for a small decrease in feature quality reduces the above-mentioned drawback of using CNN feature extraction rather than hand-designed features. \n\nThe proposed approach, despite being simple, innovates in showing that a learning schedule that is aware of the available training time can maximize its use with minor performance loss. In other words, we propose a simple way to produce high-quality features given a time constraint.\n\nYou wrote: \"And for (2), it is a relatively standard approach in utilizing CNN features. Essentially, if I understand correctly, this paper is proposing to prematurely stop training and use the intermediate features to train a conventional classifier (which is not that far from the softmax classifier that CNNs usually use). I fail to see how this would lead to superior performance compared to conventional CNNs.\"\n\nI believe the previous explanation clarifies this point. The objective is not to produce better performance than would be possible if more time were available. 
The aim is to show that the proposed method can provide almost the best possible feature quality in a small fraction of the time it would require to thoroughly train the CNN in order to get the very best possible feature quality.", "Dear anonymous,\n\nPlease be more specific instead of saying that \"we had to make numerous assumptions since the paper lacks certain information about the setup of the SFC and more\".\n\nFor example, you said that \"Although not entirely clear which layout is used where, we assumed that LeNet-5 was used to represent SFC for MNIST dataset and VGG19 for CIFAR datasets\". However, the paper is very clear in saying that \"For MNIST dataset, we are using as baseline model the LeNet5 (LeCun et al., 1998)\". Moreover: \"The CIFAR-10 and CIFAR-100 dataset were trained with an adapted Visual Geometry Group (Simonyan & Zisserman, 2014) type model\".\n\nRegarding preprocessing, for both MNIST and CIFAR10/100, standard mean-std normalization was used. Moreover, for CIFAR10/100, we also performed a random horizontal flip with probability 0.5. We will update the paper with this information.\n\nWe looked at your code, and it appears that you are not using batch normalization for LeNet5, but we used it in our experiments. In the paper, we wrote: \"For MNIST dataset, we are using as baseline model the LeNet5 (LeCun et al., 1998). Our modifications to the original model were changing the activation function to ReLU and adding the batch\nnormalization to the convolutional layers.\" \n\nRegarding VGG19, you appear to use dropout (p=0.5), even though we write in the paper that \"We designed the system with nineteen layers and batch normalization but without dropout (VGG19)\". Besides, you are using the original VGG19 variant used for ImageNet, which has three fully connected layers, instead of the VGG19 variation regularly used with CIFAR10/100, which has just one fully connected layer. Please, for example, visit https://github.com/kuangliu/pytorch-cifar/blob/master/models/vgg.py\n\nFinally, you need to pay attention to this line of the paper: \" In both cases, to extract for each image a 256 linear feature vector, the last layer before the full connected classifier was changed to present 256 nodes\", as your code does not appear to follow this instruction.\n\nRegarding the comment \"some hyperparameters were not mentioned in the paper (such as number of iterations per epoch for VGG19)\", we would like to clarify that the number of iterations per epoch is deterministically determined by the size of the training set and the batch size. Both pieces of information are in the paper for both MNIST and CIFAR10/100. Therefore, your code should be corrected to use 60000/128=469 for MNIST and 50000/128=391 for CIFAR10/100.\n\nWe are using an NVidia 980 Ti, which has 6GB of RAM. If you are using a card with less memory or fewer cores, this is probably the reason why your experiments are slower than ours.\n\nAgain, we ask you to be more specific instead of writing \"Overall the paper does not provide us with enough information on the setup and architectures to reliably reproduce the results observed by the authors\".", "The paper under review investigates the extraction of features from images for recognition and classification purposes. The authors of the paper propose a method to simplify and increase the speed of feature extraction using convolutional models while pointing out that the drawback to this is the time required to train a CNN. 
Therefore, they also propose a scheduling technique in order to accelerate the training process while maintaining the performance level.\n\nIn order to reproduce their results, we had to make numerous assumptions since the paper lacks certain information about the setup of the SFC and other details. Although it is not entirely clear which architecture is used where, we assumed that LeNet-5 was used to represent the SFC for the MNIST dataset and VGG19 for the CIFAR datasets. For the datasets, we assumed no preprocessing was done and that the default train / test splits were used.\n\nWe were able to reproduce most of the results for the two datasets that we tried to verify (i.e. MNIST and CIFAR-10); however, we could not confirm some outcomes that were obtained by the authors. Most notably, it seems that the SFC scheduling does not work very well with the LeNet-5 CNN, since we experienced a sudden drop in accuracy at around epoch 50 and therefore the final accuracy for SFC100 and CNN0.1 was only 10%. We ran this test 10 times and this behavior was exhibited in every round. Regarding the training times claimed by the authors, our experiments took roughly twice as long for the MNIST dataset, which could be explained by less computational power at hand; however, the CIFAR-10 training times we saw were almost 5 times slower. Since we did not have access to the original code that the authors used and some hyperparameters were not mentioned in the paper (such as the number of iterations per epoch for VGG19), this could serve as an explanation for the prolonged training times and the irreproducibility of the MNIST experiments.\n\nOverall, the paper does not provide us with enough information on the setup and architectures to reliably reproduce the results observed by the authors. For more details on the code that we have used, visit https://github.com/lgatting/AML2017-Assignment-4.", "First of all, thank you for your interest in our work. \n\nRegarding your questions, we have used the default split for the MNIST dataset. Regarding the SVM, we have used the multiclass variant. \n\nAs stated in the paper, we confirm that we have used ReLU in our adapted LeNet5 model. The libraries and frameworks that we used were NumPy, scikit-learn, Pandas, and PyTorch. \n \nBest regards.\n", "We’re trying to reproduce your work and we found certain ambiguities which we’d like to have clarified.\n\nYou have mentioned that for the MNIST dataset, you used the 60k – 10k split; however, it is not clear whether you used the default split or created your own (e.g. merged all data, shuffled them and split them).\n\nWe are also not sure how you used the SVM for classifying samples that belong to 10 different classes – did you use a one-vs-all approach or multiclass classification?\n\nRegarding the underlying structure of the SFC, we assume that LeNet5 with ReLU activation units is used, since it is the only one mentioned in the paper that is not used for the baseline classifiers – is this assumption correct?\n\nCould you also provide us with some additional information regarding which (if any) libraries or frameworks have been used for constructing the learning algorithms? Alternatively, if you happen to have the code publicly available, it would be helpful if you could direct us to it.\n\nThank you for the answers in advance.\n" ]
[ 3, 3, 2, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SyGT_6yCZ", "iclr_2018_SyGT_6yCZ", "iclr_2018_SyGT_6yCZ", "rJUMQfpeM", "ByZqDFKWG", "iclr_2018_SyGT_6yCZ", "Bkou3Wjez", "iclr_2018_SyGT_6yCZ" ]
iclr_2018_r1cLblgCZ
Recurrent Auto-Encoder Model for Multidimensional Time Series Representation
The recurrent auto-encoder model can summarise sequential data through an encoder structure into a fixed-length vector and then reconstruct it into its original sequential form through the decoder structure. The summarised information can be used to represent time series features. In this paper, we propose relaxing the dimensionality of the decoder output so that it performs partial reconstruction. The fixed-length vector can therefore represent features only in the selected dimensions. In addition, we propose using a rolling fixed-window approach to generate samples. The change of time series features over time can be summarised as a smooth trajectory path. The fixed-length vectors are further analysed through additional visualisation and unsupervised clustering techniques. This proposed method can be applied in large-scale industrial processes for sensor-signal analysis purposes, where clusters of the vector representations can be used to reflect the operating states of selected aspects of the industrial system.
rejected-papers
This paper applies a form of recurrent autoencoder for a specific type of industrial sensor signal analysis. The application is very narrow and the data set is proprietary. The approach is not clearly described, but seems very straightforward and is not placed in context of prior work. It is therefore not clear how to evaluate the contribution of the method. The authors have revised the paper to include more details and prior work, but it still needs a lot more work on all of the above dimensions before it can make a significant contribution to the ICLR community.
train
[ "HJI6Rf1eG", "r1CkPdteG", "BkPl4O9xM", "S1XLye3QG", "SkLr9127f", "HkHE9y27G", "B1oGcJ27M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This writeup describes an application of recurrent autoencoder to analysis of multidimensional time series. The quality of writing, experimentation and scholarship is clearly below than what is expected from a scientific article. The method is explained in a very unclear way, there is no mention of any related work. I would encourage the authors to take a look at other ICLR submissions and see how rigorously written they are, how they position the reported research among comparable works. ", "The paper describes a sequence to sequence auto-encoder model which is used to learn sequence representations. The authors show that for their application, better performance is obtained when the network is only trained to reconstruct a subset of the data measurements. The paper also presents some visualizations the similarity structure of the learned representations and proposes a window-based method for processing the data.\n\nAccording to the paper, the experiments are done using a data set which is obtained from measurements of an industrial production process. Figure 2 indicates that reconstructing fewer dimensions of this dataset leads to lower MSE scores. I don’t see how this is showing anything besides the obvious fact that reconstructing fewer dimensions is an easier task than reconstructing all of them. The only conclusions I can draw from the visual analysis is that the context vectors are more similar to each other when they are obtained from time steps in the data stream which are close to each other. Since the paper doesn’t describe much about the privately owned data at all, there is no possibility to replicate the work. The paper doesn’t frame the work in prior research at all and the six papers it cites are only referred to in the context of describing the architecture.\n\nI found it very hard to distil what the main contribution of this work was according to the paper. There were also not many details about the precise architecture used. It is implied that GRU networks and were used but the text doesn’t actually state this explicitly. By saying so little about the data that was used, it was also not clear what the temporal correlations of the context vectors are supposed to tell us. \n\nThe paper describes how existing methods are applied to a specific data set. The benefit of only reconstructing a subset of the input dimensions seems very data specific to me and I find it hard to consider this a novel idea by itself. Presenting sequential data in a windowed format is a standard procedure and not a new idea either. All in all I don't think that the paper presents any new ideas or interesting results.\n\nPros:\n* The visualizations look nice.\n\nCons:\n* It is not clear what the main contribution is.\n* Very little information about the data. \n* No clear experiments from which conclusions can be drawn.\n* No new ideas.\n* Not well rooted in prior work.\n", "This paper proposes a strategy that is inspired by the recurrent auto-encoder model, such that clustering of multidimensional time series data can be performed based on the context vectors generated by the encoding process. Unfortunately, the paper in its current form is a bit thin on content.\n \nMain issues:\nNo related works (such as those using RNN for time series analysis or clustering of time series data streams etc.) were described by the paper, no baselines were used in the comparison evaluations, and no settings/details were provided in the experiment section. 
As a result, it is quite difficult to judge the merits and novelty of the paper.\n \nOther issues:\nSome contribution claims highlighted in the Discussion Section, i.e., Section 4, are arguable and should be further extended. For example, the authors claim that the proposed LSTM-based autoencoder networks can be natively scaled up to data with very high dimensionality. I would like the authors to explain this in more detail or empirically demonstrate it, since an LSTM-based model could be computationally expensive. As another example, the authors claim that reducing the dimensionality of the output sequence is one of the main contributions of the paper. In this sense, further elaborations from that perspective would be very beneficial since some networks already employ such a mechanism. \n\nIn short, the paper in its current form does not provide sufficient details for the reviewer to judge its merits and contributions.", "Happy new year 2018 to everybody. We have updated the paper and here are the highlights:\n\n-- Added a detailed description of the problem, including a process graph in the appendix (a large compressor at a natural gas terminal)\n\n-- Added a dataset description and the names of the sensors used\n\n-- Repeated the experiment with a different configuration to further illustrate the idea of partial reconstruction and to ensure model robustness. Results are visualised graphically as the second example in the main text.\n\n-- Referenced related works: both non-NN (DTW) and NN-based approaches. DTW works are dominated by unidimensional time series and wearable sensors, where the dimensionality remains quite low. In the NN world, related research is dominated by well-labelled audio/video data, where application use cases like ours are underrepresented. The closest one is about gesture recognition (recurrent autoencoder) but only very few sensors were involved. \n\n-- We totally recognise that reconstructing a subset of the original sequence is indeed a much easier task. The key benefit is that it allows operators to diagnose different aspects of the machine rather than the machine as a whole. This is a practical benefit at the use-case level.\n\n-- Most related works use well-defined, bounded and labelled time series datasets. In this study we focused on an unbounded and unlabelled time series dataset, which is admittedly more common in the real world. (The cost of collecting labelled time series is high and, more importantly, labelling is a subjective, manual exercise.) By empirically demonstrating that unbounded/unlabelled time series data can be effectively summarised by vector representations in an online scenario, and that clustering algorithms can be applied to them, the method offers a useful practical tool for diagnostics and maintenance.", "Thanks for your review. \n\nWe've added the description of the dataset to the updated paper (e.g. graphs, sensor names, locations, etc.). Also added a more detailed description of the model.", "Thanks for your review.\n\nWe've added a detailed description of the dataset in the updated paper. It is sourced from a large compressor unit and the names of all sensors are also attached in the appendix. \n\nFor the related works, we've expanded this section a lot. Traditional methods based on DTW (non-NN) have been cited in the paper. More recently published NN-based models, such as auto-encoder applications relating to time series, video and audio, have been added too.
We have not found any RNN auto-encoder research relating to large-scale industrial sensors like the ones we showcase. The closest are those relating to wearable sensors (accelerometers + gyroscopes, also cited in the updated paper), but the number of dimensions involved is much smaller.\n\nThe benefit of partial reconstruction is not mathematical but purely practical (I also explained this for another reviewer): reconstructing a selected subset of sensors allows the vector representation to reflect the underlying states of various aspects of the industrial system. Practically, engineers and operators don't just need to know whether a machine has failed. Instead, they need to know which part of the machine failed and what kind of failure mode occurred (clusters can be found in the PCA space of the context vectors, which reflect different operating states). This is useful for operators because the raw data is completely unlabelled; by using the proposed approach they can identify the cluster of the current context vector in an on-line setting, and then deduce the operating state of the machine, which helps with diagnostics/maintenance.\n\nThe model uses an encoder/decoder with three layers. LSTM neurons were used; this was briefly mentioned in a small figure in the original paper, but we have now adopted your comment and stated it much more clearly in the main text (a hedged sketch of this setup is given after this record).", "Thanks for your review. \n\nA more detailed description of the problem and dataset was added (both in-text and in the appendix) to the updated paper. We source the sensor data from a large-scale industrial compressor situated at a natural gas terminal. A summary diagram was also added to provide a better understanding of the problem.\n\nSecondly, we've added related works (both NN-based algorithms and non-NN works in this area too). We also found that most of the previous related works were about wearable sensors, where the number of sensor measurements is relatively low compared with the use case we present in this paper. One contribution of the study is that we applied the recurrent auto-encoder model to multidimensional time series where the dimensionality is quite high (hundreds of sensors at a large-scale industrial gas compressor) and empirically demonstrated that the vector representation can effectively summarise multidimensional sequences.\n\nThe model specification was also added. The encoder/decoder have three RNN layers with LSTM neurons and a hidden dimension of 400.\n\nWe have also repeated the same experiment with a different configuration to demonstrate the robustness of the proposed approach. The graphical visualisation is added to the main text as example two.\n\nPartial reconstruction of the original sequence is beneficial for two very simple reasons: (1) it is an easier task; and (2) for large-scale industrial processes it is often very hard to diagnose where a problem comes from. A complete auto-encoder, where the full input is reconstructed through the RNN decoder, would simply summarise the entire industrial process. Instead, partial reconstruction allows operators to focus on selected aspects of the process (e.g. pressure issues, temperature deviations, etc.) and therefore obtain different clusters that reflect diagnostics of different aspects of the industrial system." ]
[ 2, 2, 4, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_r1cLblgCZ", "iclr_2018_r1cLblgCZ", "iclr_2018_r1cLblgCZ", "iclr_2018_r1cLblgCZ", "HJI6Rf1eG", "r1CkPdteG", "BkPl4O9xM" ]
iclr_2018_H1uP7ebAW
Learning to diagnose from scratch by exploiting dependencies among labels
The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures. Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies. Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset. This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples -- ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks. We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training. Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.
rejected-papers
Authors apply DenseNets and an LSTM to model dependencies among labels and demonstrate new state-of-the-art performance on an X-ray dataset. Pros: - Well written. - New improvement to the state of the art Cons: - Novelties are not strong. One combination of existing approaches is used to achieve state-of-the-art results on what is still a relatively new dataset. (All Reviewers) - Using an LSTM to model dependencies would be affected by the selected order of the disease states. In this sense, the LSTM seems like the wrong architecture to use to model dependencies among labels. This may be a drawback in comparison to other methods of modeling dependencies, but this is not thoroughly discussed or evaluated. (Reviewer 1 & 3) - There is a large body of work on multi-task learning with shared information, which has not been evaluated for comparison. Because of this, the contribution of the LSTM to model dependencies between labels in comparison to other available approaches cannot be verified. (Reviewer 1 & 3) - Top AUC performance on this dataset does not carry much significance on its own, as the dataset is new (CVPR 2017), and few approaches have been tested against it. - Medical literature is not cited to justify with evidence the discovered dependencies among disease states. (Reviewer 1)
train
[ "HJZ2MKRbM", "S1KuIB5gz", "BkE5LPlZG", "SyAfEqafG", "BJoLLcTMM", "BkpwSqpMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public" ]
[ "The paper proposes to combine the recently proposed DenseNet architecture with LSTMs to tackle the problem of predicting different pathologic patterns from chest x-rays. In particular, the use of LSTMs helps take into account interdependencies between pattern labels. \n\nStrengths:\n- The paper is very well written. Contextualization with respect to previous work is adequate. Explanations are clear. Novelties are clearly identified by the authors.\n- Quantitative improvement with respect to the state the art. \n\nWeaknesses:\n- The paper does not introduce strong technical novelties -- mostly, it seems to apply previous techniques to the medical domain. It could have been interesting to know if there are more insights / lessons learned in this process. This could be of interest for a broader audience. For instance, what are the implications of using higher-resolution images as input to DenseNet / decreasing the number of layers? How do the features learned at different layers compare to the ones of the original network trained for image classification? How do features of networks pre-trained on ImageNet, and then fine-tuned for the medical domain, compare to features learned from medical images from scratch? \n- The impact of the proposed approach on medical diagnostics is unclear. The authors could better discuss how the approach could be adopted in practice. Also, it could be interesting also to discuss how the results in Table 2 and 3 compare to human classification capabilities, and if that performance would be already enough for building a computer-aided diagnosis system.\n\nFinally -- is it expected that the ordering of the factorization in Eq. 3 does not count much (results in Table 3)? As a non-expert in the field, I'd expect that ordering between pathologic patterns matters more.", "This paper presents an impressive set of results on predicting lung pathologies from chest x-ray images. \nAuthors present two architectures: one based on denseNet, and one based on denseNet + LSTM on output dimensions (i.e. similar to NADE model), and compare it to state of the art on the chest x-ray classification. Experiments are clearly described and results are significantly better compared to state of the art.\n\nThe only issue with this paper is, that their proposed method, in practice is not tractable for inference on estimating probability of a single output, a task which would be critical in medical domain. Considering that their paper is titled as a work to use \"dependencies\" among labels, not being able to evaluate their network's, and lack of interpretable evaluation results on this model in the experiment section is a major limitation. \n\nOn the other hand, there are many alternative models where one could simply use multi-task learning and shared parameter, to predict multiple outcomes extremely efficiently. To be able to claim that this paper improved the prediction by better modeling of 'dependencies' among labels, I would need to see how the (much simpler) multi-task setting works as well. 
\n\nThat said, the paper has several positive aspects in all areas:\n\nOriginality - the paper presents the first combination of DenseNets with LSTM-based output factorization.\nWriting clarity - the paper is very well written and clear.\nQuality - (apart from the missing multi-task baseline) the results are significantly better than the state of the art, and experiments are well done.\nSignificance - Apart from the issue of intractable inference, which is arguably a large limitation of this work, the application in the medical field is significant. \n\n", "Well written and appropriately structured. Well within the remit of the conference.\nNot much technical novelty to be found, but the original contributions are adequately identified and they are interesting on their own.\n\nMy main concern (and complaint) is not technical, but application-based. This study is (unfortunately) typical in that it focuses on and provides detail of the technical modeling issues, but ignores the medical applicability of the model and results. This is exemplified by the fact that the data set is hardly described at all and the 14 abnormalities/pathologies, the rationale behind their choice and the possible interrelations and dependencies are never described from a medical viewpoint. If I were a medical expert, I would not have a clue about how these results and models could be applied in practice, or about what medical insight I could achieve.\n\nThe bottom line seems to be: \"my model and approach works better than the other guys' model and approach\", but one is left with the impression that these experiments could have been made with other data, other problems, other fields of application and they would not have changed much ", "\"The paper does not introduce strong technical novelties -- mostly, it seems to apply previous techniques to the medical domain.\"\n\nThe NADE-LSTM hybrid model has not previously been explored. Leveraging label dependencies has been ignored in previous work on medical diagnosis. We carefully designed such models to address it while taking into account application-specific constraints (e.g. Section 1 (especially paragraph 2), Section 3.1, Section 3.3.1 and Section 5). In addition, clinically relevant metrics are proposed, analyzed (see Section 4.2) and measured (see Section 4.4), as opposed to conventional machine learning metrics that are hard for medical practitioners to interpret clinically. We believe all of the above contributions are novel.\n\n\"It could have been interesting to know if there are more insights / lessons learned in this process. This could be of interest for a broader audience. For instance, what are the implications of using higher-resolution images as input to DenseNet / decreasing the number of layers? How do the features learned at different layers compare to the ones of the original network trained for image classification? How do features of networks pre-trained on ImageNet, and then fine-tuned for the medical domain, compare to features learned from medical images from scratch?\"\n\nThank you for the suggestions. The central idea is to exploit the label dependencies, which we have focused on extensively. For other non-central design choices (such as the architectural choice of DenseNets and the use of fine-tuning), the reasoning has been stated while leaving quantitative evaluation for future study.
Regarding your point about pre-trained models: Table 2 shows the diminishing benefits of using pretrained out-of-domain models once a large number of in-domain training pairs is available, as the previous SOTA relies on a model pretrained on ImageNet. For both pretraining and fine-tuning, lower resolution (224*224, as opposed to 512*512 in our work) and out-of-domain bias (from ImageNet, very much different from medical images) could possibly account for the difference. As you mentioned, fine-tuning might be a reasonable middle ground. However, as has been recently observed, fine-tuning has an inherent issue of “catastrophic forgetting” and needs to be handled with care.\n\n\"The impact of the proposed approach on medical diagnostics is unclear. The authors could better discuss how the approach could be adopted in practice. Also, it could be interesting to discuss how the results in Tables 2 and 3 compare to human classification capabilities, and whether that performance would already be enough for building a computer-aided diagnosis system.\"\n\nThe medical use case that motivated our research was the automated prediction of all (modeled) abnormalities that are present in a chest x-ray image. While the performance of the proposed model represented an improvement over the previous state of the art, the accuracy certainly falls short of human performance. In practice, however, the predictions are probably good enough to find utility in triage applications or as a second-read diagnostic aid. Measurement of the clinical impact, as well as a comparison to human performance, will have to wait for future studies designed to collect and analyze the required data in a standard randomized clinical trial. \n\n\"Finally -- is it expected that the ordering of the factorization in Eq. 3 does not count much (results in Table 3)? As a non-expert in the field, I'd expect that ordering between pathologic patterns matters more.\"\n\nJust to be clear, this is an empirical question. In theory, all orderings of the factorization $p(y\mid x)=\prod_j p(y_j\mid y_{<j}, x)$ are equivalent, but in practice they might differ because the factors are parameterized by models (a hedged sketch of this autoregressive decoder is given after this record). Based on our experiments, however, the ordering of the factorization does not significantly impact the results. This phenomenon has been consistently observed in other publications that use NADEs (e.g. Iterative Neural Autoregressive Distribution Estimator, NIPS 2014). In practice, one could average models trained with different orderings. Model averaging is not the central point of this work.\n", "\"The only issue with this paper is that their proposed method, in practice, is not tractable for inference when estimating the probability of a single output, a task which would be critical in the medical domain. Considering that the paper is framed as a work that exploits \"dependencies\" among labels, not being able to evaluate the network's marginal predictions, and the lack of interpretable evaluation results for this model in the experiment section, is a major limitation.\"\n\nThe reviewer is correct that estimating the marginal probability of a given abnormality using the proposed model would be computationally expensive. We do not consider this a major limitation because the motivation for our research was to model events (i.e. the abnormalities) that occur together. By definition, the marginal probability describes each event in isolation. Section 1 of the paper provides a few specific examples of abnormalities that domain experts know to be dependent.
The proposed model attempts to capture such dependencies by describing the joint distribution of abnormalities. The medical use case that we intended to support is for the model to predict all abnormalities that are present in the image, and the joint probability quantifies the confidence of the prediction as a whole. That being said, other metrics such as sensitivities and specificities remain accessible (Section 4.2) for individual abnormalities.\n\n\"On the other hand, there are many alternative models where one could simply use multi-task learning and shared parameters to predict multiple outcomes extremely efficiently. To be able to claim that this paper improved the prediction by better modeling of 'dependencies' among labels, I would need to see how the (much simpler) multi-task setting works as well.\"\n\nThe baseline model (model_{a}) described in our paper represents the standard multi-task learning approach in which the encoder parameters are shared across classes and the decoder uses class-specific output layers. The alternative models (model_{b1} and model_{b2}) employ a comparable encoder architecture but a recurrent decoder. Our claim that modeling dependencies improved the predictions is based entirely on the comparison you have suggested.", "Thank you for taking the time to review our manuscript. Much effort has been devoted to making this paper digestible for both machine learning researchers and medical practitioners. We are surprised that you have found none of it useful. For instance, the rationale behind our modelling choices is carefully phrased from the medical point of view at length in Section 1 (especially paragraph 2), Section 3.1, Section 3.3.1, Section 4.2 and Section 5. In fact, wherever relevant, we have made an effort to motivate and justify the modeling decisions with the medical application in question, although there might still be places where the narrative could be improved to draw a deeper connection. Regarding your point about the dataset, Wang et al. (2017) (the paper introducing the dataset to the community) has thorough descriptions of its curation, context, and label distribution. That being said, we agree with you that this is not, strictly speaking, a medicine-oriented publication that is meant to guide clinical practice, as rigorous work in that category would require conducting a randomized clinical trial in the hospital environment to systematically measure its clinical outcome." ]
[ 6, 6, 6, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1 ]
[ "iclr_2018_H1uP7ebAW", "iclr_2018_H1uP7ebAW", "iclr_2018_H1uP7ebAW", "HJZ2MKRbM", "S1KuIB5gz", "BkE5LPlZG" ]
iclr_2018_ryserbZR-
Classification and Disease Localization in Histopathology Using Only Global Labels: A Weakly-Supervised Approach
Analysis of histopathology slides is a critical step for many diagnoses, and in particular in oncology where it defines the gold standard. In the case of digital histopathological analysis, highly trained pathologists must review vast whole-slide-images of extreme digital resolution (100,000^2 pixels) across multiple zoom levels in order to locate abnormal regions of cells, or in some cases single cells, out of millions. The application of deep learning to this problem is hampered not only by small sample sizes, as typical datasets contain only a few hundred samples, but also by the generation of ground-truth localized annotations for training interpretable classification and segmentation models. We propose a method for disease localization in the context of weakly supervised learning, where only image-level labels are available during training. Even without pixel-level annotations, we are able to demonstrate performance comparable with models trained with strong annotations on the Camelyon-16 lymph node metastases detection challenge. We accomplish this through the use of pre-trained deep convolutional networks, feature embedding, as well as learning via top instances and negative evidence, a multiple instance learning technique from the field of semantic segmentation and object detection.
rejected-papers
Authors present a method for disease classification and localization in histopathology images. Standard image processing techniques are used to extract and normalize tiles of tissue, after which features are extracted from pretrained networks. A 1-D convolutional filter is applied to the bag of features from the tiles (along the tile dimension, with kernel size equal to the dimensionality of the feature vector). The max R and min R values are kept as input to a neural network for classification, and thresholding of these values provides localization for disease / non-disease. Pro: - Potential to reduce annotation complexity of datasets while producing predictions and localization Con: - Results are not great. If anything, results re-affirm why strong annotations are necessary. - Several reviewer concerns regarding the novelty of the proposed method. While authors have made clear the distinctions from prior art, the significance of those changes is debated. Given the current pros/cons, the committee feels the paper is not ready for acceptance in its current form.
test
[ "S1O8uhkxf", "SkWQLvebf", "Bk72o4NWM", "BkH_ar6XM", "Byy2THTQG", "BJDr6HaXM", "BJ8riS6Qf", "Hy1koS67M", "rkmE5raXf", "r1WptBaXz", "rkjIwbqxM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "This paper describes a semi-supervised method to classify and segment WSI histological images that are only labeled at the whole image level. Images are tiled and tiles are sampled and encoded into a feature vector via a ResNET-50 pretrained on ImageNET. A 1D convolutional layer followed by a min-max layer and 2 fully connected layer compose the network. The conv layer produces a single value per tile. The min-max layer selects the R min and max values, which then enter the FC layers. A multi-instance (MIL) approach is used to train the model by backpropagating only instances that generate min and max values at the min-max layer. Experiments are run on 2 public datasets achieving potentially top performance. Potentially, because all other methods supposedly make use of segmentation labels of tumor, while this method only uses the whole image label.\n\nPrevious publications have used MIL training on tiles with only top-level labels [1,2] and this is essentially an incremental improvement on the MIL approach by using several instances (both min-negative and max-positive) instead of a single instance for backprop, as described in [3]. So, the main contribution here, is to adapt min-max MIL to the histology domain. Although the result are good and the method interesting, I think that the technical contribution is a bit thin for a ML conference and this paper may be a better fit for a medical imaging conference.\n\nThe paper is well written and easy to understand. \n\n\n\n[1] Hou, L., Samaras, D., Kurc, T. M., Gao, Y., Davis, J. E., & Saltz, J. H. (2016). Patch-based convolutional neural network for whole slide tissue image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2424-2433).\n[2] Cosatto, E., Laquerre, P. F., Malon, C., Graf, H. P., Saito, A., Kiyuna, T., ... (2013). Automated gastric cancer diagnosis on H&E-stained sections; training a classifier on a large scale with multiple instance machine learning. Medical Imaging, 2. 2013.\n[3] Durand, T., Thome, N., & Cord, M. (2016). Weldon: Weakly supervised learning of deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4743-4752).", "This paper proposes a deep learning (DL) approach (pre-trained CNNs) to the analysis of histopathological images for disease localization.\nIt correctly identifies the problem that DL usually requires large image databases to provide competitive results, while annotated histopathological data repositories are costly to produce and not on that size scale.\nIt also correctly identifies that this is a daunting task for human medical experts and therefore one that could surely benefit from the use of automated methods like the ones proposed.\n\nThe study seems sound from a technical viewpoint to me and its contribution is incremental, as it builds on existing research, which is correctly identified.\nResults are not always too impressive, but authors seem intent on making them useful for pathogists in practice (an intention that is always worth the effort).\nI think the paper would benefit from a more explicit statement of its original contributions (against contextual published research)\n\nMinor issues:\nRevise typos (e.g. title of section 2)\nPlease revise list of references (right now a mess in terms of format, typos, incompleteness", "The authors approach the task of labeling histology images with just a single global label, with promising results on two different data sets. 
This is of high relevance given the difficulty in obtaining expert-annotated data. At the same time, the key elements of the presented approach remain identical to those in a previous study; the main novelty is to replace the final step of the previous architecture (that averages across a vector) with a multilayer perceptron. As such I feel that this would be interesting to present if there is interest in the overall application (and the results of the 2016 CVPR paper), but not necessarily as a novel contribution to MIL and histology image classification.\n\nComments to the authors:\n\n* The intro starts from a very high clinical level. An introduction that points out specifics of the technical aspects of this application, the remaining technical challenges, and the contribution of this work might be appreciated by some of your readers.\n* There is preprocessing that includes feature extraction, and part of the algorithm that includes the same feature extraction. This is somewhat confusing to me and maybe you want to review the structure of the sections. You are telling us you are using the first layer (P=1) of the ResNet50 in the method description, and you mention that you are using the pre-final layer in the preprocessing section. I assume you are using the latter, or is P=1 identical to the prefinal layer in your notation? Tell us. Moreover, not having read Durand 2016, I would appreciate a few more technical details or formal descriptions here and there. Can you give details about the ranking method in Durand 2016, for example?\n* Would it make sense to discuss Durand 2016 in the baseline methods section? \n* To some degree this paper evaluates WELDON (Durand 2016) on new data, and compares it against an extended WELDON algorithm called CHOWDER that features the final MLP step. Results in Table 1 suggest that this leads to some 2-5% performance increase, which is a nice result. I would assume that experimental conditions (training data, preprocessing, optimization, size of ensemble) are kept constant between those two comparisons? Or is there anything of relevance that also changed (like size of the ensemble, size of training data) because the WELDON results are essentially previously generated results? Please comment in case there are differences. ", "> * The intro starts from a very high clinical level. An introduction that points out \n>specifics of the technical aspects of this application, the remaining technical challenges, \n>and the contribution of this work might be appreciated by some of your readers.\n\nSince our work is focused on the application of machine learning techniques to a \nvery significant and specific medical diagnostic, and given the very ML-focused audience\nof ICLR, we assume that the majority of readers will be unfamiliar with histopathological\nimage analysis and clinical pathology in general. For this reason, we discuss the \ncontext of the problem at length, including the pernicious data challenges present in \nthis application. Besides presenting our own contributions, we hope to introduce more\nresearchers to this fruitful and important field. As modern machine learning techniques \nare only recently being applied to histopathological image analysis, especially in the \nweak-learning setting, there is a tremendous opportunity for interested readers to \nmake a significant impact in this area.
\n\nAs for the presentation of our own contributions with respect to prior work, we have \nmodified the last paragraph of the introduction to make these more clear.\n\n\n> * There is preprocessing that includes feature extraction, and part of the algorithm that \n>includes the same feature extraction. This is somewhat confusing to me and maybe you \n>want to review the structure of the sections. You are telling us you are using the first layer \n>(P=1) of the ResNet50 in the method description, and you mention that you are using the \n>pre-final layer in the preprocessing section. I assume you are using the latter, or is P=1 \n>identical to the prefinal layer in your notation? Tell us. \n\nIn our work, we propose the use of the ResNet-50 pre-output layer, namely, the \nvalues resulting from the convolutional stack, prior to the fully-connected output\nlayers. We interpret these values as a feature vector describing the structural and \ncolor content of each $224\times 224$ pixel tile. In our notation, we use $P$ to refer\nto the dimensionality of this feature vector. In the case of ResNet-50, this layer \ncontains 2048 neurons, so we note that $P = 2048$. This use of an ImageNet pre-trained \nResNet-50 architecture as a tile feature extractor remains consistent throughout the \nwork, as described in the text. It is not clear to us at which point the referee \nfound confusion between this definition of $P$ and ResNet-50 \nlayer indexing. If the referee would provide further details, we would be happy to \nclarify the text.\n\nAs for the structure of the paper, we discuss the pre-processing stages prior to the\nintroduction of both the feature-pooling techniques and CHOWDER since this pipeline \nremains consistent over all approaches. \n\n> Moreover, not having read Durand 2016, I would appreciate a few more technical details \n>or formal descriptions here and there. Can you give details about the ranking method in \n>Durand 2016, for example?\n\nBy ranking method, we assume that the referee means the operation of the MinMax layer\non the feature embedding values, as the modified ranking loss metric proposed\nin Sec. 4 of Durand et al. (2016) is easily replaced by binary cross-entropy loss \nin our binary classification setting, as we point out in Sec. 3.1.\n\nThe instance ranking method used in Durand et al. (2016) during training, as well as in \nour approach, is simply sorting the embedding values in descending order, \nas we describe in the \"Top Instances\nand Negative Evidence\" subsection of Sec. 2.3. As for formal descriptions, we err on the\nside of brevity in light of the other necessary content in the paper. Since the \napplication to HIA is novel to many readers, explanation and interpretation are \nrequired in order to relate the significance of our contribution. We have endeavoured to\nmake details explicit when necessary to the presentation, \nsuch as tile selection and the operation of the feature embedding layer. In all other\ncases, we lean on the common expertise of the ICLR audience. If the referee would point\nus to any further ambiguities we would be happy to clarify the text.\n\nAdditionally, the work of Durand et al. (2016) is very informative and \npresents an ingenious architecture. Given the referee's strong interest in our comparison\nto Durand et al. (2016), we would strongly encourage the referee to \ntake the time to read their work in detail, as well.
", "We thank the referee for their time in reviewing our work, we understand the significant\ntime constraints placed upon many referees during this season. While we appreciate the\ncall to a clear and defined statement of contributions and originality, we believe that\nperhaps the referee has misunderstood our work with respect to prior art. We hope to \nclarify the contributions of our work in our response, and with our modifications to\nthe final paragraph of the introduction. \n\n> The authors approach the task of labeling histology images with just a single global label, \n>with promising results on two different data sets. This is of high relevance given the difficulty \n>in obtaining expert annotated data. At the same time the key elements of the presented approach \n>remain identical to those in a previous study ... \n\nWe strongly contend this point. We assume that the referee is referring to the work of\nDurand et al. (2016) on the WELDON architecture. While we use this work as a starting \npoint for our application, the specifics of our approach in the application to \nHIA are not at all identical to Durand et al. Indeed, the application to HIA \n(and the modifications required to achieve it)\nwas not at all envisaged in the prior art, which was solely focused on object region \ndetection in natural images (e.g. Pascal VOC, COCO, etc.). Indeed, the application to \nmassive WSI datasets requires novel developments for acquiring instances during \ntraining (i.e. our proposed system of random sampling), much less the architectural and training \nchanges we propose. Even the pre-trained DCNN is different from Durand et al. (2016), \n(ResNet-50 in place of VGG16). Given that there is no path from Durand et al. (2016) \nto human-level diagnosis prediction in WSI from diagnosis labels without the \nsignificant developments we outline, we cannot agree with the referee's assessment.\n\n>the main novelty is to replace the final step of the previous architecture \n>(that averages across a vector) with a multiplayer perceptron. As such I feel that this \n>would be interesting to present if there is interest in the overall application \n>(and results of the 2016 CVPR paper), but not necessarily as a novel contribution to \n>MIL and histology image classification.\n\nAs we detail in our general comments, we do indeed believe that there is a place at \nICLR for works presenting state-of-the-art results for impactful applications, especially\nin medicine, and oncology diagnostics in particular. We also argue, as in our \ncomments above, and in our general comments, for the novel contributions made by our\nwork to MIL as well as to machine learning in HIA by presenting a human-level\ndiagnosis prediction system for WSI trained without using disease annotation maps.", "> * Would it make sense to discuss Durand 2016 in the base line methods section? \n\nConsidering the adaptations we make to the approach of Durand et al. (2016), \nwe felt it more appropriate to cite their work within Sec. 2.3 in order to show the line of\ndevelopment. In Sec. 2.3, we cite Durand et al. (2016) extensively, pointing out the notable \ncontributions of this work, and how the originally proposed approach must be adapted\nin order to provide an effective architecture for the setting of WSI classification\nwithout local annotations. \n\nIn Sec. 2.2, when we introduce baseline techniques, we truly mean baseline. 
Aggregation\nvia feature pooling is one of the most direct ways one can attempt to approach the \ntask of WSI classification sans annotations. Indeed, this approach is very attractive,\nas compared to MIL approaches, when tackling large-scale datasets from a purely \ncomputational standpoint. For this reason, we denote these approaches as our \"baseline,\"\nwhereby we demonstrate that either technique (WELDON or CHOWDER) can provide \nimprovements in both detection and localization which are significant enough, \nas compared to feature pooling, to justify their complexity.\nBy not including Durand et al. (2016) within Sec. 2.2, we do not imply that we should not \ncompare (we do); rather, we simply make a semantic distinction between feature \npooling and MIL.\n\n> * To some degree this paper evaluates WELDON (Durand 2016) on new data, and\n>compares it against an extended WELDON algorithm called CHOWDER that features \n>the final MLP step. Results in Table 1 suggest that this leads to some 2-5% performance \n>increase, which is a nice result. I would assume that experimental conditions (training data, \n>preprocessing, optimization, size of ensemble) are kept constant between those two \n>comparisons? Or is there anything of relevance that also changed (like size of the ensemble, \n>size of training data) \n\nWe do, in fact, evaluate the WELDON architecture of Durand et al. on new data,\nas reported in Table 1. We also compare WELDON against our proposed modifications. \nIn all cases, experimental settings remain consistent between all tested methods. In the \ncase of WELDON and CHOWDER, we have ensured that the ensemble size remains consistent \nbetween the two ($E = 10$ as described in Sec. 3.1). For both WELDON and CHOWDER, \nwe use best-case hyper-parameter settings.\n\nAdditionally, the improvement in AUC demonstrated by the CHOWDER architecture is more\nsignificant than the referee reports. In Table 1, we report a percent change in AUC\nof 12.15% and 8.53% over the WELDON architecture for the competition and \ncross-validation splits, respectively. This corresponds to a 12.59% and 23.66% percent\nchange in AUC as compared to the best-performing baseline methods for the same splits.\nIn the case of TCGA-Lung, we demonstrate a 1.32% percent change in AUC, but we point this\nout specifically in the text. This dataset is well suited to feature pooling due to \nthe balanced instance classes present in the TCGA-Lung dataset. \nThe diseased regions in these\nslides are much more diffuse over the entire tissue sample, as opposed to the \nhighly-localized metastases present in Camelyon-16. Therefore, the excellent \nperformance of the baseline feature-pooling methods is expected for this dataset, \nas the disease signal is not lost in the pooled representation.\n\n>because the WELDON results are essentially previously generated results? \n>Please comment in case there are differences. \n\nIt is not clear to us what is meant by previously generated results. In the case of\nWELDON, in Durand et al. (2016), the method was proposed only for object region \ndetection in natural images. To the best of our knowledge, there has been no other \napplication of a WELDON-inspired architecture to HIA, or to the TCGA-Lung and \nCamelyon-16 datasets in particular. ", "We thank the referee for their time and effort in assessing our work. We hope that the\ndiscussion we provide in our general comments provides further justification \nfor our work's presence at ICLR.
Specifically, we note the significance of our \narchitectural contributions, as well as our modifications to the training regime. We\nalso detail how our system provides human-pathologist-level performance without being\nguided by detailed expert instruction on what structures lead to disease diagnoses. \nWe believe that this significant advance in machine learning as applied to medical \nimaging, and to this gold standard oncology diagnostic in particular, will be of \ngreat interest to the general ICLR audience.\n\n>Previous publications have used MIL training on tiles with only top-level labels [1,2] \n>and this is essentially an incremental improvement on the MIL approach by using \n>several instances (both min-negative and max-positive) instead of a single instance\n>for backprop, as described in [3]. So, the main contribution here is to adapt min-max \n>MIL to the histology domain. Although the results are good and the method interesting, \n>I think that the technical contribution is a bit thin for a ML conference and this paper \n>may be a better fit for a medical imaging conference.\n\nWe thank the referee for their positive view of our method and results. We agree that \nthe work we present is, as Reviewer1 noted, a \"down-to-earth practical application\";\nhowever, we do make novel architectural, process, and implementation contributions. While\nwe do not provide a theory of MIL in the context of HIA, we note that many successful\nadvances in our field have been made from an empirical, rather than theoretical, \nperspective. While there is no newly proposed loss, neuron non-linearity, or adaptive \nmomentum scheme in our work, we do demonstrate the steps necessary to provide \nstate-of-the-art performance for diagnosis prediction and disease localization without\nexpert assistance beyond diagnosis labels. While these results would indeed be\nincredibly pertinent at a more medically focused venue, it is our strong belief that the audience \nof ICLR would greatly benefit both from our demonstration and from an introduction\nto a budding application area in great need of their technical expertise.\n\n>The paper is well written and easy to understand. \n\nWe thank the referee for their comments and positive feedback on our presentation of our\nwork. ", ">The study seems sound from a technical viewpoint to me and its contribution is incremental,\n>as it builds on existing research, which is correctly identified.\n>Results are not always too impressive, but the authors seem intent on making them useful for\n> pathologists in practice (an intention that is always worth the effort).\n>I think the paper would benefit from a more explicit statement of its original contributions \n>(against contextual published research).\n\nWe thank the referee for their comments and their effort in assessing our work. \nWith respect to our specific contributions, we have added further clarifications to\n the text (as noted in the paper modifications) to identify our contribution with \nrespect to prior art. Additionally, with respect to the significance of the results we \npresent, we note that the performance reported in Table 1 represents the \nstate of the art for HIA classification using only WSI-wide labels. For further\njustification of the significance of our work, we refer to our general comments \non this subject.\n\n>Minor issues:\n>Revise typos (e.g. the title of Section 2).\n\nThank you for pointing out this (rather embarrassing) typo!
We have corrected this\nmistake along with others throughout the text. \n\n>Please revise the list of references (right now a mess in terms of format, typos, and incompleteness).\n\nAs noted in the general comments, we have revised the references to fit a common standard\nand have attempted to include all relevant citation details. We thank you for your \nattentiveness. ", "We would like to point out the changes made to the text of our \nsubmission to address the comments of the referees, as well as some of our own \ncorrections and clarifications.\n\n1. We have updated the explanation for our choice of a univariate feature embedding ($J=1$) \nversus a multivariate embedding ($J>1$) in light of more extensive \nexperimentation. Specifically, increasing the embedding dimensionality $J$ *can* \nimprove training loss. However, it diminishes generalization, providing \nworse scores on held-out validation data. Even though our tested\ndatasets (Camelyon-16, TCGA-Lung) rival ImageNet in overall size after tiling, \nthe number of *unique* slide images remains very limited. In the weak-learning setting,\nthe training method may attempt to find any number of possible unique features that \ncould contribute to the overall WSI (\"bag\") class. \nFor binary WSI classification, restricting the model to\n$J=1$ makes sense, as the positive/negative assignment of the embedding maps directly \nto the binary classes (e.g. \"contains cancer\", \"does not contain cancer\"). When\nsetting $J>1$, this correspondence is lost, and it becomes much more difficult both\nto regularize and to interpret the model.\n\n2. We have revised the references to a consistent format.\n\n3. We have revised the metastasis detection figures (Figs. 4--6) with a more \nlegible/interpretable color map (*blue-white-red* in place of the previous *green-yellow-red*).\n\n4. We have updated the last paragraph of the introduction to describe our\nspecific contributions with respect to prior work (Durand et al. 2016, in particular).\n\n5. We have revised grammar, spelling, & usage in the text.\n\n6. In the results section we have added references to the recently published work of \n Bejnordi et al. (2017), which reports human pathologist AUC performance on the \n Camelyon-16 dataset.", "We thank the referees for their time and effort in reviewing our\nwork, especially in light of the many heavy review loads assigned this year. \n\nWe would like to address the general comments from the referees on the \nsignificance of our work to the ICLR community: specifically, whether the \nCHOWDER architecture we propose for MIL in the context of histopathological image \nanalysis (HIA) represents a reasonable contribution to the ever-growing body of MIL \nresearch, and whether such an application-specific paper suits the ICLR community at large.\n\nIn response to the first point, we make affirmative arguments both for our contribution\nto the architecture of the proposed network, as well as for the procedural contribution,\ndemonstrating how to effectively regularize and train a network in the extreme \nsetting of weak learning at very small sample-to-feature ratios. In the case of \nthe proposed architecture, we believe that the additional multi-layer perceptron (MLP)\nat the classification layer does represent a meaningful contribution to MIL research. \nSpecifically, the MLP allows a more context-aware bag classification from the \ntop- and bottom-ranked embedded instance representations. \nIn the case of Durand et al.
(2016), a simple sum of these $2R$ values is used. \nAs reported in Durand et al. (2016), even this summation is itself an\nincremental generalization of their \"min+max\" output reported in their MANTRA work \n(Durand et al., 2015). \n\nIn the case of HIA, as evidenced by the results we report in Table 1, using the context\nafforded by the distribution of embedded values *within* the top and bottom rankings\nleads to significant improvements in AUC (a 12.15% and 8.53% change for the \ncompetition and CV splits, respectively). This is especially critical for disease \ndetection in WSI, as diseased tissue is noted by its discrepancy from healthy tissue,\nrather than by an absolute description of a fixed feature set common to *all* diseased \ntissue. Additionally, as in the case of metastasis detection in Camelyon-16, the \ndetection of highly localized regions (i.e. extreme instance class imbalance) requires\na more sensitive approach, such as that afforded by the use of an MLP, than the \nembedding sum used in Durand et al. (2016). Therefore, we believe that \nthe results and method we report will be useful for readers seeking to train deep\nnets in similar extreme MIL settings, especially those where class imbalance is a \nnoted concern. \n\nFurther, we note that the best-case AUC performance of an expert human pathologist for \nCamelyon-16 was reported as 0.884 in Bejnordi et al. (2017), with the mean pathologist AUC \nreported as 0.810. When using a large ensemble size, $E=50$, we report an AUC of 0.8706, \nthus demonstrating diagnosis prediction performance better than the average human\npathologist, but *without* making any use of expert assistance during training (e.g. disease \nsegmentation maps). This demonstrates that our proposed methodology is as effective as \nan expert pathologist, while also allowing for machine ingenuity, as the model can adapt to \nnovel diseases and structures outside of the confines of human-produced segmentation maps.\n\nIn response to the second point, on the suitability of this work within ICLR, \nwe refer to the 2018 Call For Papers (CFP):\n\n> We take a broad view of the field and include topics such as feature learning, \n> [...] and issues regarding large scale learning...\n\n> A non-exhaustive list of relevant topics:\n>\n> [...]\n> * implementation issues, parallelization, software platforms, hardware\n> * applications in vision, audio, speech, natural language processing, robotics, \n> neuroscience, or any other field...\n\nGiven that this work represents an application of machine learning techniques to \na very large-scale problem requiring novel contributions to an existing architecture, \nthat we detail the many implementation issues required for the successful\nutilization of our proposed approach, and that we provide state-of-the-art results \nwithin a very relevant and socially impactful field, namely histopathological\nimage analysis as an oncology diagnostic, we believe that our work is in fact very \nwell suited and topical to ICLR.", "Greetings to the authors of this paper,\n\nYour paper is very interesting and insightful. As part of a reproducibility challenge, our team of students would like to attempt to reproduce the results of your paper. 
We are not affiliated with the official reviewers.\n\nIf possible, it would be incredibly helpful if you could provide parts of the code used in your implementation.\n\nIf you are interested, please comment below, and we can arrange to contact each other in private.\n\nThank you\n\n" ]
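To make the min/max instance selection discussed in the author responses above concrete, here is a minimal sketch of a CHOWDER-style classification head: each tile's features are embedded to a single score ($J=1$), the $R$ largest and $R$ smallest scores per slide are kept, and an MLP classifies the concatenated $2R$ values instead of summing them. The feature dimension, the value of $R$, and the MLP widths below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MinMaxMILHead(nn.Module):
    """Sketch of a min/max MIL head: embed each tile to one scalar score,
    keep the R largest and R smallest scores per slide, and classify the
    concatenated 2R values with a small MLP (rather than summing them)."""
    def __init__(self, feat_dim=2048, r=5):
        super().__init__()
        self.embed = nn.Conv1d(feat_dim, 1, kernel_size=1)  # per-tile scalar embedding
        self.r = r
        self.mlp = nn.Sequential(
            nn.Linear(2 * r, 200), nn.Sigmoid(),
            nn.Linear(200, 100), nn.Sigmoid(),
            nn.Linear(100, 1),
        )

    def forward(self, tiles):                  # tiles: (batch, feat_dim, n_tiles)
        scores = self.embed(tiles).squeeze(1)  # (batch, n_tiles)
        top, _ = scores.topk(self.r, dim=1)                    # max-positive evidence
        bottom, _ = scores.topk(self.r, dim=1, largest=False)  # min-negative evidence
        return self.mlp(torch.cat([top, bottom], dim=1)).squeeze(1)  # slide logit

logits = MinMaxMILHead()(torch.randn(4, 2048, 1000))  # 4 toy slides, 1000 tiles each
```

The design point the responses emphasize is that the MLP can react to the distribution of the $2R$ values, whereas a plain sum cannot.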
[ 5, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryserbZR-", "iclr_2018_ryserbZR-", "iclr_2018_ryserbZR-", "Bk72o4NWM", "Bk72o4NWM", "Bk72o4NWM", "S1O8uhkxf", "SkWQLvebf", "iclr_2018_ryserbZR-", "iclr_2018_ryserbZR-", "iclr_2018_ryserbZR-" ]
iclr_2018_rk1FQA0pW
End-to-End Abnormality Detection in Medical Imaging
Deep neural networks (DNN) have shown promising performance in computer vision. In medical imaging, encouraging results have been achieved with deep learning for applications such as segmentation, lesion detection and classification. Nearly all of the deep learning based image analysis methods work on reconstructed images, which are obtained from original acquisitions via solving inverse problems (reconstruction). The reconstruction algorithms are designed for human observers, but not necessarily optimized for DNNs which can often observe features that are incomprehensible for human eyes. Hence, it is desirable to train the DNNs directly from the original data which lie in a different domain from the images. In this paper, we propose an end-to-end DNN for abnormality detection in medical imaging. To align the acquisition with the annotations made by radiologists in the image domain, a DNN was built as the unrolled version of iterative reconstruction algorithms to map the acquisitions to images, followed by a 3D convolutional neural network (CNN) to detect the abnormality in the reconstructed images. The two networks were trained jointly in order to optimize the entire DNN for the detection task from the original acquisitions. The DNN was implemented for lung nodule detection in low-dose chest computed tomography (CT), where a numerical simulation was done to generate acquisitions from 1,018 chest CT images with radiologists' annotations. The proposed end-to-end DNN demonstrated better sensitivity and accuracy for the task compared to a two-step approach, in which the reconstruction and detection DNNs were trained separately. A significant reduction in the false positive rate on suspicious lesions was observed, which is crucial given the known over-diagnosis in low-dose lung CT imaging. The images reconstructed by the proposed end-to-end network also presented enhanced details in the region of interest.
rejected-papers
Authors present an evaluation of end-to-end training connecting reconstruction network with detection network for lung nodules. Pros: - Optimizing a mapping jointly with the task may preserve more information that is relevant to the task. Cons: - Reconstruction network is not "needed" to generate an image -- other algorithms exist for reconstructing images from raw data. Therefore, adding the reconstruction network serves to essentially add more parameters to the neural network. As a baseline, authors should compare to a detection-only framework with a comparable number of parameters to the end-to-end system. Since this is not provided, the true benefit of end-to-end training cannot be assessed. - Performance improvement presented is negligible - Novelty is not clear / significant
train
[ "SkoQMHqlG", "S1gaKDqlM", "Byyu-H4-f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a DNN for patch-based lung nodule detection, directly from the CT projection data. The two-component network, comprising of the reconstruction network and the nodule detection network, is trained end-to-end. The trained network was validated on a simulated dataset of 1018\tlow-dose chest CT images. It is shown that end-to-end training produces better results compared to a two-step approach, where the reconstruction DNN was trained first and the detection DNN was trained on the reconstructed images. \n\nPros\n\nIt is a well written paper on a very important problem. It shows the promise of DNNs for solving difficult inverse problems of great importance. It shows encouraging results as well. \n\nCons\n\nThe contributions seem incremental, not properly enunciated, or appropriately validated.\n\nThe putative contributions of the paper can be \n(a) Directly solving the target problem from raw sensory data without first solving an inversion problem\n(b) (Directly) solving the lung nodule detection problem using a DNN. \n(c) A novel reconstruction DNN as a component of the above pipeline.\n(d) A novel detection network as a component of the above pipeline.\n\nLet's take them one by one:\n\n(a) As pointed out by authors, this is in the line of work being done in speech recognition, self-driving cars, OCR etc. and is a good motivation for the work but not a contribution. It's application to this problem can require significant innovation which is not the case as components have been explored before and there is no particular innovation involved in using them together in a pipeline either.\n\n(c) As also pointed by the authors, there are many previous approaches - Adler & Oktem (2017), Hammernik et al (2017) etc. among others. Another notable reference (not cited) is Jin et al. \"Deep Convolutional Neural Network for Inverse Problems in Imaging.\" arXiv preprint arXiv:1611.03679 (2016). These last two (and perhaps others) train DNNs to learn unrolled iterative methods to reconstruct the CT image. The approach proposed in the paper is not compared by them (and perhaps others), neither at a conceptual level nor experimentally. So, this clearly is not the main contribution of the paper.\n\n(d) Similarly, there is nothing particularly novel about the detection network nor the way it is used. \n\nThis brings us to (b). The proposed approach to solve this problem may indeed by novel (I am not an expert in this application area.), but considering that there is a considerable body of work on this problem, the paper provides not comparative evaluation of the proposed approach to published ones in the literature. It just provides an internal comparison of end-to-end training vis-a-vis two step training. \n\nTo summarize, the contributions seem incremental, not properly enunciated, or appropriately validated.", "This paper proposes to jointly model computed tomography reconstruction and lesion detection in the lung, training the mapping from raw sinogram to detection outputs in an end-to-end manner. In practice, such a mapping is computed separately, without regard to the task for wich the data is to be used. Because such a mapping loses information, optimizing such a mapping jointly with the task should preserve more information that is relevant to the task. Thus, using raw medical image data should be useful for lesion detection in CT as well as most other medical image analysis tasks.\n\n\nStyle considerations:\n\nThe work is adequately motivated and the writing is generally clear. 
However, some phrases are awkward and unclear and there are occasional minor grammar errors. It would be useful to ask a native English speaker to polish these up, if possible. Also, there are numerous typos that could nonetheless be easily remedied with some final proofreading. Generally, the work is well articulated with sound structure but needs polish.\n\nA few other minor style points to address:\n- \"g\" is used throughout the paper for two different networks and also to define gradients - it would be clearer if you chose other letters.\n- S3.3, p. 7: reusing the term \"iteration\"; clarify\n- fig 10: label the columns in the figure, not in the description\n- fig 11: label the columns in the figure with iterations\n- fig 8 not referenced in text\n\n\nQuestions:\n\n1. Before fine-tuning, were the reconstruction and detection networks trained end-to-end (with both L2 loss and cross-entropy loss) or were they trained separately and then joined during fine-tuning?\n(If it is the former and not the latter, please make that clearer in the text. I expect that it was indeed the former; in case that it was not, I would expect fully end-to-end training in the revision.)\n\n2. Please confirm: during the fine-tuning phase of training, did you use only the cross-entropy loss and not the L2 loss?\n\n3a. From equation 3 to equation 4 (on an iteration of reconstruction), the network g() was dropped. It appears to replace the diagonal of a Hessian (of R) which is probably a conditioning term. Have you tried training a g() network? Please discuss the ramifications of removing this term.\n\n3b. Have you tracked the condition number of the Jacobian of f() across iterations? This should be like tracking the condition number of the Hessian of R(x).\n\n4. Please discuss: is it better to replace operations on R() with neural networks rather than to replace R()? Why?\n\n5. On page 5, you write \"masks for lung regions were pre-calculated\". Were these masks manual segmentations or created with an automated method?\n\n6. Why was detection only targeted on \"non-small nodules\"? Have you tried detecting small nodules?\n\n7. On page 11, you state: \"The tissues in lung had much better contrast in the end-to-end network compared to that in the two-step network\". I don't see evidence to support that claim. Could you demonstrate that?\n\n8. On page 12, relating to figure 11, you state:\n\n\"Whereas both methods kept similar structural component, the end-to-end method had more focus on the edges and tissues inside lung compared to the two-step method. As observed in figure 11(b), the structures of the lung tissue were much more clearer in the end-to-end networks. This observation indicated that sharper edge and structures were of more importance for the detection network than the noise level in the reconstructed images, which is in accordance with human perceptions when radiologists perform the same task.\"\n\nHowever, while these claims appear intuitive and such results may be expected, they are not backed up by figure 11. Looking at the feature map samples in this figure, I could not identify whether they came from different populations. I do not see the evidence for \"more focus on the edges and tissues inside lung\" for the end-to-end method in fig 11. It is also not obvious whether indeed \"the structures of the lung tissue were much more clearer\" for the end-to-end method, in fig 11. Can you clarify the evidence in support of these claims? \n\n\nOther points to address:\n\n1. 
Please report statistical significance for your results (e.g. in fig 5b, in the text, etc.). Also, please include confidence intervals in table 2.\n\n2. Although cross-entropy values were reported, detection metrics were not (except for the ROC curve with false positives and false negatives). Please compute: accuracy, precision, and recall to more clearly evaluate detection performance.\n\n3a. \"Abnormality detection\" implies the detection of anything that is unusual in the data. The method you present targets a very specific abnormality (lesions). I would suggest changing \"abnormality detection\" to \"lesion detection\".\n\n3b. The title should also be updated accordingly. Considering also that the presented work is on a single task (lesion detection) and a single medical imaging modality (CT), the current title appears overly broad. I would suggest changing it from \"End-to-End Abnormality Detection in Medical Imaging\" -- possibly to something like \"End-to-End Computed Tomography for Lesion Detection\".\n\n\nConclusion:\n\nThe motivation of this work is valid and deserves attention. The implementation details for modeling reconstruction are also valuable. It is interesting to see improvement in lesion detection when training end-to-end from raw sinogram data. However, while lung lesion detection is the only task on which the utility of this method is evaluated, detection improvement appears modest. This work would benefit from additional experimental results or improved analysis and discussion.", "The authors present an end-to-end training of a CNN architecture that combines CT image signal processing and image analysis. This is an interesting paper. Time will tell whether disease-specific signal processing will be the future of medical image analysis, but - to the best of my knowledge - this is one of the first attempts to do this in CT image analysis, a field that is of significance both to researchers dealing with image reconstruction (denoising, etc.) and to those working on image analysis (lesion detection). As such I would be positive about the topic of the paper and the overall innovation it promises both in image acquisition and image processing, although I would share the technical concerns pointed out by Reviewer2, and the authors would need good answers to them before this study would be ready to be presented. " ]
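The questions above (particularly 3a/3b on equations 3-4 and the dropped conditioning network g()) center on the unrolled-iterative-reconstruction design. As a point of reference, here is a minimal sketch of one unrolled gradient-step block; the identity forward/adjoint operators, the learned step size, and the small convolutional regularizer are stand-in assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class UnrolledStep(nn.Module):
    """One unrolled iteration x_{k+1} = x_k - alpha * A^T(A x_k - y) - r(x_k):
    a data-consistency gradient step plus a small learned regularizer r(),
    standing in for the gradient of the penalty R(x) in an iterative solver."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.1))  # learned step size
        self.reg = nn.Sequential(                     # learned "gradient of R"
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x, y, A, At):
        grad_data = At(A(x) - y)                      # A^T(A x - y)
        return x - self.alpha * grad_data - self.reg(x)

# Toy forward/adjoint operators standing in for the CT projection model;
# a real system would use Radon transform / backprojection here.
A = lambda x: x
At = lambda z: z
x = torch.zeros(1, 1, 64, 64)
y = torch.randn(1, 1, 64, 64)                         # toy "acquisition"
for step in [UnrolledStep() for _ in range(4)]:       # unroll 4 iterations
    x = step(x, y, A, At)
```

Stacking a few such blocks and feeding the output to a detection CNN is, at a high level, the end-to-end pipeline the reviews discuss.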
[ 4, 5, 6 ]
[ 4, 4, 3 ]
[ "iclr_2018_rk1FQA0pW", "iclr_2018_rk1FQA0pW", "iclr_2018_rk1FQA0pW" ]
iclr_2018_HkJ1rgbCb
Using Deep Reinforcement Learning to Generate Rationales for Molecules
Deep learning algorithms are increasingly used in modeling chemical processes. However, black box predictions without rationales have limited use in practical applications, such as drug design. To this end, we learn to identify molecular substructures -- rationales -- that are associated with the target chemical property (e.g., toxicity). The rationales are learned in an unsupervised fashion, requiring no additional information beyond the end-to-end task. We formulate this problem as a reinforcement learning problem over the molecular graph, parametrized by two convolution networks corresponding to rationale selection and to prediction based on it, where the latter induces the reward function. We evaluate the approach on two benchmark toxicity datasets. We demonstrate that our model sustains high performance under the additional constraint that predictions strictly follow the rationales. Additionally, we validate the extracted rationales through comparison against those described in the chemical literature and through synthetic experiments.
rejected-papers
Pro: - Interesting approach to tie together reinforcement Q-learning with CNN for prediction and reward function learning in predicting downstream effects of chemical structures, while providing relevant areas for decision-making. Con: - Datasets are small, generalizability not clear. - Performance is not high (although performance wasn't the goal necessarily) - Sometimes test performance is higher than training performance, making results questionable. - Should include comparison to other wrapper-based combinatorial approaches. - Too targeted an appeal/audience (better for chemical journal)
train
[ "r11LXabJz", "S1wvy15xz", "SyI8c-T-f", "ByLbXBdzM", "ByqZ04_GG", "H1iOTE_fz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "\nThe paper proposes a feature learning technique for molecular prediction using reinforcement learning. The predictive model is an interesting two-step approach where important atoms of the molecule are added one-by-one with a reward given by a second Q-network that learns how well we can solve the prediction problem with the given set of atoms. The overall scheme is intuitive, but \n\nThe model is experimented on two small datasets of few thousand of molecules, and compared to a state-of-the-art DeepTox, and also to some basic baselines (RF/SVM/logreg). In the Tox21 dataset the proposed sparse RL-CNN method is less accurate than DeepTox or full CNN. In the hERG dataset RL-CNN is again weaker than the full CNN, but also seems to be beaten by several baseline methods. Overall the results are surprisingly weak, since e.g. with LASSO one often improves by using less features in complex problems. Both datasets should be compared to LASSO as well. \n\nIt's somewhat odd that the test performance in table 2 is often better than CV performance. This feels suspicious, especially with 79.0 vs 84.3. The table 2 does not seem reliable result, and should use more folds and more randomizations, etc.\n\nThe key problem of the method is its seeming inabability to find the correct number of atoms to use. In both datasets the number of atoms were globally fixed, which is counter-intuitive. The authors should at least provide learning curves where different number of atoms are used; but ideally the method should learn the number of atoms to use for each molecule.\n\nThe proposed Q+P network is interesting, but its unclear how well it works in general. There should be experiments that compare the the Q+P model with incresing number of atoms against a full CNN, to see whether the Q+P can converge to maximal performance.\n\nOverall the method is interesting and has a clear impact for molecular prediction, however the paper has limited appeal to the broader audience. Its difficult to assess how useful the Q/P-network is in general. The inability to choose the optimal number of atoms is a major drawback of the method, and the experimental section could be improved. This paper also would probably be more suitable for a chemoinformatics journal, where the rationale learning would be highly appreciated.\n", "This paper presents an interesting approach to identify substructural features of molecular graphs contributing to the target task (e.g. predicting toxicity). The algorithm first builds two conv nets for molecular graphs, one is for searching relevant substructures (policy improvement), and another for evaluating the contribution of selected substructures to the target task (policy evaluation). These two phases are iterated in a reinforcement learning manner as policy iterations. Both parts are based on conv nets for molecular graphs, and this framework is a kind of 'self-supervised' scheme compared to the standard situations that the environment provides rewards. The experimental validations demonstrate that this model can learn a competitive-performed conv nets only dependent on the highlighted substructures, as well as reporting some case study on the inhibition assay for hERG proteins.\n\nTechnically speaking, the proposed self-supervised scheme with two conv nets is very interesting. This demonstrates how we can perform progressive substructure selections over molecular graphs to highlight relevant substructures as well as maximizing the prediction performance. 
Given that conv nets for molecular graphs are not trivially interpretable, this would provide a useful approach to using conv nets for more explicit interpretations of how the task can be performed by neural nets. \n\nHowever, at the same time, I had one big question about the purpose and usage of this approach. As the paper states in the Introduction, the target problem is 'hard selection' of substructures, rather than the 'soft selection' that neural nets (with attention, for example) or neural-net fingerprints usually provide. Then, the problem becomes a combinatorial search problem, which has long been studied in the data mining and machine learning community. There exist many exact methods, such as LEAP, CORK, and graphSig, developed exactly for this task under the name of 'contrast/emerging/discriminative' pattern mining. Also, it is widely known that we can even perform a wrapper approach for supervised learning from graphs simultaneously with searching all relevant subgraphs, as seen in Kudo+ NIPS 2004, Tsuda ICML 2007, Saigo+ Machine Learning 2009, etc. It is unconvincing that the proposed neural-net approach fits this hard combinatorial task better than these existing (mostly exact) methods.\n\nIn addition to the above point, several technical points below are also unclear.\n\n- Does the simple heuristic of adding 'selected or not' variables to the atom features work as intended? Because this is fed to the conv net, it seems we can ignore these elements of the features by tweaking the weight parameters accordingly. If the conv net performs the best when we use the entire structure, then learning might be forced to ignore the selection. Can we guarantee in some sense this would not happen? \n\n- Zeroing out the atom features also sounds quite simple and a bit groundless. Confusingly, the P network also has an attention mechanism, and it is a bit unclear to me what actually worked.\n\n- In the experiments, the baseline is based on LR, but this would not be fair because usually we cannot expect any linear relationship for molecular fingerprints. They are highly correlated due to the inclusion relationships between subgraphs. At least, a nonlinear baseline (e.g. random forest) should be presented for discussing the results.\n\nPros:\n- interesting self-supervised framework provided for highlighting relevant substructures for a given prediction task\n- the hard selection setting is encoded in input graph featurization\n\nCons:\n- it is a bit unconvincing that 'hard selection' is better suited to neural nets than to the many existing exact methods (without using neural networks). At least one of the typical ones should be compared or discussed.\n- I'm still not quite sure whether or not some heuristic parts work as intended. ", "In this manuscript, the authors propose an interesting deep reinforcement learning approach via CNNs to learn the rationales associated with target chemical properties. The paper has merit, but in its current form does not match the acceptance criteria for ICLR.\n\nIn particular, the main issue lies in the poor performance reached by the systems, both overall and in comparison with baseline methods, which at the moment hardly justifies the effort required in setting up the DL framework. 
Moreover, the fact that test performances are sometimes (much) better than training results is quite suspicious in methodological terms.\nFinally, the experimental part is quite limited (two small datasets), making it hard to evaluate the scalability (in all senses) of the proposed solution to much larger data. ", "Thank you for your review of our work.\n\nTo address your points:\n\n1. One of the advantages of our model, which is relevant to the chemistry problem, is that we directly account for interactions between different groups of atoms. The exact methods you bring up all try to do some search over the space of the subgraphs in the dataset, picking out the most important ones. However, these selections do not seem to directly incorporate the competing/augmenting effects of having different subgraphs within a molecule. Some of the earlier papers such as (Kudo et al., 2004) use very small datasets, and a CNN can already improve on the performance of the prediction task. None of these methods seem to have any quantitative evaluation of the subgraph features selected, which was what we tried to focus on in our work.\n\n2. The atom features are zero'd out for the P network only, which is akin to other hard selection problems, in which words that are not selected as rationale are not considered for the final prediction problem. The simple heuristic of adding the \"selected or not\" feature is only for the Q network, which assigns Q-values to individual atoms, and is not part of the prediction network, which takes the molecule as given, zeros out atoms not selected, and predicts based on that. ", "Thank you for your review of our work.\n\nTo address your points:\n\n1. In molecular problems, often the result is impacted, in part, by properties of the whole molecule, so it is not surprising that using only a subset of the atoms in the molecule will lead to a decrease in performance. Even in the case of text, we see that using only a partial selection of text reduces the accuracy of prediction (Lei et al. 2016). The main focus of the work was to illustrate that it is possible to extract meaningful rationales through this method.\n\n2. The reason why the test performance is strictly better than the CV performance is that, as stated in the paper, the two sets of data came from different sources. This is how the original paper used this dataset, so to make a fair comparison, we did the same.\n\n3. We do see that increasing the number of atoms selected does allow the model to converge to the performance of the full model (as the two models essentially collapse into the same one), but it is a limitation of the model that we have to select, as a hyperparameter, the number of atoms to select per molecule.", "Thank you for your review of our work.\n\nTo address your points:\n\n1. The focus of the model was on the rationale aspect and less on the actual performance of the model. The reason that the test performance on one of the datasets (hERG) is better than cross-validation performance is that, as stated in the paper, the two sets of data came from different sources, following the original paper from which the dataset came.\n\n2. Most of the publicly available toxicity datasets are very small, so in the chemical context, this is often the best that can be done." ]
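The responses above describe the hard-selection bookkeeping in words: the Q network sees atom features augmented with a "selected or not" flag, while the P network sees unselected atoms zeroed out. Here is a minimal sketch of that bookkeeping; the feature sizes and the selection itself are toy assumptions.

```python
import numpy as np

def q_network_input(atom_feats, selected):
    """Append a binary 'selected or not' column for the Q network,
    which scores each atom as a candidate addition to the rationale."""
    flag = selected.astype(atom_feats.dtype)[:, None]       # (n_atoms, 1)
    return np.concatenate([atom_feats, flag], axis=1)       # (n_atoms, d + 1)

def p_network_input(atom_feats, selected):
    """Zero out unselected atoms so the P network's prediction can only
    depend on the hard-selected rationale substructure."""
    return atom_feats * selected[:, None]

atom_feats = np.random.randn(12, 8)            # 12 atoms, 8 toy features each
selected = np.zeros(12, dtype=bool)
selected[[2, 3, 7]] = True                     # suppose the policy picked these atoms
q_in = q_network_input(atom_feats, selected)   # (12, 9), fed to the Q conv net
p_in = p_network_input(atom_feats, selected)   # (12, 8), fed to the P conv net
```

The reviewer's concern is visible in this form: nothing forces the Q network to use the appended flag column, and nothing prevents the P network from being degraded by the zeroed rows.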
[ 5, 5, 5, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_HkJ1rgbCb", "iclr_2018_HkJ1rgbCb", "iclr_2018_HkJ1rgbCb", "S1wvy15xz", "r11LXabJz", "SyI8c-T-f" ]
iclr_2018_HytSvlWRZ
Subspace Network: Deep Multi-Task Censored Regression for Modeling Neurodegenerative Diseases
Over the past decade a wide spectrum of machine learning models have been developed to model neurodegenerative diseases, associating biomarkers, especially non-intrusive neuroimaging markers, with key clinical scores measuring the cognitive status of patients. Multi-task learning (MTL) has been extensively explored in these studies to address challenges associated with high dimensionality and small cohort size. However, most existing MTL approaches are based on linear models and suffer from two major limitations: 1) they cannot explicitly consider upper/lower bounds in these clinical scores; 2) they lack the capability to capture complicated non-linear effects among the variables. In this paper, we propose the Subspace Network, an efficient deep modeling approach for non-linear multi-task censored regression. Each layer of the subspace network performs a multi-task censored regression to improve upon the predictions from the last layer via sketching a low-dimensional subspace to perform knowledge transfer among learning tasks. We show that under mild assumptions, for each layer the parametric subspace can be recovered using only one pass of the training data. In addition, empirical results demonstrate that the proposed subspace network quickly picks up the correct parameter subspaces, and outperforms the state-of-the-art in predicting neurodegenerative clinical scores using information in brain imaging.
rejected-papers
Authors present a method for modeling neurodegenerative diseases using a multitask learning framework that considers "censored regression" problems (to model outputs that have discrete values and bounded ranges). Given the pros/cons, the committee feels this paper is not ready for acceptance in its current state. Pro: - This approach to modeling discrete regression problems is interesting and may hold potential, but the evaluation is not in a state where strong meaningful conclusions can be made. Con: - Reviewers raise multiple concerns regarding evaluation and comparison standards for tasks. While authors have added some model comparisons in response, in other areas comparisons don't appear complete. For example, when using MRI data, the networks compared all use features derived from images, rather than learning from the images themselves. The authors claim (in comments) that the dataset is too small to learn directly from pixels, but transfer learning and data augmentation have been successfully applied to learn from datasets of this size. In addition, new multitask techniques in the imaging domain have also been presented that dynamically learn the network structure, rather than relying on a hand-crafted neural network design. How this approach would compare is not addressed.
train
[ "SkwZAL4ef", "r1z2QSOlz", "B1r0SU9gz", "Skzmnk2mf", "SyFl9LxQf", "H189gvyQM", "ByQ7kvkmf", "BJl30LJ7G", "BkE02U1Xf", "rJ2_MDJXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "This work proposes a multi task learning framework for the modeling of clinical data in neurodegenerative diseases. \nDifferently from previous applications of machine learning in neurodegeneration modeling, the proposed approach models the clinical data accounting for the bounded nature of cognitive tests scores. The framework is represented by a feed-forward deep architecture analogous to a residual network. At each layer a low-rank constraint is enforced on the linear transformation, while the cost function is specified in order to differentially account for the bounds of the predicted variables.\n\nThe idea of explicitly accounting for the boundedness of clinical scores is interesting, although the assumption of the proposed model is still incorrect: clinical scores are defined on discrete scales. For this reason the Gaussian assumption for the cost function used in the method is still not appropriate for the proposed application. \nFurthermore, while being the main methodological drive of this work, the paper does not show evidence about improved predictive performance and generalisation when accounting for the boundedness of the regression targets. \nThe proposed algorithm is also generally compared with respect to linear methods, and the authors could have provided a more rigorous benchmark including standard non-linear prediction approaches (e.g. random forests, NN, GP, …). \n\nOverall, the proposed methods seems to provide little added value to the large amount of predictive methods proposed so far for prediction in neurodegenerative disorders. Moreover, the proposed experimental paradigm appears flawed. What is the interest of predicting baseline (or 6 months at best) cognitive scores (relatively low-cost and part of any routine clinical assessment) from brain imaging data (high-cost and not routine)?\n\nOther remarks. \n\n- In section 2.2 and 4 there is some confusion between iteration indices and samples indices “i”. \n\n- Contrarily to what is stated in the introduction, the loss functions proposed in page 3 (first two formulas) only accounts for the lower bound of the predicted variables. \n\n- Figure 2, synthetic data. The scale of the improvement of the subspace difference is quite tiny, in the order of 1e-2 when compared to U, and of 1e-5 across iterations. The loss function of Figure 2.b also does not show a strong improvement across iterations, while indicating a rather large instability of the optimisation procedure. These aspects may be a sign of convergence issues. \n\n- The dimensionality of the subspace representation importantly depends on the choice of the rank R of U and V. This is a crucial parameters that is however not discussed nor analysed in the paper. \n\n- The synthetic example of page 7 is quite misleading and potentially biased towards the proposed model. The authors are generating the synthetic data according to the model, and it is thus not surprising that they managed to obtain the best performance. In particular, due to the nonlinear nature of (1), all the competing linear models are expected to perform poorly in this kind of setting.\n\n- The computation time for the linear model shown in Table 3 is quite surprising (~20 minutes for linear regression of 5k observations). Is there anything that I am missing?\n", "The authors propose a DNN, called subspace network, for nonlinear multi-task censored regression problem. The topic is important. 
Experiments on real data show improvements compared to several traditional approaches.\n\nMy major concerns are as follows.\n\n1. The paper is not self-contained. The authors claim that they establish both asymptotic and non-asymptotic convergence properties for Algorithm 1. However, for some key steps in the proof, they refer to other references. If this is due to space limitations in the main text, they may want to provide a complete proof in the appendix.\n\n2. The experiments are unconvincing. They compare the proposed SN with other traditional approaches on a very small data set with 670 samples and 138 features. A major merit of DNNs is that they can automatically extract useful features. However, in this experiment, the features are handcrafted before they are fed into the models. Thus, I would like to see a comparison between SN and a vanilla DNN. ", "This paper presents a new multi-task network architecture within which low-rank parameter spaces are found using matrix factorization. As carefully proved and tested, only one pass of the training data suffices to recover the parametric subspace; thus, the network can be easily trained layer-wise and expanded.\n\nSome novel contributions:\n1. Layer-by-layer feedforward training process, no back-prop.\n2. On-line setting to train parameters (guaranteed convergence in a single pass of the data)\n\nWeaknesses:\n1. The assumption that a low-rank parameter space exists among tasks rather than original feature spaces is not new and is widely used in the literature.\n2. The proof part (Section 2.2) can be extended with more details in the Appendix.\n3. In the synthetic data experiments (Table 1), only small margins could be observed between SN, f-MLP and rf-MLP, and only Layer 1 of SN performs better than all the others.\n4. Typo: In Tables 2, 3, and 5, Multi-l_{2,1} (denoting the L2,1 norm) is written incorrectly.\n5. In the synthetic data experiments comparing single-task and multi-task models, the counter-intuitive results of the multi-task models (with a larger training data split, ANMSE rises instead of decreasing) may need further explanation. \n6. Extra models like deep networks with/without matrix factorization could be added. (As the proposed model is a deep model, the lack of comparison with deep methods is dubious.)\n7. In Section 4.2, the real dataset is rather small; thus the results on this small dataset are not convincing enough. The SN model outperforms the state-of-the-art by only a small margin. Extensive experiments could be added.\n8. The performance of a one-layer Subspace Network (with only the input features) could be added. \n\nConclusion:\nThough the idea for solving the multi-task censored regression problem is quite novel, the experiments conducted on synthetic and real data are not convincing enough to establish the contribution of the Subspace Network. \n", "We wish all the reviewers a happy new year, and we look forward to addressing any new comments on our responses posted previously. If you have any comments, feel free to let us know. Thank you very much.", "We would like to thank all reviewers for all the valuable and constructive comments. By following the suggestions, we have significantly extended our experiments, revised our paper and addressed questions from all the reviewers. Below we summarize the improvements in this revision:\n\nMajor Improvements:\n\n1. For synthetic experiments (Table 1), we provide additional metrics to evaluate the recovery achieved by different methods. 
Results show that SN outperforms all methods in various metrics, with comparable margins. \n\n2. We largely extend the experiments and compare SN with three DNN baselines (naive, with censoring, and with censoring + low-rank) for both synthetic and real data. Results (Tables 2 and 5) show SN outperforms all baselines.\n\n3. We verify the benefit of considering target boundedness in all sets of methods considered: (1) single-task and multi-task shallow methods; (2) deep methods. Results (Tables 2 and 5) show performance improvement when considering the boundedness of targets.\n\n4. We've added more detailed proof outlines for both asymptotic and non-asymptotic convergence properties in the Appendix.\n\nMinor Improvements:\n\n1. We resolve the counter-intuitive experimental observation by fixing numerical issues in the multi-task algorithms and update the results in Tables 2 and 5.\n\n2. We remeasure the computation speed more accurately and update the results in Table 6.\n\n3. We have clarified all previously confusing points in the revised version.\n", "Thanks very much for your suggestions. We have addressed all your concerns below. Most notably, we have significantly enriched our experiments to verify the performance advantages of SN over DNNs on real data, and to analyze how much each component (censoring, low-rank, and online feed-forward training) accounts for this performance gain. We have also revised our paper and corrected typos. Overall, we hope that our revised version demonstrates its value as a unique and effective predictive method for neurodegenerative disorders.\n \nQ1: Clinical scores are defined on discrete scales: the Gaussian assumption for the cost function is thus not appropriate for the proposed application.\n \nA1: As defined in (1), the Gaussian assumption is not enforced on the scores, but on the noise \epsilon that we assume in the latent space. It is a standard assumption in subspace sensing, and the standard deviation hyperparameter controls how accurately the low-rank subspace assumption can fit the data. It does not solely determine the distribution of y, and other noise models can also be assumed here. We apologize for the confusion and have revised the paper to make it clearer.\n\nQ2: The paper does not show evidence about improved predictive performance and generalization when accounting for the target boundedness.\n \nA2: We extended the experiments to compare results between considering target boundedness (Censored) and not (Uncensored) for both single-task and multi-task models. In either scenario, we reported the best performance (LS+L1 for single-task and Multi Trace for multi-task in our case) in Table 2. In the revised version, we further compared SN with several DNN baselines, where the benefit of setting censored regression goals is also found to be evident in DNN settings. Please see the next response for details.\n \nQ3: The authors should provide a more rigorous benchmark including non-linear prediction approaches.\n \nA3: As suggested, we compared with three DNN baselines (naive, with censoring, and with censoring + low-rank) for both synthetic and real data in the paragraph “Benefits of Going deep” in Section 4.1 and “Performance” in Section 4.2 of the revised paper, in addition to the existing nonlinear Tobit censored regression model. The comparisons of the three baselines indicate that both censored regression and the low-rank assumption improve the DNN's performance on the given MTL task. 
Meanwhile, SN clearly outperforms all three, even the DNN equipped with censoring + low-rank, suggesting the advantage of our proposed online one-pass sensing and feed-forward training strategy. The performance advantage of SN over DNNs is also confirmed across different rank assumptions in Table 8 in the Appendix.\n\nQ4: What is the interest of predicting baseline (or 6 months at best) cognitive scores from brain imaging data?\n \nA4: Thanks for pointing this out. We have revised the paper to refer readers interested in this setting to relevant clinical references. The predictive modeling paradigm that we used in the paper is a rather common setting in clinical studies of neurodegenerative diseases such as Alzheimer's disease (AD), e.g.,\n \n[1] Stonnington, C. M., Chu, C., Klöppel, S., Jack, C. R., Ashburner, J., Frackowiak, R. S., & Alzheimer Disease Neuroimaging Initiative. (2010). Predicting clinical scores from magnetic resonance scans in Alzheimer's disease. Neuroimage, 51(4), 1405-1413.\n[2] Orrù, G., Pettersson-Yeo, W., Marquand, A. F., Sartori, G., & Mechelli, A. (2012). Using support vector machine to identify imaging biomarkers of neurological and psychiatric disease: a critical review. Neuroscience & Biobehavioral Reviews, 36(4), 1140-1152.\n[3] Zhang, D., Shen, D., & Alzheimer's Disease Neuroimaging Initiative. (2012). Multi-modal multi-task learning for joint prediction of multiple regression and classification variables in Alzheimer's disease. NeuroImage, 59(2), 895-907.\n[4] Zhou, J., Liu, J., Narayan, V. A., Ye, J., & Alzheimer's Disease Neuroimaging Initiative. (2013). Modeling disease progression via multi-task learning. NeuroImage, 78, 233-248.\n \nThe rationale behind this setting is as follows. For example, the diagnosis of AD requires autopsy confirmation, which is not applicable to live patients. Hence many cognitive measures have been designed to evaluate the cognitive status of a patient. These measures are important criteria for clinical diagnosis of probable AD. These cognitive status/scores can be considered as phenotypes that are entangled with complicated neurological pathologies in the brain. Currently there are many hypotheses about the pathological pathways of AD progression over time, but we are far from understanding the ultimate cause, and thus studies of associations between cognitive scores and neuroimages are critical in understanding the progression and predictability of the disease. The models can reveal important insights and may lead to novel targets for therapeutic intervention and drug development. \n\n", "\nQ6. In Section 4.2, the real dataset is rather small; thus the results on this small dataset are not convincing enough. Extensive experiments could be added.\n \nA6: Thanks for the comments. We have extended our experiments by comparing SN with several baseline DNNs (in previous comments) and the results verify that SN outperforms all three DNN variants.\n \nWe note that MTL, like the method proposed in this paper, is typically used to solve learning problems with insufficient training data. Nowadays this is very typical in medical research, and is one motivation for us to design this method. The ADNI data used in our paper is so far the largest cohort collected for Alzheimer's disease study, even though it still has fewer than 1,000 patients available due to the expensive data collection process. 
The ADNI dataset is widely used for building machine learning models, where researchers have proposed many algorithms to tackle challenges arising from the small sample size:\n \n[1] Huang, S., Li, J., Sun, L., Ye, J., Fleisher, A., Wu, T., ... & Alzheimer's Disease NeuroImaging Initiative. (2010). Learning brain connectivity of Alzheimer's disease by sparse inverse covariance estimation. NeuroImage, 50(3), 935-949.\n[2] Stonnington, C. M., Chu, C., Klöppel, S., Jack, C. R., Ashburner, J., Frackowiak, R. S., & Alzheimer Disease Neuroimaging Initiative. (2010). Predicting clinical scores from magnetic resonance scans in Alzheimer's disease. Neuroimage, 51(4), 1405-1413.\n[3] Orrù, G., Pettersson-Yeo, W., Marquand, A. F., Sartori, G., & Mechelli, A. (2012). Using support vector machine to identify imaging biomarkers of neurological and psychiatric disease: a critical review. Neuroscience & Biobehavioral Reviews, 36(4), 1140-1152.\n[4] Zhang, D., Shen, D., & Alzheimer's Disease Neuroimaging Initiative. (2012). Multi-modal MTL for joint prediction of multiple regression and classification variables in Alzheimer's disease. NeuroImage, 59(2), 895-907.\n[5] Zhou, J., Liu, J., Narayan, V. A., Ye, J., & Alzheimer's Disease Neuroimaging Initiative. (2013). Modeling disease progression via MTL. NeuroImage, 78, 233-248.\n[6] Liu, M., & Zhang, D. (2016). Pairwise constraint-guided sparse learning for feature selection. IEEE Transactions on Cybernetics, 46(1), 298-310.\n[7] Zheng, X., Shi, J., Li, Y., Liu, X., & Zhang, Q. (2016, April). Multi-modality stacked deep polynomial network based feature learning for Alzheimer's disease diagnosis. In Biomedical Imaging (ISBI), 2016 IEEE 13th International Symposium on (pp. 851-854). IEEE.\n \nWe do plan to collaborate with clinical partners to collect larger neurodegenerative datasets and apply the proposed method.\n \nQ7. Add 1-layer SN results.\n \nA7: The results of the 1-layer SN have been added: see Table 2 and Table 5.\n", "\nThanks very much for appreciating the novelty of our work, and for your very insightful and constructive comments. We have addressed all your questions below. In particular, we have significantly enriched our experiments as suggested. All results consistently suggest the performance advantage and robustness of Subspace Network over state-of-the-art linear/nonlinear methods, as well as DNNs. We have also revised the paper and corrected typos. \n\nQ1: The assumption that a low-rank parameter space exists among tasks rather than original feature spaces is not new.\n \nA1: SN was mainly built on the line of work on online subspace sensing, where the low-rank assumption was enforced on the input space, e.g., Mardani et al. (2015), Shen et al. (2016). Motivated by the popularity of low-rank parameter spaces in MTL, we introduced the first-of-its-kind combination of online subspace sensing (mostly focusing on the input space) and the low-rank parameter assumption for MTL: we believe their marriage to be new.\n \nQ2: The proof part (Section 2.2) can be extended with more details in the Appendix.\n \nA2: We've added more detailed proof outlines in the Appendix. At some steps of the proofs, we point to the important key results to refer to. Proofs are provided for self-containedness only.\n\nQ3: In the synthetic data experiments (Table 1), only small margins could be observed between SN, f-MLP and rf-MLP, and only Layer 1 of SN performs better than all others.\n \nA3: We have provided additional experimental results after further tuning all three networks. 
In addition to subspace difference, we added two new metrics: (1) the maximum mutual coherence of all column pairs from two matrices, a classical measurement of how correlated the two matrices' column subspaces are; (2) the mean mutual coherence of all column pairs from two matrices. Note that the two mutual coherence-based metrics are more robust since they are immune to linear transformations of subspace coordinates, to which the L2-based subspace difference is not immune. Results can be found in Table 1, showing the clear advantage of SN over f-MLP and rf-MLP under all three metrics. The performance margins of SN in terms of maximum/mean mutual coherences are remarkably more visible than under the L2-based difference.\n\nQ4: In the synthetic data experiments comparing single-task and multi-task models, the counter-intuitive results of the multi-task models (with a larger training data split, ANMSE rises instead of decreasing) may need further explanation.\n \nA4: Thanks so much for pointing this out. We found there were numerical convergence issues with the optimization algorithms for the MTL models. We have fixed the problem and updated the results in Table 3, where the performance is now intuitive.\n\nWe also extended our experiments to compare results between considering target boundedness (Censored) and not (Uncensored) for both single-task and multi-task models. In either scenario, we reported the best performance (LS+L1 for the single-task linear model, and Multi Trace for multi-task in our case) in Table 2.\n \nQ5: Extra models like deep networks with/without matrix factorization could be added.\n \nA5: Thanks very much for your suggestion. We have added them as suggested. Please refer to the new paragraph “Benefits of Going deep” in Section 4.1 and “Performance” in Section 4.2 in the revised paper. In sum, we compared with three DNN baselines (naive, with censoring, and with censoring + low-rank) for both synthetic and real data. The comparisons of the three baselines indicated that both censored regression and the low-rank assumption improved the DNN's performance on the given MTL task. Meanwhile, SN clearly outperformed all three, even the DNN equipped with censoring + low-rank, suggesting the advantage of our proposed online one-pass sensing and feed-forward training strategy. The performance advantage of SN over DNNs was confirmed across different rank assumptions, and across both synthetic and real data.\n", 
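For readers unfamiliar with the recovery metrics discussed in A3 above, here is a minimal sketch of the L2-based subspace difference and the maximum/mean mutual coherence between two column subspaces; the matrix sizes and the toy "recovered" subspace are illustrative assumptions.

```python
import numpy as np

def subspace_difference(U, U_hat):
    """L2-based metric ||U - U_hat||_F / ||U||_F; sensitive to the
    particular coordinates chosen for the subspace."""
    return np.linalg.norm(U - U_hat) / np.linalg.norm(U)

def mutual_coherences(U, U_hat):
    """Max and mean absolute cosine between all column pairs of U and
    U_hat; invariant to linear re-parameterizations of the coordinates."""
    Un = U / np.linalg.norm(U, axis=0, keepdims=True)
    Vn = U_hat / np.linalg.norm(U_hat, axis=0, keepdims=True)
    C = np.abs(Un.T @ Vn)          # (rank_U, rank_U_hat) coherence table
    return C.max(), C.mean()

U = np.linalg.qr(np.random.randn(50, 5))[0]   # ground-truth subspace basis
U_hat = U + 0.01 * np.random.randn(50, 5)     # toy recovered subspace
print(subspace_difference(U, U_hat), mutual_coherences(U, U_hat))
```

Under this convention, a small subspace difference and a mutual coherence close to 1 both indicate good alignment with the ground truth.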
The reasons are: (1) they can be very lengthy; (2) they were well-established results in other relevant literature and were not our innovations; (3) those intermediate results were not tightly related to the main contributions of this paper (the SN model). We believe that the current proof outline has already captured all the main proof ideas and should be easy to follow for interested readers. \n\nQ2. The experiments are unconvincing. They compare the proposed SN with other traditional approaches on a very small data set with 670 samples and 138 features. A major merit of DNNs is that they can automatically extract useful features. However, in this experiment, the features are handcrafted before they are fed into the models. Thus, I would like to see a comparison between SN and a vanilla DNN.\n\nA2: Thanks for your comments. We agree that one major merit of DNNs is to automatically extract features from images, which has demonstrated huge success in many domains. Such capability is based on the availability of large labeled training data. In the medical research domain, however, such labeled data is rarely available, especially for challenging neurodegenerative diseases such as Alzheimer's disease (AD) and Parkinson's disease. The ADNI data used in our paper is so far the largest cohort collected for Alzheimer's disease study, and yet it has fewer than 1,000 patients available for building predictive models due to the expensive data collection process. Due to the extremely high dimensionality of an MRI image (voxel size: 512x512x16 = 4,194,304), most studies use region-of-interest features extracted by existing neuroimaging tools, instead of raw imaging data, for studying the progression of a disease. As such, the majority of AD studies perform predictive modeling using extracted features:\n\n[1] Duchesne, S., Caroli, A., Geroldi, C., Collins, D. L., & Frisoni, G. B. (2009). Relating one-year cognitive change in mild cognitive impairment to baseline MRI features. Neuroimage, 47(4), 1363-1370.\n[2] Stonnington, C. M., Chu, C., Klöppel, S., Jack, C. R., Ashburner, J., Frackowiak, R. S., & Alzheimer Disease Neuroimaging Initiative. (2010). Predicting clinical scores from magnetic resonance scans in Alzheimer's disease. Neuroimage, 51(4), 1405-1413.\n[3] Orrù, G., Pettersson-Yeo, W., Marquand, A. F., Sartori, G., & Mechelli, A. (2012). Using support vector machine to identify imaging biomarkers of neurological and psychiatric disease: a critical review. Neuroscience & Biobehavioral Reviews, 36(4), 1140-1152.\n[4] Zhang, D., Shen, D., & Alzheimer's Disease Neuroimaging Initiative. (2012). Multi-modal multi-task learning for joint prediction of multiple regression and classification variables in Alzheimer's disease. NeuroImage, 59(2), 895-907.\n[5] Zhou, J., Liu, J., Narayan, V. A., Ye, J., & Alzheimer's Disease Neuroimaging Initiative. (2013). Modeling disease progression via multi-task learning. NeuroImage, 78, 233-248.\n\nWe also improve our experiments by comparing with three DNN baselines (naive, with censoring, and with censoring + low-rank) in both synthetic and real data; please refer to the new paragraph “Benefits of Going deep” in Section 4.1 and “Performance” in Section 4.2 in the revised paper. The comparisons of the three baselines indicate that both censored regression and the low-rank assumption improve the DNN's performance on the given MTL task. 
Meanwhile, Subspace Network clearly outperforms all three, even the DNN equipped with censoring + low-rank, suggesting the advantage of our proposed online one-pass sensing and feed-forward training strategy. The performance advantage of Subspace Network is confirmed across different rank assumptions, and across both synthetic and real data.\n", 
The authors are generating the synthetic data according to the model, and it is thus not surprising that they managed to obtain the best performance. \n \nA9: One goal of the synthetic experiments is to verify whether our model can correctly recover the underlying low-rank parameter subspaces: that is why we generate data in a controlled way. The synthetic results align with our theory. Note that the third synthetic experiment in Section 4.1 (“Benefits of Going Deep”) shows an interesting example: a multi-layer SN performs best even when the data is generated using the one-layer model.\n \nMore importantly, the practical effectiveness of Subspace Network is validated by the real data experiments, where no data generation process is assumed and no underlying parameter (e.g., rank, layer number) is known a priori. Subspace Network proves able to automatically discover latent low-rank subspaces from data and achieves superior predictions. We also compare our model with several DNN baselines in the revised paper, and still achieve performance margins over them.\n \nQ10: The computation time for the linear model shown in Table 3 is quite surprising (~20 minutes for linear regression of 5k observations).\n \nA10: Thanks very much for pointing this out. There was an unintentional bug in measuring the algorithm running time. We realized and corrected it right after submission. We apologize for this careless mistake and have reported the correct running time in the revised version, Table 6 in the Appendix." ]
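The two normalized metrics described in A7 above are easy to state in code. The following is a minimal NumPy sketch with invented stand-in matrices (U_true for the ground-truth subspace, U_iterates for the per-sample online estimates); it illustrates the definitions only and is not the authors' implementation:

import numpy as np

def subspace_diff(U_est, U_ref):
    # ||U_est - U_ref||_F / ||U_ref||_F, the normalized difference from A7.
    return np.linalg.norm(U_est - U_ref) / np.linalg.norm(U_ref)

rng = np.random.default_rng(0)
U_true = rng.standard_normal((50, 5))          # hypothetical ground-truth subspace
U_iterates = [U_true + rng.standard_normal((50, 5)) / i for i in range(1, 101)]

# Metric (1): distance to the ground truth, in units of ||U||_F (hence small scale).
to_truth = [subspace_diff(U_i, U_true) for U_i in U_iterates]
# Metric (2): iteration-wise difference ||U_i - U_{i-1}||_F / ||U||_F.
stepwise = [np.linalg.norm(b - a) / np.linalg.norm(U_true)
            for a, b in zip(U_iterates, U_iterates[1:])]
print(to_truth[0], to_truth[-1], stepwise[-1])

Because both metrics are divided by ||U||_F, their absolute values are small even when convergence is healthy, which is the point made in A7 about the scale in Figure 2.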
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HytSvlWRZ", "iclr_2018_HytSvlWRZ", "iclr_2018_HytSvlWRZ", "iclr_2018_HytSvlWRZ", "iclr_2018_HytSvlWRZ", "SkwZAL4ef", "BJl30LJ7G", "B1r0SU9gz", "r1z2QSOlz", "H189gvyQM" ]
iclr_2018_HkanP0lRW
Data-driven Feature Sampling for Deep Hyperspectral Classification and Segmentation
The high dimensionality of hyperspectral imaging poses unique challenges in scope, size and processing requirements. Motivated by the potential for an in-the-field cell sorting detector, we examine a Synechocystis sp. PCC 6803 dataset wherein cells are grown alternately in nitrogen-rich or nitrogen-deplete cultures. We use deep learning techniques to both successfully classify cells and generate a mask segmenting the cells/condition from the background. Further, we use the classification accuracy to guide a data-driven, iterative feature selection method, allowing the design of neural networks that require 90% fewer input features with little accuracy degradation.
rejected-papers
Area chair is in agreement with reviewers: this is a good experiment that successfully applies specific machine learning techniques to the particular task. However, the authors have not discussed or studied the breadth of other possible methods that could also solve the given task ... besides those mentioned by the reviewers, U-Nets, and variants thereof, come to mind. Without these comparisons, the novelty and significance cannot be assessed. Authors are encouraged to study similar works, and perform a comparison among multiple possible approaches, before submission to another venue.
train
[ "HyrFUnNgf", "rkEn8swgG", "S12q91ZZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Authors propose a greedy scheme to select a subset of (highly correlated) spectral features in a classification task. The selection criterion used is the average magnitude with which this feature contributes to the activation of a next-layer perceptron. Once validation accuracy drops too much, the pruned network is retrained, etc. \n\nPro: \n- Method works well on a single data set and solves the problem\n- Paper is clearly written \n- Good use of standard tricks \n\nCon: \n- Little novelty\n\nThis paper could be a good fit for an applied conference such as the International Symposium on Biomedical Imaging. \n", "This paper explores the use of neural networks for classification and segmentation of hyperspectral imaging (HSI) of cells. The basic set-up of the method and results seem correct and will be useful to the specific application described. While the narrow scope might limit this work's significance, my main issue with the paper is that while the authors describe prior art in terms of HSI for biological preps, there is a very rich literature addressing HSI images in other domains: in particular for remote sensing. I think that this work can (1) be a lot clearer as to the novelty and (2) have a much bigger impact if this literature is addressed. In particular, there are a number of methods of supervised and unsupervised feature extraction used for classification purposes (e.g. endmember extraction or dictionary learning). It would be great to know how the features extracted from the neural network compare to these methods, as well as how the classification performance compares to typical methods, such as performing SVM classification in the feature space. The comparison with the random forests is nice, but that is not a standard method. Putting the presented work in context with these other methods would help make their results more general, and hopefully increase the applicability to more general HSI data (thus increasing significance). \n\nAn additional place where this comparison to the larger HSI literature would be useful is in the section where the authors describe the use of the network weights to isolate sub-bands that are more informative than others. Given the high correlation in many spectra, typically something like random sampling might be sufficient (as in compressive sensing). This type of compression can be applied at the sensor -- a benefit the authors mention for their band-wise sub-sampling. It would be good to acknowledge this prior work and to understand if the features from the network are superior to the random sampling scheme. \n\nFor these comments, I suggest the authors look at the following papers (and especially the references therein):\n[1] Li, Chengbo, et al. \"A compressive sensing and unmixing scheme for hyperspectral data processing.\" IEEE Transactions on Image Processing 21.3 (2012): 1200-1210.\n[2] Bioucas-Dias, José M., et al. \"Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches.\" IEEE journal of selected topics in applied earth observations and remote sensing 5.2 (2012): 354-379.\n[3] Charles, Adam S., Bruno A. Olshausen, and Christopher J. Rozell. \"Learning sparse codes for hyperspectral imagery.\" IEEE Journal of Selected Topics in Signal Processing 5.5 (2011): 963-978.\n\n", "In this paper, the authors propose a framework to classify cells and implement cell segmentation based on deep learning techniques. 
Using the classification results to guide the feature selection method, their approach can achieve comparable performance even when 90% of the input features are removed. \n\nIn general, the paper addresses an interesting problem, but the technical contribution is rather incremental. The authors seem to apply some well-defined methods to realize a new task. The authors are expected to clarify their technical contributions or model improvements to address the specific problem. \n\nMoreover, there has also been some recent progress on image segmentation, such as FCN or Mask R-CNN. The authors are expected to demonstrate the results by improving upon these advanced models. \n\nIn general, this is an interesting paper, but it would be a better fit for MICCAI or ISBI. \n" ]
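The greedy data-driven feature selection loop the first review summarizes — rank spectral features by the average magnitude of their contribution to next-layer activations, prune the weakest, and retrain once accuracy drops too far — can be sketched with a one-layer surrogate. Everything below is an illustrative assumption: a logistic classifier stands in for the paper's network, the data is synthetic, and training accuracy stands in for the validation accuracy the method would actually monitor:

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((512, 40))
w_true = np.zeros(40); w_true[:8] = 2.0        # only 8 of 40 "bands" are informative
y = (X @ w_true + 0.1 * rng.standard_normal(512) > 0).astype(float)

def train(Xs, y, iters=300, lr=0.5):
    # Plain gradient descent on the mean logistic loss.
    w = np.zeros(Xs.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(Xs @ w)))
        w -= lr * Xs.T @ (p - y) / len(y)
    return w

def accuracy(Xs, y, w):
    return float(np.mean(((Xs @ w) > 0) == y))

features = np.arange(X.shape[1])
w = train(X[:, features], y)
base = accuracy(X[:, features], y, w)
while len(features) > 4:
    scores = np.mean(np.abs(X[:, features] * w), axis=0)   # avg contribution magnitude
    drop = max(1, len(features) // 10)
    keep = np.sort(np.argsort(scores)[drop:])              # prune the ~10% weakest
    features, w = features[keep], w[keep]
    if accuracy(X[:, features], y, w) < base - 0.02:       # accuracy dropped too much:
        w = train(X[:, features], y)                       # retrain the pruned model
print(len(features), accuracy(X[:, features], y, w))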
[ 3, 6, 4 ]
[ 5, 5, 5 ]
[ "iclr_2018_HkanP0lRW", "iclr_2018_HkanP0lRW", "iclr_2018_HkanP0lRW" ]
iclr_2018_H1K6Tb-AZ
TESLA: Task-wise Early Stopping and Loss Aggregation for Dynamic Neural Network Inference
For inference operations in deep neural networks on end devices, it is desirable to deploy a single pre-trained neural network model, which can dynamically scale across a computation range without compromising accuracy. To achieve this goal, Incomplete Dot Product (IDP) has been proposed to use only a subset of terms in dot products during forward propagation. However, IDP has some limitations, including noticeable performance degradation in operating regions with low computational costs, and inherent performance limits since IDP uses hand-crafted profile coefficients. In this paper, we extend IDP by proposing new training algorithms involving a single profile, which may be trainable or pre-determined, to significantly improve the overall performance, especially in operating regions with low computational costs. Specifically, we propose the Task-wise Early Stopping and Loss Aggregation (TESLA) algorithm, which, on a 3-layer multilayer perceptron on MNIST, outperforms the original IDP by 32\% when only 10\% of dot product terms are used and achieves 94.7\% accuracy on average. By introducing trainable profile coefficients, TESLA further improves the accuracy to 95.5\% without specifying coefficients in advance. Moreover, TESLA is applied to the VGG-16 model, which achieves 80\% accuracy using only 20\% of dot product terms on CIFAR-10 and also keeps 60\% accuracy using only 30\% of dot product terms on CIFAR-100, whereas the original IDP performs like random guessing on these two datasets at such low computation costs. Finally, we visualize the learned representations at different dot product percentages using class activation maps and show that, by applying TESLA, the learned representations can adapt over a wide range of operating regions.
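Read literally, the abstract's "subset of terms in dot products" reduces to truncating each dot product to its first x% of terms, weighted by per-term profile coefficients. A minimal NumPy sketch under that reading — the shapes, the linearly decaying profile, and the function name are illustrative assumptions, not the paper's implementation:

import numpy as np

def incomplete_dot_product(x, W, gamma, pct):
    # Keep only the first pct of the dot-product terms, each scaled by its
    # profile coefficient gamma; a sketch of IDP as described in the abstract.
    k = max(1, int(np.ceil(pct * x.shape[0])))
    return W[:, :k] @ (gamma[:k] * x[:k])

rng = np.random.default_rng(0)
x = rng.standard_normal(128)                   # one input vector
W = rng.standard_normal((10, 128))             # one layer's weights
gamma = np.linspace(1.0, 0.1, 128)             # hand-crafted, decaying profile
full = incomplete_dot_product(x, W, gamma, 1.0)
half = incomplete_dot_product(x, W, gamma, 0.5)   # 50% DP: half the compute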
rejected-papers
General consensus among reviewers that paper does not meet criteria for publication. Pro: - Improvement over the original IDP proposal. - Some promising preliminary results. Con: - Insufficient comparison to other methods of network compression, - Insufficient comparison to other datasets (such as ImageNet) - Insufficient evaluation on variety of other models - Writing could be more clear
train
[ "SJF0AbKgG", "rJyYwFhlz", "r1VT-4J-z", "SykDmN6Xz", "ryaL9Qp7G", "S11DgXamf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "An approach to adjusting inference speed, power consumption or latency by using incomplete dot products (McDanel et al., 2017) is investigated.\n\nThe approach is based on `profile coefficients’ which are learned for every channel in a convolution layer, or for every column in the fully connected layer. Based on the magnitude of this profile coefficient, which determines the importance of this `filter,’ individual components in a neural net are switched on or off. McDanel et al. (2017) propose to train such an approach in a stage-by-stage manner.\n\nDifferent from the recently proposed method by McDanel et al. (2017), the authors of this submission argue that the stage-by-stage training doesn’t fully utilize the deep net performance. To address this issue, a `loss aggregation’ is proposed which jointly optimizes a deep net when multiple fractions of incomplete products are used.\n\nThe method is evaluated on the MNIST and CIFAR-10 datasets and shown to outperform work on incomplete dot products by McDanel et al. (2017) by 32% in the low resource regime.\n\nSummary:\n——\nIn summary, I think the paper proposes an interesting approach but more work is necessary to demonstrate the effectiveness of the discussed method. The results are preliminary and should be extended to CIFAR-100 and ImageNet to be convincing. In addition, the writing should be improved as it is often ambiguous. See below for details.\n\nReview:\n—————\n1. Experiments are only provided on very small datasets. In my opinion, this isn’t sufficient to illustrate the effectiveness of the proposed approach. As a reader I would want to see results on CIFAR-100 and ImageNet using multiple network architectures, e.g., AlexNet and VGG16.\n\n2. Usage of the incomplete dot product for the fully connected layer and the convolutional layer seems inconsistent. More specifically, while the profile coefficient is applied for every input element in Eq. (1), it’s applied based on output channels in Eq. (2). This seems inconsistent, and a comment like `These two approaches, however, are equivalent with negligible difference induced by the first hidden layer’ is more confusing than clarifying.\n\n3. The writing should be improved significantly and statements should be made more precise, e.g., `From now on, x% DP, where \leq x \geq 100, means the x% of terms used in dot products’. While sentences like those can be deciphered, they aren’t that appealing.\n\n4. The loss functions in Eq. (3) should be made more precise. It remains unclear whether the profile coefficients and the weights are trained jointly, separately, incrementally etc.\n\n5. Algorithm 1 and Algorithm 2 call functions that aren’t described/defined.\n\n6. Baseline numbers for training on datasets without incomplete dot products should be provided.\n", "This paper presents a modification of a numeric solution, Incomplete Dot Product (IDP), which allows a trained network to be used under different hardware constraints. The IDP method works by incorporating a 'coefficient' into each layer (fully connected or convolution), which can be learned as the weights of the model are being optimized. These coefficients can be used to prune subsets of the nodes or filters when hardware has limited computational capacity. \n\nThe original IDP method (cited in the paper) is based on iteratively training for higher hardware capacities. 
This paper improves upon the limitation of the original IDP by allowing the weights of the network to be trained concurrently with these coefficients, and the authors present a loss function that is a linear combination of the loss functions under the original and constrained network settings. They also present results for a 'harmonic' combination which was not explained in the paper at all.\n\nOverall the paper has very good motivation and significance. \nHowever, the writing is not very clear and the paper is not self-contained at all. I was not able to understand the significance of early stopping and how this connects with loss aggregation, and how the learning process differs from the original IDP paper, if they also have a scheduled learning setting. \n\nAdditionally, there were several terms that were unexplained in this paper, such as the 'harmonic' method highlighted in Figure 3. As is, while results are promising, I can't fully assess that the paper has major contributions. ", "The authors propose a method for reducing the computational burden when performing inference in deep neural networks. The method is based on a previously-developed approach called incomplete dot products, which works by pruning some of the inputs in the dot products via the introduction of pre-specified coefficients. The authors of this paper extend the method by introducing a task-wise learning procedure that sequentially optimizes a loss function for decreasing percentages of included features in the dot product. \n\nUnfortunately, this paper was hard to follow for someone who does not actively work in this field, making it hard to judge if the contribution is significant or not. While the description of the problem itself is adequate, when it comes to describing the TESLA procedure and the alternative training procedure, the relevant passages are, in my opinion, too vague to allow other researchers to implement this procedure.\n\nPositive points:\n- The application seems relevant, and the task-wise procedure seems like an improvement over the original IDP proposal.\n- Application to two well-known benchmarking datasets.\n\nNegative points:\n- The method is not described in sufficient detail to allow reproducibility; the algorithms are no more than sketches.\n- It is not clear to me what the advantage of this approach is, as opposed to alternative ways of compressing the network (e.g. via group lasso regularization), or training an emulator on the full model for each task.\n\nMinor point:\n- Figure 1 is unclear and requires a better caption. ", "Thanks for your comments. The following are our responses to your questions in the review:\n\n1. Application to another dataset, VGG-16 on CIFAR-100\nIn this revision, we evaluate the effectiveness of TESLA on another dataset, CIFAR-100, and TESLA does outperform the original IDP design by a great margin, 60% accuracy, at low computation costs. Please find the experiment result on CIFAR-100 in Figure 4.\n\n2. In fact, the profile coefficient is applied equivalently in both the fully connected and convolutional layers. For example, in a fully connected layer, an input element is first multiplied by its weights and then by its corresponding profile coefficients.\n\n4. It is worth clarifying that TESLA takes one new task into consideration at a time, aggregates the loss of that new task into the current objective function, and tries to optimize the aggregated loss until the early stopping criterion is met. 
Once early stopping is triggered, we add another new task and repeat the process until all tasks are optimized.\n\n3, 5 and 6. Thanks for pointing out these errors. We have corrected them in this revision.\n\nMany thanks for your valuable comments.", "Thanks for your detailed comments.\n\n1. To answer the question about \"how the learning process differs from the original IDP paper, if they also have a scheduled learning setting\":\n\n(i) The original IDP only trains the network using the complete dot product, so, in our terminology, the original IDP optimizes a single task of 100% DP. That is why the original IDP does not perform well at inference time under low computation costs. On the other hand, TESLA adds a new task at a time and tries to optimize multiple tasks incrementally and jointly, for the sake of improving the performance of the new task without contaminating the performance of the past tasks. \n\n(ii) Scheduled learning is similar but not equivalent to the effect of our early stopping design. For example, suppose task A ideally needs 10 epochs to reach optimal performance. If we allocate fewer than 10 epochs to task A, the model will underfit on task A; if we allocate more than 10 epochs, say 20 epochs, the model will overfit. Thus, we believe early stopping is a better way to adapt across tasks with different convergence rates.\n\n2. Revision of the description of TESLA\nWe have revised our paper and elaborated on the details of the TESLA algorithm, which were originally cut for space considerations. Briefly, TESLA is designed to optimize multiple tasks incrementally and jointly by:\n\n(i) Task-wise early stopping: due to the different learning difficulties and convergence rates of tasks, we apply an early stopping mechanism to adjust the training process of each task, to reduce overfitting and optimize across all tasks sequentially.\n\n(ii) Task-wise loss aggregation: because of the shared weights between any two tasks, when adding a new task, we aggregate the loss of the new task into the current objective function and optimize jointly.\n\nPlease check the revised Section 3 to better understand the design intuition of our algorithm.\n\n3. Application to another dataset, CIFAR-100\nIn this revision, we also evaluate the effectiveness of TESLA on another dataset, CIFAR-100, and TESLA does outperform the original IDP design. Please find the experiment result in Figure 4.\n\nThank you.", "Thanks for your comments. \n\n1. Revision of the description of TESLA\nWe have revised our paper and elaborated on the details of the TESLA algorithm, which were originally cut for space considerations. Briefly, TESLA is designed to optimize multiple tasks incrementally and jointly by:\n\n(a) Task-wise early stopping: due to the different learning difficulties and convergence rates of tasks, we apply an early stopping mechanism to adjust the training process of each task, to reduce overfitting and optimize across all tasks sequentially.\n\n(b) Task-wise loss aggregation: because of the shared weights between any two tasks, when adding a new task, we aggregate the loss of the new task into the current objective function and optimize jointly.\n\nPlease check the revised Section 3 to better understand the design intuition of our algorithm.\n\n2. Application to another dataset, CIFAR-100\nIn this revision, we also evaluate the effectiveness of TESLA on another dataset, CIFAR-100, and TESLA does outperform the original IDP design. Please find the experiment result in Figure 4." ]
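Putting the two responses together, TESLA's training scheme is: add one DP-percentage task at a time, optimize the sum of the losses of all tasks added so far, and move on when improvement stalls. The toy sketch below implements that scheme on a least-squares problem; the unweighted sum of task losses, the fixed profile, and the patience-based stopping rule (on training loss) are assumptions standing in for the paper's Eq. (3), trainable coefficients, and validation-based early stopping:

import numpy as np

rng = np.random.default_rng(0)
gamma = np.linspace(1.0, 0.1, 64)              # fixed, linearly decaying profile
X = rng.standard_normal((256, 64))
y = X @ (gamma * rng.standard_normal(64))      # toy regression targets

def idp_pred(w, pct):
    # Prediction using only the first pct of dot-product terms, profile-weighted.
    k = max(1, int(np.ceil(pct * X.shape[1])))
    return X[:, :k] @ (gamma[:k] * w[:k])

def task_loss(w, pct):
    return float(np.mean((idp_pred(w, pct) - y) ** 2))

w, tasks, active = np.zeros(64), [1.0, 0.5, 0.2, 0.1], []
for pct in tasks:                              # consider one new task at a time
    active.append(pct)
    best, stale = np.inf, 0
    while stale < 5:                           # task-wise early stopping
        grad = np.zeros_like(w)
        for p in active:                       # task-wise loss aggregation
            k = max(1, int(np.ceil(p * X.shape[1])))
            r = idp_pred(w, p) - y
            grad[:k] += 2.0 * gamma[:k] * (X[:, :k].T @ r) / len(y)
        w -= 0.01 * grad
        v = sum(task_loss(w, p) for p in active)
        best, stale = (v, 0) if v < best - 1e-8 else (best, stale + 1)
print([round(task_loss(w, p), 4) for p in tasks])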
[ 4, 5, 4, -1, -1, -1 ]
[ 4, 2, 2, -1, -1, -1 ]
[ "iclr_2018_H1K6Tb-AZ", "iclr_2018_H1K6Tb-AZ", "iclr_2018_H1K6Tb-AZ", "SJF0AbKgG", "rJyYwFhlz", "r1VT-4J-z" ]
iclr_2018_HkjL6MiTb
Siamese Survival Analysis with Competing Risks
Survival Analysis (time-to-event analysis) in the presence of multiple possible adverse events, i.e., competing risks, is a challenging, yet very important problem in medicine, finance, manufacturing, etc. Extending classical survival analysis to competing risks is not trivial since only one event (e.g. one cause of death) is observed and hence the incidence of an event of interest is often obscured by other related competing events. This leads to the nonidentifiability of the event times’ distribution parameters, which makes the problem significantly more challenging. In this work we introduce the Siamese Survival Prognosis Network, a novel Siamese Deep Neural Network architecture that is able to effectively learn from data in the presence of multiple adverse events. The Siamese Survival Network is especially crafted to issue pairwise concordant time-dependent risks, in which longer event times are assigned lower risks. Furthermore, our architecture is able to directly optimize an approximation to the C-discrimination index, rather than relying on well-known metrics such as cross-entropy, which are not able to capture the unique requirements of survival analysis with competing risks. Our results show consistent performance improvements on a number of publicly available medical datasets over both statistical and deep learning state-of-the-art methods.
rejected-papers
Reviewers unanimous in assessment that manuscript has merits, but does not satisfy criteria for publication. Pros: - Potentially novel application of neural networks to survival analysis with competing risks, where only one terminal event from one risk category may be observed. Cons: - Incomplete coverage of other literature. - Architecture novelty may not be significant. - Small performance gains (though statistically significant)
train
[ "SyJXpk5lG", "SkpfobogG", "rkOt8g2ef", "H106JBomz", "SyZNJHimz", "SksVpEimG", "ByYiANimf", "HkcPpViXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper introduces siamese neural networks to the competing risks framework of Fine and Gray. The authors optimize for the c-index by minimizing a loss function driven by the cumulative risk of competing risk m and correct ordering of comparable pairs. While the idea of optimizing directly for the c-index is a good one (with an approximation and with useful complementary loss function terms), the paper leaves something to be desired in quality and clarity.\n\nRelated works:\n- For your consideration: is multi-task survival analysis effectively a competing risks model, except that these models also estimate risk after the first competing event (i.e. in a competing risks model the rates for other events simply go to 0 or near-zero)? Please discuss. Also, if the claim is that there are not deep learning survival analyses, please see, e.g. Jing and Smola.\n- It would be helpful to define t_k explicitly to alleviate determining whether it is the interval time between ordered events or the absolute time since t_0 (it's the latter). Consider calling k a time index instead of t_k a time interval (\"subject x experiences cause m occurs [sic] in a time interval t_k\")\n- Line after eq 8: do you mean accuracy term?\n- I would not call Reg a regularization term since it is not shrinking the coefficients. It is a term to minimize a risk, not a parameter.\n- You claim to adjust for event imbalance and time interval imbalance but this is not mathematically shown nor documented in the experiments.\n- The results show only one form of comparison, and the results have confidence intervals that overlap with at least one competing method in all tasks.", "The authors tackle the problem of estimating risk in a survival analysis setting with competing risks. They propose directly optimizing the time-dependent discrimination index using a siamese survival network. Experiments on several real-world datasets reveal modest gains in comparison with the state of the art.\n\n- The authors should clearly highlight what is their main technical contribution. For example, Eqs. 1-6 appear to be background material since the time-dependent discrimination index is taken from the literature, as the authors point out earlier. However, this is unclear from the writing. \n\n- One of the main motivations of the authors is to propose a model that is specifically designed to avoid the nonidentifiability issue in a scenario with competing risks. It is unclear why the authors' solution is able to solve such an issue, especially given the modest reported gains in comparison with several competitive baselines. In other words, the authors oversell their own work, especially in comparison with the state of the art.\n\n- The authors use off-the-shelf siamese networks for their setting and thus it is questionable there is any novelty there. The application/setting may be novel, but not the architecture of choice.\n\n- From Eq. 4 to Eq. 5, the authors argue that the denominator does not depend on the model parameters and can be ignored. However, afterwards the objective does combine time-dependent discrimination indices of several competing risks, with different denominator values. 
This could be problematic if the risks are unbalanced.\n\n- The competitive gain of the authors' method in comparison with other competing methods is minor.\n\n- The authors introduce F(t, D | x) as the cumulative incidence function (CDF) at the beginning of section 2; however, afterwards they use R^m(t, x), which they define as the risk of the subject experiencing event m before t. Is the latter a proxy for the former? How are they related?", "The paper entitled 'Siamese Survival Analysis' reports an application of deep learning to three cases of competing risk survival analysis. The authors follow the reasoning that '... these ideas were not explored in the context of survival analysis', thereby disregarding the significant published literature based on the Concordance Index (CI). \n\nBesides this deficit, the paper does not present a proper statistical setup (e.g. 'Is censoring assumed to be at random? ...'), and numerical results only refer to some standard implementations, thereby again neglecting the state-of-the-art solution. That being said, this particular use of deep learning in this context might be novel.", "Q1. the paper leaves something to be desired in quality and clarity.\n\nA1. We worked hard to improve the paper’s clarity. We thank all the reviewers for their valuable comments, which helped us in this pursuit, and we are hopeful that the reviewers will reconsider their scores after seeing the revised paper.\n\n\nQ2. For your consideration: is multi-task survival analysis effectively a competing risks model, except that these models also estimate risk after the first competing event (i.e. in a competing risks model the rates for other events simply go to 0 or near-zero)? Please discuss.\n\nA2. Multi-task survival analysis can indeed be interpreted as a competing risks model. However, most works on competing risks assume that each subject experienced only a single event (see, for instance, the state-of-the-art Fine-Gray model). This assumption originates from the constraints posed by actual clinical data (such as the well-known SEER dataset) where risks commonly correspond to deaths from various causes. \n\nQ3. Also, if the claim is that there are not deep learning survival analyses, please see, e.g. Jing and Smola.\n\nA3. We do not aim to make such claims regarding classical single risk survival analysis.\nIn fact, we compare against conventional survival analysis benchmarks such as the deep learning survival analysis algorithm in [18]. Our paper only claimed that this is the first deep learning architecture for survival analysis in the presence of competing risks (please refer to A1 for reviewer 1 for the differences between the problems). We are sorry for the confusion caused and have now made our contributions clear in the revised paper.\n\n\nQ4. It would be helpful to define t_k explicitly to alleviate determining whether it is the interval time between ordered events or the absolute time since t_0 (it's the latter). Consider calling k a time index instead of t_k a time interval (\"subject x experiences cause m occurs [sic] in a time interval t_k\")\n\nA4. We have made the change. Thank you.\n\n\nQ5. Line after eq 8: do you mean accuracy term?\n\nA5. We renamed this term and thank the reviewer again.\n\n\nQ6. I would not call Reg a regularization term since it is not shrinking the coefficients. It is a term to minimize a risk, not a parameter.\n\nA6. We will rename this term as a loss term.\n\n\nQ7. 
You claim to adjust for event imbalance and time interval imbalance but this is not mathematically shown nor documented in the experiments.\n\nA7. We adjust for event imbalance and time interval imbalance using inverse propensity weights. These weights are derived from the frequency of occurrence of the various events at the various times. We have now clarified this point in the revised paper.\n\n\nQ8. The results show only one form of comparison, and the results have confidence intervals that overlap with at least one competing method in all tasks.\n\nA8. We have optimized the time-dependent discrimination index. If we were to optimize a different evaluation metric, we would include a different form of comparison. We agree that some confidence intervals overlap; however, this fact does not contradict the claim that this paper succeeds in providing a statistically significant improvement over the state-of-the-art on all datasets. Non-overlapping confidence intervals are a sufficient but not a necessary condition for statistical significance.\n\nReferences\n\n[18] Katzman, Jared, et al. \"Deep survival: A deep Cox proportional hazards network.\" arXiv preprint arXiv:1606.00931 (2016).", "Q5. From Eq. 4 to Eq. 5, the authors argue that the denominator does not depend on the model parameters and can be ignored. However, afterwards the objective does combine time-dependent discrimination indices of several competing risks, with different denominator values. This could be problematic if the risks are unbalanced.\n\nA5. We agree with the reviewer that in the case of unbalanced risks, the denominators of different discrimination indices cannot be ignored. We overcome this by balancing the risks using inverse propensity weighting. Please refer to the sentence following equation 10:\n“Finally, we adjust for the event imbalance and the time interval imbalance caused by the unequal number of pairs for each event and time interval with inverse propensity weights on the loss function.”\n\n\nQ6. The competitive gain of the authors' method in comparison with other competing methods is minor.\n\nA6. As mentioned before, since our method is aimed at medical applications, where these methods can be used for improving outcomes or providing interventions, even relatively small performance improvements can lead to improved healthcare delivery. Our method is especially suitable for dealing with competing risks in the multi-morbid population, which represents a major healthcare challenge. Multimorbidity – the accumulation of multiple chronic diseases – has emerged as a major contemporary challenge of the ageing population. More than two-thirds of people aged over 65 are nowadays multimorbid, i.e. have two or more chronic diseases. However, current prognosis models are not designed to consider diseases in combination, leading to poor use of scant resources and complications. Our method is one of the few methods especially designed to address this important problem. Thus, a performance improvement even as low as 0.1% has the potential to save many lives (since the majority of the elderly population is multimorbid) and improve healthcare delivery and utilization.\n\n\nQ7. The authors introduce F(t, D | x) as the cumulative incidence function (CDF) at the beginning of section 2; however, afterwards they use R^m(t, x), which they define as the risk of the subject experiencing event m before t. Is the latter a proxy for the former? How are they related?\n\nA7. As the reviewer pointed out, Rm(t,x) is indeed a proxy for the CDF of cause m, F(t,D|x). 
\nWe deem the R notation necessary since it is used to symbolize the algorithm’s output. We have that F(t,D=m|x)=Rm(t,x).\n\nReferences\n\n[14] Bromley, Jane, et al. \"Signature verification using a 'siamese' time delay neural network.\" Advances in Neural Information Processing Systems. 1994.\n\n[15] Chopra, Sumit, Raia Hadsell, and Yann LeCun. \"Learning a similarity metric discriminatively, with application to face verification.\" Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on. Vol. 1. IEEE, 2005.\n\n[16] Wang, Juan, et al. \"A multi-resolution approach for spinal metastasis detection using deep Siamese neural networks.\" Computers in Biology and Medicine 84 (2017): 137-146.\n\n[17] Antolini, Laura, Patrizia Boracchi, and Elia Biganzoli. \"A time-dependent discrimination index for survival data.\" Statistics in medicine 24.24 (2005): 3927-3944.", "Q1. The paper entitled 'Siamese Survival Analysis' reports an application of deep learning to three cases of competing risk survival analysis. The authors follow the reasoning that '... these ideas were not explored in the context of survival analysis', thereby disregarding the significant published literature based on the Concordance Index (CI). \n\nA1. We are sorry for the confusion which the misnaming of our method has caused. We should have dubbed our method “survival analysis with competing risks” rather than simply “survival analysis”. The area of survival analysis with competing risks is a much less explored research domain. Survival analysis with competing risks differs from single risk survival analysis for several reasons:\n\n- The times to the different event types (causes) are generally not independent [2]-[4]. As a result, our analysis requires developing a joint estimation model that can account for the latent relationships. (See for instance one of the earliest works on this problem [1] and more recent studies [2]-[4].)\n\n- We acknowledge that a significant amount of work was dedicated to c-index optimization for survival analysis. However, the problem of competing risks is much less studied. This is partly due to the fact that the c-index is designed for the single risk setting; competing risks problems require a different optimization metric. Therefore, our algorithm optimizes a time-dependent discrimination index for competing risks. This index cannot be optimized (in a straightforward manner) using the algorithms in the state-of-the-art survival analysis works. For instance, a smooth approximation of the c-index was optimized in [5]-[7] by using gradient boosting. These works require a smooth approximation of the time-dependent discrimination index and derivatives of it (for gradient boosting), which are challenging to obtain; thus, extensions of these methods to our considered problem are not straightforward. In [8], a random forest for survival analysis with competing risks was grown using the c-index as the splitting rule. Also in this case, the extension to the time-dependent discrimination index for competing risks as the splitting rule is not straightforward.\n\n\nQ2. Besides this deficit, the paper does not present a proper statistical setup (e.g. 'Is censoring assumed to be at random? ...')\n\nA2. This paper makes standard assumptions that are commonly used in survival analysis [1]-[8]. Specifically, censoring is assumed to be independent, i.e., the time-to-event conditional on covariates is independent of other variables including censoring [9]-[12]. 
We have clarified these assumptions in the revised paper. Thank you.\n\n\nQ3. and numerical results only refer to some standard implementations, thereby again neglecting the state-of-the-art solution. \n\nA3. Again, we are very sorry for the confusion created. Our paper primarily focuses on the problem of survival analysis with competing risks, where only a limited number of solutions exist: the Fine-Gray model [9] and the Competing Risks Forest [13]. We have compared our method with both algorithms on multiple datasets and showed a consistent improvement, as seen in Tables 2 and 3. Table 4 illustrates results for conventional single risk survival analysis and serves only as a sanity check for the competing risks benchmarks. To remove future confusion about the focus of this paper, we have moved this table to the appendix.\n\n\nQ4. That being said, this particular use of deep learning in this context might be novel.\n\nA4. We thank you for this acknowledgment and we hope the reviewer will reconsider his/her score following the revised version of this paper.", "Q1. The authors should clearly highlight what is their main technical contribution. For example, Eqs. 1-6 appear to be background material since the time-dependent discrimination index is taken from the literature, as the authors point out earlier. However, this is unclear from the writing. \n\nA1. We agree with the reviewer that the main technical contributions were not clearly stated. We have now improved the paper and clearly highlight our main technical contributions as follows: \n\n- We develop a novel Siamese feed-forward neural network for survival analysis with competing risks. Our novel neural network architecture is designed to optimize concordance. This is achieved by estimating risks in a relative fashion, meaning that the risk for the “true” event of a patient (i.e. the event which actually took place) must be higher than:\n1. all other risks for the same patient (Eq. 8);\n2. the risks for the same true event of other patients that experienced it at a later time (Eq. 7).\n\n- Because our neural network issues a joint risk for all competing events, our architecture needs to compare different risks for the different events at different times and arrange them in a concordant fashion (an earlier time means a higher risk for any pair of patients). To enable such a comparison, we develop a novel type of Siamese network architecture. Unlike previous Siamese neural network architectures [14]-[16], which were developed for different purposes such as learning the pairwise similarity between different inputs, our architecture aims to maximize the gap between output risks among the different inputs. Instead of learning a representation that captures the similarities between the inputs, we learn a representation that generates the highest possible difference between the outputs. \nBy estimating the risks of all causes jointly, our Siamese survival network for competing risks is able to learn a shared representation that captures the latent structure of the data and allows estimating cause-specific risks.\n \n- We use a loss term (Eq. 9) that is based on the structure of the comparable pairs (the right patient has a longer event time). This component comes in the form of a learning constraint; that is, the survival curve of the right patient (longer survival time) must be lower than the survival curve of the left patient (shorter survival time) up to the event time of the left patient. 
This improves the generalization capabilities of our algorithm.\n\nWe have now clarified in the revised paper that equations 1-3 define the well-known time-dependent discrimination index and equation 4 is an estimator for it, as given in [17].\nEquations 5 and 6 are simplifications of the above index that we present before introducing our approximation. We have added the above citation before equations 1 and 4 for the sake of clarity.\n\n\nQ2. One of the main motivations of the authors is to propose a model that is specifically designed to avoid the nonidentifiability issue in a scenario with competing risks. It is unclear why the authors' solution is able to solve such an issue, \n\nA2. Nonidentifiability in the competing risks setting arises from the inability to estimate the true cause-specific survival curves from the empirical data. However, this paper focuses on generating concordant risks in a relative fashion as opposed to true cause-specific survival curves. By avoiding the estimation of the true cause-specific survival curves, we are able to avoid the nonidentifiability problem.\n\n\nQ3. especially given the modest reported gains in comparison with several competitive baselines. In other words, the authors oversell their own work, especially in comparison with the state of the art.\n\nA3. Our paper provides a statistically significant improvement over the state-of-the-art methods on survival analysis with competing risks on both synthetic as well as real medical data. We wish to stress that our focus is on the medical domain, where even minor gains are important because of the potential to save lives. For example, there are 72809 patients in the SEER dataset we used. A performance improvement even as low as 0.1% has the potential to save lives and therefore should not be disregarded. However, we tempered our claims in the revised paper and explained better why we believe the gains obtained matter.\n\n\nQ4. The authors use off-the-shelf siamese networks for their setting and thus it is questionable there is any novelty there. The application/setting may be novel, but not the architecture of choice.\n\nA4. We are very sorry for not clearly describing the novelty of the proposed architecture. We have now clarified the architectural novelty. Please refer to A1.", "[1] Elandt-Johnson, Regina C. \"Conditional failure time distributions under competing risk theory with dependent failure times and proportional hazard rates.\" Scandinavian Actuarial Journal 1976.1 (1976): 37-51.\n\n[2] Lim, Hyun J., et al. \"Methods of competing risks analysis of end-stage renal disease and mortality among people with diabetes.\" BMC medical research methodology 10.1 (2010): 97.\n\n[3] Lambert, P. C., et al. \"Estimating the crude probability of death due to cancer and other causes using relative survival models.\" Statistics in medicine 29.7-8 (2010): 885-895.\n\n[4] Satagopan, J. M., et al. \"A note on competing risks in survival data analysis.\" British journal of cancer 91.7 (2004): 1229-1235.\n\n[5] Chen, Yifei, et al. \"A gradient boosting algorithm for survival analysis via direct optimization of concordance index.\" Computational and mathematical methods in medicine 2013 (2013).\n\n[6] Mayr, Andreas, and Matthias Schmid. \"Boosting the concordance index for survival data–a unified framework to derive and evaluate biomarker combinations.\" PloS one 9.1 (2014): e84483.\n\n[7] Mayr, Andreas, Benjamin Hofner, and Matthias Schmid. 
\"Boosting the discriminatory power of sparse survival models via optimization of the concordance index and stability selection.\" BMC bioinformatics 17.1 (2016): 288.\n\n[8] Schmid, Matthias, Marvin N. Wright, and Andreas Ziegler. \"On the use of Harrell’s C for clinical risk prediction via random survival forests.\" Expert Systems with Applications 63 (2016): 450-459.\n\n[9] Fine, Jason P., and Robert J. Gray. \"A proportional hazards model for the subdistribution of a competing risk.\" Journal of the American statistical association 94.446 (1999): 496-509.\n\n[10] Crowder, Martin J. Classical competing risks. CRC Press, 2001.\n\n[11] Gooley, Ted A., et al. \"Estimation of failure probabilities in the presence of competing risks: new representations of old estimators.\" Statistics in medicine 18.6 (1999): 695-706.\n\n[12] Tsiatis, Anastasios. \"A nonidentifiability aspect of the problem of competing risks.\" Proceedings of the National Academy of Sciences 72.1 (1975): 20-22.\n\n[13] Ishwaran, Hemant, et al. \"Random survival forests for competing risks.\" Biostatistics 15.4 (2014): 757-773." ]
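The pairwise construction discussed in the responses above can be made concrete. The sketch below builds the comparable pairs that enter the time-dependent discrimination index and applies one smooth surrogate loss to them; the softplus surrogate, the scalar stand-in risks, and the toy data are assumptions — the paper's exact approximation, and its time-dependent, cause-specific risk estimates, may differ:

import numpy as np

def comparable_pairs(times, events, cause):
    # Pairs (i, j): i experienced `cause` at t_i while j was still event-free at
    # t_i -- the pairs entering the time-dependent index (events: 0 = censored).
    return [(i, j) for i in range(len(times)) if events[i] == cause
            for j in range(len(times)) if times[j] > times[i]]

def concordance_surrogate(risk_earlier, risk_later, sigma=1.0):
    # Smooth pairwise surrogate (softplus of the negative margin): small when the
    # earlier-event patient is assigned the higher risk, as concordance requires.
    return np.log1p(np.exp(-(risk_earlier - risk_later) / sigma))

rng = np.random.default_rng(0)
times = np.array([2.0, 5.0, 3.0, 7.0])
events = np.array([1, 0, 1, 2])                 # two competing causes + censoring
risks = rng.uniform(size=4)                     # stand-in for R^1(t_i, x); in the
                                                # model these would be network outputs
loss = sum(concordance_surrogate(risks[i], risks[j])
           for i, j in comparable_pairs(times, events, cause=1))
print(loss)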
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HkjL6MiTb", "iclr_2018_HkjL6MiTb", "iclr_2018_HkjL6MiTb", "SyJXpk5lG", "ByYiANimf", "rkOt8g2ef", "SkpfobogG", "SksVpEimG" ]
iclr_2018_ByJbJwxCW
Relational Multi-Instance Learning for Concept Annotation from Medical Time Series
Recent advances in computing technology and sensor design have made it easier to collect longitudinal or time series data from patients, resulting in a gigantic amount of available medical data. Most medical time series lack annotations, and even when annotations are available, they can be subjective and prone to human error. Earlier works have developed natural language processing techniques to extract concept annotations and/or clinical narratives from doctor notes. However, these approaches are slow and do not use the accompanying medical time series data. To address this issue, we introduce the problem of concept annotation for medical time series data, i.e., the task of predicting and localizing medical concepts by using the time series data as input. We propose Relational Multi-Instance Learning (RMIL) - a deep multi-instance learning framework based on recurrent neural networks, which uses pooling functions and attention mechanisms for the concept annotation tasks. Empirical results on medical datasets show that our proposed models outperform various multi-instance learning models.
rejected-papers
This paper presents a MIL method for medical time series data. General consensus among reviewers that work does not meet criteria for being accepted. Specifically: Pros: - A variety of meta-learning parameters are evaluated for the task at hand. - Minor novelty of the proposed method Cons: - Minor novelty of the proposed method - Rationale behind architectural design - Thoroughness of experimentation - Suboptimal choice of baseline methods - Lack of broad evaluation across applications for new design - Small dataset size - Significance of improvement
train
[ "Hk2mNy-gG", "rkcWXX9gf", "Hyu6DTogG", "Skeyh697G", "r1ozjTcQM", "SyYWc6qQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper addresses the classification of medical time-series data by formulating the problem as a multi-instance learning (MIL) task, where there is an instance for each timestep of each time series, labels are observed at the time-series level (i.e. for each bag), and the goal is to perform instance-level and series-level (i.e. bag-level) prediction. The main difference from the typical MIL setup is that there is a temporal relationship between the instances in each bag. The authors propose to model this using a recurrent neural network architecture. The aggregation function which maps instance-level labels to bag-level labels is modeled using a pooling layer (this is actually a nice way to describe multi-instance classification assumptions using neural network terminology). An attention mechanism is also used.\n\nThe proposed time-series MIL problem formulation makes sense. The RNN approach is novel to this setting, if somewhat incremental. One very positive aspect is that results are reported exploring the impact of the choice of recurrent neural network architecture, pooling function, and attention mechanism. Results on a second dataset are reported in the appendix, which greatly increases confidence in the generalizability of the experiments. One or more additional datasets would have helped further solidify the results, although I appreciate that medical datasets are not always easy to obtain. Overall, this is a reasonable paper with no obvious major flaws. The novelty and impact may be greater on the application side than on the methodology side.\n\nMinor suggestions:\n\n-The term \"relational multi-instance learning\" seems to suggest a greater level of generality than the work actually accomplishes. The proposed methods can only handle time-series / longitudinal dependencies, not arbitrary relational structure. Moreover, multi-instance learning is typically viewed as an intermediary level of structure \"in between\" propositional learning (i.e. the standard supervised learning setting) and fully relational learning, so the \"relational multi-instance learning\" terminology sounds a little strange. Cf.:\nDe Raedt, L. (2008). Logical and relational learning. Springer Science & Business Media.\n\n-Pg 3, a capitalization typo: \"the Multi-instance learning framework\"\n\n-The equation for the bag classifier on page 4 refers to the threshold-based MI assumption, which should be attributed to the following paper:\nWeidmann, N., Frank, E. & Pfahringer, B. 2003. A two-level learning method for generalized multi-instance problems. In Proceedings of the 14th European Conference on Machine Learning,\nSpringer, 468-479.\n(See also: J. R. Foulds and E. Frank. A review of multi-instance learning assumptions. Knowledge Engineering Review, 25(1):1-25, 2010.)\n\n- Pg 5, \"Table 1\" vs \"table 1\" - be consistent.\n\n-A comparison to other deep learning MIL methods, i.e. those that do not exploit the time-series nature of the problem, would be valuable. I wouldn't be surprised if other reviewers insist on this.", "This paper proposes a framework called 'multi-instance learning', in which a time series is treated as a 'set' of observations, and a label is assigned to the full set, rather than to individual observations. In this framework, the authors propose to do set-level prediction (using pooling) and observation-level predictions (using various attention mechanisms). \nThey test their approach in a medical setting, where the goal is to annotate vital signs time series with clinical events. 
Their cohort consists of time series from 2014 adults (average length 4 time steps); the time series have dimension 21 and the clinical events have dimension 26. Their baselines are other 'multi-instance learning' prior work, and results are obtained through cross-validation. A few of the relevant hyper-parameters are tuned, and some important hyper-parameters (e.g. the number of hidden states in the LSTMs, or the optimization method and learning rate) are not tuned. \n\nOriginality - I find the paper to be very incremental in terms of originality of the method. \n\nQuality and Significance - Due to the small size of the cohort and the lack of an additional dataset, it is difficult to reliably assess the quality of the experiments. Given that results are reported via cross-validation and without a true held-out dataset, and given that a number of hyperparameters are not even tuned, it is difficult to be confident that the differences among all the methods reported are significant. \n\nClarity - The writing has good clarity.\n\nMajor issues with the paper: \n- Lack of a reliable experiments section. The dataset is too small (2000 total samples), and model training is not described in enough detail in terms of the hyper-parameters tuned. \n", "==== Post Rebuttal ====\nI went through the rebuttal, which unfortunately claimed a number of statements without the experimental support requested. The revision didn't address my concerns, and I've lowered my rating.\n\n==== Original Review ====\nThis paper proposed a novel Multiple Instance Learning (MIL) formulation called Relational MIL (RMIL), and discussed a number of its variants with LSTM, Bi-LSTM, S2S, etc. The paper also explored integrating RMIL with various attention mechanisms, and demonstrated its usage on medical concept prediction from time series data.\n\nThe biggest technical innovation in this paper is that it combines recurrent networks like Bi-LSTM with MIL to model the relations among instances. Other than that, the paper has limited technical innovations: the pooling functions were proposed earlier and their integration with MIL was widely studied before (as cited by the authors); the attention mechanisms were also proposed by others.\n\nHowever, I am doubtful whether it’s appropriate to use LSTM to model the relations among instances. In general MIL, there exists no temporal order among instances, so modeling them with an LSTM is unjustified. It might be acceptable if the authors are focusing on time-series data; but in this case, it’s unclear why the authors are applying MIL on it. It seems another learning paradigm could be more appropriate.\n\nThe biggest concern I have with this paper is the unconvincing experiments. First, the baselines are very weak. Both MISVM and DPMIL are MIL methods without using deep learning features. It then becomes very unclear how much of the gain in Table 3 is from the use of deep learning, and how much is from the proposed RMIL.\n\nAlso, although the authors conducted a number of ablation studies, they don’t really tell us much. Basically, all variants of the algorithm perform equally well, so it’s confusing why we need so many of them, or whether they can be integrated as a better model.\n\nThis could also be due to the small dataset. As the authors are proposing a new MIL learning paradigm, I feel they should experiment on a number of MIL tasks, not limited to analyzing time series medical data. The current experiments are quite narrow in terms of scope.\n", ">> Many thanks for your encouraging feedback. 
We have fixed the typos, cited the suggested references and incorporated your suggestions in our revised draft. \n\n- Comparison with other deep learning baselines: We have tried other deep learning baselines such as CNN, and CNN with attention; however, their performance was slightly worse than the RNN models. The CNN model obtained AUROC and AUPRC of [0.857, 0.756] and [0.785, 0.397] for the concept prediction and localization tasks, respectively. The CNN-with-attention model obtained AUROC and AUPRC of about [0.855, 0.755] and [0.785, 0.409] for the same tasks. We have included these results in the revised draft.\n\n", "Thank you for your comments and suggestions. Please find our response below:\n\nOriginality - I find the paper to be very incremental in terms of originality of the method. \n>> We agree that the ideas presented here are simple. However, we want to point out that this simple way of looking at RNNs in MIL settings has not been presented before, to the best of our knowledge. That is, there is no existing RNN work which is trained with overall labels but aims for labels at each time step. Also, we show that such a frustratingly simple RNN model can achieve excellent performance compared to the other existing MIL approaches.\n\nQuality and Significance - Due to the small size of the cohort and the lack of an additional dataset, it is difficult to reliably assess the quality of the experiments. Given that results are reported via cross-validation and without a true held-out dataset, and given that a number of hyperparameters are not even tuned, it is difficult to be confident that the differences among all the methods reported are significant. \n>> We have conducted exhaustive experiments to fine-tune the model’s hyper-parameters, and found that all the models achieve similar performance as reported in our paper. \n\nMajor issues with the paper: \n- Lack of a reliable experiments section. The dataset is too small (2000 total samples), and model training is not described in enough detail in terms of the hyper-parameters tuned. \n>> This was the biggest dataset we could obtain under our problem settings by mining one of the largest publicly available healthcare datasets, MIMIC-III [1]. Thus, we believe the dataset size is reasonable given the data source and application domain. Also, we have provided additional results on a different dataset for the anomaly detection problem using the RMIL framework in the appendix of our paper. Kindly note that, unlike in other application domains, in the medical domain dataset sizes are relatively small. \n\n[1] AEW Johnson, TJ Pollard, L Shen, L Lehman, M Feng, M Ghassemi, B Moody, P Szolovits, LA Celi,\nand RG Mark. Mimic-iii, a freely accessible critical care database. Scientific Data, 2016.", "Thank you for your useful comments. Please find our response below:\n\nIn general MIL, there exists no temporal order among instances, so modeling them with an LSTM is unjustified.\n>> Yes, in general MIL the temporal order is not modeled. However, in this paper, we are working with time series data which come with temporal dependencies; thus, we employ an LSTM to model them in the MIL setting, which we refer to as Relational MIL. \n\nOne of the key contributions of this work is to show that Recurrent Neural Network models such as LSTM can be used in the MIL setting with a suitable way to model instance-level and bag-level predictions. To the best of our knowledge, this simple way of looking at RNNs in MIL settings has not been presented before. 
Another point is to showcase that frustratingly simple RNN models can achieve excellent performance compared to the other MIL approaches. \n\nThe biggest concern I have with this paper is the unconvincing experiments. First, the baselines are very weak. Both MISVM and DPMIL are MIL methods that do not use deep learning features. \n>> The baselines included in the paper are some of the most popular and best performing baselines available for the MIL framework. Neither MISVM nor DPMIL provides a way to model the relations between the instances as considered in the proposed RMIL. Thus, even if we use deep learning features with MISVM and DPMIL, they are bound to perform worse than the RMIL models, since they do not model the temporal dependencies present in the data. We will include these comparison results in our future work. In the revised draft we have included results from CNN models, which obtain better results than MISVM and DPMIL, but perform slightly worse than our RMIL models. \n\n\nAlso, although the authors conducted a number of ablation studies, they don’t really tell us much. Basically, all variants of the algorithm perform about equally well, so it’s confusing why we need so many of them, or whether they can be integrated into a better model.\n>> As stated earlier, we wanted to show that frustratingly simple RNN models can achieve excellent performance compared to the other MIL approaches. We have conducted exhaustive experiments on more complicated deep models, and have also tested a combination of several deep models; however, all our experiments showed that simple RNN models in the RMIL framework achieve similar results. In terms of an integrated model, we’ve tried combinations / ensembles of different pooling layers / attention mechanisms, but we did not find any improvements in the performance. \n\nAs the authors are proposing a new MIL learning paradigm, I feel they should experiment on a number of MIL tasks, not limited to analyzing time series medical data.\n>> Thanks for the suggestions. Our goal was to solve the time series prediction and localization problem as applicable to medical time series data. We have shown additional results of RMIL for the anomaly detection task in the appendix. Unfortunately, conducting experiments outside of time series data is beyond the scope of this paper. " ]
[ 6, 3, 3, -1, -1, -1 ]
[ 4, 3, 5, -1, -1, -1 ]
[ "iclr_2018_ByJbJwxCW", "iclr_2018_ByJbJwxCW", "iclr_2018_ByJbJwxCW", "Hk2mNy-gG", "rkcWXX9gf", "Hyu6DTogG" ]
iclr_2018_SJFM0ZWCb
Deep Temporal Clustering: Fully unsupervised learning of time-domain features
Unsupervised learning of timeseries data is a challenging problem in machine learning. Here, we propose a novel algorithm, Deep Temporal Clustering (DTC), a fully unsupervised method, to naturally integrate dimensionality reduction and temporal clustering into a single end-to-end learning framework. The algorithm starts with initial cluster estimates using an autoencoder for dimensionality reduction and a novel temporal clustering layer for cluster assignment. Then it jointly optimizes the clustering objective and the dimensionality reduction objective. Depending on the requirements and the application, the temporal clustering layer can be customized with any temporal similarity metric. Several similarity metrics are considered and compared. To gain insight into the features that the network has learned for its clustering, we apply a visualization method that generates a heat map of regions of interest in the timeseries. The viability of the algorithm is demonstrated using timeseries data from diverse domains, ranging from earthquakes to sensor data from spacecraft. In each case, we show that our algorithm outperforms traditional methods. This performance is attributed to the fully integrated temporal dimensionality reduction and clustering criterion.
rejected-papers
Joint optimization of dimensionality reduction and temporal clustering. Results suggest performance improvement in a variety of scenarios versus a baseline of a recent state-of-the-art clustering method. Pro: - Joint optimization may be new, and results suggest performance improvement when applied to NASA Magnetospheric Multiscale (MMS) Mission data. Con: - Small datasets evaluated, impact unclear - Breadth of possible applications unclear - Similarities exist to prior works. Significance of novelty not clear. - Unanimous consensus among reviewers that the work is not in a state to be accepted.
train
[ "ryMizdDef", "rkq18W9eG", "HyWGBr5lf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors proposed an algorithm named Deep Temporal Clustering (DTC) that integrates autoencoder with time-series data clustering. Compared to existing methods, DTC used a network structure (CNN + BiLSTM) that suits time-series data. In addition, a new clustering loss with different similarity measures are adopted to DTC. Experiments on different time-series data show the effectiveness of DTC compared to complete link. \n\nAlthough the idea of applying deep learning for temporal clustering is novel and interesting, the optimization problem is not clearly stated and experiments section is not comprehensive enough.\n\nHere are the detailed comments.\nThe methods are described in a higher level language. The formula of overall loss function and its optimization should be written down to avoid unclearness.\nThe framework adopt the K-medoid clustering idea. But complete-link is used for initialization and comparison. Is that a difference? In addition, how to generate K centroids from complete-link clustering is not described at all.\nThe author Dynamic Time Warping is too expensive to integrate into DTC. However, most of the evaluated dataset are with small time points. Even for the longer ones, DTC does dimensionality reduction to make the time-series shorter. I do not see why quadratic computation is a problem here. DTW is most effective similarity measure for time-series data clustering. There is no excuse to skip it.\nIs DTC robust to hyperparameters? If not, are there any guidelines to tune the hyperparameters, which is very important for unsupervised clustering. \n\nIn summary, the method need to be described clearer, state-of-the-arts need to be compared and the usability of the method needs to be discussed. Therefore, at the current stage the paper cannot be accepted in my opinion. \n", "This paper proposes an algorithm for jointly performing dimensionality reduction and temporal clustering in a deep learning context. An autoencoder is utilized for dimensionality reduction alongside a clustering objective - that is the autoencoder optimizes the mse (using LSTM layers are utilized in the autoencoder for modelling temporal information), while the latent space is fed into the temporal clustering layer. The clustering/autoencoder objectives are optimized in an alternating optimization fashion.\n\nThe main con lies in this work being very closely related to t-sne, i.e. compare the the temporal clustering loss based on kl-div (eq 6) to t-sne. If we consider e.g., a linear 1-layer autoencoder to be equivalent to PCA (without the rnn layers), in essence this formulation is closely related to applying pca to reduce the initial dimensionality and then t-sne. \n\nAlso, do the cluster centroids appear to be roughly stable over many runs of the algorithm? As the authors mention, the method is sensitive to intitialization. As the averaged results over 5 runs are shown, the standard deviation would be helpful towards showing this empirically.\n\nOn the positive side, it is likely that richer representations can be obtained via this architecture, and results appear to be good with comparison to other metrics \n\nThe section of the paper that discusses heat-maps should be written more clearly. 
Figure 3 is commented on with respect to detecting an event vs. a non-event, but the process itself is not clearly described as far as I can see.\n\nminor note: dynamic time warping is formally not a metric", "\nSummary:\nThe authors proposed an unsupervised time series clustering method built with deep neural networks. The proposed model is equipped with an encoder-decoder and a clustering model. First, the encoder employs a CNN to shorten the time series and extract local temporal features, and the CNN is followed by bidirectional LSTMs to get the encoded representations. A temporal clustering model and a DCNN decoder are applied on the encoded representations and jointly trained. An additional heatmap generator component can be further included in the clustering model. The authors compared the proposed method with hierarchical clustering with 4 different temporal similarity methods on several univariate time series datasets.\n\nDetailed comments:\nThe problem of unsupervised time series clustering is important and challenging. The idea of utilizing deep learning models to learn encoded representations for clustering is interesting and could be a promising solution.\n\nOne potential limitation of the proposed method is that it is only designed for univariate time series of the same temporal length, which limits the usage of this model in practice. In addition, given that the input has fixed length, clustering baselines for static data can be easily applied and should be compared to demonstrate the necessity of temporal clustering.\n\nSome important details are missing or lack explanation. For example, what is the size of each layer and the dimension of the encoded space? How much does the model shorten the input time series and how is this determined?\n\nHow does the model combine the heatmap output (which is a sequence of the same length as the time series) and the clustering output (which is a vector of size K) in Figure 1? The heatmap shown in Figure 3 looks like the negation of the decoded output (i.e., lower value in time series -> higher value in heatmap). How do we interpret the generated heatmap? \n\nFrom the experimental results, it is difficult to judge which method/metric is the best. For example, in Figure 4, all 4 DTC methods achieved the best performance on one or two datasets. Though several datasets are evaluated in the experiments, they are relatively small. Even the largest dataset (PhalangesOutlinesCorrect) has only 2 thousand samples, and the best performance is achieved by one of the baselines, with an AUC score of only 0.586 for binary classification.\n\nMinor suggestion: \nIn Figure 3, instead of showing the decoded output (reconstruction), it may be more helpful to visualize the encoded time series, since the clustering method is applied directly on those encoded representations.\n\n" ]
[ 3, 5, 4 ]
[ 5, 4, 4 ]
[ "iclr_2018_SJFM0ZWCb", "iclr_2018_SJFM0ZWCb", "iclr_2018_SJFM0ZWCb" ]
iclr_2018_rJr4kfWCb
Lung Tumor Location and Identification with AlexNet and a Custom CNN
Lung cancer is the leading cause of cancer deaths in the world, and early detection is a crucial part of increasing patient survival. Deep learning techniques provide us with a method of automated analysis of patient scans. In this work, we compare AlexNet, a multi-layered and highly flexible architecture, with a custom CNN to determine if lung nodules within patient scans are benign or cancerous. We have found our CNN architecture to be highly accurate (99.79%) and fast while maintaining low False Positive and False Negative rates (< 0.01% and 0.15% respectively). This is important as high false positive rates are a serious issue with lung cancer diagnosis. We have found that AlexNet is not well suited to the problem of nodule identification, though it is a good baseline comparison because of its flexibility.
rejected-papers
Pros: - Addresses an important medical imaging application - Uses an open dataset Con: - Authors do not cite the original article describing the challenge from which they take their data: https://arxiv.org/pdf/1612.08012.pdf, or the website for the corresponding challenge: https://luna16.grand-challenge.org/results/ - Authors either 1) do not follow the evaluation protocol set forth by the challenge, making it impossible to compare to other methods published on this dataset, or 2) incorrectly describe their use of that public dataset. - Compares only to the AlexNet architecture, and not to any of the other multiple methods published on this dataset (see: https://arxiv.org/pdf/1612.08012.pdf). - Too much space is spent explaining well-understood evaluation functions. - As reviewers point out, no motivation for the new architecture is given.
train
[ "HkQQ3IQxf", "B1dApr_lf", "SkOp9W5gf", "rk24sxMzf", "Hk36mQzfz", "rkWJCpgGM", "SkbN34JWz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public", "public" ]
[ "This paper compares 2 CNN architectures (Alexnet and a VGG variant) for the task of classifying images of lung cancer from CT scans. The comparison is trivial and does not go in depth to explain why one architecture works better than the other. Also, no effort is made to explain the data beyond some superficial description. No example of input data is given (what does an actual input look like). The authors mention \"the RCNN object detector\" in step 18, that presumably does post-processing after the CNN. But there is no explanation of that module anywhere. Instead the authors spend most of the paper listing in wordy details the architecture of their VGG variant. Also, a full page is devoted to detailed explanation of what precision-recall and Matthews Correlation Coefficient is! Overall, the paper does not provide any insight beyond: i tried this, i tried that and this works better than that; a strong reject.", "The authors compare a standard DL machine (AlexNet) with a custom CNN-based solution in the well known tasks of classifying lung tumours into benign or cancerous in the Luna CT scan dataset, concluding that the proposed novel solution performs better.\nThe paper is interesting, but it has a number of issues that prevents it from being accepted for the ICLR conference.\n\nFirst, the scope of the paper, in its present form, is very limited: the idea of comparing the novel solution just with AlexNet is not adding much to the present landscape of methods to tackle this problem.\nMoreover, although the task is very well known and in the last few year gave rise to a steady flow of solutions and was also the topic of a famous Kaggle competition, no discussion about that can be found in the manuscript.\nThe novel solution is very briefly sketched, and some of the tricks in its architecture are not properly justified: moreover, the performance improvement w.r.t . to AlexNet is hardly supporting the claim.\nExperimental setup consists of just a single training/test split, thus no confidence intervals on the results can be defined to show the stability of the solution.\nThe whole sections 2.3 and 2.4 include only standard material unnecessary to mention given the target venue, and the references are limited and incomplete.\nThis given, I rate this manuscript as not suitable for ICLR 2018.", "The paper compares AlexNet and a custom CNN in predicting malignant lung nodules, and shows that the proposed CNN achieves significantly lower false positives and false negative rates.\n\nMajor comments\n\n- I did not fully understand the motivation of the custom CNN over AlexNet. \n\n- Some more description of the dataset will be helpful. Do the 888 scans belong to different patients, or same patient can be scanned at different times? What is the dimensionality of each CT scan?\n\n- Are the authors predicting the location of the malignant nodule, or are they classifying if the image has a malignant nodule? How do the authors compute a true positive? What threshold is used?\n\n- What is 'Luna subsets'? What is 'unsmoothed and smoothed image'?\n\nMinor comments\n\n- The paper is difficult to read, and contains a lot of spelling and grammatical errors.", "The motivation for the paper was a challenge to improve on the classification of CT scans whether they contain cancerous\nor benign tumors. The publicly available and well-labeled dataset that the paper uses is the modified LIDC-IDRI dataset that\ncontains sets of CT scans of patients with lung tumours. 
The size of the dataset is 128GB of DICOM images, and 32GB when converted to PNG. Manipulation of this data was difficult and time consuming on its own. The authors obtained their input and output sets from the closed Luna 2016 Challenge, which came in a preprocessed format and also came divided into 10 different subsets. We did not have access to the Luna 2016 Challenge dataset (no public access), so we ended up using the publicly available LIDC-IDRI dataset and pre-processed it in a way that matched the Luna dataset description. The authors state that they worked with 888 patients, whereas our dataset, following the same procedure, yielded 896 patients. This discrepancy in input was not so big that we would expect significantly different results. The only implication is that some additional patient files were removed before finalizing the Luna dataset in a manner that was not documented. Converting the data to PNG files was needed, as the original formats (DICOM and MHD) contain more than just the pictures of the scans; we were able to achieve this. We also had to extract the output labels from the public dataset ourselves, which could have also contributed to some differences from the data that the paper uses. Given that the authors of the paper had to convert the pixel labels from millimeters to pixels, we believe the labels provided to the authors differed in format from those supplied with the LIDC-IDRI.\nThe paper implemented two CNN architectures: an AlexNet modified to binary output, and a custom CNN designed by the authors for the purpose of tumour identification. The architecture was described using both figures and a layer-wise text description; however, these descriptions were inconsistent. We used a version that contained all the layers from both descriptions, which adds another uncertainty to the reproduction of the results. The motivation for using the custom CNN was the ability to decrease the false positive rates compared to the baseline AlexNet. Ultimately, our findings were unable to support or deny this assertion. In our efforts to run both of these models, the difficulty that we faced (apart from the ambiguity in the network architecture) was the lack of proper hardware available to us. We were able to use an NVIDIA Titan X GPU, which only provided 12GB of memory; this turned out to be a limiting factor for us, as we needed to decrease the resolution of our input images for the custom CNN. We would also like to note that the paper does not identify the hyper-parameters used to train the two classifiers.", "Greetings to the authors,\n\nMcGill University’s COMP 551 Applied Machine Learning class has decided to conduct a reproducibility challenge, and our team has decided to analyze the paper \"Lung Tumor Location and Identification with AlexNet and a Custom CNN\". \n\nLung cancer is the leading cause of cancer deaths in the world, with roughly 1.67 million deaths a year. Recent developments of machine learning methods in medical imaging have provided doctors with a complementary set of tools to tackle these issues. \nHowever, existing CNN architectures may not be optimal for specialized medical tasks such as cancer nodule classification. 
Thus, the authors propose a new architecture and compare it to AlexNet, an existing architecture.\n\nAs such, we sought to verify the claims of the original authors, which state that the proposed CNN architecture classifies cancer nodules at an accuracy of 99.79% and that false positive rates are reduced to less than 0.01%.\n\nThe dataset utilized, as outlined by the original authors, is derived from the LUNA2016 dataset, part of the LIDC/IDRI database. We were able to obtain the 888 scans, along with the annotations, and are fairly confident that this is the same data set the authors used. However, the authors did not provide sufficient information in order to perform the preprocessing of the raw images. Therefore, our team followed LUNA2016's tutorials, which explain the procedure to convert relevant sections of the medical images into PNG format. We assumed that the authors followed the same method as in the tutorial.\n\nIn order to reproduce the authors' results, we constructed a network following the proposed architecture from their paper. In this respect, the authors provided very clear and concise instructions about the architecture of their CNN. Our CNN was implemented using Keras with the TensorFlow backend instead of MATLAB, which we assume the authors used. Every layer of our network follows the description of the authors' network, with the exception of the fully-connected layers, due to the lack of specifications. Thus, our architecture contains 7 convolutional layers, along with their associated padding, pooling and dropout layers. The authors did not specify the number of dense layers at the end of their convolutional layers, nor the activation functions for the dense layers. Thus, we opted for flattening the convolutional layer and connecting it to two fully-connected layers. Due to the output being binary, a sigmoid activation function with a binary cross-entropy loss function was used. It is also worth noting that the figure provided in the original paper differs from the architecture presented. We followed the written architecture instead of the graphical one. The model was then trained on an NVidia GTX970 GPU, which is less powerful than the four NVidia GTX1080s the authors used. The authors clearly stated the time requirements for training their networks.\n\nThe original paper provides an extensive validation table. However, the procedures were unclear, as our team could not understand how exactly the authors obtained their validation results. Thus, we decided to work with the subsets contained in the data set. On the 10 total subsets, we performed 7-fold cross validation, using 6 subsets as training sets and 1 subset as the validation set each time, and withholding 3 subsets for a final test set. Our results turned out similar to the ones in the paper. In addition, however, we propose a baseline consisting of predicting all tumors as benign. Such a baseline performs significantly better than a random baseline, and due to class imbalance in the data set, it provides an accuracy comparable to the CNN's.\n\nAs a final note, the authors' paper contains sections which are well-described and detailed, such as the architecture of their proposed model. However, other sections are lacking in clarity and can be ambiguous, notably the description of the validation procedures. Moreover, there are discrepancies between the architecture of the proposed CNN and the figure showing the graphical model. 
Minor typos and mistakes include the extension of the data set, which is \".MHD\", not \".MDH\", and the confusion of FPR and FNR in the explanations of the evaluation metrics.\n\nOverall, the authors’ approach to the classification of tumors was interesting, and we would have liked to reproduce the authors’ results more faithfully, had the code behind the results been provided.\n\nOur full review is located at: https://github.com/ExTee/COMP551-Final", "We have attempted to reproduce the manuscript entitled Lung Tumor Location and Identification with AlexNet and a Custom CNN, as part of the Reproducibility Challenge sponsored by our Machine Learning course. We find the paper interesting in that it focuses on automating cancer diagnosis from lung CT scans, which could have real clinical impact on the lives of patients.\n\nThe paper was difficult to reproduce due to missing key information. Code and input data were not included with the manuscript. The software packages used were not always mentioned and software versions were not shared. Computational resources needed to run the analysis were also not disclosed. Hyper-parameters of the CNN models were sometimes missing, such as the random seed used throughout the analysis, the number of training epochs for each CNN, and the optimizer used to compile the models.\n\nThe authors mention that they split the dataset into 70% training data and 30% test data; however, they did so based on CT scans and not based on instances of the labels. This was not described in enough detail to allow proper reproduction. Some more details would have been useful.\n\nThe authors described their CNN structure in great detail; however, there was a mismatch between the description in the text and the figure representing the CNN architecture. Two convolutional layers, steps 12 and 13, were not displayed in the figure.\n\nThe authors mention that they also attempted to localize the tumor across the CT scan and mention using an RCNN; however, no other details were given and hence this part was difficult to reproduce.\n\nThe authors used an input size of 512 x 512 for their CNN but used a 227 x 227 input size for AlexNet. It was not described how the transformation was done or whether input data was processed separately at two different sizes for each CNN.\n\nAlmost 80% of the time spent on this challenge was spent on understanding and processing the input data, in order to make it ready to train the CNN. Input data is a fundamental part of machine learning, and small differences in data processing might result in different datasets and thus different performance by the algorithm. Given the complexity of the dataset and the difficulty in describing all steps in detail, it is advisable that the processed data used to train a model be shared in its exact format, or that the code that produced the data be made available, to avoid small discrepancies.\n\nThe authors have reported an accuracy of about 99.79%, roughly 3% higher than anything reported online to date, including the scoreboards of the Grand LUNA Challenge and the Kaggle Data Science Bowl 2017. It would be interesting for the authors to submit their results to both of these scoreboards to register their performance.\n\n\n\nNote: We are researchers ourselves and have authored manuscripts which, if judged today, would likely have missing information that would make them hard to reproduce as well. 
This has been an interesting mental exercise for us as your readers, but also as authors of scientific content. While it might not be fun to go through this task of thinking about reproducibility and what details to include in a manuscript, it certainly is necessary for all of us as researchers to ensure that our work has the highest impact on the scientific community and society as a whole. I hope you appreciate the value of what we are trying to do and do not take any comments personally; this is really more an issue for us as a community, and of a need to standardize scientific reporting and instigate thought about reproducibility.", "Hello, \n\nI would like to reproduce your results as part of the ICLR 2018 Reproducibility Challenge (http://www.cs.mcgill.ca/~jpineau/ICLR2018-ReproducibilityChallenge.html). \n\nWould you be able to share your code with me? Or any other helpful material, for that matter?\nYou mentioned some pre-processing of the data (converting to PNG and annotating images with coordinates); would you be able to share the scripts that do so?\n\nCan you give me some information about the computational resources needed to run the analysis (CPU/GPU, RAM, ...)?\n\nThank you very much\n" ]
[ 2, 3, 3, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_rJr4kfWCb", "iclr_2018_rJr4kfWCb", "iclr_2018_rJr4kfWCb", "iclr_2018_rJr4kfWCb", "iclr_2018_rJr4kfWCb", "iclr_2018_rJr4kfWCb", "iclr_2018_rJr4kfWCb" ]
iclr_2018_HJqUtdOaZ
ENRICHMENT OF FEATURES FOR CLASSIFICATION USING AN OPTIMIZED LINEAR/NON-LINEAR COMBINATION OF INPUT FEATURES
Automatic classification of objects is one of the most important tasks in engineering and data mining applications. Although using more complex and advanced classifiers can help to improve the accuracy of classification systems, improvement can also be achieved by analyzing data sets and their features for a particular problem. Feature combination is one approach that can improve the quality of the features. In this paper, a structure similar to a Feed-Forward Neural Network (FFNN) is used to generate an optimized linear or non-linear combination of features for classification. A Genetic Algorithm (GA) is applied to update the weights and biases. Since the nature of data sets and their features impacts the effectiveness of the combination and classification system, linear and non-linear activation functions (or transfer functions) are used to achieve a more reliable system. Experiments on several UCI data sets, using a minimum distance classifier as a simple classifier, indicate that the proposed linear and non-linear intelligent FFNN-based feature combination can produce more reliable and promising results. By using such a feature combination method, there is no need to use a more powerful and complex classifier anymore.
rejected-papers
The presented method essentially builds a model that remaps features into a new space that optimizes nearest-neighbor classification. The model is a neural network, and the optimization is carried out through a genetic algorithm. Pros: - One major issue with neural network classification is that of a lack of explainability. Many networks are currently "black box" approaches. By moving the optimization problem to that of building a feature space for nearest neighbor classification, one can, to a degree, alleviate the "black box" issue by providing the discovered nearest neighbor instances as "evidence" of the decision. - Authors use established datasets. Cons: - Authors do not properly cite previous work, as brought up by reviewers. There is much literature on the optimization of feature spaces (such as the entire field of metric learning), as well as prior approaches using genetic optimization. The originality and significance here are therefore not clear.
val
[ "rkbO-pIgf", "rkvIrecgG", "r1o4FsqgM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents a method for feature projection which uses a two level neural network like structure to generate new features from the input features. The weights of the NN like structure are optimised using a genetic search algorithm which optimises the cross-validation error of a nearest neighbor classifier. The method is tested on four simple UCI datasets. There is nothing interesting or novel about the paper. It is not clear whether the GA optimisation takes place on the level of cross validation error estimation or within an internal validation set as it should have been the case. The very high accuracies reported seem to hint the latter, which is a serious methodological error. The poor language and presentation does not help in clearing that, as it does not help in general. ", "This paper proposes using a feedforward neural network (FFNN) to extract intermediate features which are input to a 1NN classifier. The parameters of the FFNN are updated via a genetic algorithm with a fitness function defined as the error on the downstream classification, on a held-out set. The performance of this approach is measured on several UCI datasets and compared with baselines.\n– The paper’s main contribution seems to be a neural network with a GA optimization for classification that can learn “intelligent combinations of features”, which can be easily classified by a simple 1NN classifier. But isn't this exactly what neural networks do – learn intelligent combinations of features optimized (in this case, via GA) for a downstream task? This has already been successfully applied in multiple domains eg. in computer vision (Krizhevsky et al, NIPS 2011), NLP (Bahdanau et al 2014), image retrieval (Krizhevsky et al. ESANN 2011) etc, and also studied comprehensively in autoencoding literature. There also exists prior work on optimizing neural nets via GA (Leung, Frank Hung-Fat et al., IEEE Transactions on Neural networks 2003). However, this paper claims both as novelties while not offering any improvement / comparison. \n– The claim “there is no need to use more powerful and complex classifier anymore” is unsubstantiated, as the paper’s approach still entails using a complex classifier (a FFNN) to learn an optimal intermediate representation.\n– The choice of activations is not motivated, and performance on variants is not reported. For instance, why is that particular sigmoid formulation used? \n– The use for a genetic algorithm for optimization is not motivated, and no comparison is made to the performance and efficiency of other approaches (like standard backpropagation). So it is unclear why GA makes for a better choice of optimization, if at all.\n– The primary baselines compared to are unsupervised methods (PCA and LDA), and so demonstrating improvements over those with a supervised representation does not seem significant or surprising. It would be useful to compare with a simple neural network baseline trained for K-way classification with standard backpropagation (though the UCI datasets may potentially be too small to achieve good performance).\n– The paper is poorly written, containing several typos and incomplete, unintelligible sentences, incorrect captions (eg. Table 4) etc.\n", "The main issue is the scientific quality. What the authors call \"intelligent mapping and combining system\" for the proposed system is simply a fully connected neural network. Such systems have been largely investigated in the literature. The use of genetic algorithms has also been considered. 
Moreover, mapping features to some appropriate feature space has been widely investigated, including the choice of an appropriate mapping. We didn't find anything \"intelligent\" in the proposed mapping. \n\nThere are many spelling and grammatical errors.\n" ]
[ 1, 3, 2 ]
[ 5, 4, 3 ]
[ "iclr_2018_HJqUtdOaZ", "iclr_2018_HJqUtdOaZ", "iclr_2018_HJqUtdOaZ" ]
iclr_2018_S1m6h21Cb
The Cramer Distance as a Solution to Biased Wasserstein Gradients
The Wasserstein probability metric has received much attention from the machine learning community. Unlike the Kullback-Leibler divergence, which strictly measures change in probability, the Wasserstein metric reflects the underlying geometry between outcomes. The value of being sensitive to this geometry has been demonstrated, among others, in ordinal regression and generative modelling, and most recently in reinforcement learning. In this paper we describe three natural properties of probability divergences that we believe reflect requirements from machine learning: sum invariance, scale sensitivity, and unbiased sample gradients. The Wasserstein metric possesses the first two properties but, unlike the Kullback-Leibler divergence, does not possess the third. We provide empirical evidence suggesting this is a serious issue in practice. Leveraging insights from probabilistic forecasting we propose an alternative to the Wasserstein metric, the Cramér distance. We show that the Cramér distance possesses all three desired properties, combining the best of the Wasserstein and Kullback-Leibler divergences. We give empirical results on a number of domains comparing these three divergences. To illustrate the practical relevance of the Cramér distance we design a new algorithm, the Cramér Generative Adversarial Network (GAN), and show that it has a number of desirable properties over the related Wasserstein GAN.
rejected-papers
Pros: - The authors propose a new algorithm to train GANs based on the Cramer distance, arguing that this eases optimization compared to the Wasserstein GAN. - Reviewers agree that the paper reads well and provides a good overview of the properties of divergence measures used for GAN training. Cons: - It is not clear how much the central arguments about the scale sensitivity, sum invariance, and unbiased sample gradients of the distances hold true in practice and generalize. - The reviewers do not agree that the benefits of the new algorithm are clear from the experiments shown. Given the pros/cons, the committee feels the paper falls short of acceptance in its current form.
train
[ "SJrqRODeM", "B1tHGLTgG", "B1xzpQy-z", "Hy_hZL2WM", "S14Zi72ZG", "rJYu5XnbM", "Bk4m5X2bM", "S19g4lpAZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "The manuscript proposes to use the Cramer distance as a measure between distributions (acting as a loss) when optimizing\nan objective function using stochastic gradient descent (SGD). Cramer distance is a Bregman divergence and is a member of the Lp family of divergences. Here a \"distance\" means a symmetric divergence measure that satisfies the relaxed triangle inequality. The motivation for using the Cramer distance is that it has unbiased sample gradients while still enjoying some other properties such as scale sensitivity and sum invariant. The authors also proof that for the Bernoulli distribution, there is a lower bound independent of the sample size for the deviation between the gradient of the Cramer distance, and the expectation of the estimated gradient of the Cramer distance. Then, the multivariate case of the Cramer distance, called the energy distance, is also briefly presented. The paper closes with some experiments on ordinal regression using neural networks and training GANs using the Cramer distance. \n\nIn general, the manuscript is well written and the ideas are smoothly presented. While the manuscript gives some interesting insights, I find that the contribution could have been explained in a more broader sense, with a stronger compelling message.\n\nSome remarks and questions:\n\n1.\tThe KL divergence considered here is sum invariant but not scale sensitive, and has unbiased sample gradients. The \n\tauthors are considering here the standard (asymmetric) KL divergence (sec. 2.1). Is it the case that losing scale\n\tsensitivity make the KL divergence insensitive to the geometry of the outcomes? or is it due to the fact the KL \n\tdivergence is not symmetric? or ?\n\n2.\tThe main argument for the paper is that the simple sample-based estimate for the gradient using the Wasserstein \n\tmetric is a biased estimate for the true gradient of the Wasserstein distance, and hence it is not favored with\n\tSGD-type algorithms. Are there any other estimators in the literature for the gradient of the Wasserstein distance?\n\tWas this issue overlooked in the literature?\n\n3.\tI am not sure if a biased estimate for the gradient will lead to a ``wrong minimum'' in an energy space that has \n\tinfinitely many local minima. Of course one should use an unbiased estimate for the gradient whenever this is possible.\n\tHowever, even when this is possible, there is no guarantee that this will consistently lead to deeper and ``better''\n\tminima, and there is no guarantee as well that these deep local minima reflect meaningful results.\n\n4.\tTo what extent can one generalize theorem 1 to other probability distributions (continuous and discrete) and to the \n\tmultivariate cases as well?\n\n5.\tI also don't think that the example given in sec. 4.2 and depicted in Fig. 1 is the best and simplest way to illustrate\n\tthe benefit of Cramer distance over Wasserstein. Similarly, the experiments for the multivariate case using GANs and\n\tNeural Networks do not really deliver tangible, concrete and conclusive results. Partly, these results are very \n qualitative, which can be understood within the context of GANs. However, the authors could have used other \n models/algorithms where they can obtain concrete quantitative results (for this type of contribution). 
In addition, such sophisticated models (with various hyper-parameters) can mask the true benefit of the Cramer distance, and can also mask how good or poor the sample estimate of the Wasserstein gradient is.", "The contribution of the article is related to performance criteria, and in particular to the Wasserstein/Mallows metric, which has received a good deal of attention these last few years in the machine learning literature. The paper starts with a discussion about desirable properties of a loss function and points out the fact that (plug-in) empirical versions of the gradient of this quantity are biased, which limits its interest, insofar as many learning techniques are based on (stochastic) gradient descent. In its current state, this argument looks artificial. Indeed, zero bias can be a desirable property for an estimate, but being biased does not prevent it from being accurate. In contrast, in many situations like ridge regression, incorporating bias permits one to drastically reduce variance. It depends greatly on the structural assumptions made. For this reason, the worst-case result (Theorem 1) is not that informative in my opinion. As they are mainly concerned with the L_1 version of the Wasserstein distance, rather than focussing on the bias, the authors could consider the formulation in terms of inverse cumulative distribution functions in the 1-d setup and the fact that the empirical cdf is never invertible: even if the theoretical cdf is invertible (which naturally guarantees uniqueness of the optimal transport), the mass transportation problem related to the statistical counterpart is not as well-posed as the underlying one (however, smoothing the empirical distribution may remedy this issue).\nThe authors propose to use instead the Cramer distance, which is a very popular distance in Statistics and on which many statistical hypothesis testing procedures rely, and review its appealing properties. The comparison between the KL, Wasserstein and Cramer distances is vain in my opinion, and willing to come to a general conclusion about the merits of one against the others is naive. In a nonparametric setup, it is always possible to find distributions such that certain of their properties are hidden by certain distances and highlighted by others. This is precisely why you are forced to specify the type of deviations between distributions in nonparametric hypothesis testing (a shift, a change in scale, etc.); there is no way of assessing universally that two distributions are close: optimality can only be assessed for sequences of contiguous hypotheses. The choice of the distance is part of the learning problem. ", "The authors investigate how the properties of different discrepancies between distributions affect the training of parametric models with SGD. They argue that in order for SGD to be a useful training procedure, an ideal metric should be scale sensitive, sum invariant, and also provide unbiased gradients. The KL divergence is not scale sensitive, and the Wasserstein metric does not provide unbiased gradients. The authors thus posit the Cramer distance as a foundation for the discriminator in the GAN, and then generalize this to an energy-based discriminator. 
The authors then test their Cramer GAN on the CelebA dataset and show results comparable to the Wasserstein GAN, with less mode collapse.\n\nFrom what I can gather, the Cramer GAN is unlikely to be a huge improvement in the GAN literature, but the mathematical relationships investigated in the paper are illuminating. This brings some valuable understanding for improving upon previous GANs [e.g. WGAN]. As energy-based GANs and MMD GANs have become more prominent, it would be nice to see how these ideas interplay with those GANs. However, overall I thought the paper did a nice job presenting some context for GAN training.\n\n", "Thank you for the insightful comment. You’re right that there is a subtle difference between the two approaches. To paraphrase your comment, the WGAN equivalent with the Cramer distance would have a critic that learns the function f* itself (Equation 4).\n\nOne resulting difference is that if we stop training the WGAN critic, the WGAN generator will collapse to the single point with the maximal critic value. On the other hand, if we stop training the Cramer GAN critic, the Cramer GAN generator will learn to minimize the energy distance between the distributions of the critic outputs. The Cramer GAN generator will then learn a distribution, not a single point.\n\nYou can see in Figure 4 that WGAN is much worse if we do just one critic update per generator update. It is still helpful to train the Cramer GAN critic, to avoid ignoring information not originally present in the critic outputs.", "1. The Cramer and Wasserstein distances are scale sensitive because they incorporate the Euclidean metric between outcomes. The KL divergence, on the other hand, only compares the relative probability densities.\n\n2. Quantile regression is one method for minimizing Wasserstein distances which we learned of after completing this work. However, to the best of our knowledge, it is not always applicable, for example in a GAN setting.\n\n3. We believe the deterministic completions discovered in Wasserstein GAN reflect the issues raised by Theorem 1, i.e. that the wrong minimum is found (in the sense that the true underlying distribution is not deterministic).\n\n4. We expect an analogue of Theorem 1 to hold whenever we consider asymmetric distributions, both continuous and discrete.\n\n5. We also trained a PixelCNN model using different loss functions. PixelCNN uses autoregressive univariate distributions to model whole images. There we found that minimizing the sample Wasserstein loss by gradient descent leads to far worse results than minimizing the Cramer loss, even when the results are measured in terms of the Wasserstein distance itself. The results are summarized in Appendix B.2 and Figure 8. Similar results also held for ordinal regression (Appendix B.1).", "We thank the reviewer for their points, which are well-taken and will certainly help us improve the paper presentation. In particular, your feedback suggests it would be helpful to emphasize the empirical results currently in the appendix in order to better frame the theory.\n\nThere seems to be a misunderstanding regarding Theorem 1: our result shows that bias occurs not just in the gradients but also in the minimizer of the sample Wasserstein loss. Put another way: SGD on this loss, as an estimation procedure, is not consistent. This is fundamentally different from what occurs in ridge regression. 
\n\nIn the appendix we included empirical results on two domains (ordinal regression and image modelling) where we show that minimizing the sample Wasserstein loss by gradient descent leads to far worse results than minimizing the Cramer loss, even when the results are measured in terms of the Wasserstein distance itself. While it's true that in general no distributional metric dominates the others, here we are highlighting a problem that has visible consequences.\n\nRegarding the merits of comparing to the KL divergence: we acknowledge the point, but do not completely agree. The shift in machine learning in recent years has been exactly the recognition that the KL divergence is ill-suited to many problems of current interest (generative modelling, cost-sensitive classification, reinforcement learning…). Our aim here was to illustrate the qualitative similarities between the Cramer and Wasserstein distances, compared to the KL divergence.", "Thank you for your comments; we agree there is a need for unifying some of the GAN literature into a coherent story.", "There are some subtleties I find the authors may need to clarify a bit more:\n\nIn the formulation of W-GAN, the critic/discriminator is essentially part of the definition of the Wasserstein distance -- a parametric approximation of the dual potential. That is, W-GAN is a generative model that \"attempts\" to minimize the Wasserstein distance between the real distribution and the generated distribution. W-GAN is not a GAN with a Wasserstein loss.\n\nWhereas, in the authors' proposed approach, the concepts of a GAN (two-player game) are unavoidable. The use of the energy distance creates a surrogate loss function that the discriminator ultimately wants to maximize. \n\nThis is the part I don't understand: why can the GAN with energy distance be compared directly to the Wasserstein GAN from the model perspective? The way they use the distance is different by nature. \n\nMy opinion: While I agree that W-GAN and the approach by the authors share many similarities in algorithm, they actually follow different modeling frameworks in how they use the distance. The claim that the Wasserstein distance does not have unbiased sample gradients is in fact also applicable to the approach by the authors, if one considers the quantity maximized by the critic as a loss for training the generator.\n\n" ]
[ 5, 4, 7, -1, -1, -1, -1, -1 ]
[ 3, 5, 2, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1m6h21Cb", "iclr_2018_S1m6h21Cb", "iclr_2018_S1m6h21Cb", "S19g4lpAZ", "SJrqRODeM", "B1tHGLTgG", "B1xzpQy-z", "iclr_2018_S1m6h21Cb" ]
iclr_2018_SJahqJZAW
Stabilizing GAN Training with Multiple Random Projections
Training generative adversarial networks is unstable in high-dimensions as the true data distribution tends to be concentrated in a small fraction of the ambient space. The discriminator is then quickly able to classify nearly all generated samples as fake, leaving the generator without meaningful gradients and causing it to deteriorate after a point in training. In this work, we propose training a single generator simultaneously against an array of discriminators, each of which looks at a different random low-dimensional projection of the data. Individual discriminators, now provided with restricted views of the input, are unable to reject generated samples perfectly and continue to provide meaningful gradients to the generator throughout training. Meanwhile, the generator learns to produce samples consistent with the full data distribution to satisfy all discriminators simultaneously. We demonstrate the practical utility of this approach experimentally, and show that it is able to produce image samples with higher quality than traditional training with a single discriminator.
rejected-papers
The paper proposes to use multiple discriminators to stabilize the GAN training process. Additionally, the discriminators only see randomly projected real and generated samples. Some valid concerns raised by the reviewers which make the paper weak: - Multiple discriminators have been tried before, and the authors do not clearly show experimentally / theoretically if the random projection is adding any value. - The authors compare only with DCGAN and the results are mostly subjective. How much improvement the proposed approach provides when compared to other GAN models that are developed with stability as the main goal is hence not clear.
val
[ "r1rx-5Oxf", "BJARkptxz", "rkhnnvolz", "rkk7fo37G", "BkwO6iHGf", "ry0wsiSGz", "SygZiiBMM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "\nThe paper proposes to stabilize GAN training by using an ensemble of discriminators, each workin on a random projection of the input data, to provide the training signal for the generator model.\n\nQ1: “In relation to “Theorem 3.1. … will produce samples from a distribution whose marginals along each of the projections W_k match those of the true distribution”.. I presume an infinite number of generator distributions could give rise to the correct marginals however not necessarily be converged to the data distribution. In Theorem A.2 the authors upperbound this residual as a function of the smoothness and support of the distributions as well as the projections presented to the discriminators. Can the authors comment on how tight this bound is e.g. as a function the number of used discriminators or the choosen projection methods ? \n\nQ2: Related to the above. Did the authors do or considered any frequency analysis of the ensemble of random projection? I guess you could easily do a numeric simulation of the expected frequency spectrum of the combined set discriminators?\n\n\nQ3: My primary concern with the work is the above mentioned computational complexity of running K discriminators in parallel. This is especially in relation to the experimental results showing significant high-frequency artefacts when running with K=12 classifiers (K=12 celebA results and “Random Imagenet-Canine Images: Proposed Method” in suplementary results). I think this is as expected as the authors are effectively fitting each classifier to the distributions of smoothed (with 8x8 random kernel) subsampled version of the input image. I would expect that each discriminator sees none or only a very limited amount the high frequency component in the images. Do the authors have any comments on how the sampling of the projection kernels affects the image results especially if the number of needed classifiers can be reduced somehow? I would expect that a combination of smoothing and high frequency filters would be needed to remove the high frequency artefacts?\n\nQ4: Whats the explanation of the oscilating patterns in figure 2?\n\nQ5: In the conclusion the authors mention that their framework is currently limited by the computational of running K discriminators and proposes:\n\n“In our current framework, the number of discriminators is limited by computational cost. In future work, we plan to investigate training with a much larger set of discriminators, employing only a small subset of them at each iteration, or every set of iterations”\n\nIn the extreme case of only using a single randomly discriminator the approach is quite similar to the quite widely used input dropout to the discriminator?\n\nOverall I like the simplicity of the proposed idea. However i’m not completely convinced that the “marginal” convergence proof holds for the relative low number of discriminators possible to use in practice. At least i would like the authors to touch on this key aspect of the method both theoretically and with experiments/simulations. Also several other methods have recently been proposed to improve stability of GANs, however no experimental comparisons is made with these methods (WGAN, EGAN, LSGAN etc.)\n", "\n- Paper summary\n\nThe paper proposes a GAN training method for improving the training stability. 
The key idea is to let a GAN generator compete with multiple GAN discriminators, where each discriminator takes a random low-dimensional projection of an input image to differentiate whether the input image is a real or a generated one. Visual generation results from the proposed method, in comparison to those generated by the DCGAN, were used as the main experimental validation of the merit of the proposed method. Due to poor experimental validation and inconclusive results, the reviewer does not recommend the acceptance of the paper.\n\n- Inconclusive results\n\nThe paper fails to compare the proposed method with the GMAN framework [a], which was the first work to propose utilizing multiple discriminators for more stable GAN training. Without comparing to the GMAN work, we do not know whether the benefit is from using the multiple discriminators proposed in the GMAN work or from using the random low-dimensional projections proposed in this paper. If it is the former, then the proposed method has no merits at all.\n\nIn addition, the generator loss curve shown in Figure 2 does not make much sense. The generator loss curve would be meaningful if each discriminator update were optimal. However, this is not the case in the proposed method. There is little to conclude from Figure 2.\n\n[a] Durugkar et al. \"Generative multi-adversarial networks.\" ICLR 2017\n\n- Poor experimental validation\n\nThe paper fails to utilize more established performance metrics, such as the Inception score or human evaluation scores, to evaluate its benefit. It does not compare to other approaches for stabilizing GAN training, such as WGAN or LSGAN. The main results shown in the paper are generated 64x64 human face images, which is not impressive.", "The paper proposes a new approach to GAN training whereby they train one generator against an ensemble of discriminators that each receive a randomly projected version of the data. The authors show that this approach provides stable gradients to train the generator. \n\nThis is a nice idea, and both the theoretical analysis presented and the experiments on image data sets are interesting. Although the idea to train an ensemble of learning machines is not new, see e.g. [1,2] -- and it would be useful to add some background on this, and the regularisation effect that emerges from it -- it does become new in the new context considered here, as the paper shows that such an ensemble can also fulfil the role of stabilising GAN training. \nThe results are quite convincing that the proposed method is useful in practice.\n\nIt would be interesting to know if weighting the discriminators, or discarding the unlucky random projections as was done in [1], would have potential in this context?\n\n[1] Timothy I. Cannings, Richard J. Samworth. Random-projection ensemble classification. Journal of the Royal Statistical Society B, 79(4), 2017, Pages 959-1035. \n[2] Robert J. Durrant, Ata Kabán. Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions. Machine Learning 99(2), 2015, Pages 257-286.\n\n\n", "We've posted a revision adding the two papers suggested by R1 to the related work section. Individual responses to the reviews were posted below earlier.", "\nWe thank the reviewer for their comments and detailed observations. We address these individually below:\n\nQ1: Without further assumptions on the data, the bound is tight in the problem parameters. 
But, as the reviewer also notes, there usually is additional structure (conditional independence, sparsity, etc.) in natural data distributions that makes the approach succeed with fewer projections. We're interested in exploring more precise characterizations, but this would have to be domain / data specific. The proof and analysis technique used for Thm A.2 will serve as a useful starting point for such domain-specific analysis: still using that each discriminator constrains a different marginal, we hope to exploit assumptions about structure to provide tighter bounds on the error as a function of the number of such marginal constraints.\n\nWhile domain-specific analysis is an important direction of future work, we believe it is beyond the scope of this paper, which we see as making the first step in introducing the simple idea of using random-projection ensembles to stabilize GAN training, showing that it has promise and utility, and in providing a starting point for analyzing this setup. We believe that the general idea will be applicable over a broad range of domains (data types, conditional vs general GANs, etc.), perhaps each with their own adaptations and extensions, and we believe that the paper as-is will therefore be of interest to a broader community.\n\nQ2: Each projection is a combination of filtering + downsampling. Since the filters are random iid, in terms of spatial frequency (ignoring color), they have a flat expected power spectral density---in other words, they're as likely to be high-pass as low-pass. Downsampling then folds in the higher freq. quarters of the spectrum to produce an aliased image. Fundamentally, the projection operation (again, ignoring the projection along color channels) can be seen as taking sets of frequency components, and retaining different random linear combinations of these sets in each projection.\n\nThe noise one sees in the K=12 case is actually \"periodic\" noise, with a period equal to the sampling rate (2), and can affect low and high frequencies equally. Basically, with too few discriminators, the generator can choose to generate the 'right' weighted sum of each of the aliased low- and high-frequency sets. With enough discriminators which look at different weighted aliased combinations, the generator is more and more constrained to get each element of that set right individually.\n\nQ3: The reviewer's intuition is correct---the generator does fit the aliased version (rather than the smoothed version) of the data when trained against too few discriminators. This also follows from the intuition in the proof of Theorem A.2. Regarding choosing efficient projections, this would again have to depend on the data distribution / be domain specific.\n\nActually, one approach we're currently pursuing (as follow-on work) in the domain of natural images is related to the reviewer’s comments---we're exploring a multi-resolution approach with crops over a wavelet pyramid. This approach is motivated through a modeling assumption of conditional independence of coefficients in the pyramid (two fine-level coefficients are independent conditioned on scaling coefficients).\n\nBut note that while domain-specific efficient projections are desirable, random projections will succeed with a large enough number of discriminators (and may be the only choice in some applications). 
To that end, we want to highlight that the issue of computation cost is mitigated to some extent by the fact that the forward/backward pass through the multiple discriminators can be done in parallel (on multiple GPUs). This means that for applications where a large number of discriminators is required under the proposed approach, one could still train the generator in the same amount of time given access to more computational resources. And at best, it will provide a starting point for searching for more efficient projections.\n\nQ4: These are due to orbits between the discriminator/generator---the discriminator improves, then the generator catches up, etc. (we see this with a single discriminator as well).\n\nQ5: This would be different from dropout because we'd still have a different discriminator for each 'drop configuration'. The idea would be to keep the discriminators for every projection around, but only train / use-for-generator-update a few or one of them in each iteration. But one can think of this as 'dropping' parts of the loss term for the generator.\n\n- Finally, note our approach is complementary to the other methods mentioned by the reviewer. Our approach addresses the 'high-dimensional' aspect of the stability problem. But it can be used with better losses (like WGAN), better architectures, etc., because one can apply the approach of operating on multiple random projections to such versions.", "\nWe'd like to clarify that our paper is focused on the specific goal of addressing instability in training GANs in high dimensions. (Arjovsky et al. provide an excellent description of this phenomenon as well as more context for the problem we're trying to solve.) We respond to specific questions below, but would like to respectfully ask the reviewer to take a second look at the paper in this context (rather than the generic context of improved GAN results) to see if it changes their mind.\n\n- We compare to DCGAN in order to fix a reasonably successful yet generic architecture, and then to isolate the effect of training with a single full-dimensional discriminator versus multiple low-dimensional discriminators. There are definitely better architectures and loss functions (e.g., WGAN) out there, but ours is a training approach that can be applied with those architectures and losses as well. \n\n- Figure 2 shows that the low-dimensional discriminators don't saturate like they do with a high-dimensional one. The curves in Figure 2 are not meant to analyze the quality of the discriminator (and hence don't make sense in that context), but they do show that the discriminator isn't able to perfectly separate real and fake samples (at which point, as described in Arjovsky et al., its gradients become meaningless and this causes training to diverge). It is Figure 3 that compares quality by showing generated faces across training.\n\n- Note that we cite the Durugkar et al. paper and the GMAN method, and discuss them in Sec 1.1, along with a host of other ensemble approaches. But again, the goal of GMAN is different, and is to better approximate the optimization over the discriminator. They do not address stability (like all other methods for GAN training, they stop training early). Further, note that our experiments already show that a single full-dimensional discriminator will saturate. 
With the goal of evaluating stability, using multiple full-dimensional discriminators would not be useful since they would all also saturate (they have the same capacity, and if anything, they have an even higher advantage over the generator).\n", "- We thank the reviewer for their encouraging comments as well as for the pointers to [1,2]. Using ensembles as well as random projections has a rich history as a means to solve a variety of challenges in machine learning. Our work is motivated by these successes, and aims to show that these ideas are useful also for the challenge of instability in training GANs in high dimensions.\n\nWe draw connections to some prior works in Sec 1.1, and adding these works to the discussion (especially [1]) would definitely make it more informative. Thanks!\n\n- Discarding discriminators / projections is an interesting idea! To some degree, the unlucky discriminators are already being weighted down, because as they saturate, their contribution to the sum of gradients from all discriminators goes to 0. But it would be interesting to see if we can make training more 'efficient' by discarding saturated discriminators and restarting them with a different projection matrix. Combining this idea with some of the ones we discuss in the conclusion is definitely an interesting direction of future work, and one that we intend to explore.\n" ]
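The exchange above leans repeatedly on the filtering-plus-downsampling view of each random projection (Q2 and Q3). The sketch below reproduces that step numerically; it is an illustrative reconstruction, not the authors' code. The 8x8 random kernel and stride-2 downsampling are taken from the discussion, while the image size, the number of sampled kernels, and the FFT grid are assumptions made here for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(img, kernel, stride=2):
    """One discriminator's input: filter with a random kernel, then
    downsample. Downsampling folds the higher-frequency quarters of
    the spectrum back in, producing the aliasing discussed above."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out[::stride, ::stride]

# Flat *expected* power spectral density of iid random kernels
# (the "as likely high-pass as low-pass" point in the Q2 answer):
psd = np.zeros((32, 32))
n_kernels = 2000
for _ in range(n_kernels):
    k = rng.standard_normal((8, 8))
    psd += np.abs(np.fft.fft2(k, s=(32, 32))) ** 2
psd /= n_kernels
print("mean PSD max/min ratio (~1 means flat):", psd.max() / psd.min())

img = rng.standard_normal((64, 64))           # stand-in for a 64x64 image
proj = random_projection(img, rng.standard_normal((8, 8)))
print("projected shape:", proj.shape)         # (29, 29): a heavy dimension cut
```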
[ 5, 3, 8, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SJahqJZAW", "iclr_2018_SJahqJZAW", "iclr_2018_SJahqJZAW", "iclr_2018_SJahqJZAW", "r1rx-5Oxf", "BJARkptxz", "rkhnnvolz" ]
iclr_2018_Hy7EPh10W
Novelty Detection with GAN
The ability of a classifier to recognize unknown inputs is important for many classification-based systems. We discuss the problem of simultaneous classification and novelty detection, i.e. determining whether an input is from the known set of classes and from which specific class, or from an unknown domain and does not belong to any of the known classes. We propose a method based on the Generative Adversarial Networks (GAN) framework. We show that a multi-class discriminator trained with a generator that generates samples from a mixture of nominal and novel data distributions is the optimal novelty detector. We approximate that generator with a mixture generator trained with the Feature Matching loss and empirically show that the proposed method outperforms conventional methods for novelty detection. Our findings demonstrate a simple, yet powerful new application of the GAN framework for the task of novelty detection.
rejected-papers
Pros: The paper aims to unify classification and novelty detection, which is interesting and challenging. Cons: - The reviewers find that the work is incremental and contains heuristics. Reviewers find the repurposing of the fake logit in the semi-supervised GAN discriminator for assigning novelty strange. - The experiments presented are weak and the authors do not compare with traditional/stronger approaches for novelty detection such as "learning with abstention" models and density models. Given the pros and cons, the committee finds the paper to fall short of acceptance in its current form.
train
[ "H18Yvh7xG", "HkMvZzYez", "HkCfpG5xf", "Hy-6lhjgG", "BJZYcBsxf", "rJrVZgdlG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "This paper proposed a GAN to unify classification and novelty detection. The technical difficulty is acceptable, but there are several issues. First of all, the motivation is clearly given in the 1st paragraph of the introduction: \"In fact for such novel input the algorithm will produce erroneous output and classify it as one of the classes that were available to it during training. Ideally, we would like that the classifier, in addition to its generalization ability, be able to detect novel inputs, or in other words, we would like the classifier to say, 'I don't know.'\" There is a logical gap between the ability of saying 'I don't know' and the necessity of novelty detection. Moreover, there are many papers known as \"learning with abstention\" and/or \"learning with rejection\" from NIPS, ICML, COLT, etc. (some are coauthored by Dr. Peter Bartlett or Dr. Corinna Cortes), but the current paper didn't cite those that are particularly designed to let the classifier be able to say 'I don't know'. All those abstention/rejection papers have solid theoretical guarantees.\n\nThe 3rd issue is that the novelty for the novelty detection part in the proposed GAN seems quite incremental. As mentioned in the paper, there are already a few GANs, such that \"If the 'real' data consists of K classes, then the output of the discriminator is K+1 class probabilities where K probabilities corresponds to K known classes, and the K+1 probability correspond to the 'fake' class.\" On the other hand, the idea in this paper is that \"At test time, when the discriminator classifies a real example to the K+1th class, i.e., class which represented 'fake examples' during training, this the example is most likely a novel example and not from one of the K nominal classes.\" This is just a replacement of concepts, where the original one is the fake class in training and the new one is the novel class in test. Furthermore, the 4th issue also comes from this replacement. The proposed method assumes a very strong distributional assumption, that is, the class-conditional density of the union of all novel classes at test time is very similar to the class-conditional density of the fake class at training time, where the choice of similarity depends on the divergence measure for training GAN. This assumption is too strong for the application of novelty detection, since novel data can be whatsoever unseen during training.\n\nThis inconsistency leads to the last issue. Again mentioned in the 1st paragraph, \"there are no requirements whatsoever on how the\nclassifier should behave for new types of input that differ substantially from the data that are available during training\". This evidences that novel data can be whatsoever unseen during training (per my words). However, the ultimate goal of the generator is to fool the discriminator by generating fake data as similar to the real data as possible in all types of GANs. Therefore, it is conceptually and theoretically strange to apply GAN to novelty detection, which is the major contribution of this paper.\n\nLast but not least, there is an issue not quite directly related to this paper. Novelty detection sounds very data mining rather than machine learning. It is fully unsupervised without a clearly-defined goal which makes it sounds like an art rather than a science. The experimental performance is promising indeed, but a lot of domain knowledge is involved in the experiment design. 
I am not sure these are really novelty detection tasks, because real novelty detection tasks should be fully exploratory. \n\nBTW, there is a paper in IPMI 2017 entitled \"Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery\", which is very closely related to the current paper, but the authors seem not to be aware of it.", "The paper presents a method for novelty detection based on a multi-class GAN which is trained to output images generated from a mixture of the nominal and novel distributions. The trained discriminator is used to classify images as belonging to the nominal or novel distributions. To deal with the missing data from the novel distributions, the authors propose to use a proxy mixture distribution resulting from training the GAN using the Feature Matching loss. \n\nI liked the paper and the idea of using the discriminator of the GAN to detect novelty. But I do feel the paper lacks some details to better justify/explain the design choices:\n\n1. A multi-class GAN is used, but in the formal background presentation of GANs (Section 2.2) only the binary version of the discriminator is presented. I think it would be helpful if the paper were more self-contained and added the discriminator objective function to complete the presentation of the GAN design actually used.\nAlso, it would be nice if the authors could comment on whether the multi-class design is necessary: can't the approach presented in the paper be naively extended to a regular GAN as long as it is trained to output a mixture distribution? \n\n2. It is not clear to me why the Feature Matching loss results in a mixture distribution or, more specifically, why it results in a mixture distribution which is helpful for novelty detection. The paragraph before eq (7) and the two after hint at why this loss results in a good mixture distribution. I think this explanation would benefit from a more formal attempt at defining what a \"good\" mixture distribution is. \n\n3. In addition to the above remark, generally I feel there is a gap between the definition of the mixture distribution and Proposition 1 on the one hand, and the actual implementation choice on the other, where it cannot be assumed that p_novel is known. I feel the paper would be clearer if the authors could draw a more direct connection.\n\n4. I am missing a baseline approach comparing to a 'regular' multi-class GAN with a reject option, i.e. a GAN which was not trained to output a mixture distribution. Comparing ROC curves for the output of a discriminator from such a regular GAN to that of the ND-GAN would help to assess the importance of the discussion of the mixture distribution.\n\n5. Is the proposed method sensitive to noise, i.e. will poor-quality images of known classes have a higher chance of being classified as novel? \n\nSome typos, such as:\n\n'able to generates'\n'loss function i able to generate'\n'this the example'", "\nThe paper proposes a GAN for novelty detection (predicting novel versus nominal data), using a mixture generator with feature matching loss. The key difference between this paper and previous work is the definition of the mixture generator. Here the authors enforce p_other to have some significant mass in the tails of p_data (Def 1), forcing the 'other' data to be on or around the true data, creating a tight boundary around the nominal data.\n\nThe paper is well written, derives cleanly from previous work, and has solid experiments. 
The experiments are weak 1) in the sense that they are not compared against simple baselines like p(x) (from, say, just thresholding a VAE, or using a more powerful p(x) model -- there are lots out there), 2) other than KNNs, only compared with class-prediction-based novelty detection (entropy, thresholds), and 3) in my view the method performs consistently, but not significantly, better than simply using the entropy of the class predictions. How would entropy improve if it were a small ensemble instead of a single classifier?\n\nThe authors may be interested in [1], a principled approach for learning a well-calibrated uncertainty estimate on predictions. Considering how well entropy works, I would be surprised if the model in [1] does not perform even better.\n\npros:\n- good application of GAN models\n- good writing and clarity\n- solid experiments and explanations\n\ncons:\n - results weak relative to naive baseline (entropy)\n - weak comparisons\n - lack of comparison to density models \n\n\n[1] Louizos, Christos, and Max Welling. \"Multiplicative Normalizing Flows for Variational Bayesian Neural Networks.\" arXiv preprint arXiv:1703.01961 (2017).", "Dear authors, \nThank you for your answers. However, I still have the following questions:\n\nQ1: As stated in the paper of Dai et al. (2017), the feature matching loss forces the generator to be close to the true distribution p_{data}, not to the novel distribution p_{novel}. I think that the key elements of the bad generator's loss function which force the generator to be close to the novel distribution are the entropy term and the log-likelihood of the density estimator (please see Section 5.3 of Dai et al. (2017)). Therefore, it is still unclear what elements of SSL-GAN with the Feature Matching loss make the generator close to the novel distribution p_{novel}(x), since SSL-GAN also aims to recover the data distribution p_{data}... \n\nQ2: For the same reason as in Q1, I think that the feature matching loss does not produce a mixture generator of the novel distribution and the true distribution.", "Dear anonymous,\nThank you very much for reading our paper. Please see below our answers to your questions. \n\nQ1: In the experiments section we used SSL-GAN trained with the Feature Matching loss. We would like to clarify our claim: we claim that a generator of SSL-GAN with the Feature Matching loss is a mixture generator. As we mentioned in Section 2.3 (page 5, paragraph 6: “In their paper Dai et al. (2017) experimentally demonstrated…”), the empirical results of Dai et al. (2017) suggest that a generator that is trained with the Feature Matching loss is a mixture generator. Moreover, Salimans et al. (2016) show that SSL-GAN trained with the Feature Matching loss improves the classification accuracy of the multi-class discriminator. According to Proposition 1 of Dai et al. (2017), this contradicts the claim that a generator trained with the Feature Matching loss converges to p_{data}(x).\n \nQ2: You are correct that if we have a density estimator then we can estimate a level set of the nominal density. Indeed, the bad generator proposed by Dai et al. (2017) requires a nominal density estimate p_{data}(x). As a matter of fact, we noted in Section 2.3, page 5, paragraph 5, last sentence: “Unfortunately, to train a generator with the loss function in eq. 7, we need to estimate p_{data}(x), the same problem which we wanted to avoid in conventional novelty detection methods”. We want to avoid estimating p_{data}(x), and therefore we proposed an alternative method. 
As we explained in Section 2.3, we propose to use the Feature Matching loss, or any other loss which produces a mixture generator, without explicitly estimating the nominal density. ", "Dear authors,\n\nThe topic of this paper is interesting, but I have the following questions:\n\n1. According to Proposition 1, an optimal GAN discriminator becomes an optimal novelty detector if p_g(x) = \\pi p_{novel}(x) + (1-\\pi) p_{data}(x). However, in the experiments section, the authors used SSL-GAN, which forces the generator to recover the data distribution (i.e., p_g(x)=p_{data}(x)), for the GAN-based novelty detection score. Therefore, it seems that SSL-GAN is not a proper choice for the experiments.\n\n2. In the case of the bad generator proposed by Dai et al. (2017), a density estimator of p_{data}(x) is required. If we have a density estimator of p_{data}(x), such as a PixelCNN, it seems that there is no need to build a novelty detector based on a GAN, since we can estimate a level set of the nominal density using the density estimator... Do I understand correctly?\n\nIf I don't understand the paper correctly, please do not hesitate to let me know.\n\nThanks in advance.\n" ]
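Much of the thread above turns on the (K+1)-class discriminator and the Feature Matching loss, so a small sketch may make the mechanics concrete. This is a generic reconstruction from the descriptions quoted in the reviews, not the authors' implementation; the logit layout, the 0.5 decision threshold, and the random stand-in logits are assumptions for illustration.

```python
import numpy as np

K = 10  # number of known (nominal) classes; slot K is the 'fake' class

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def novelty_score(logits):
    """Probability mass on the (K+1)-th class, which stood for 'fake'
    during training and is repurposed as a novelty score at test time."""
    return softmax(logits)[..., K]

def feature_matching_loss(feat_real, feat_fake):
    """Feature Matching generator loss (Salimans et al., 2016): match the
    mean discriminator features of real and generated minibatches."""
    return np.sum((feat_real.mean(axis=0) - feat_fake.mean(axis=0)) ** 2)

rng = np.random.default_rng(0)
logits = rng.standard_normal((5, K + 1))   # stand-in discriminator outputs
scores = novelty_score(logits)
print("novelty scores:", np.round(scores, 3))
print("flagged as novel (0.5 threshold, a free choice):", scores > 0.5)
```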
[ 4, 5, 6, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_Hy7EPh10W", "iclr_2018_Hy7EPh10W", "iclr_2018_Hy7EPh10W", "BJZYcBsxf", "rJrVZgdlG", "iclr_2018_Hy7EPh10W" ]
iclr_2018_S1EfylZ0Z
Anomaly Detection with Generative Adversarial Networks
Many anomaly detection methods exist that perform well on low-dimensional problems; however, there is a notable lack of effective methods for high-dimensional spaces, such as images. Inspired by recent successes in deep learning, we propose a novel approach to anomaly detection using generative adversarial networks. Given a sample under consideration, our method is based on searching for a good representation of that sample in the latent space of the generator; if such a representation is not found, the sample is deemed anomalous. We achieve state-of-the-art performance on standard image benchmark datasets, and visual inspection of the most anomalous samples reveals that our method does indeed return anomalies.
rejected-papers
The authors propose to detect anomalies based on their representation quality in the latent space of a GAN trained on valid samples. Reviewers agree that: - The proposed solution lacks novelty and similar approaches have been tried before. - The baselines presented in the paper are primitive and hence do not demonstrate clear benefits over traditional approaches.
val
[ "By9QpjXlf", "ryxTDKPlz", "BJ1oIDYlG", "By8xh4T7z", "H1-oo4aQf", "Hy3EoV6QM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "In the paper, the authors proposed using GAN for anomaly detection.\nIn the method, we first train generator g_\\theta from a dataset consisting of only healthy data points.\nFor evaluating whether the data point x is anomalous or not, we search for a latent representation z such that x \\approx g_\\theta(z).\nIf such a representation z could be found, x is deemed to be healthy, and anomalous otherwise.\nFor searching z, the authors proposed a gradient-descent based method that iteratively update z.\nMoreover, the authors proposed updating the parameter \\theta of the generator g_\\theta.\nThe authors claimed that this parameter update is one of the novelty of their method, making it different from the method of Schlegl et al. (2017).\nIn the experiments, the authors showed that the proposed method attained the best AUC on MNIST and CIFAR-10.\n\nIn my first reading of the paper, I felt that the baselines in the experiments are too primitive.\nSpecifically, for KDE and OC-SVM, a naive PCA is used to reduce the data dimension.\nNowadays, there are several publicly available CNNs that are trained on large image datasets such as ImageNet.\nThen, one can use such CNNs as feature extractor, that will give better low dimensional expression of the data than the naive PCA.\nI believe that the performances of KDE and OC-SVM can be improved by using such feature extractors.\n\nAdditionally, I found that some well-known anomaly detection methods are excluded from the comparison.\nIn Emmott et al. (2013), which the authors referred as a related work, it was reported that Isolation Forest and Ensemble of GMMs performed well on several datasets (better than KDE and OC-SVM).\nIt would be essential to add these methods as baselines to be compared with the proposed method.\n\nOverall, I think the experimental results are far from satisfactory.\n\n\n### Response to Revision ###\nIt is interesting to see that the features extracted from AlexNet are not helpful for anomaly detection.\nIt would be interesting to see whether features extracted from middle layers are helpful or they are still useless.\nI greatly appreciate the authors for their extensive experiments as a response to my comments.\nHowever, I have decided to keep my score unchanged, as the additional experiments have shown that the performance of the proposed method is not significantly better than the other methods.\nIn particular, in MNIST, GMM performed better.", "The paper is about doing anomaly detection for image data. The authors use a GAN based approach where it is trained in a standard way. After training is completed, the generator's latent space is explored to find a representation for a test image. Both the noise variable and generator model is updated using back propagation to achieve this. The paper is original, well written, easy to follow and presented ideas are interesting. \n\nStrengths:\n- anomaly detection for images is a difficult problem and the paper uses current state of the art in generative modeling (GAN) to perform anomaly detection.\n- experiments section includes non-parametric methods such as OC-SVM as well as deep learning methods including a recent GAN based approach and the results are promising. \n\nWeaknesses:\n - It is not clear why updating the generator during the anomaly detection helps. On evaluating on a large set of anomalies, the generator may run a risk of losing its ability to generate the original data if it is adjusted too much to the anomalies. 
The latent space of the generator is no longer the same as the one obtained by training on the original data. I don't see how this is not a problem in the presented approach.\n- The experimental results on data with ground truth need statistical significance tests to convince that the benefits are indeed significant. \n- It is not clear how the value of \"k\" (the number of updates to the noise variable) was chosen and how sensitive the performance is to it. With a large value of k, the reconstruction loss for an anomalous image may decrease enough for it to fall into the nominal category. How is this avoided?", "The authors propose an anomaly detection scheme using GANs. It relies on a realistic assumption: points that are badly represented in the latent space of the generator are likely to be anomalous. Experiments are given in a classification and an unsupervised context.\nIn the introduction, the authors state that traditional algorithms \"often fail when applied to high dimensional objects\". Such a claim should be supported by strong references, as OC-SVM or kernel-PCA based anomaly detection algorithms (see Hoffman 2007) perform well in this context.\nOC-SVM is a well-known technique that gives similar performance: the authors fail to convince that there are advantages to using the proposed framework, which does not differ strongly from the previously published AnoGAN.\nThe underlying assumption of the algorithm (points badly represented by GANs are likely to be anomalous) justifies the fact that anomalies should be detected by the algorithm (type-I error). What is the rationale behind the type-II error? Is it expected to be small as well? What happens with adversarial examples, for instance?\n", "- We thank the reviewer for the recommendations for additional experimental baselines. We have now included all of the reviewer's recommended baseline methods, other than the ensemble-GMM, where we used a standard GMM instead (we are not investigating ensemble methods, as many of these techniques have ensemble variants). Our method still performs the best.", "Thanks to the reviewer for their helpful suggestions. Some remarks:\n- We actually reset the theta values to the original trained values for each testing sample before again optimizing them, so the theta adjustments made at testing time are not retained. We have updated our submission to make this clearer (page 4, paragraph 1).\n- We have updated our submission to explain our choice of the value for k (page 6, paragraph 3). Larger k always yields better performance; we chose k=5 for a good performance/evaluation-time trade-off. In practice, not many optimization steps are necessary before the performance gained from adding additional steps becomes small.", "We thank the reviewer for their comments and suggestions.\n- Some traditional methods do indeed work well for high-dimensional learning, and we have updated our paper to reflect that. We feel that it is accepted by the ML community that deep methods work best for many high-dimensional tasks, and we simply wanted to highlight that point.\n- We did not specifically address the type-II rationale in our original draft. We now highlight why we do not expect the generator to come up with an adequate representation of anomalous classes (page 3, final paragraph), thus giving some rationale for why the anomaly detector should avoid type-II errors.\n- ADGAN improves the CIFAR-10 AUC by about .014 over OC-SVM, which is about a 13% improvement over the .5 random-guessing baseline: (.624-.5)/(.610-.5)=1.127. 
We feel that this is a noteworthy improvement.\n" ]
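The procedure debated in this thread, searching the latent space for a z (and, in this paper, also briefly adjusting the generator parameters) such that g_theta(z) reconstructs x and scoring x by the residual, fits in a few lines. The sketch below substitutes a toy linear generator for a trained network; the default k=5 follows the authors' stated choice, while the learning rate and the per-sample parameter reset are assumptions consistent with their replies.

```python
import numpy as np

rng = np.random.default_rng(0)
d_z, d_x = 4, 16
W0 = rng.standard_normal((d_x, d_z))  # stand-in 'trained' generator g(z) = W z

def anomaly_score(x, k=5, lr=0.01):
    """Gradient-descend on z and the generator parameters W to fit x;
    the residual after k steps is the anomaly score. W starts from, and
    is reset to, its trained value for each test point (otherwise the
    generator would eventually fit anomalies too)."""
    W = W0.copy()
    z = rng.standard_normal(d_z)
    for _ in range(k):
        r = W @ z - x                  # reconstruction residual
        gz = W.T @ r                   # gradient of 0.5*||r||^2 w.r.t. z
        gW = np.outer(r, z)            # gradient w.r.t. W
        z = z - lr * gz
        W = W - lr * gW
    return 0.5 * np.sum((W @ z - x) ** 2)

x_nominal = W0 @ rng.standard_normal(d_z)      # lies in the generator's range
x_anomalous = 3.0 * rng.standard_normal(d_x)   # generic point off that range
print("nominal score:  ", round(anomaly_score(x_nominal, k=25), 2))
print("anomalous score:", round(anomaly_score(x_anomalous, k=25), 2))
```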
[ 4, 6, 4, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_S1EfylZ0Z", "iclr_2018_S1EfylZ0Z", "iclr_2018_S1EfylZ0Z", "By9QpjXlf", "ryxTDKPlz", "BJ1oIDYlG" ]
iclr_2018_rJHcpW-CW
NOVEL AND EFFECTIVE PARALLEL MIX-GENERATOR GENERATIVE ADVERSARIAL NETWORKS
In this paper, we propose a mix-generator generative adversarial network (PGAN) model that works in parallel by mixing multiple disjoint generators to approximate a complex real distribution. In our model, we propose an adjustment component that collects all the generated data points from the generators, learns the boundary between each pair of generators, and provides an error signal to separate the support of each of the generated distributions. To overcome the instability in a multiplayer game, a shrinkage adjustment component method is introduced to gradually reduce the boundary between generators during the training procedure. To address the linearly growing training time problem in a multiple-generator model, we propose a method to train the generators in parallel. This means that our work can be scaled up to large parallel computation frameworks. We present an efficient loss function for the discriminator, an effective adjustment component, and a suitable generator. We also show how to introduce the decay factor to stabilize the training procedure. We have performed extensive experiments on synthetic datasets, MNIST, and CIFAR-10. These experiments reveal that the error provided by the adjustment component can successfully separate the generated distributions and that each of the generators can stably learn a part of the real distribution even if only a few modes are contained in the real distribution.
rejected-papers
The paper aims to address the mode collapse issue in GANs by training multiple generators and forcing them to be diverse. Reviewers agree that the proposed solution is not novel and has disadvantages such as increased parameters due to multiple generator models. The authors do not provide convincing arguments as to why the proposed approach should work well. The experiments presented also fail to demonstrate this. The results are limited to smaller MNIST and CIFAR10 datasets. Comparisons with approaches that directly address the mode collapse problem are missing.
train
[ "HyqGENDgz", "ByyDCx9xf", "BkQh8t5gz", "B1n0E0vlf", "S19N17XlM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Overall, the writing is very confusing at points and needs some attention to make the paper clearer. I’m not entirely sure the authors understand the material particularly well, as I found some of the arguments and narrative confusing or just incorrect. I don’t really see any significant contribution here except “we had this idea for this model, and it works”. There’s no interesting questions being asked about missing modes (and no answers through good experimentation), no insight that might contribute to our understanding of the problem, and no comparison to other models. My guess is this submission was rushed (and perhaps they were just looking for feedback). I like the idea, don’t get me wrong: a model that is trainable across multiple GPUs and that distributes generative work is pretty cool, and I want to see this work succeed (after a *lot* more work). But the paper really lacks what I’d consider good science, and I don’t see it publishable without significant improvement.\n\nPersonally I think you should change the angle from missing modes to parallel training. I don’t see any strong guarantees that the model will do what you say it will, especially as beta goes to zero.\n\nDetailed comments\n\nP1\n“, that explicitly approximate data distribution, the approximation of GAN is implicit”\nThe wording of this is pretty strange: by “implicit”, we mean that we only have *samples* from the distribution(s) of interest, but what does it mean for an approximation to be “implicit”?\n\nFrom the intro, it doesn’t sound like the approach is meant for the “mode collapse” problem, but for dealing with missing modes. These are different types of failures for GANs, and while there are many theories for why these happen, to my knowledge there’s no such consensus that these issues are the same.\nFor instance, what is keeping each of the generators from collapsing onto a single value? We often see the model collapse on several different values: why couldn’t each of your generators do this?\n\nP2: No, it is incorrect that the KL is what is causing mode collapse, and I think actually you mean “missing modes”. Arjovsky et al addresses the mode collapse problem, which is just another word for a type of instability in GANs. But this isn’t because of “vanishing gradients”, as the “proxy loss” (which you call “heuristic loss”, this isn’t a common term, fyi), which is what GANs are trained on in practice don’t vanish, but show some other sorts of instabilities (Arjovsky 2016). That said, other GAN variants without regularization also show collapse *and* missing modes, such as LSGAN and all the f-GAN variants (even the auto encoder variants).\n\nYou should also probably cite Che et al 2016 as another model that addressed missing modes. Also, what about ALI, BiGAN, and ALiCE? These also address missing modes (at least they claim to).\n\nI don’t understand why you’re comparing f-GAN and WGAN convergences: they are addressing different things with GANs: one shows insight into what exactly traditional GANs are doing (solving a dual problem of minimizing an f-divergence) versus addressing stability through using an IPM (though also a dual formulation of the wasserstein). f-GANs ensure neither stability nor non-vanishing gradients.\n\nP3: I like the breakdown of how the memory is organized.\nThis is for multi-GPU, correct? This needs to be explicitly stated.\n\nP6:\nThere’s a sign error in proof 1 (both in the definition of the reverse KL and when the loss is written out).\nAlso, the gradient w.r.t. 
theta magically appears in the second half.\nThis is a pretty roundabout way to arrive at the fact that you’re minimizing the reverse KL: I’m pretty sure this can be shown from the second term in f-GAN (the one where you sample from the generator), that is, f*(T), where f* is the convex conjugate of f(u) = -log(u).\n\nMixture of Gaussians: a common *missing modes* experiment.\n\nSo, my general comments about the experiments:\nYou need to compare to other models that address missing modes. Overall, many people have shown success with experiments similar to your simple mixture-of-Gaussians experiments, so in order to show something significant here, you will need more challenging experiments and a comparison to other models.\nThe real-world experiments are fairly unconvincing, as you only show MNIST and CIFAR-10 (and MNIST doesn’t look very good). Overall, the good inception scores aren’t too surprising given the model has several generators for each mode, but I think we need to see a demonstration on better datasets.", "Summary:\nThis paper proposes parallel GANs (PGANs). This is a new architecture which composes the generator from a mixture of weak generators, with the main intended purpose that each individual generator may suffer mode collapse, but as long as each generator collapses to a distinct mode, the combination of generators will cover the whole image distribution. The paper proposes a number of technical details to 1) ensure that each sub-generator offers distinct information (adjustment component, C) and 2) efficiently train the generators in parallel while accumulating information to update both the discriminator and the adjustment component. \nResults are shown on a synthetic dataset of Gaussian mixtures, demonstrating that the model does indeed find all modes within the data, and on two small real image datasets: MNIST and CIFAR-10. Overall, the parallel generator model results in a ~2x speedup in training time compared with a single complex generator model.\n\nStrengths:\nMode collapse in GANs is a timely and unsolved problem. While most work aims to construct auxiliary loss functions to prevent this collapse, this paper chooses to accept the collapse and instead encourages multiple models which collapse to unique modes. Though this does present a new problem in choosing the number of modes to estimate within a data source, the paper also presents a solution to systematically combine redundant modes over time, making the model more robust to the choice of the number of generators overall. \n\nWeaknesses:\nOrganization - The paper is quite difficult to read. Some concepts are presented out of order. For example, the notion of an adjustment component is very natural but not introduced until after it is mentioned a few times. Similarly, G_{-k} is mentioned many times but not clearly defined. I would suggest that the authors reorder the subsections in the method part to first outline the main idea (parallel generators to capture different parts of the overall distribution), mention the need to prevent redundancy between the generators (C), and mention some technical overhead in determining how to process all generated images by D. All of this may be discussed within the context of Fig 1. Also, Fig 1a-b may be combined and may aid in explanation. \n\nExperiments - Comparison is limited to single-generator models. Many other generator approaches exist beyond a single generator/discriminator GAN. In particular, different loss functions for training the generator (LS-GAN, etc.). 
Some relevant details are missing, such as why HogWild is used, or what it is. \n\nMinimal understanding - I would like to know what exactly each generator contributes on the real-world datasets. Can you show some generations from each mode? Is there a human-perceivable difference?\n\nFigure 4: why does the inception score for the single-generator models vary with the #generators?\n\nLast paragraph before 4.2.1: Please clarify this sentence - “we designed a relatively strong discriminator with a high learning rate, since the gradient vanish problem is not observed in reverse KL GAN.” \n\nTypo: last line page 7: “we the use” → “we use the”", "The paper proposes to use multiple generators to fix the mode collapse issue. The multiple generators are trained to be diverse. Each generator uses the reverse KL loss so that it models a single mode. One disadvantage is that it increases the number of networks (and hence the number of parameters). \n\nThe paper needs some additional experiments to convincingly demonstrate the usefulness of the proposed method. Experiments on a challenging dataset with a large number of classes (e.g., ImageNet, as done in the AC-GAN paper) would better illustrate the power of the method.\n\nAC-GAN paper:\nConditional Image Synthesis with Auxiliary Classifier GANs\nhttps://arxiv.org/pdf/1610.09585.pdf\n\nThe paper lacks clarity in some places and could use another round of editing/polishing.", "Hello, thanks for your comment. In GAP, multiple discriminators are trained, and the swap operator reduces the coupling between a generator and a discriminator, since a tight pair could lead to the mode collapse problem. However, in our proposed method, only one global discriminator is used, and each generator is trained to capture different modes of the data distribution. The extra component C will penalize those generators that collapse to the same mode. Another simple way to understand our proposed method is that each generator tries to capture the data distribution while keeping a distance from the other generators, so that the search space is partitioned into k separate parts (k is the number of generators) and each generator captures a certain part. \nSo our method is different from GAP: GAP uses a swap operator to bring different adversaries to each generator, while in our method, we partition the space using the extra component C, and each generator captures a certain part of the data distribution. ", "[1] Generative Adversarial Parallelization (GAP) is a framework where multiple generators and discriminators are trained simultaneously while exchanging their discriminators, which eliminates the tight coupling between a generator and a discriminator. Is there a relationship between the proposed method and GAP?\n\n[1] Daniel Jiwoong Im, He Ma, Chris Dongjoo Kim, Graham Taylor. Generative Adversarial Parallelization https://arxiv.org/abs/1612.04021" ]
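The separation mechanism discussed in this exchange can be illustrated with a one-dimensional toy: two 'generators' are pushed apart by a boundary supplied by an adjustment component C, with a shrinking weight beta playing the role of the decay factor. Everything below (the midpoint stand-in for C, the step sizes, the decay rate) is an assumption for illustration, not the paper's actual losses.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.1, -0.1])   # means of two toy 1-D generators
beta = 1.0                   # weight of the separation error from C

for step in range(200):
    s0 = mu[0] + 0.5 * rng.standard_normal(64)  # samples from generator 0
    s1 = mu[1] + 0.5 * rng.standard_normal(64)  # samples from generator 1
    # Adjustment component C: a midpoint threshold standing in for a
    # learned boundary between the two generators' samples.
    boundary = 0.5 * (s0.mean() + s1.mean())
    # Each generator is pushed away from the boundary; the pressure
    # shrinks over training ('shrinkage adjustment component').
    mu += beta * 0.05 * np.sign(mu - boundary)
    beta *= 0.98                                # decay factor

print("final means:", np.round(mu, 2))  # supports separated, then frozen
```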
[ 3, 6, 5, -1, -1 ]
[ 5, 4, 4, -1, -1 ]
[ "iclr_2018_rJHcpW-CW", "iclr_2018_rJHcpW-CW", "iclr_2018_rJHcpW-CW", "S19N17XlM", "iclr_2018_rJHcpW-CW" ]
iclr_2018_S1FQEfZA-
A Classification-Based Perspective on GAN Distributions
A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability. In this paper, we propose new techniques that employ a classification-based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data. These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets. They also indicate that GANs have significant problems in reproducing the more distributional properties of the training dataset. In particular, the diversity of such synthetic data is orders of magnitude smaller than that of the original data.
rejected-papers
The paper proposes a new metric to measure GAN performance by training a classifier on the true labeled dataset and then comparing the distribution of the labels of the generated samples to the true label distribution. Reviewers find that the paper is well written but lacks novelty and, being quite experimental, does not present any new insights. The paper investigates well-known mode collapse and diversity issues. Reviewers are not convinced that this is a good metric to measure sample quality or diversity, as the generator can drop examples far away from the boundary and still achieve a good score on this metric.
train
[ "H1WAH_kxM", "BkTles9xM", "rJCxSIslz", "BJ-hES67G", "SJYJ1Ip7z", "SkMOSHp7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Overall comments: Trying to shed light at comparison between different GAN variants, but the metrics introduced are not very novel, results are not comparable with prior work and older version of certain models are used (WGAN instead of Improved WGAN)\n\nSection 2.1: quantifying mode collapse\n* This section should mention Inception scores. A model which collapses on only one class will have a low inception score, and this metric also uses a conv net classifier, as the approach is very similar (the method is only mentioned briefly in section 2.3)\n* The authors might not be aware of concurrent work published before the ICLR deadline, which introduces a very similar metric: https://arxiv.org/abs/1706.08500\n\nSection 2.2: measuring diversity:\n* There is an inherent flaw in this metric, namely it trains one GAN per class. One cannot generalize from this metric on how different GAN models will perform when trained on the entire dataset. One model might be able to capture more diverse distributions, but lose a bit of quality, while another model might be able to create good samples when train on low diversity data. We already know that when looking at other generative models, we can find such examples. VAEs can obtain very good samples on celebA, a dataset with relative low diversity, but not so good samples on cifar. \n* The authors compare their experiment with Radford et al. (2015), but that needs to be done with caution. In Radford et al. (2015), the authors use a conditional generative model trained on the entire dataset. In that setting, this test is more suitable since one can test how good well the model has learned the conditioning. For example, for a conditional model trained on cats and dogs, a failure mode is that the model generates only cats. This failure mode can then be captured by this metric. However, when training two models, one on cats and one on dogs, this failure mode is not present since the data is already split into classes. \n* The proposed metric is not necessarily a diversity metric, it is also a quality metric: in a situation where all the models diverge and generate random noise, with high diversity, but without any structure. This metric would capture this issue, because a classifier will not be able to learn the classes, because there is no correlation between the classes and the generated images. \n\nExperimental results:\n* Positive insights regarding labels and celeba. Looks like subtle labels on faces are not being captured by GAN models. \nFigure 1 is hard to read. \n* ALI having higher diversity on celeba is explicitly mentioned in a paper the authors cite, namely “Do GANs actually learn the distribution? An empirical study”. Would be nice to mention that in the paper.\n\nWould like to see:\n* A comparison with the Improved Wasserstein GAN model. This model is now the one used by the community, as opposed to the original Wasserstein GAN.\n* Models trained on cifar, with the reported inception scores of the models on cifar. That makes the paper comparable with previous work and is a test against bugs in model implementations or other parts of the code. This would also allow to test for claims such as the fact that the Improved GAN has more mode collapse than DCGAN, while the Improved GAN paper says the opposite. \nThe reason why the authors chose the models they did for comparison. 
In the BiGAN (same model as ALI) paper, the authors report a low inception score, which suggests that their model is not able to capture the subtleties of the CIFAR dataset, and this seems to be correlated with the results obtained in this work. \n\n\n", "The paper proposes a new evaluation measure for evaluating GANs. Specifically, the paper proposes generating synthetic images using a GAN, training a classifier (for an auxiliary task, not the real vs fake discriminator) and measuring the performance of this classifier on held-out real data. \n\nWhile the idea of using a downstream classification task to evaluate the quality of generative models has been explored before (e.g. semi-supervised learning), I think that this is the first paper to evaluate GANs using such an evaluation metric.\n\nI'm not super convinced that this is a useful evaluation metric, as the absolute number is somewhat hard to interpret and dependent on the details of the classifier used. The results in Table 1 change quite a bit depending on the classifier. \n\nIt would be useful to add a discussion of the failure modes of the proposed metric. It seems like a generator which generates samples close to the classification boundary (but drops examples far away from the boundary) could still achieve a high score under this metric. \n\nIn the experiments, were different architectures used for different GAN variants?\n\nI think the mode-collapse evaluation metrics in MR-GAN are worth discussing in Section 2.1.\nMode Regularized Generative Adversarial Networks\nhttps://arxiv.org/abs/1612.02136", "This paper proposes to evaluate the distributions learned by GANs using classification-based methods. As two examples, the authors evaluate the mode collapse effect and measure the diversity of GAN distributions. The proposed approaches are experimental but do not require human inspection. The main idea is to fit a classifier on the training data and also learn a GAN model using the training data. Then one generates simulated data using the GAN and uses the classifier to predict the labels of the simulated data. The distribution of predicted labels and the labels of the true data can then be easily compared.\n\nDetailed comments:\n\n1. The proposed method is purely experimental. It would be better to gain some theoretical insights into this methodology. Moreover, in terms of experiments, it would be nice to consider more examples beyond mode collapse and diversity, since these problems are well known for GANs.\n\n2. Since mode collapse is a well-known phenomenon, the novelty of this paper is not sufficient.\n\n3. There are other measures of the quality of GANs. For example, the inception scores and mode scores (Salimans et al. 2016, Che et al. 2017). It would be nice to compare the method here with other related work.\n\nReferences:\n1. Improved Techniques for Training GANs https://arxiv.org/abs/1606.03498\n\n2. Mode Regularized Generative Adversarial Networks https://arxiv.org/abs/1612.02136\n\n", "We thank the reviewer for their comments. The reviewer accurately summarized the part of our experiments relating to mode collapse. However, we would like to remark that there is another part of our submission that provides a more general methodology for evaluating the quality of the learned distribution in GANs. It focuses on measuring the fidelity of class-wise decision boundaries. We view this as an important aspect of our work and thus want to bring it to the reviewer's attention. 
\n\nBefore addressing the specific points raised in the review, we would like to make a broader comment:\n\nThe overarching goal of generative models is to produce samples of comparable quality and diversity to the distribution they are trained on (the true distribution). While many of the state-of-the-art GANs produce synthetic samples that have good perceptual quality, it is rather clear at this point that the primary challenge for the current GANs is their inability to fully capture the diversity of the true distribution (and mode collapse is just one aspect of this problem). This phenomenon is well known at the intuitive level but remains fairly vague (in particular, the concept of mode collapse has so far been defined concretely only in the context of very simple models such as Gaussian mixtures). It is thus crucial to make the notion of “a GAN (not) capturing the diversity of the underlying distribution” precise and measurable. Otherwise, it will be difficult to make progress in this area in a principled manner.\n\nThe primary goal of our work is to address this exact challenge. Specifically, we view our key contribution to be putting forth a methodology that enables us to measure the “learned diversity” of GAN distributions in a quantitative manner and, crucially, to make this methodology fully scalable and automated (in particular, to make it not require visual inspection by a human). This way, the resulting benchmarks are directly applicable to actual state-of-the-art GANs on realistic high-dimensional image datasets (instead of having to rely solely on toy models and/or datasets). They can thus be used to, on one hand, obtain a much more fine-grained understanding of the distributions that actual state-of-the-art GANs learn and, on the other hand, to guide further development and progress in this domain. \n\nWe hope that the above point helps to clarify the extent and focus of the contribution of the proposed work.\n\nNow, to address the specific concerns of the reviewer:\n\n1. Our work, at this point, is indeed purely experimental, but this was intentional. On one hand, we wanted to first establish the utility of our general methodology for large-scale analysis of GAN distributions. On the other hand, we believe there is a need to first expand the experimental foundations of the topic before we are able to build a principled and well-grounded theory (as discussed above). In particular, we believe that our experimental insights, and a precise way to capture some of the key aspects of GAN distribution diversity, will provide exactly such foundations for the follow-up theoretical work. \n\n2. We would like to clarify that we do not claim that the discovery of mode collapse is a contribution of our work. In particular, prior work exploring this phenomenon has been discussed in Section 1. As discussed above, our goal is to provide a methodology for capturing and quantifying this phenomenon in state-of-the-art GAN settings. \n\n3. We compare our method to Inception scores for all diversity experiments (Tables 1 and 3). One should keep in mind, however, that Inception scores are computed based on labels from an Inception network pre-trained on ImageNet. It is not clear if these labels are meaningful for conventional GAN datasets like LSUN or CelebA, where images belong to the same high-level category (faces, rooms). Furthermore, a model could attain a high inception score even if it produces a single compelling image and does not capture the dataset diversity (Che et al. 
(2017), Hendrycks & Basart (2017)). Our methodology thus puts the GAN distributions through a more stringent test by analyzing how well they capture key characteristics of the underlying classes or modes in the true distribution. We will include a comparison to the metric proposed in Che et al. (2017) in the revised version of our paper. However, even the authors of that work suggest that their metric may not be appropriate for the CelebA and LSUN datasets (which are the datasets used in our paper). We would also like to emphasize that our choice of these datasets is deliberate, given that most GAN research is centered around these datasets, while most popular evaluation metrics seem better suited to other datasets such as CIFAR-10 and ImageNet.\n", "We thank the reviewer for their analysis and questions. Below we address specific concerns raised by the reviewer.\n\nSection 2.1: quantifying mode collapse\nWe compare to the Inception Score in all experiments performed in Section 3.4 (Tables 1 and 3) and will include a similar comparison for the mode collapse experiments in the paper revision. However, we believe that the Inception Score has certain shortcomings for measuring diversity, as discussed in point 3 of our response to AnonReviewer2. We thank the reviewer for pointing us to the concurrent work of Heusel et al. (2017). We will compare to it in the paper revision.\n\nSection 2.2: measuring diversity:\n* We would like to clarify the premise of our metric - if a GAN trained per class learned the true distribution for that class, then samples from these class-wise GANs would be able to accurately reconstruct the inter-class decision boundaries. Given the reduced performance of GANs on classification tasks when compared to true data, even for binary classification, it is clear that the GAN distributions are inherently lacking. \n\n* We agree with the reviewer that if a GAN is trained separately for each class, one cannot identify mode dropping (at the scale of whole classes) based on the resulting samples. However, we would like to clarify that this is the approach we propose for diversity studies, where we assess how good GANs are at classification (and do not attempt to directly measure mode collapse). In such a setting, we want to have balanced datasets, with an equal number of examples per class. \n\nOur approach to studying mode dropping is explained in Sections 2.1 and 3.3. This method does not involve training a separate GAN per class. In fact, a single GAN is trained on a multi-class dataset and then a pre-trained classifier is used as an annotator to observe learned modes. The approach proposed in Radford et al. (2015) uses a conditional GAN, wherein sampling requires specifying the class label, e.g., cat or dog. Thus it is unlikely that these GANs would generate only cats, since the images are produced conditioned on the label - similar to sampling from class-wise GANs. Hence we believe that the GANs used in the Radford et al. (2015) study would be less likely to exhibit dropping of entire classes as compared to a GAN trained unconditionally. Thus the latter is deliberately chosen for our mode collapse studies. \n\n* We agree with the reviewer that classification performance is a measure of both quality and diversity. However, in the setting of common GANs, images are known to have good perceptual quality. This prior is reinforced by the high ‘confidence scores’ and Inception scores observed in our experiments for the synthetic GAN samples (Section 3.4). 
Thus we believe that the reduced classification performance is more a result of lower diversity than quality.\n\nExperimental results:\n\n* We will make Figure 1 more readable in the paper revision. We will also include the reference to Arora & Zhang (2017) in the comment on ALI diversity.\n\n* In this paper, we tried to evaluate both GANs that are somewhat classic (e.g., DCGAN and WGAN) and some newer GANs. We also chose GANs that have been studied by similar prior work (Arora & Zhang (2017)). Our objective was not to provide a relative ranking of all existing GANs but to propose a methodology to perform large-scale automated analysis of the learned image distributions in GANs. So, our choice of GANs was focused on demonstrating this methodology on popular variants. It is straightforward to apply this suite of tools to any other GAN, and we will include an evaluation of Improved WGAN in the paper revision.\n\n“Would like to see”:\n\nIn this paper, we use standard implementations for all GANs as mentioned in Section 5.1.2. We tried to use CIFAR-10 as a dataset in our evaluations. However, in our experiments, most of the GANs did not generate meaningful images on this dataset, as these GANs have not been optimized for this setting. Thus, we decided to use datasets for which these GANs have been optimized: CelebA and LSUN.\n\nHowever, as an additional sanity check of our setup, below we provide results for DCGAN trained on CIFAR-10 (as this GAN produces realistic samples on CIFAR-10). Our experiment was as follows (a schematic sketch of this evaluation pipeline appears at the end of this entry):\n1. We trained an unconditional DCGAN on CIFAR-10. The Inception Score measured on 50000 GAN samples is 6.5104±0.0698087, similar to prior art (https://github.com/xunhuang1995/SGAN).\n2. We trained 10 class-wise DCGANs on CIFAR-10. The Inception Score measured on 50000 GAN samples, 5000 from each class-wise GAN, is 5.94101±0.051225. We performed diversity studies (Section 3.4) on these samples. A ResNet trained on the true CIFAR-10 without data augmentation gets test accuracy of 85.2% after 60k steps, whereas on DCGAN samples it gets near-random performance of 17.4% (training accuracy is 100%).\n\nWe will include a comparison of DCGAN and Improved GAN on CIFAR-10 in the paper revision.\n", "We thank the reviewer for their analysis and comments. Below we address specific questions and concerns:\n\n1. We agree with the reviewer that the absolute value of the scores might depend on the choice of classifier used. This is, however, true even for the Inception score, another popular metric used to evaluate GANs. (After all, the Inception score directly relies on the Inception architecture.) Still, we believe that the relative performance of different GANs and, especially, how they compare to true data performance already provides sufficiently meaningful information. It is worth noting that true data gets more than 90% classification accuracy on both the simple linear classifier and a deep ResNet, whereas synthetic GAN data performance is far from comparable to true data for either of the classifiers. In our experiments, we also studied other classifier networks and observed that the trends of the relative performance of the GANs were preserved irrespective of the choice of classifier.\n\n2. We thank the reviewer for this comment and will include a discussion of the failure modes of our approach in the revised version of the paper. It is true that a GAN could attain a good classification score by generating samples close to the decision boundary. 
However, using the proposed mode collapse experiments we would be able to identify this phenomenon. In these experiments, we use a pre-trained classifier to annotate the data and measure dropping of modes. Here, using a classifier with really fine-grained annotation (more than just two classes), we could ascertain whether the GAN is omitting modes away from the decision boundary. \n\nMore importantly, we believe that quantifying the behavior of a generative model inherently requires using multiple metrics. Given that we aim to evaluate very complex behaviour in terms of simple, concise metrics, each of these metrics individually will inevitably conflate some aspects. In particular, our diversity score, by definition, can only capture certain aspects of diversity and thus should be used in conjunction with other evaluation metrics that would be more sensitive to, e.g., a dataset concentrated around the decision boundary.\n\n3. For every GAN, we used the standard architecture, code and hyperparameters provided by the respective authors as listed in Section 5.1.2. We did this to compare the most optimized and best-performing version of each GAN, as well as to emphasize that our methodology can be used “out of the box”, without a need for customization. For the classifiers, the same architecture, code, and hyperparameters were used for both true data and synthetic data. Thus in any column of Table 1 and Table 3, the results are based on exactly the same classifier network.\n\n4. We thank the reviewer for this suggestion and will include the metric proposed in MR-GAN (Che et al. (2017)) as a comparison in the revised version of our paper. It is worth noting, however, that even the authors of this paper suggest that their metric may not be appropriate for the CelebA and LSUN datasets (as mentioned in our reply to AnonReviewer2, we focused on these two datasets because they are the most commonly used in the context of state-of-the-art GANs).\n" ]
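To make the classifier-based diversity test described in the responses above concrete, here is a minimal PyTorch sketch; the `generators` list (one trained generator per class), the 128-d latent size, and the plain training loop are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of the diversity test: train a classifier on labeled samples
# drawn from class-wise GANs, then test it on the *real* held-out test set.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

def sample_synthetic_dataset(generators, per_class=5000, z_dim=128):
    """Draw a labeled dataset from class-wise GANs (label = generator index)."""
    xs, ys = [], []
    for label, gen in enumerate(generators):
        with torch.no_grad():
            xs.append(gen(torch.randn(per_class, z_dim)))
        ys.append(torch.full((per_class,), label, dtype=torch.long))
    return torch.cat(xs), torch.cat(ys)

def diversity_score(classifier, optimizer, generators, true_test_loader, epochs=10):
    """Train `classifier` on GAN samples, then measure accuracy on real data."""
    x_syn, y_syn = sample_synthetic_dataset(generators)
    loader = DataLoader(TensorDataset(x_syn, y_syn), batch_size=128, shuffle=True)
    classifier.train()
    for _ in range(epochs):
        for xb, yb in loader:
            optimizer.zero_grad()
            F.cross_entropy(classifier(xb), yb).backward()
            optimizer.step()
    classifier.eval()
    correct = total = 0
    with torch.no_grad():
        for xb, yb in true_test_loader:
            correct += (classifier(xb).argmax(dim=1) == yb).sum().item()
            total += yb.numel()
    return correct / total
```

A large gap between this score and that of the same classifier trained on real data (85.2% vs. 17.4% in the DCGAN/CIFAR-10 check above) is then read as missing diversity rather than poor sample quality.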
[ 5, 6, 3, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_S1FQEfZA-", "iclr_2018_S1FQEfZA-", "iclr_2018_S1FQEfZA-", "rJCxSIslz", "H1WAH_kxM", "BkTles9xM" ]
iclr_2018_B1tExikAW
LatentPoison -- Adversarial Attacks On The Latent Space
Robustness and security of machine learning (ML) systems are intertwined, wherein a non-robust ML system (classifiers, regressors, etc.) can be subject to attacks using a wide variety of exploits. With the advent of scalable deep learning methodologies, a lot of emphasis has been put on the robustness of supervised, unsupervised and reinforcement learning algorithms. Here, we study the robustness of the latent space of a deep variational autoencoder (dVAE), an unsupervised generative framework, to show that it is indeed possible to perturb the latent space, flip the class predictions and keep the classification probability approximately equal before and after an attack. This means that an agent that looks at the outputs of a decoder would remain oblivious to an attack.
rejected-papers
The paper proposes to launch adversarial attacks in the latent space of a VAE such that a minimal change in the latent representation leads the decoder to produce an image with altered class predictions. Given the pros/cons, the paper in its current form falls short of acceptance. Pros: Reviewers agree that the paper is well written and easy to follow. Cons: - The paper lacks novelty and uses standard attack and defense methodology. - Reviewers find the attack scenario presented unrealistic and hence possibly not useful. - Experiments lack rigorous comparisons with baselines, and it is not clear if the attack in the latent space will be stronger than an attack in the input space.
train
[ "H1I-1LYxf", "HkCuu2YxG", "B1xzWeqgG", "rJUUQkn7f", "Sk9azy3mf", "BJ5KMJhmf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The idea is clearly stated (but lacks some details) and I enjoyed reading the paper. \n\nI understand the difference between [Kos+17] and the proposed scheme but I could not understand in which situation the proposed scheme works better. From the adversary's standpoint, it would be easier to manipulate inputs than latent variables. On the other hand, I agree that sample-independent perturbation is much more practical than sample-dependent perturbation.\n\nIn Section 3.1, the attack methods #2 and #3 should be detailed more. I could not imagine how VAE and T are trained simultaneously.\n\nIn Section 3.2, the authors listed a couple of loss functions. How were these loss functions are combined? The final optimization problem that is used for training of the propose VAE should be formally defined. Also, the detailed specification of the VAE should be detailed.\n\nFrom figures in Figure 4 and Figure 5, I could see that the proposed scheme performs successfully in a qualitative manner, however, it is difficult to evaluate the proposed scheme qualitatively without comparisons with baselines. For example, can the proposed scheme can be compared with [Kos+17] or some other sample-dependent attacks? Also, can you experimentally show that attacks on latent variables are more powerful than attacks on inputs?\n\n\n", "This paper misses the point of what VAEs (or GANs, in general) are used for. The idea of using VAEs is not to encode and decode images (or in general any input), but to recover the generating process that created those images so we have an unlimited source of samples. The use of these techniques for compressing is still unclear and their quality today is too low. So the attack that the authors are proposing does not make sense and my take is that we should see significant changes before they can make sense. \n\nBut let’s assume that at some point they can be used as the authors propose. In which one person encodes an image, send the latent variable to a friend, but a foe intercepts it on the way and tampers with it so the receiver recovers the wrong image without knowing. Now if the sender believes the sample can be tampered with, if the sender codes z with his private key would not make the attack useless? I think this will make the first attack useless. \n\nThe other two attacks require that the foe is inserted in the middle of the training of the VAE. This is even less doable, because the encoder and decoder are not train remotely. They are train of the same machine or cluster in a controlled manner by the person that would use the system. Once it is train it will give away the decoder and keep the encoder for sending information.\n\n", "This paper is concerned with both security and machine learning. \nAssuming that data is encoded, transmited, and decoded using a VAE,\nthe paper proposes a man-in-middle attack that alters the VAE encoding of the input data so that the decoded output will be misclassified.\nThe objectives are to: 1) fool the autoencoder; the classification output of the autoencoder is different from the actual class of the input; 2) make minimal change in the middle so that the attack is not detectable. \n\nThis paper is concerned with both security and machine learning, but there is no clear contributions to either field. From the machine learning perspective, the proposed \"attacking\" method is standard without any technical novelty. From the security perspective, the scenarios are too simplistic. 
The encoding-decoding mechanism being attacked is too simple, without any security enhancement. This is an unrealistic scenario. For applications with security concerns, there should have been methods to guard against man-in-the-middle attacks, and the paper should have at least considered some of them. Without considering state-of-the-art security defense mechanisms, it is difficult to judge the contribution of the paper to the security community. \n\nI am not a security expert, but I doubt that the proposed methods are formulated based on well-founded security concepts and ideas. For example, what are the necessary and sufficient conditions for an attacking method to be undetectable? Are the criteria about the magnitude of epsilon given in Section 3.3 necessary and sufficient? Is there any reference for them? Why do we require the correspondence between the classification confidence of transformed and original data? Would it be enough to match the DISTRIBUTION of the confidence? ", "Thank you for your careful reading of our paper and for providing useful feedback. We have addressed your comments below:\n\n1. I understand the difference between [Kos+17] and the proposed scheme but I could not understand in which situation the proposed scheme works better. \n(a) From the adversary's standpoint, it would be easier to manipulate inputs than latent variables. \n(b) On the other hand, I agree that sample-independent perturbation is much more practical than sample-dependent perturbation.\n\nResponse: (a) If it is possible to perturb the input to the encoder, it makes sense that it should be equally easy to perturb the input to the decoder. Possible confusion here may arise from the fact that the perturbation in [Kos2017] is applied to an image selected by the user; therefore, the perturbation must still be applied in an online manner, unlike traditional adversarial examples {CITE}, which may be synthesised offline and supplied to an algorithm.\n\n2. In Section 3.1, the attack methods #2 and #3 should be detailed more. I could not imagine how the VAE and T are trained simultaneously.\n\nResponse: The way that we trained these was, in each epoch, to update the parameters of the VAE and then those of the adversary, T. If you are suggesting that the adversary would not have access to the model during training, we point towards this quotation:\n\n“Attackers may also act during the learning process, for example tampering with some of the training data, or reading intermediate states of the learning system.” - Abadi et al. (On the Protection of Private Information in Machine Learning Systems, 2017).\n\n3. In Section 3.2, the authors listed a couple of loss functions. How were these loss functions combined? The final optimization problem that is used for training the proposed VAE should be formally defined. Also, the specification of the VAE should be given in detail.\n\nResponse: In section 3.2 we define 3 losses: Jvae for updating the VAE, Jclass for updating the classifier (for method #3 only), and Jz for updating the adversarial transform model, T. The loss functions were not combined because they are each used to train a different model in the system. \n(a) In method #1, the VAE and T are not updated at the same time; therefore, it does not make sense to combine Jvae and Jz in the same cost function. The VAE is first trained to minimise Jvae, then T is trained to minimise Jz. 
The classifier is learned separately too, and does not update the parameters of the VAE.\n(b) In method #2, for each epoch the VAE is updated using Jvae and then T is updated using Jz (a schematic sketch of this schedule appears after this exchange). The losses update different sets of parameters. The classifier is learned separately and does not update the parameters of the VAE.\n(c) In method #3, arguably we could have written a new cost function, J = Jvae + Jclass; however, since this applies only to method #3, we did not include it.\n\n4. From Figures 4 and 5, I could see that the proposed scheme performs successfully in a qualitative manner; however, it is difficult to evaluate the proposed scheme quantitatively without comparisons with baselines. For example, can the proposed scheme be compared with [Kos+17] or some other sample-dependent attacks? Also, can you experimentally show that attacks on latent variables are more powerful than attacks on inputs?\n\nThese experiments could certainly be carried out. However, there would not be space to add this comparison here.", "Thank you for your comments; we have addressed them below:\n\n1. This paper misses the point of what VAEs (or GANs, in general) are used for. The idea of using VAEs is not to encode and decode images (or in general any input), but to recover the generating process that created those images so we have an unlimited source of samples. The use of these techniques for compression is still unclear and their quality today is too low. So the attack that the authors are proposing does not make sense, and my take is that we should see significant changes before it can make sense. \n\nResponse: While we appreciate that the specific VAE architecture is not directly used for image compression, autoencoders more generally have been used for image compression {Theis2017} and outperform industry-standard compression techniques such as JPEG. We chose to look at the VAE as an example of *an* autoencoder, as this appeared to be a good starting point for research in this area. Further, it was not possible (given the space and resource constraints) to apply this technique to all state-of-the-art autoencoding compression models. Instead, we chose a model that shares many properties with state-of-the-art autoencoding compression models. \n\nDeep autoencoders are being developed for tasks such as compression, and the purpose of our paper is to expose vulnerabilities in deep autoencoders, so that these vulnerabilities may be kept in mind when developing such algorithms.\n\n2. But let’s assume that at some point they can be used as the authors propose, in which one person encodes an image and sends the latent variable to a friend, but a foe intercepts it on the way and tampers with it so the receiver recovers the wrong image without knowing. Now, if the sender believes the sample can be tampered with, would coding z with his private key not make the attack useless? I think this would make the first attack useless. \n\nResponse: If the sender encodes z with his own private key, rather than with the encoder, the decoder will not be able to decode the image at all. \n\n3. The other two attacks require that the foe is inserted in the middle of the training of the VAE. This is even less doable, because the encoder and decoder are not trained remotely. They are trained on the same machine or cluster, in a controlled manner, by the person that would use the system. 
Once it is trained, it will give away the decoder and keep the encoder for sending information.\n\nResponse: The adversary itself does not affect the training. It simply learns alongside the autoencoding model. The only assumption we make is that the adversary can access the encodings and see the outputs of the autoencoder model during training. Abadi et al. suggest that this is not an unreasonable assumption:\n\n“Attackers may also act during the learning process, for example tampering with some of the training data, or reading intermediate states of the learning system.” - Abadi et al. (On the Protection of Private Information in Machine Learning Systems, 2017).\n\nFurther, training on clusters often suggests training remotely and the transfer of models, data, and results between machines.", "Thank you for your comments and feedback. We have addressed your concerns below:\n\n1. This paper is concerned with both security and machine learning, but there are no clear contributions to either field. From the machine learning perspective, the proposed \"attacking\" method is standard without any technical novelty.\n\nAll previous work, to our knowledge, has considered learning attacks on *image* space, not encoding space. Further, the perturbation in image space is such that a valid image can be constructed (but the class is flipped, according to a classifier). Please note that in the most similar relevant work, Kos (2017), the attack requires access to the encoder; ours does not. We consider these to be worthy and significant technical contributions.\n \n2. From the security perspective, the scenarios are too simplistic. The encoding-decoding mechanism being attacked is too simple, without any security enhancement. This is an unrealistic scenario. For applications with security concerns, there should have been methods to guard against man-in-the-middle attacks, and the paper should have at least considered some of them. Without considering state-of-the-art security defense mechanisms, it is difficult to judge the contribution of the paper to the security community. \n\nResponse: The purpose of the paper is to expose vulnerabilities in deep autoencoders. A significant amount of research has been done to explore the application of autoencoders. Perhaps the most significant are the compressive autoencoders {Theis2017}, some of which outperform JPEG compression. If these systems were to be deployed, it would be appropriate to be aware of their vulnerabilities. It makes sense to simultaneously develop (1) attack, (2) defence and (3) autoencoder technologies, so that when autoencoder technologies are deployed, we know that they can be deployed safely. Our experience with teams using latent spaces to encode images for downstream tasks is that latent spaces tend to be thought of as incorruptible. This is clearly not the case.\n\nNone of the other previous, widely cited papers {Goodfellow2014, Kos2017, Papernot2016} consider any other layers of security, such as encryption. We agree that, ultimately, these should be taken into account, but best practice in cybersecurity involves examining each layer of security *independently*. \n\n3. I am not a security expert, but I doubt that the proposed methods are formulated based on well-founded security concepts and ideas. For example, what are the necessary and sufficient conditions for an attacking method to be undetectable? \n\nResponse: Good question. This is what we started to explore in the appendix (Section 6.4). 
We do not think the answer can be given in anything other than a probabilistic sense. We implicitly think about the scenario in which the only way to detect an attack would be through examining the likelihood that a given unit of an encoding lies outside certain confidence intervals of the encoding distribution. For this paper, we considered a Gaussian prior on each element of the latent encoding space; the logical extension to this would be to run experiments and develop the principle behind detecting a change in at least one unit.\n\n4. Are the criteria about the magnitude of epsilon given in Section 3.3 necessary and sufficient? Is there any reference for them?\n\nResponse: No, we would only be able to make decisions based on probabilistic measures.\n\n5. Why do we require the correspondence between the classification confidence of transformed and original data? \n\nResponse: For any particular classifier, we may know its classification confidence under normal operating conditions. If an attack occurs, to produce a “transformed” data sample, then it is possible that the confidence with which a sample is classified is different from that of “original” data samples that the model was validated on. If this difference is detectable, then we may say that the attack is detectable. If the difference is not detectable, we may not know if an attack has taken place.\n\nIndeed, by simply sending the same sample multiple times and comparing classification confidence, you would be able to tell whether any samples had been contaminated.\n\nThese experiments are designed to show that the attack is not easily detectable in the image space.\n\n6. Would it be enough to match the DISTRIBUTION of the confidence? \n\nIt is not entirely clear what you mean by “match”. I agree that it would make sense to derive a distribution of confidence values for a system operating under normal conditions; then it may be easier to identify whether a corrupted sample is an outlier. However, the distribution is likely to be Gaussian, suggesting that to describe the distribution it is sufficient to know the mean and standard deviation. Note that, rather than doing this in image space, based on classification confidence, we proposed to do this in latent space. We explore these ideas in Appendix section 6.4." ]
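A minimal sketch of the alternating schedule described in the responses above (method #2): each step updates the VAE on Jvae and then the adversarial transform T on Jz. The concrete form of Jz below, flipping a frozen binary classifier's prediction while keeping the latent perturbation small, and the `vae.encode`/`vae.decode` interfaces are illustrative assumptions, not the paper's exact objective.

```python
# Hedged sketch of training the adversary T alongside the VAE (method #2).
# Assumed interfaces: vae.encode(x) -> (mu, logvar), vae.decode(z) -> x_hat;
# T is a small network on latent codes; clf is a pre-trained binary classifier.
import torch
import torch.nn.functional as F

def train_step(vae, T, clf, x, opt_vae, opt_T, lam=1.0):
    # --- step 1: the usual VAE update (Jvae = reconstruction + KL) ---
    opt_vae.zero_grad()
    mu, logvar = vae.encode(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    j_vae = F.mse_loss(vae.decode(z), x) + kl
    j_vae.backward()
    opt_vae.step()

    # --- step 2: adversary update (Jz); only T's parameters are stepped ---
    opt_T.zero_grad()
    z = z.detach()
    z_adv = T(z)                          # tampered latent code
    logits = clf(vae.decode(z_adv))
    flipped = 1 - logits.argmax(dim=1)    # binary case: target the other class
    j_z = (F.cross_entropy(logits, flipped)
           + lam * (z_adv - z).pow(2).sum(dim=1).mean())  # keep perturbation small
    j_z.backward()
    opt_T.step()
```

Because `opt_T` holds only T's parameters, the VAE and classifier stay fixed in the second step, matching the claim that the adversary merely observes encodings and outputs during training.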
[ 5, 3, 4, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_B1tExikAW", "iclr_2018_B1tExikAW", "iclr_2018_B1tExikAW", "H1I-1LYxf", "HkCuu2YxG", "B1xzWeqgG" ]
iclr_2018_ryepFJbA-
On Convergence and Stability of GANs
We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We hypothesize the existence of undesirable local equilibria in this non-convex game to be responsible for mode collapse. We observe that these local equilibria often exhibit sharp gradients of the discriminator function around some real data points. We demonstrate that these degenerate local equilibria can be avoided with a gradient penalty scheme called DRAGAN. We show that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions.
rejected-papers
Pros: The proposed regularization for GAN training is interesting and simple to implement. Cons: - Reviewers agree that the methodology is incremental over WGAN with gradient penalty, and the modification is not well motivated. - Experimental results do not clearly demonstrate the benefits of the proposed algorithm, and the paper also lacks comparisons with related work. Given the pros/cons, the committee feels the paper is not ready for acceptance in its current state.
train
[ "ByPQQOX1G", "SyYO2aIlG", "Hkd3vAUeG", "Bk9rWSD-f", "rJbz9QP-z", "H1HD-BDWf", "ryKUsoYbf", "S1wmdHPWf", "B1nRUuAgG", "S1GEAvq0Z", "S1EGL-uC-", "rJ91vcOCb", "ryIPRnuCb", "HkXuRauAZ", "HkxbyyKCb", "Sk8vLeK0b", "Hy1sFOsRW", "r184LvsCZ", "HyVsPP5CZ", "HkcJxbYRb", "r1nCXkK0Z", "BJlRh0dC-", "Bk-qS6_RW", "S1yaH2dAb", "rJxYq4uAZ", "ByD_wguCW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public", "public", "public", "public", "public", "public", "public", "public" ]
[ "Summary\n========\nThe authors present a new regularization term, inspired from game theory, which encourages the discriminator's gradient to have a norm equal to one. This leads to reduce the number of local minima, so that the behavior of the optimization scheme gets closer to the optimization of a zero-sum games with convex-concave functions.\n\n\nClarity\n======\nOverall, the paper is clear and well-written. However, the authors should motivate better the regularization introduced in section 2.3.\n\n\nOriginality\n=========\nThe idea is novel and interesting. In addition, it is easy to implement it for any GANs since it requires only an additional regularization term. Moreover, the numerical experiments are in favor of the proposed method.\n\n\nComments\n=========\n- Why should the norm of the gradient should to be equal to 1 and not another value? Is this possible to improve the performance if we put an additional hyper-parameter instead?\n\n- Are the performances greatly impacted by other value of lambda and c (the suggested parameter values are lambda = c = 10)?\n\n- As mentioned in the paper, the regularization affects the modeling performance. Maybe the authors should add a comparison between different regularization parameters to illustrate the real impact of lambda and c on the performance.\n\n- GANs performance is usually worse on very big dataset such as Imagenet. Does this regularization trick makes their performance better?\n\n\n\nPost-rebuttal comments\n---------------------------------\n\nI modified my review score, according to the problems raised by Reviewer 1 and 3. Despite the idea looks pretty simple and present some advantages, the authors should go deeper in the analysis, especially because the idea is not so novel.", "This paper addresses the well-known stability problem encountered when training GANs. As many other papers, they suggest adding a regularization penalty on the discriminator which penalizes the gradient with respect to the data, effectively linearizing the data manifold.\n\nRelevance: Although I think some of the empirical results provided in the paper are interesting, I doubt the scientific contribution of this paper is significant. First of all, the penalty the author suggest is the same as the one suggest by Gulrajani for Wasserstein GAN (there the motivation behind this penalty comes from the optimal transport plan). In this paper, the author apply the same penalty to the GAN objective with the alternative update rule which is also a lower-bound for the Wasserstein distance.\n\nJustification: The authors justify the choice of their regularization saying it linearizes the objective along the data manifold and claim it reduces the number of non-optimal fixed points. This might be true in the data space but the GAN objective is optimized over the parameter space and it is therefore not clear to me their argument hold w.r.t to the network parameters. Can you please comment on this?\n\nRegularizing the generator: Can the authors motivate their choice for regularizing the discriminator only, and not the generator? Following their reasoning of linearizing the objective, the same argument should apply to the generator.\n\nComparison to existing work: This is not the first paper that suggests adding a regularization. Given that the theoretical aspect of the paper are rather weak, I would at least expect a comparison to existing regularization methods, e.g.\nStabilizing training of generative adversarial networks through regularization. 
NIPS, 2017\n\nChoice of hyper-parameters: The authors say that the suggested value for lambda is 10. Can you comment on the choice of this parameter and how it affects the results? Have you tried annealing lambda? This is a common procedure in optimization (see, e.g., homotopy or continuation methods).\n\nBogoNet score: I very much like the experiment where the authors select 100 different architectures to compare their method against the vanilla GAN approach. I have two questions here:\n- Did you do a deeper examination of your results, e.g., were there some architectures for which neither method performed well?\n- Did you try to run this experiment on other datasets?\n", "This paper contains a collection of ideas about Generative Adversarial Networks (GAN) but it is very hard for me to get the main point of this paper. I am not saying the ideas are not interesting, but I think the authors need to choose the main point of the paper, and should focus on delivering in-depth studies on that main point.\n\n1. On the game-theoretic interpretations \n\nThe paper, Generative Adversarial Nets, NIPS 2014, already presented the game-theoretic interpretation of GANs, so it is hard for me to see what is new in this section. Best-response dynamics are not used in conventional GAN training, because it is very hard to find the global optimum of the inner minimization and outer maximization.\nThe convergence of the online primal-dual gradient descent method in the minimax game is already well known, but this analysis cannot be applied to the usual GAN setting because the objective is not convex-concave. I would find this analysis very interesting if the authors could construct a toy example where the GAN becomes convex-concave by using different model parameterizations and/or different f-divergences, and conduct various studies on convergence and stability for this problem.\n\nI also found that the hypothesis on mode collapse has a very limited connection to the convex-concave case. It is OK to form the hypothesis and present an interesting research direction, but in order to make this a main point of the paper, the authors should provide more rigorous arguments or experimental studies instead of jumping to the hypothesis in two sentences. For example, if the authors could provide a toy example contrasting the convex-concave and non-convex-concave cases, and show how the loss surface or gradient dynamics change, that would provide very valuable insights into the problem. \n\n2. DRAGAN\n\nAs open commenters pointed out, I found it difficult to see why we want to make the norm of the gradient equal to 1.\nWhy not 2? Why not 1/2? Why is 1 so special?\nIn the WGAN paper, the weights are clipped to a small range, because this is a sufficient condition for being Lipschitz, but this paper provides no justification for this number.\nIt is OK not to have theoretical answers to these questions, but in that case the authors should provide ablation experiments: for example, sweeping the gradient-norm target over 10^-3, 10^-2, 10^-1, 1.0, 10.0, etc., and reporting the impact on performance.\nAlso, scheduling the regularization parameter, e.g., reducing lambda exponentially, would be interesting as well.\nMost of those studies won't be necessary if the theory is sound. However, since this paper does not provide a justification for the magic number \"1\", I think it's better to include some form of ablation studies.\n\nNote that items 1 and 2 are not strongly related to each other and could be two separate papers. 
I recommend choosing one direction and providing an in-depth study on one topic. Currently, this paper tries to present interesting ideas without very deep investigation, and I cannot recommend this paper for publication.\n", "Clarification regarding the importance of our theory sections:\n\nWe admit that the clarity in our presentation was lacking (especially ties to the GAN literature) and tried to address it in the new revision. We urge you to please take another look. Specifically, our contributions (reflected in the updated abstract and introduction) are:\n\n- We propose to study GAN training as regret minimization. This is a completely novel contribution. In contrast, the popular view is that there is consistent divergence minimization, and this is based on the unrealistic assumption that the discriminator is playing optimally at each step and making these updates in the function space. This is neither tractable nor close to what happens in practice. This forms the main motivation for our paper.\n\n- We present the analysis of the artificial convex-concave case based on standard results in the game theory literature. More importantly, along the way, we make explicit the connection between the GAN training process (alternating gradient updates) and regret minimization in section 2.2. These are not widely known results in the GAN literature, and we provide supporting references in the new revision.\n\n**The useful outcome is that this analysis yields a novel proof for the asymptotic convergence of GAN training in the non-parametric limit, and it does not require the discriminator to be optimal at each step.**\nThe current revision reflects this message. \n\n- To explain mode collapse, we next analyze the realistic non-convex case in section 2.3 from a regret minimization perspective. This is actually very different from the convex-concave case (we apologize for the confusion), and we cite the works of Hazan et al. to rigorously argue that convergence to potentially bad local equilibria happens under gradient updates (under some conditions). Please see the updated section 2.3.\n\nThis leads us to the main hypothesis of our paper: that mode collapse is just an undesirable local equilibrium and it should be possible to avoid it. We apologize for not being clear earlier. The natural question now is how we can avoid these equilibria. \n\n- A new section 2.4 has been added which explains how we can avoid 'mode collapse' equilibria (this was implicit and not clear earlier). Based on empirical observations, we basically characterize 'mode collapse' equilibria by sharp gradients of the discriminator function around some real data points. This is key to fighting mode collapse and avoiding such undesirable equilibria. We provide arguments and supporting experiments for this in the new section 2.4. This was a key transition that was missing in the earlier version.\n\n- From this motivation (of keeping D's gradients small in the ambient data space), we propose the DRAGAN penalty scheme. In fact, our theory also explains how other gradient penalties (WGAN-GP/LS-GAN) might be mitigating mode collapse. We compare and discuss its advantages over them in section 2.5, and present the main experiments in section 3.", "We found your review to be extremely helpful in understanding where our paper was lacking. Our paper did have multiple new ideas, and the presentation wasn't always clear (ties to the GAN literature were missing). 
Thanks to your feedback, we chose the most important strand and strengthened it in the current revision using additional theoretical arguments and targeted experiments. The core content is still the same, but the presentation has been changed significantly for improved clarity. We urge you to take another look.\n\nSpecifically, the main points now read as follows (reflected in the updated abstract and introduction):\n- We propose to study GAN training as regret minimization (this is a novel view), which is in contrast to the popular view that there is consistent divergence minimization. More about this below.\n- We provide a novel proof for the asymptotic convergence of GAN training in the non-parametric limit, and it does not require the discriminator to be optimal at each step.\n- Regret minimization (AGD) in non-convex games converges to potentially bad local equilibria under some conditions, and we hypothesize that mode collapse results from this. Please see the updated section 2.3, where we added theoretical arguments to support this. The next question is how we can avoid these equilibria. \n- We characterize mode collapse equilibria by sharp gradients of the discriminator function around some real data points. We provide toy experiments and arguments in a new section 2.4 to support this. We apologize for missing this key transition earlier.\n- From this motivation, we propose the DRAGAN penalty scheme. We compare and discuss advantages over WGAN-GP and LS-GAN in section 2.5, and present the main experiments in section 3.\n\nTheory sections:\n\n1. We should have called section 2.1 'background' since it reviews the original GAN paper's formulation, and we apologize for not making this clear.\n\n2. However, studying GAN training dynamics as regret minimization is a completely novel contribution. The popular divergence minimization hypothesis stems from D using a best-response algorithm in the function space. This is neither tractable, as you mention, nor close to what happens in practice. In fact, this is the main motivation for our paper.\n\n3. We agree with you that the content in section 2.3 is well known in the game theory literature and, moreover, that the convex-concave case is artificial. Our goal here was simply to review these results, introduce formal notions along the way (which we use in later sections) and, most importantly, make explicit the connection between GAN training (alternating gradient updates) and regret minimization. None of this is widely known in the GAN literature, and we support this claim with references as recent as 2017.\n**The useful outcome is that the analysis yields a novel proof for the asymptotic convergence of GAN training in the non-parametric limit, and it does not require the discriminator to be optimal at each step.**\nThe current revision reflects this message. \n\n4. To explain mode collapse, we analyze the realistic non-convex case in section 2.3. This is actually very different from the convex-concave case (we apologize for the confusion), and we cite the works of Hazan et al. to rigorously argue that convergence to local equilibria can be expected using regret minimization or OGD. This leads us to the main hypothesis of our paper: that mode collapse is just an undesirable local equilibrium and it should be possible to avoid it. We apologize for not being clear earlier.\n\n5. A new section 2.4 has been added which explains how we can avoid 'mode collapse' equilibria. 
Based on empirical observations, we characterize the mode collapse situation by sharp gradients of the discriminator function around real samples. This is key to fighting mode collapse, or avoiding such undesirable equilibria. We provide arguments and supporting experiments for this. From this motivation, the DRAGAN penalty scheme is introduced. This was a key transition that was missing in the earlier version.\n\nDRAGAN algorithm:\n\n1. From the strengthened theory sections, it should be clear that as long as D(x) has small gradients around real data, mode collapse can be mitigated. We removed the arbitrary '1' in our scheme and used a generic 'k' (some small constant). We apologize for the jump earlier. \n2. The key idea is keeping D(x)'s gradients small, and this stems from our observation that 'mode collapse' equilibria can be characterized by large gradients of D in the data space. In fact, this partly explains why WGAN-GP and LS-GAN improve stability, despite being motivated by reasoning (the divergence minimization hypothesis) that is based on unrealistic assumptions. We urge you to take another look at sections 2.3 and 2.4.", "DRAGAN algorithm:\n\n- You are right that our penalty is applied on top of the vanilla objective. But our penalty (a local penalty) is also quite different from WGAN-GP and LS-GAN (coupled penalties; both of these are shown to be very similar in our paper), as we only regularize in local regions around the real data. We dedicate the entire section 2.5 to comparing/contrasting these methods. \n\n- Further, we discuss how WGAN-GP's gradient penalty has little to do with Wasserstein duality as claimed in their paper (please see section 2.5) and, in fact, this adds more credence to our theory that keeping D(x)'s gradients around real data small is how they mitigate mode collapse. \n\n- The explanation for why constraining D(x)'s gradients around real data helps is provided in section 2.4. Adding this penalty to the cost function of D and performing gradient descent w.r.t. its parameters encourages the player to learn smooth functions, which is what we want. This same idea has been used in Gulrajani et al. and Qi et al. as well. It would be interesting to come up with architectural design principles that inherently result in smooth discriminator functions. \n\n- We observed that 'mode collapse' equilibria exhibit sharp gradients of the discriminator function around real points and, hence, we propose regularizing D (a gradient penalty scheme). Our work does not study mode collapse from the generator's perspective, but what you propose is an excellent research direction. I think the method to achieve stability will look very different if one takes that approach, since the generator's architecture is significantly different.\n\n- Our work mainly deals with the question of why mode collapse happens from a 'regret minimization' perspective, and connects that to gradient penalties for constraining D's gradients in the data space. Our work was done prior to Roth et al., and so we were only able to compare with WGAN-GP. But they also suggest a similar gradient penalty.\n\n- This is an excellent point: one should perform annealing of lambda to get the best results using regularization schemes. However, we focus in our paper on why mode collapse happens, how we can characterize it, and methods to avoid it. As long as D(x) has small gradients around real data, mode collapse can be averted. 
Our aim was not to get the best experimental results, and we only wanted to demonstrate the effectiveness of our scheme.\n\n\nBogoNet Score:\n\n- We did observe architectures where both methods performed well and cases where both of them failed. To nullify the effect of such non-differentiating architectures, our bounty model awards 2.5 points each (out of 5) in such cases. \n\n- This experiment was only done using the standard CIFAR-10 dataset. Due to the constraints on GPU resources available to us, we couldn't try different datasets, especially since we included ResNets in the experiment, which take days to train. But what you suggest is an interesting experiment as well.", "The LS-GAN paper has two main ideas:\n1. Adding a margin\n2. Making D(x) Lipschitz in data space\nTogether, they result in the condition that D(real) - D(fake) ~ ||real - fake||, for any pair. Our paper credits this idea of imposing gradient constraints (penalties) in the data space to Qi et al., though it is wrongly credited to the WGAN-GP paper in the literature. \n\nWGAN-GP provides a different theory/motivation for a very similar constraint (here, fake samples can get higher scores than real ones) or algorithm, and does extensive experiments to demonstrate that it helps. So, we only compare with it because they are essentially the same method. \n\nIn contrast, DRAGAN only applies constraints in local regions around real samples (we argue for its advantages) and, moreover, our paper is focused on developing a novel theory regarding convergence in GANs and the mode collapse issue.\n\nAnd what is suggested in various GitHub pages or later versions of the paper could not be explored or discussed in our paper. We apologize for that.", "- A small correction in your summary: our penalty scheme helps avoid bad local equilibria, and the convex-concave case, while being simple, is quite different from the non-convex case.\n\n- We changed section 2.3 to rigorously argue that regret minimization converges to (potentially bad) local equilibria, added a new section 2.4 to characterize what these 'mode collapse' equilibria look like (D has large gradients around real samples in this case), and demonstrate through new toy experiments that they can be averted using gradient constraints. This provides good intuition and a strong motivation for the introduction of the DRAGAN scheme. We urge you to take another look at sections 2.3 and 2.4.\n\n- In the updated revision, we correct this arbitrary choice and use 'k', which should be something small. Basically, we observe that 'mode collapse' equilibria exhibit sharp gradients of the discriminator function around real samples. So, we regularize so as to keep these gradients small. We apologize for not making this clear earlier. \n\n- You make an excellent point that, by tuning 'k'/'c'/'lambda', it could be possible to get better performance, but our aim here was just to demonstrate the effectiveness of our method. My intuition is that the optimal configuration will depend on the data domain and architecture and, hence, it is beyond the scope of our paper. But this is an important topic for possibly a future work.\n\n- We only explore the performance of our penalty on MNIST, CIFAR-10, and CelebA, like most papers in this direction. I think the performance on ImageNet depends heavily on the architecture, but we did not explore this aspect in our paper. 
It is an interesting topic to compare various methods on bigger datasets, maybe using ResNets.", "1. DRAGAN's regularization is a simple constraint on the discriminator that is used to improve stability. It comes at a cost, though! However, as we write at the end of section 2.4, by carefully choosing how you regularize, you can gain stability without losing too much performance. \n\nAnd though we suggest a hyperparameter setting mostly for image datasets in our paper, it doesn't work for all the distributions possible in all domains. Some tuning is required, especially if your domain differs too much from pixel space; also, if the game/players are simple enough, I suggest reducing the regularization intensity. I think the problem you observed is caused by this (see point 2). \n\n(In hindsight, we should have added a couple of points in the Algorithm section to help practitioners use it. We will do so in the final version.)\n\n2. We show experiments on simple toy datasets in our paper without any issues. Since your domain is [-1,1] (not pixel space) and you are using small networks, I suggest not using the default hyperparameters. Reduce 'c' first to something less than 0.1 (say). Further, if you want the best performance, I suggest tuning 'lambda' as well. Understanding what these hyperparameters are doing is essential to using DRAGAN to your advantage: c (size of local regions) and lambda (how much you want to bias). I can take a look at your code or share sample code after the review process :)\n\nEdit: I just realized your training data is uniform in [-1,1]. Your perturbations will be on the manifold in this case, which explains the issue.\n", "1. As we explain at the end of section 2.4, one can constrain D in multiple ways and still improve stability. It should be clear from our paper that stability requires trading off modeling performance, as flexible models come with game-theoretic problems.\n\nWe chose this specific form using our intuitions so as to have the least negative impact on modeling performance. Hence, we only apply constraints near real samples, unlike other approaches. Next, we wanted D to be \"smooth\" in x-space, to help the generator learn better, and to change gradually w.r.t. theta, so that the game dynamics improve (see why FTRL works in the previous section for intuition regarding this). \n\nLet's see why our constraint achieves both of these. It is reasonable to expect that almost any small pixel-wise perturbation will make a given image less realistic. So we want D(x) and D(x') to be different, in a way that somewhat depends on how far apart x and x' themselves are. Thus, the gradient should be greater than zero, or the generator cannot learn to tell real images and noise apart. 
Of course, you can play around with that parameter, but we found that it doesn't matter much (at least for stability). The gradual change in D w.r.t. theta happens as these local perturbations act as auxiliary data points holding D down (in some sense) to prevent rapid changes. \n\n2. To answer your second question, please look back at our section 2.3, where we outline the possibilities for theta and phi in non-convex settings. They can:\n\n-> Converge to an equilibrium (can be local)\n-> Cycle s.t. averages converge\n-> Not converge at all\n\nIf D(x) converges to the optimum, notice that this means we are \"almost\" in a local equilibrium. The cost function for G is now fixed, and it will perform SGD updates until, mostly, reaching a local minimum. This is usual deep learning and so, there's no question of instability due to the game! Whether this result is optimal depends on the well-posedness of the game. This is where our paper comes in :) As we explain in our conclusion, the dynamics of GANs are not understood in the right perspective yet. Thinking of them as consistently estimating and minimizing the JS-divergence or Wasserstein distance is not appropriate.\n\n ", "It is incorrect to assume that D* will have zero gradient w.r.t. X; that would mean D* is a constant function, and there is no reason to believe this will happen. However, w.r.t. theta, the gradient will be zero as it is optimal.\n\nNow, let's talk about D* vs D'. You are right that D' can be worse than D*; however, as we discuss in the paper, getting to D* is a perilous journey fraught with local equilibria/instabilities. But add the regularization term and you get D', which we show isn't that much worse. And now you get significantly improved stability!\n\nThe generated distribution isn't close to P_real in either case, at least we have no reason to believe so, except via visual inspection (which can be a slippery slope). But w.r.t. the metrics we have (Inception score, visual inspection), D' and D* are almost the same in our experiments. \n\nThe explanation could be that constraining D to have norm-k gradients is actually reasonable (perturbed images should get strictly smaller probability than actual images) and P_real itself satisfies this condition. So, with large enough data, D' -> P_real just as D* -> P_real, except that with DRAGAN, we are more likely to reach there due to stability.", "1. Goodfellow et al. show D*(x)=1/2 as we converge to P_real, when we have infinite data and large (maybe infinite) capacity networks. This isn't a realistic setting, and it can be misleading to use the intuition that D* actually represents the ratio of densities in practice.\n\nSo, D*(x) need not have zero gradient w.r.t. X, in general. \n\n2. Now, the generator learns from D in the GAN framework. All G cares about is getting high scores, and all D cares about is providing high scores only to real samples! What happens at noise doesn't matter, as long as noisy samples get strictly lower scores. \n\nIf you use vanishing numbers, then you encourage the generator to learn noise; I think you are suggesting slowly removing the regularization, which is a good idea in the limit case.", "Arora et al.'s is a great paper, which supports what I said: how misleading it is to apply asymptotic intuitions in practice.\n\nFrom a game-theoretic perspective, yes, any form of gradient norm regularization should improve stability (a schematic sketch of such a penalty, with a generic target k, appears below). Our goal was to demonstrate this idea and foster research in this direction. 
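A minimal sketch of such a gradient-norm penalty with a generic target k, in the spirit of the discussion above; the exact perturbation scheme around real samples is an assumption and may differ from the paper's released code.

```python
# Hedged sketch of a DRAGAN-style penalty with a generic gradient-norm target k.
# D's input gradients are penalized at noisy copies of real samples, so the
# constraint acts only in local regions around the data, not on real-fake lines.
import torch

def local_gradient_penalty(D, x_real, k=1.0, noise_scale=0.5):
    """E[(||grad_x D(x + noise)|| - k)^2] over local perturbations of real data."""
    noise = noise_scale * x_real.std() * torch.rand_like(x_real)  # assumed scheme
    x_hat = (x_real + noise).requires_grad_(True)
    grads, = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)
    norms = grads.flatten(start_dim=1).norm(2, dim=1)
    return ((norms - k) ** 2).mean()

# Typical use inside the discriminator step, with lambda ~ 10 as suggested above:
# d_loss = vanilla_gan_loss + 10.0 * local_gradient_penalty(D, x_real, k=1.0)
```

Sweeping k (and the noise scale c) in such a sketch is exactly the kind of ablation the reviewers ask for above.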
\n\nWe didn't explore all possibilities or claim to have the best answer here; this is beyond the scope of our work. In fact, we didn't even explore all numerical possibilities to optimize for performance! And yet we beat the state-of-the-art WGAN-GP. Hopefully, practitioners will build off our work and develop better algorithms.", "1. \"A differentiable function is 1-Lipschitz if and only if it has gradients with norm at most 1 everywhere\", but WGAN-GP doesn't do this.\n\n2. Read our section 2.4, where we show WGAN-GP has little to do with Wasserstein duality. In fact, our game-theoretic arguments could be the basis for why it works to some extent.\n\n3. Most other such GAN variants come up with techniques by applying asymptotic arguments, or sometimes arguments that don't hold in practice! Our paper is trying to counter that, and we just follow the game.\n\nOur section 2.3 is the starting point for the theory you are looking for. However, I suggest thinking carefully about the assumptions made in the development of each algorithm.\n", "The Wasserstein distance arises if we use the infinite family of 1-Lipschitz (norm of gradient <= 1) functions. \n\nBut WGAN-GP forces norm-1 gradients between all real and fake pairs. So, there is little connection to Wasserstein duality theory here (asymptotic or otherwise).\n\nI agree that more theoretical investigation is needed in the community. But this process of using only asymptotic intuitions to develop algorithms can go wrong, as there are many moving pieces in the GAN framework. \n\nAnd your question of whether we should require our algorithm to work in the limit case... yes, absolutely! But do we understand what the right notion is? The density ratio was one way to think about this, as suggested by Goodfellow's original paper. But this breaks the moment the infinite-data assumption is relaxed. So, we are yet to find a way to reason nicely about the limit case and, until we have it, using such narratives is overly restrictive. In our paper, we see D as some entity that can tell real images from everything else and, hence, our penalty makes sense.", "We make no claims that our regularization term helps find the optimal critic. In fact, we clearly mention that constraints on D actually \"hurt\" the performance. So, one should use vanilla GANs whenever possible, but if you encounter instability, then some form of constraint will help. And we show that DRAGAN can achieve this without losing too much in performance. See the end of section 2.4.\n\n1. The GAN structure is responsible for the good performance, and the constraints are only to improve stability in hard cases. \n\n2. We show that DRAGAN helps make the underlying game \"easier\" in some sense. So, it works with any objective function (see section 3.3), although it might require a small amount of hyperparameter tuning.\n\n3. There's no easy answer to this. It depends on the game, how strong the players are, the domain space, and many other factors. But yes, there's a need for more research into understanding this and developing better forms of regularization.\n", " I agree that if we downplay the parameter lambda, it will definitely help in this simple case, because it then approaches the original GAN. I think it raises the following questions that need more investigation:\n\n1. Does the GAN structure itself or the regularization term play the more important role in the good performance? This is also one of the objectives of our course project and the issue raised in the paper \"Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step\".\n\n2. 
Does a certain regularization only make sense for particular GAN structures, or can it be universally used on top of all existing GAN structures? For this question, it seems the regularization in DRAGAN and the one in the other paper \"ON THE REGULARIZATION OF WASSERSTEIN GANS\" may only work for WGAN.\n\n3. What is a systematic way of weighing the regularization term and the original objective function, instead of heuristic parameter tuning?\n\n4. What are the pros and cons of all these different kinds of regularizations?\n", " I was taking a graduate-level machine learning class. In the final course project, we tried to investigate the effects of different regularizations on GAN and WGAN. The two main regularization methods include the one proposed as DRAGAN in this paper and the one proposed in \"ON THE REGULARIZATION OF WASSERSTEIN GANS\" (referred to as w_reg in the following).\n\nWe mainly investigated the following methods: 1a. WGAN with weight clipping, 1b. WGAN with gradient penalty, 1c. WGAN with DRAGAN regularization, 1d. WGAN with w_reg; 2a. GAN, 2b. GAN with DRAGAN regularization.\n\nSince the generated images can only be judged by visualization, besides the image experiments, we also did some experiments on simple synthetic cases. One exercise is to generate a [-1,1] uniform distribution from a Gaussian distribution. We observed the following results: all the methods 1a-2a are good at generating this simple distribution. However, GAN with the DRAGAN regularization is not. What we observed is that D(x) converges to a function with a hump, and therefore all the generated samples are concentrated in a small region, instead of forming a uniform distribution. We adapted sample code from GitHub. The generator has 2 layers and the discriminator has 1 layer. The lambda is 10.\n\nWe were stuck on this observation for a long time. Later we found out that the reason is that the regularization term pushes the function to have some slope at the data support, which results in the hump shape. Therefore, the generated samples are mostly concentrated in the region with a large D(x).\n\nI am wondering if the authors have had similar experiences with synthetic data experiments? Sometimes the quality of generated images is hard to judge, so some synthetic data experiments are also needed to verify the performance. Probably I made some mistakes in the code experiments; it would be great if the authors could share the code and insights after the review process. My email is leonboellmann0110@gmail.com. Thanks!\n\n", "Thanks!", "I understand that your analysis suggests that we should add the norm regularization in the local areas around real examples. This is clearly an improvement over WGAN-GP. However, this norm~=1 only holds for the case of the Wasserstein distance. This is also what motivates Gulrajani et al. to introduce this regularization term in the objective function.\n\nHowever, for other variants of GANs based on general f-divergences, the optimal D* should not have a norm~=1. That is what is puzzling me, and why I want to seek theoretical insights.\n\nI agree that in practical training of GANs, we do not have infinite data samples, the network may not have enough capacity, and the neural networks are not convex/concave over their parameters. But when we design our algorithm, should we design one that at least works for the ideal simplest case, when we have the true expectation, the network has enough capacity, and the global optimum points can be reached? 
I think we should at least guarantee the algorithm works for the ideal case and then think about how to make it robust for practice. ", " Thanks a lot for your explanations!\n\nActually, if you look at the WGAN-GP paper, they have strong theoretical support for adding the norm regularization term, because \"A differentiable function is 1-Lipschitz if and only if it has gradients with norm at most 1 everywhere\". \n\nFor your paper, there is no doubt that the regularization brings improvement in the experiments. The paper is very well written. Besides the good experimental performance, I am looking for similar theoretical insights into adding this regularization for other variants of GANs. Instead of knowing that it works, I am more interested in knowing why it works, because obtaining the theoretical insights will lead to more effective algorithms that can be generalized. Hopefully, we will obtain more theoretical insights from future works, if more practitioners build more algorithms on this. ", " I see. Actually, quite a few people consider D(x) as a density ratio estimation:\nM. Uehara, \"Generative Adversarial Nets from a Density Ratio Estimation Perspective\"\nB. Poole et al., \"Improved generator objectives for GANs\". \nAnd the analysis for the case with finite sample data is also aligned with this perspective. See Arora et al., \"Generalization and Equilibrium in Generative Adversarial Nets (GANs)\".\nProbably you are right that, in actual training, it may be misleading to reason from this perspective. \n\nFollowing this logic, does it imply that any gradient norm regularization should work? So the constant \"1\" is chosen heuristically based on experimental results? Is there any general guideline for choosing the norm regularization constant, or should it always be \"1\"?", "Dear authors,\nI am new to this area, so I have quite a few questions. I understand that the experiments show that adding this norm regularization makes the performance good and stable. The paper is very well written and the results are actually very impressive. I just want to step back and seek a theoretical explanation of why this regularization makes sense. \n\n1. If we look at the original minimax problem of GAN, the optimal D* is indeed D*(x) = 1/2. Why is it not correct to assume D* has zero gradient w.r.t. X? \n\n2. Let us first step back and assume the minimax problem is convex/concave in theta_d/theta_g. When we design the algorithm, we should find one that at least works for the ideal case, and then think about how to make it robust for the general case, when the problem is not convex/concave in theta_d/theta_g. I understand that adding the norm regularization on the data support would definitely alleviate mode collapse, because it artificially adds gradients to encourage the generator to generate the data samples. My question is: why should we restrict the norm to be close to \"1\" instead of some other small number (or some vanishing number), which would hopefully make your D' closest to D*? 
In this case, D' may be very different from the original D*, and the generated distribution pg may be very different from the data distribution. My question is: why do we require the norm of the gradient to be close to 1, instead of any other number? For example, if the regularization requires the norm of the gradient to be close to some small constant, would the converging D' be closer to D*?", " This paper is very well written. It is very easy to follow; it took me less than 10 minutes to read the whole paper. I have a question on the regularization term. Why does it require the norm of the gradient to be close to 1? When D(x) approaches the optimum, the gradient should be zero, so wouldn't this additional regularization term make the training unstable or prevent convergence to the optimum?" ]
[ 5, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryepFJbA-", "iclr_2018_ryepFJbA-", "iclr_2018_ryepFJbA-", "SyYO2aIlG", "Hkd3vAUeG", "Bk9rWSD-f", "B1nRUuAgG", "ByPQQOX1G", "iclr_2018_ryepFJbA-", "HyVsPP5CZ", "ByD_wguCW", "rJxYq4uAZ", "S1yaH2dAb", "Bk-qS6_RW", "BJlRh0dC-", "r1nCXkK0Z", "r184LvsCZ", "S1GEAvq0Z", "iclr_2018_ryepFJbA-", "Sk8vLeK0b", "HkxbyyKCb", "HkXuRauAZ", "ryIPRnuCb", "rJ91vcOCb", "S1EGL-uC-", "iclr_2018_ryepFJbA-" ]
iclr_2018_ry4SNTe0-
Improve Training Stability of Semi-supervised Generative Adversarial Networks with Collaborative Training
Improved generative adversarial network (Improved GAN) is a successful method of using generative adversarial models to solve the problem of semi-supervised learning. However, it suffers from the problem of unstable training. In this paper, we found that the instability is mostly due to the vanishing gradients on the generator. To remedy this issue, we propose a new method to use collaborative training to improve the stability of semi-supervised GAN with the combination of Wasserstein GAN. The experiments have shown that our proposed method is more stable than the original Improved GAN and achieves comparable classification accuracy on different data sets.
rejected-papers
The paper aims to combine Wasserstein GAN with the Improved GAN framework for semi-supervised learning. The reviewers unanimously agree that: (1) the paper lacks novelty, and such approaches have been tried before; (2) the approach does not make sufficient gains over the baselines, and stronger baselines are missing; (3) the paper is not well written, and the experimental results are not satisfactory.
test
[ "B1uXHzDeM", "SkaNEl9xM", "BJGtGM9eM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "* Summary *\nThe paper addresses the instability of GAN training. More precisely, the authors aim at improving the stability of the semi-supervised version of GANs presented in [1] (IGAN for short) . The paper presents a novel architecture for training adversarial networks in a semi-supervised settings (Algorithm 1). It further presents two theoretical results --- one (Theorem 2.1) showing that the generator's gradient vanish for IGAN, and the second (Theorem 3.1) showing that the proposed algorithm does not suffer this behaviour. Finally, experiments are provided (for MNIST and CIFAR10), which are meant to support empirically the claimed improved stability of the proposed method compared to the previous GAN implementations (including IGAN).\n\nI need to say the paper is poorly written and not properly polished. Among many other things:\n\n(1) It refers to non-existent results in other papers. Eq 2 is said to follow [1], meanwhile the objectives are totally different: the current paper seems to use the l2 loss, while Salimans et al. use the cross-entropy;\n\n(2) Does not introduce notations in statements of theorems ($J_\\theta$ in Theorem 2.1?) and provides unreadable proofs in appendix (proof of Theorem 2.1 is a sequence of inequalities involving the undefined notations with no explanations). In short, it is very hard to asses whether the proposed theoretical results are valid;\n\n(3) Does not motivate, discuss, or comment the architecture of the proposed method at all (see Section 3).\n\nFinally, in the experimental section it is unclear how exactly the authors measure the stability of training. The authors write \"unexpectedly high error rates and poor generate image quality\" (page 5), however, these things sounds very subjective and the authors never introduce a concrete metric. The authors only report \"0 fails\", \"one or two out of 10 runs fail\" etc. Moreover, for CIFAR10 it seems the authors make conclusions based only on 3 independent runs (page 6).\n\n[1] Salimans et al, Improved Techniques for Training GANs, 2016", "Summary of paper and review:\n\nThe paper presents the instability issue of training GANs for semi-supervised learning. Then, they propose to essentially utilize a wgan for semi-supervised learning. \n\nThe novelty of the paper is minor, since similar approaches have been done before. The analysis is poor, the text seems to contain mistakes, and the results don't seem to indicate any advantage or promise of the proposed algorithm.\n\nDetailed comments:\n\n- Unless I'm grossly mistaken the loss function (2) is clearly wrong. There is a cross-entropy term used by Salimans et al. clearly missing.\n\n- As well, if equation (4) is referring to feature matching, the expectation should be inside the norm and not outside (this amounts to matching random specific random fake examples to specific random real examples, an imbalanced form of MMD).\n\n- Theorem 2.1 is an almost literal rewrite of Theorem 2.4 of [1], without proper attribution. Furthermore, Theorem 2.1 is not sufficient to demonstrate existence of this issues. This is why [1] provides an extensive batch of targeted experiments to verify this assumptions. Analogous experiments are clearly missing. A detailed analysis of these assumptions and its implications are missing.\n\n- In section 3, the authors propose a minor variation of the Improved GAN approach by using a wgan on the unsupervised part of the loss. 
Remarkably similar algorithms to this (where the two discriminators are two separate heads) have been proposed before (see, for example, [2]; other approaches exist after that as well, e.g. papers citing [2]).\n\n- Theorem 3.1 is a trivial consequence of Theorem 3 from WGAN.\n\n- The experiments leave much to be desired. It is widely known that MNIST is a bad benchmark at this point, and that no signal can be established from a minor success on this dataset. Furthermore, the results on CIFAR don't seem to bring any advantage, considering the .1% difference in accuracy is 1/100 of chance on this dataset.\n\n[1]: Arjovsky & Bottou, Towards Principled Methods for Training Generative Adversarial Networks, ICLR 2017\n[2]: Mroueh, Sercu & Goel, McGan: Mean and Covariance Feature Matching GAN, ICML 2017", "In the paper, the authors try to address the training issues of SSL-GANs. Arguing that the main problem is vanishing gradients, they propose a co-training framework which combines Wasserstein GAN training. The experiments were executed on MNIST and CIFAR-10. \n\nI think the paper makes two strong claims, which are not reasonable to me: firstly, it argues that this is the first work to address the training issues of SSL-GANs. Actually, the Fisher GAN paper [Youssef et al., 2017] proposed the \"New Parametrization of the Critic\" for SSL and showed it was very stable. In [Abhishek et al., 2017], the authors also addressed how to make SSL-GANs stable, following the Improved GANs paper's idea. Secondly, it gives the impression that the authors think the main issue of SSL-GANs is gradient vanishing. Following the paper [Zihang et al., 2017], it is hard to make a claim like this. \n\nThe co-training framework, which combines the Wasserstein loss and the general GAN loss, is not so novel to me. Meanwhile, the experimental results are not solid. The baselines listed are not the state of the art. I suggest that the authors compare with some very recent ones, such as [Youssef et al., 2017], [Zihang et al., 2017], [Abhishek et al., 2017], [Jeff et al., 2016]." ]
[ 3, 2, 3 ]
[ 4, 4, 5 ]
[ "iclr_2018_ry4SNTe0-", "iclr_2018_ry4SNTe0-", "iclr_2018_ry4SNTe0-" ]
iclr_2018_By9iRkWA-
Phase Conductor on Multi-layered Attentions for Machine Comprehension
Attention models have been intensively studied to improve NLP tasks such as machine comprehension via both question-aware passage attention model and self-matching attention model. Our research proposes phase conductor (PhaseCond) for attention models in two meaningful ways. First, PhaseCond, an architecture of multi-layered attention models, consists of multiple phases each implementing a stack of attention layers producing passage representations and a stack of inner or outer fusion layers regulating the information flow. Second, we extend and improve the dot-product attention function for PhaseCond by simultaneously encoding multiple question and passage embedding layers from different perspectives. We demonstrate the effectiveness of our proposed model PhaseCond on the SQuAD dataset, showing that our model significantly outperforms both state-of-the-art single-layered and multiple-layered attention models. We deepen our results with new findings via both detailed qualitative analysis and visualized examples showing the dynamic changes through multi-layered attention models.
rejected-papers
Generally solid engineering work but a bit lacking in terms of novelty and some issues with clarity. At the end of the day the empirical gains are not sufficient for acceptance - the results are state-of-the-art relative to published work, but not in the top 10 based on the official leaderboard (not even at time of submission). Since the technical contributions are small and the engineering contributions have been made obsolete by concurrent work, I suggest rejection.
train
[ "HkFCIBhgf", "HyrDvKC1z", "B1s84WMlM", "Bkhb6wpXM", "BJVf3P67G", "HykAjPT7G", "By6zYD6mz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper introduces a fairly elaborate model for reading comprehension evaluated on the SQuAD dataset. The model is shown to improve on the published results but not as-of-submission leaderboard numbers.\n\nThe main weakness of the paper in my opinion is that the innovations seem to be incremental and not based on any overarching insight or general principle. A less significant issue is that the English is often disfluent.\n\nSpecific comments: I would remove the significance daggers from table 2 as the standard deviations are already reported and the null hypothesis for which significance is measured seems unclear. I am also concerned to see test performance significantly better than development performance in table 3. Other systems seem to have development and test performance closer together. Have the authors been evaluating many times on the test data?\n", "Summary: The paper introduces \"Phase Conductor\", which consists of two phases, context-question attention phase and context-context (self) attention phase. Each phase has multiple layers of attention, for which the paper uses a novel way to fuse the layers, and context-question attention uses different question embedding for getting the attention weight and getting the attention vector. The paper shows that the model achieves state of the art on SQuAD among published papers, and also quantitatively and visually demonstrates that having multiple layers of attention is helpful for context-context attention, while it is not so helpful for context-question attention.\n\n\nNote: While I will mostly try to ignore recently archived, non-published papers when evaluating this paper, I would like to mention that the paper's ensemble model currently stands 11th on SQuAD leaderboard.\n\n\nPros:\n- The model achieves SOTA on SQuAD among published papers.\n- The sequential fusing (GRU-like) of the multiple layers of attention is interesting and novel. Visual analysis of the attention map is convincing.\n- The paper is overall well-written and clear.\n\nCons:\n- Using different embedding for computing attention weights and getting attended vector is not entirely novel but rather an expected practice for many memory-based models, and should cite relevant papers. For instance, Memory Networks [1] uses different embedding for key (computing attention weight) and value (computing attended vector).\n- While ablations for number of attention layers (1 or 2) were visually convincing, numerically there is a very small difference even for selfAtt. For instance, in Table 4, having two layers of selfAtt (with two layers of question-passage) only increases max F1 by 0.34, where the standard deviation is 0.31 for the one layer. While this may be statistically significant, it is a very small gain nonetheless.\n- Given the above two cons, the main contribution of the paper is 1.1% improvement over previous state of the art. I think this is a valuable engineering contribution, but I feel that it is not well-suited / sufficient for ICLR audience. \n\n\nQuestions:\n- page 7 first para: why have you not tried GloVe 300D, if you think it is a critical factor?\n\n\nErrors:\n- page 2 last para: \"gives an concrete\" -> \"gives a concrete\"\n- page 2 last para: \"matching\" -> \"matched\"\nFigure 1: I think \"passage embedding h\" and \"question embedding v\" boxes should be switched.\n- page 7 3.3 first para: \"evidence fully\" -> \"evidence to be fully\"\n\n\n[1] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory Networks. 
ICLR 2015.", "This paper proposes a new machine comprehension model, which integrates several contributions like different embeddings for the gate function and the passage representation function, self-attention layers, and highway-network-based fusion layers. The proposed method was evaluated on the SQuAD dataset only, and marginal improvement was observed compared to the baselines.\n\n(1) One concern I have for this paper is about the evaluation. The paper only evaluates the proposed method on the SQuAD data with systems submitted in July 2017, and the improvement is not very large. As a result, the results do not suggest significance or generalizability of the proposed method.\n\n(2) The paper gives some ablation tests like reducing the number of layers and removing the gate-specific question embedding, which help a lot for understanding how the proposed methods contribute to the improvement. However, the results show that the deeper self-attention layers are indeed useful (but still not improving a lot, about 0.7-0.8%). The other proposed components contribute less significantly. As a result, I suggest the authors add more ablation tests regarding (1) replacing the outer-fusion with simple concatenation (it should work for two attention layers); (2) removing the inner-fusion layer and only using the final layer's output, and using residual connections (like many NLP papers did) instead of the more complicated GRU stuff.\n\n(3) Regarding the ablation in Table 2, my first concern is that the improvement seems small (~0.5%). As a result, I am wondering whether this separated question embedding really brings new information, or whether a similar improvement can be achieved by increasing the size of the LSTM layers. For example, if we use the single shared question embeddings but increase the size from 128 to some larger number like 192, can we observe a similar improvement? I suggest the authors try this experiment as well, and I hope the answer is no, as separate input embeddings for gate functions were verified to be useful in some \"old\" works with syntactic features as gate values, like \"Semantic frame identification with distributed word representations\" and \"Learning composition models for phrase embeddings\", etc.\n\n(4) Please specify which version of the SQuAD leaderboard is used in Table 3. Is it a snapshot of the Jul 14 one? Because this paper is not comparing to the state of the art, leaving the leaderboard version unspecified may confuse the other reviewers and readers. By the way, it would be better to compare to the snapshot of Oct 2017 as well, indicating the position of this work at the submission deadline.\n\nMinor issues:\n\n(1) There are typos in Figure 1 regarding the notations of Question Features and Passage Features.\n\n(2) In Figure 1, I suggest adding an \"N \\times\" symbol to the left of the Q-P Attention Layer and removing the current list of such layers, in order to be consistent with the other parts of the figure.\n\n(3) What is the relation between the \"PhaseCond, QPAtt+\" in Table 2 and the \"PhaseCond\" in Table 3? I was assuming that those are the same system but did not see the numbers match each other.", "We thank the reviewer for acknowledging the contributions of our work and for providing insightful comments. Several issues with the experiments were fixed during the review period, and we see a significant performance gain on both the dev and test datasets. 
We have responded to each of the comments below.\n\nComment 1.\nOriginal Comment:\nUsing different embeddings for computing attention weights and getting the attended vector is not entirely novel but rather an expected practice for many memory-based models, and should cite relevant papers. For instance, Memory Networks [1] uses different embeddings for the key (computing attention weights) and the value (computing the attended vector).\n\nResponse:\nThe reviewer is right that our research is not the first to explore the idea of using different embeddings. Although the motivation and approaches we use are very different from what was proposed in the Memory Networks paper (e.g., we explicitly added a separate question embedding for the dot-product attention function in order to improve the attention model’s robustness), we do believe that it is a good idea to incorporate the Memory Networks paper into our citations. We will update the citations in the final version of the paper. However, we do want to emphasize that our proposed structure of PhaseCond is novel and quite effective on the machine comprehension task for the following reasons. First, our network consists of multiple phases: 1) each phase has a stack of attention (or functional) layers, 2) each attention layer is followed by a stack of inner fusion layers, and 3) outer fusion layers are configured at the end of each phase. Second, we use a novel approach to increase the attention model’s robustness with respect to the dot-product attention function by adding a separate question/query embedding. Overall, our proposed PhaseCond model is significantly more effective compared with existing attention-based architectures.\n\n\nComment 2:\nOriginal Comment:\nWhile the ablations for the number of attention layers (1 or 2) were visually convincing, numerically there is a very small difference even for selfAtt. For instance, in Table 4, having two layers of selfAtt (with two layers of question-passage) only increases max F1 by 0.34, where the standard deviation is 0.31 for the one layer. While this may be statistically significant, it is a very small gain nonetheless.\n\nResponse:\nOur updated ablation study shows (please see our paper and replies to reviewer 3) that even well-developed approaches (e.g., highway networks or residual connections) yield only small gains on the SQuAD dataset, which means that the dataset itself is very challenging. Considering that the higher the F1 score is, the more difficult it is to improve the performance, we believe that our improvements over the state-of-the-art multi-layered attention model are not trivial. \n\nComment 3:\nOriginal Comment:\nGiven the above two cons, the main contribution of the paper is a 1.1% improvement over the previous state of the art. I think this is a valuable engineering contribution, but I feel that it is not well-suited / sufficient for the ICLR audience. \n\nResponse:\nWe have conducted experiments again after fixing several bugs, and our single-model version has managed to achieve 74.405 on exact match (compared to 73.240 reported in the manuscript under review) and 82.742 on F1 score (compared to 81.933 reported in the manuscript under review) on the test dataset. Compared to MReader, which is the best baseline reported in the paper with EM 71 and F1 80.1, our model delivers a significant improvement of 2-3%. We believe that this reflects algorithmic innovations rather than engineering contributions. 
\n\nComment 4:\nOriginal Comment:\npage 7 first para: why have you not tried GloVe 300D, if you think it is a critical factor?\n\nResponse:\nWe have incorporated GloVe 300D in our latest experiments. Along with other bug fixes, we have noticed a 1% performance boost over the results reported in the previous version of the paper.\n", "Comment 3:\nOriginal Comment:\nRegarding the ablation in Table 2, my first concern is that the improvement seems small (~0.5%). As a result, I am wondering whether this separated question embedding really brings new information, or whether a similar improvement can be achieved by increasing the size of the LSTM layers. For example, if we use the single shared question embeddings but increase the size from 128 to some larger number like 192, can we observe a similar improvement? I suggest the authors try this experiment as well, and I hope the answer is no, as separate input embeddings for gate functions were verified to be useful in some \"old\" works with syntactic features as gate values, like \"Semantic frame identification with distributed word representations\" and \"Learning composition models for phrase embeddings\", etc.\n\nResponse:\nWe thank the reviewer for the insightful comments. We have conducted additional experiments by changing the number of units. \n1)\tPhaseCond (using the best hidden size, 150)\n F1: 81.16 (SD: 0.10) | EM: 71.99 (SD: 0.17)\n We find that increasing the hidden size (originally 128) can improve the performance, and the optimal parameter we use now is 150.\n2)\tIncreasing the hidden size to 192\n F1: 80.68 (SD: 0.90) | EM: 71.29 (SD: 1.07)\n Further increasing the hidden size causes a performance drop and hurts stability.\n3)\tRemoving the separate question embedding from PhaseCond\n F1: 80.58 (SD: 0.47) | EM: 71.16 (SD: 0.24)\n There are reasons for us to train a separate question embedding: 1) for the similarity between question and passage, they have to be in the same “embedding space” and therefore we train them jointly, but 2) the attention model uses the question to represent the passage, so for the question embedding there is no need to consider the passage itself. Training question and passage parameters jointly may not be suited for question embeddings to represent a passage, so we train a separate question embedding to “stabilize” the quality of our attention model. It is also optimal for the dot-product attention function we chose, which doesn’t have the parameter W (which would consume more VRAM than we can afford). \n\nNote: all of this can be directly verified by examining our latest experiment code (all the sources and data are available):\nhttps://worksheets.codalab.org/bundles/0xfc1bc55358b049029514f1018ff70ece/ \n\nComment 4:\nOriginal Comment:\nPlease specify which version of the SQuAD leaderboard is used in Table 3. Is it a snapshot of the Jul 14 one? Because this paper is not comparing to the state of the art, leaving the leaderboard version unspecified may confuse the other reviewers and readers. By the way, it would be better to compare to the snapshot of Oct 2017 as well, indicating the position of this work at the submission deadline.\n\nResponse:\nWe could have compared the performance of our paper to those on the leaderboard. However, in reality, we noticed that at the time we were writing the paper, most of the systems at the top of the leaderboard did not have publications released. 
Even for those systems that have publications, their performance on the leaderboard is constantly changing, and there is no guarantee that their systems are implemented strictly according to their original papers. For those reasons, we chose to compare against results in published papers. During the time we developed our model (around Aug - Sep 2017), our ranking was among the top 5 on SQuAD. After we submitted the paper, our ranking was still as good as top 10 (around Oct 2017). ", " We thank the reviewer for acknowledging the contributions of our work and for providing insightful comments. Several issues with the experiments were fixed during the review period, and we see a significant performance gain on both the dev and test datasets compared to the results reported in the previous version of the paper. To address concerns from the reviewer, especially regarding the ablation tests, we have also conducted additional experiments. We have responded to each of the comments below.\n\nComment 1.\nOriginal Comment:\nOne concern I have for this paper is about the evaluation. The paper only evaluates the proposed method on the SQuAD data with systems submitted in July 2017, and the improvement is not very large. As a result, the results do not suggest significance or generalizability of the proposed method.\n\nResponse:\nWe have conducted experiments again after several bug fixes, and our single-model version has managed to achieve 74.405 on exact match (compared to 73.240 reported in the draft under review) and 82.742 on F1 score (compared to 81.933 reported in the draft under review) on the test dataset. Compared to MReader, which is the best baseline reported in the paper with EM 71 and F1 80.1, our model delivers a significant 2-3% improvement. We believe that the updated results reflect a significant improvement from applying our model. The standard deviation across multiple models is much lower (around 0.1 for our model and 0.5 for the strongest baseline). We will update those results in the final version of the paper.\n\nAlthough we only evaluate our method on SQuAD, we note that our approach is a general method for handling multi-layered attention and can be applied to any application that matches a query and a context (e.g., VQA, image or text search), not limited to SQuAD. We do agree with the reviewer that this is an important aspect of the paper. We will add a discussion of generalizability in the final version of the paper.\n\nComment 2:\nOriginal Comment:\nThe paper gives some ablation tests like reducing the number of layers and removing the gate-specific question embedding, which help a lot for understanding how the proposed methods contribute to the improvement. However, the results show that the deeper self-attention layers are indeed useful (but still not improving a lot, about 0.7-0.8%). The other proposed components contribute less significantly. As a result, I suggest the authors add more ablation tests regarding (1) replacing the outer-fusion with simple concatenation (it should work for two attention layers); (2) removing the inner-fusion layer and only using the final layer's output, and using residual connections (like many NLP papers did) instead of the more complicated GRU stuff.\n\nResponse:\nThe reviewer made a good point that we should add more ablation tests, and we have presented the results below. 
These results are after several bug fixes, and they, in general, perform much better than the models previously reported in the draft.\n\nEach setting has 3 runs without any model selection:\n1)\tThe proposed approach (PhaseCond)\n F1: 81.16 (SD: 0.10) | EM: 71.99 (SD: 0.17)\nremark: After fixing some bugs in our model, we observe that the proposed model outperforms all the comparable baselines, and the variance of the models is the lowest (SD stands for standard deviation).\n2)\treplacing the outer-fusion with simple concatenation (PhaseCond-Outer+Concat)\n F1: 76.36 (SD: 0.20) | EM: 66.21 (SD: 0.43)\nremark: The result shows that 1) our outer layer structure is critical to the model and 2) the outer layer cannot simply be replaced by concatenation. \n3)\tremoving the inner-fusion layer and only using the final layer's output (PhaseCond-Inner)\n F1: 80.32 (SD: 0.31) | EM: 70.91 (SD: 0.72)\nremark: It demonstrates that the inner layer, implemented by a highway network (R. K. Srivastava et al., 2015), can be complementary to the LSTM-style outer layer and is not replaceable.\n4)\tusing residual connections (PhaseCond-Inner+Residual) \n F1: 80.47 (SD: 0.19) | EM: 71.14 (SD: 0.26)\nremark: Our result shows that adding residual layers is a little bit helpful but cannot replace either the inner layer or the outer layer of the proposed PhaseCond. Residual connections are designed for deep networks with hundreds of layers, but our model is attention-based and comparatively shallow.\n\nNote: all of this can be directly verified by examining our latest experiment code (all the sources and data are available):\nhttps://worksheets.codalab.org/bundles/0xfc1bc55358b049029514f1018ff70ece/ \n", "We thank the reviewer for acknowledging the contributions of our work and for providing insightful comments. Several issues with the experiments were fixed during the review period, and we see a significant performance gain on both the dev and test datasets. We have responded to each of the comments below.\n\nComment 1.\nOriginal Comment:\nThe main weakness of the paper in my opinion is that the innovations seem to be incremental and not based on any overarching insight or general principle. A less significant issue is that the English is often disfluent.\nResponse:\nThanks for your comments and feedback. We have conducted experiments again after fixing several bugs, and our single-model version has managed to achieve 74.405 on exact match (compared to 73.240 reported in the manuscript under review) and 82.742 on F1 score (compared to 81.933 reported in the draft under review) on the test dataset. Compared to MReader, which is the best baseline reported in the paper with EM 71 and F1 80.1, we believe that our model delivers a significant 2-3% improvement. \n\nWhen we were building the model, we did have a principle in mind, which was to divide the information flow into two phases: a Question-Paragraph attention phase and a self-attention phase, each equipped with a multi-layered attention structure. We will detail the rationales of such a design and how it is motivated by real examples in the final version of the paper. \n\nThe reviewer also made a remark about the language of the paper. We will correct all the remaining errors and grammatical mistakes and have professionals proofread the paper when we are preparing the final version of the paper. 
\n\n\nComment 2:\nOriginal Comment:\nSpecific comments: I would remove the significance daggers from table 2 as the standard deviations are already reported and the null hypothesis for which significance is measured seems unclear. I am also concerned to see test performance significantly better than development performance in table 3. Other systems seem to have development and test performance closer together. Have the authors been evaluating many times on the test data?\nResponse:\nWe fixed some bugs in our last experiments (e.g., one outer fusion layer was missing, hidden size 128 was not optimal, etc.) and observed much stronger performance. As we have mentioned in the previous paragraph, the performance improvement is more than 1% over the last results reported in the manuscript under review. \n\nIt is a well-known fact in the community that the test dataset yields slightly better performance than the development set. This has been reported on the SQuAD dataset (e.g., RNET (Wang et al., 2017), BIDAF (Seo et al., 2017)). We have worked on the SQuAD dataset for a long time, and we are confident that this is the expected behavior on SQuAD.\n" ]
[ 8, 5, 5, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_By9iRkWA-", "iclr_2018_By9iRkWA-", "iclr_2018_By9iRkWA-", "HyrDvKC1z", "B1s84WMlM", "B1s84WMlM", "HkFCIBhgf" ]
iclr_2018_S1Q79heRW
Unsupervised Learning of Entailment-Vector Word Embeddings
Entailment vectors are a principled way to encode in a vector what information is known and what is unknown. They are designed to model relations where one vector should include all the information in another vector, called entailment. This paper investigates the unsupervised learning of entailment vectors for the semantics of words. Using simple entailment-based models of the semantics of words in text (distributional semantics), we induce entailment-vector word embeddings which outperform the best previous results for predicting entailment between words, in unsupervised and semi-supervised experiments on hyponymy.
rejected-papers
Two knowledgeable and confident reviewers suggest rejection, while one less confident reviewer suggests acceptance. I agree with the confident reviewers. All reviewers also point out that the paper is confusingly written and difficult to understand.
train
[ "r1kr4BQgG", "SJ4HCiUgz", "BkjD4Eqxz", "ryK-BFqfM", "H1mMzY9Gf", "Sk_LeFqGf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "I'm finding this paper really difficult to understand. The introduction is very abstract, and it is hard for me to understand the model as it is explained at the moment. Could the authors please clarify, perhaps in more algorithmic terms, how the model works?\n\nAs for the evaluation, BLESS is a nice dataset, but it certainly isn't enough to make a broad claim because it has certain artifacts in the way negative examples were constructed. I recommend looking at the collection of datasets used by Levy et al. [1] and Shwartz et al. [2], and evaluating on their union.\n\nAnother discrepancy that appears in the paper is that the authors cite Shwartz et al. [2] as achieving 44.1% average precision on BLESS, when in fact, this number reflects their performance on the WordNet-based dataset created by Weeds et al. [3].\n\n[1] http://www.aclweb.org/anthology/N15-1098\n[2] http://aclweb.org/anthology/E/E17/E17-1007.pdf \n[3] http://sro.sussex.ac.uk/53103/1/C14-1212.pdf \n", "The paper presents a word embedding algorithm for lexical entailment. The paper follows the work of Henderson and Popa (ACL,2016) that presented an interpretation of word2vec word representation in which each feature in a word vector corresponds to the probability of it being known/unknown, and suggested an operator to compute the degree of entailment between two words. In this paper, the authors train the word2vec algorithms to directly optimize the objective function suggested by Henderson and Popa (2016),\nI find the paper interesting. The proposed approach is novel and not standard and the paper reports significant improvement of entailment results compared to previous state of the art\n \nThe method part of the paper (sections 2.1 and 2.2 ) which is the main contribution is not clearly written. The paper heavily relies on Henderson and Popa (2016). You dont need to restate in the current paper all the mathematical analysis that appears in the previous paper. You are expected, however, that the model description and the notation that is used here should be clearly explained. Maybe you can also add algorithm box. I think that the author should prepared a revised version of section 2.\n\nIn Word2vec, Levy and Goldberg provided an elegant analysis of the algorithm and showed that the global optimum is obtained at the PMI matrix. Can you derive a similar analysis for your variant of the word2vec algorithm?\n", "This work proposes to learn word vectors that are intended to specifically model the lexical entailment relationship. This is achieved in an unsupervised manner from unstructured data, through an approach heavily influenced by recent work by Henderson and Popa, which \"reinterprets word2vec\" by modeling distributions over discrete latent \"pseudo-phrase\" vectors. That is, instead of using two vectors per word, as in word2vec, a latent representation is introduced that models the joint properties of the target and context words. While Henderson and Popa represent the latent vector as the evidence for the target and context, or the likelihood, this work suggests to represent it based on the posterior distribution instead. The resultant representations are evaluated on Weeds et al.'s (2014) version of BLESS, as well as the full BLESS dataset, where they do better than the original.\n\nThe paper is confusingly written, fails to mention a lot of related work, has a weak evaluation where it doesn't compare to related systems, and I feel that it would benefit from \"toning down\". 
Hence, I do not recommend it for acceptance. In more detail:\n\n1. The idea behind Henderson and Popa's model, as well as the suggested modification, should be easy to explain, but I really had to struggle to make sense of it. This work relies very heavily on that paper, and would be better off if it were more standalone. I think part of the confusion stems from using y for the latent representation but not specifying whether it is a word or a latent representation in Equation 1 - that only becomes obvious later. The exposition clearly needs more work, and more precise technical writing.\n\n2. There is a lot of related work around word embeddings that is not mentioned, both on word2vec-style representation learning (e.g. it would be useful to relate this more to word2vec and what it learns, as in Omer Levy's work on \"interpreting\" word2vec, rather than reinterpreting) and on word embeddings for hypernymy detection and lexical entailment (see e.g. Stephen Roller's thesis for references).\n\n3. There has been a lot of work on the Weeds BLESS dataset that is not mentioned, or compared against, including unsupervised approaches (e.g. Levy's work, Santus's work, Kiela's work, Roller's work, etc.) that perform better than the numbers in Table 1. There are many other datasets that measure lexical entailment, none of which are evaluated on (apart from the original BLESS set, which is mentioned in passing). It would make sense to show that the method works on more than one dataset, and to do a thorough comparison against other work; especially given that:\n\n4. The tone of the work appears to imply that word2vec was wrong and needs to be reinterpreted: the work leads to \"unprecedented results\" (not true), claims to be a completely novel method for inducing word representations (together with LSA, BOW and Word2Vec, third paragraph; not true), and suggests it has found \"the best way to extract information about the semantics of a word from this model\" (7th paragraph; not true). This, together with the \"reinterpretation of word2vec\" and the proposed \"new distributional semantic models\", almost makes it hard for me to take the work seriously.", "The reviewer has misunderstood the evaluation. In fact, our tables of results are all on the dataset from Weeds et al. 2014, which is precisely designed to address the criticism the reviewer points out for the previous BLESS-based datasets. We only report results on an earlier BLESS dataset to allow direct comparison to previous work. Hence the reported result from Shwartz is the appropriate one.\n\nWe can try to make the model easier to understand, but the fundamental difficulty comes from the novelty of the model, not from the writing. If a reader is not familiar with Henderson and Popa 2016, then it is impossible to explain this vector-space model convincingly and still have space for the novel contributions of this submission. This vector-space representation is just too different from previous vector-space representations.\n", "We thank the reviewer for the suggestions, which we will try to take into account. We can understand the reviewers' desire to make everything a simple extension of something we already understand, but it is not possible in this case.\n\nUnlike for Levy and Goldberg's insightful analysis of word2vec, there is no equivalence between our model and a previously proposed model, other than the connections to Henderson and Popa 2016 stated in the submission. 
\n\nAll three reviewers found it hard to understand the model as it is presented in section 2. We can try to rewrite section 2 again, but the fundamental difficulty comes from the novelty of the model, not from the writing. If a reader is not familiar with Henderson and Popa 2016, then it is impossible to explain this vector-space model convincingly and still have space for the novel contributions of this submission. This vector-space representation is just too different from previous vector-space representations.\n", "Reviewer 1 criticises the narrowness of the literature review. This is a conference on representation learning. This is a paper on representation learning. Other than papers already cited, none of the work referred to is on representation learning, and none of it gives comparable empirical results. That is why we did not cite it in the submission.\n\nPoint 1: Reviewer 1 states that the model should be easy to explain. This is not true. The fundamental difficulty comes from the novelty of the model, not from the writing. If a reader is not familiar with Henderson and Popa 2016, then it is impossible to explain this vector-space model convincingly and still have space for the novel contributions of this submission. This vector-space representation is just too different from previous vector-space representations.\n\nPoint 2: There is a huge literature on word embeddings for similarity, but it is not relevant here, as explained above. This work is on embeddings which capture entailment, in contrast to similarity.\n\nPoint 3: Reviewer 1 claims \"there has been a lot of work on the Weeds BLESS dataset that is not mentioned\". This is not true. None of the authors mentioned have published results on the Weeds et al. 2014 dataset. The only recent unsupervised results I can find for this dataset are those reported in the submission from Shwartz, which are BELOW CHANCE. Perhaps others have tried but chosen not to publish such poor results. We stay with this evaluation setup precisely because it has been shown that similarity-based measures do not perform well. Unfortunately, most of the results from Shwartz et al. 2017 use a nonstandard evaluation measure which can easily be gamed, so we cannot make use of them for comparison. \n\nPoint 4: Reviewer 1 provides no evidence for any of their claims in this point. All these claims are false; we stay with the claims made in the submission. We certainly never claim that word2vec was wrong. Our point in this regard is that using similarity-based embeddings (like word2vec) to address entailment does not address the real issue, namely how to induce embeddings which intrinsically capture entailment. Reviewer 1 is very confident about their judgements for someone who says themselves that they did not understand the paper.\n" ]
[ 3, 7, 3, -1, -1, -1 ]
[ 5, 3, 5, -1, -1, -1 ]
[ "iclr_2018_S1Q79heRW", "iclr_2018_S1Q79heRW", "iclr_2018_S1Q79heRW", "r1kr4BQgG", "SJ4HCiUgz", "BkjD4Eqxz" ]
iclr_2018_ryOG3fWCW
Model Specialization for Inference Via End-to-End Distillation, Pruning, and Cascades
The availability of general-purpose reference and benchmark datasets such as ImageNet has spurred the development of general-purpose popular reference model architectures and pre-trained weights. However, in practice, neural networks are often employed to perform specific, more restrictive tasks that are narrower in scope and complexity. Thus, simply fine-tuning or transfer learning from a general-purpose network inherits a large computational cost that may not be necessary for a given task. In this work, we investigate the potential for model specialization, or reducing a model’s computational footprint by leveraging task-specific knowledge, such as a restricted inference distribution. We study three methods for model specialization—1) task-aware distillation, 2) task-aware pruning, and 3) specialized model cascades—and evaluate their performance on a range of classification tasks. Moreover, for the first time, we investigate how these techniques complement one another, enabling up to 5× speedups with no loss in accuracy and 9.8× speedups while remaining within 2.5% of a highly accurate ResNet on specialized image classification tasks. These results suggest that simple and easy-to-implement specialization procedures may benefit a large number of practical applications in which the representational power of general-purpose networks need not be inherited.
rejected-papers
This paper does not meet the bar for ICLR - neither in terms of the quality of the write-up, nor in terms of experimental design. The two confident reviewers agree to reject the paper; the weak accept comes from a less confident reviewer who did not write a good review at all. The rebuttal does not change this assessment.
train
[ "rJ6qTo7gz", "r1wK-mFlM", "HJtb_2teM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents an approach to do task aware distillation, task-specific pruning and specialized cascades. The main result is that such methods can yield smaller, efficient and sometimes more accurate models.\n\nThe proposed approach is simple and easy to understand. The task aware distillation relies on the availability of data that is target specific. In practice, I believe this is not an unreasonable requirement.\n\nThe speedups and accuracy gains of this paper are impressive. The fact that the proposed technique is simple yet yields such speedups is encouraging. However, evaluating on simple datasets like Kaggle cat/dog and Oxford Flowers diminishes the value of the paper. I would strongly encourage the authors to try harder datasets such as COCO, VOC etc. This will make the paper more valuable to the community.\n\nMissing citations\nDo Deep Nets Really Need to be Deep? - Ba & Caruana 2014", "This paper presents three different techniques for model specialization, i.e. adapting a pretrained network to a more specific task and reduce its computational cost while maintaining the performance. The three techniques are distillation, weight pruning and cascades. Evaluation compares how effective each technique is and how they interact with each other. In certain settings the obtained speed-up reaches 5x without loss of accuracy.\n\nPros:\n- The idea of reducing the computational cost of specialized models makes sense.\n- In some setting the speed-up can reach more than 5x, which is quite relevant.\n\nCons:\n- The fact that the models are specialized to simpler tasks is not explicitly used in the approach. The authors should test what would happen when using their cascade for classification on all classes of ImageNet for instance. Would it be the gain in speed much lower?\n- It is not clear if the distillation on smaller networks is really improving the models accuracy. The authors compared the distilled models with models trained from scratch. There should be an additional experiment with the small models trained on Imagenet first and then fine-tuned to the task. If in that case there is non gain, then, what is the advantage of distilling in these settings? ImageNet annotations need to be used anyway to train the teacher network.\n- In section 3.2 it seems that the filters of a CNN are globally ranked based on their average activation values. Those with the lowest average activation will be removed. However, in my understanding, the ranking can work better if performed layer specific and not globally.\n- In section 3.4, the title says \"end-to-end specialization pipeline\", but actually, the specialization is done in 3 steps, therefore in my understanding it is not end-to-end.\n- There are some spelling errors, for instance in the beginning of section 4.1\n- Pruning does not seem to produce much speed-up.\n- The experimental part is difficult to read. In particular Fig. 4 should be better explained. There are some symbols in the legend that do not appear in the graph, and others (baselines only) that appear multiple times, but it is not clear what they represent. Also, at the end of the explanation of Fig. 4 the authors mention a gain of 8%, which in my understanding is not really relevant compared with the total speed-up, which can be in the order of 500%\n\nOverall, the idea of model specialization seem interesting. 
However, in my understanding the main source of speed-up is a cascade approach with a reduced model, in which it is not clear how much of the speed-up is actually due to the specialized task.", "The authors review and evaluate several empirical methods to create faster versions of big neural nets for vision without sacrificing accuracy. They show, using the ResNet architecture, that distillation, pruning, and cascades are complementary and can yield pretty nice speedups.\n\nThis is a great idea and could be a strong paper, but it's really hard to glean useful recommendations from this for several reasons:\n\n- The writing of the paper makes it hard to understand exactly what's being compared and evaluated. For a paper like this it's really crucial to be precise. When the authors say \"specialization\" or \"specialized model\", they sometimes mean distillation, sometimes filter pruning, and sometimes cascades. The distinction of \"task-aware\" also seems arbitrary to me and obfuscates the contribution of the paper as well. As far as I can tell, the technique is exactly the same; all that's changing is a slight modification. It's not like any of the intuitions or objectives are changing, so adding this new terminology just complicates things. For example, just say \"We distill a parent model to a child model with a subset of the labels.\" \n\n- In terms of substance, the experiments don't really add much value in terms of general lessons. For example, the Cat/Dog from ImageNet distillation only works if the target labels are exactly a subset of the original. Obviously if the parent model was overcomplete before, it is certainly overcomplete now. The proposed cascade method is also fairly trivial -- a cheap distilled model backs off to the reference model. Why not train the whole cascade end-to-end? What about multiple levels of cascades? The only useful conclusions I can draw from the experiments are that (1) distillation still works, (2) cascades also still work, and (3) pruning doesn't seem that useful in comparison. Training a cascade also involves a bunch of non-trivial design choices which are largely ignored -- how to set pass-through, how to train the model, etc.\n\n- Nit: where are the blue squares in Figure 4? (Distill only) Shouldn't those be the fastest methods (aside from pruning)? \n\nAn ideal story for a paper like this would be: here are some complementary ideas that we can combine in non-obvious ways for superlinear benefits, e.g. it turns out that by distilling into a cascade in some end-to-end fashion, you can get much better accuracy vs. speed trade-offs. Instead this paper is a grab-bag of tricks. Such a paper can also provide value, but to do that right, the tricks need to be obvious *in retrospect only* and/or the experiments need to show a lot of precise practical lessons. All in all this paper reads like a tech report but not a conference publication." ]
[ 6, 4, 3 ]
[ 3, 4, 4 ]
[ "iclr_2018_ryOG3fWCW", "iclr_2018_ryOG3fWCW", "iclr_2018_ryOG3fWCW" ]
iclr_2018_rJ8rHkWRb
A Simple Fully Connected Network for Composing Word Embeddings from Characters
This work introduces a simple network for producing character-aware word embeddings. Position-agnostic and position-aware character embeddings are combined to produce an embedding vector for each word. The learned word representations are shown to be very sparse and facilitate improved results on language modeling tasks, despite using markedly fewer parameters, and without the need to apply dropout. A final experiment suggests that weight sharing contributes to sparsity, increases performance, and prevents overfitting.
rejected-papers
The paper presents yet another approach for modeling words based on their characters. Unfortunately the authors do not compare properly to previous approaches and the idea is very incremental.
val
[ "By7uW7Pef", "HkDiq0Flf", "Byp-dy9gG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a neural network architecture which takes the characters of a word as input along with their positions, and output a word embedding. They then use these as inputs to a GRU language model, which is evaluated on two medium size data sets made from a series of novels and the Project Gutenberg Canada books respectively.\n\nWhile the idea has merit, the experimental protocol is too flawed to draw any reliable conclusions. Why use Wheel of Time, which is not in the public domain, rather than e.g. text8? Why not train the model to convergence (Figure 3)? Do the learned embeddings exhibit any morphological significance, or does the model only serve a regularization purpose?\n\nAs for the model itself: are the position agnostic character embeddings actually helpful in the spelling model? Does the model have the expressivity to learn the same embeddings as a look-up table?\n\nThe authors are also missing a significant amount of relevant literature on the topic of building word embeddings from characters, for example:\nFinding Function in Form: Compositional Character Models for Open Vocabulary Word Representation, Ling et al., 2015\nEnriching Word Vectors with Subword Information, Bojanowski et al. 2017\nCompositional Morphology for Word Representations and Language Modelling, Botha and Blunsom 2014\n\nPros:\n- Valid idea\n\nCons:\n- Too many missing references\n- Some modeling choices lack justification\n- Experiments do not provide meaningful comparisons and are not reproducible\n", "The paper uses both position agnostic and position aware embeddings for tokens in a language modeling task. To obtain token embeddings, they concatenate two embeddings: the sum of character embeddings and the sum of (character, position) embeddings, the former being position agnostic and the latter being position aware. In a language modeling task, they find that using a combination of both improves perplexity over the standard token embedding baseline with fewer parameters. \n\nThe paper shows that the character embeddings are more sparse, measured with the Gini coefficient, than token embeddings and are more robust to overfitting. They also find that while dropout increases overall sparsity, it makes a few tokens homogenous. The paper does not give a crisp answer to why such sparsity patterns are observed. \n\nThe paper falls a bit short both empirically and technically. While their technique is interesting, they do not compare it to the baseline of using convolutions over characters. More empirical evidence is needed for the technique to be adopted by the community. On the theory side, they should dig deeper into the reasons for sparsity and how it might help to train better models. \n\nIf the papers shows that the approach can work well in machine translation or language modeling of morphologically rich languages, it might encourage practitioners to use the technique. ", "This paper presents a new model for composing representations of characters into word embeddings. The starting point of their argument is to include position-specific embeddings of characters rather than just position-independent characters. By adding together position-specific vectors, reasonable results are obtained.\n\nThis is an interesting result, but I have a few recommendations to improve the paper.\n1) It is a bit hard to assess since it is not evaluated on a standard datasets. There are a number standard datasets for open vocabulary language modeling. 
E.g., the MWC corpus (http://k-kawakami.com/research/mwc), or even the Penn Treebank (although it is conventionally modeled in closed vocabulary form).\n2) There are many existing models for composing characters into words. In addition to those cited in the paper, see the citations listed below. Comparison with those is crucial in a paper like this.\n3) Since the predictions are done at the word type level, it is unclear how the vocabulary set of the corpus is determined, and what is done with OOV word types at test time (while it is possible to condition on them using the technique in the paper, it is not possible to use this technique for generation).\n4) The analysis is interesting, but a more intuitive explanation would be to show nearest neighbor plots.\n\nSome missing citations:\n\nComposing characters into words:\n\ndos Santos and Zadrozny. (2014 ICML) http://proceedings.mlr.press/v32/santos14.pdf\nLing et al. (2015 EMNLP) Finding Function in Form. https://arxiv.org/abs/1508.02096\n\nAdditionally, explicit positional features have been used in modeling language:\nVaswani et al. (2017) Attention is all you need https://arxiv.org/abs/1706.03762\nand a variety of other sources." ]
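The abstract and the second review describe the composition precisely: the sum of position-agnostic character embeddings is concatenated with the sum of position-aware (character, position) embeddings. A minimal sketch of that composition follows; the dimensions, the maximum word length, and the absence of padding handling are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class CharComposedEmbedding(nn.Module):
    """Word embedding composed from characters: the sum of position-agnostic
    character embeddings concatenated with the sum of position-aware
    (character, position) embeddings."""

    def __init__(self, n_chars=128, dim=64, max_len=20):
        super().__init__()
        self.n_chars = n_chars
        self.char_emb = nn.Embedding(n_chars, dim)                # position agnostic
        self.char_pos_emb = nn.Embedding(n_chars * max_len, dim)  # position aware

    def forward(self, char_ids):
        # char_ids: (batch, word_len) character codes, one word per row;
        # assumes word_len <= max_len.
        pos = torch.arange(char_ids.size(1), device=char_ids.device)
        bag = self.char_emb(char_ids).sum(dim=1)
        # A distinct embedding row for every (character, position) pair.
        positional = self.char_pos_emb(char_ids + pos * self.n_chars).sum(dim=1)
        return torch.cat([bag, positional], dim=1)                # (batch, 2 * dim)
```

Review 3's point about OOV word types is visible here: any word whose characters fit within max_len can be embedded, but the model must still predict over a fixed word vocabulary at the output, so the technique helps conditioning rather than generation.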
[ 3, 5, 4 ]
[ 4, 4, 5 ]
[ "iclr_2018_rJ8rHkWRb", "iclr_2018_rJ8rHkWRb", "iclr_2018_rJ8rHkWRb" ]
iclr_2018_rkYgAJWCZ
One-shot and few-shot learning of word embeddings
Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily. By contrast, humans have an incredible ability to do one-shot or few-shot learning. For instance, from just hearing a word used in a sentence, humans can infer a great deal about it, by leveraging what the syntax and semantics of the surrounding words tells us. Here, we draw inspiration from this to highlight a simple technique by which deep recurrent networks can similarly exploit their prior knowledge to learn a useful representation for a new word from little data. This could make natural language processing systems much more flexible, by allowing them to learn continually from the new words they encounter.
rejected-papers
The paper is looking at an interesting problem, but it seems too early. The approach requires training a new language model from scratch for each new word, rendering it completely impractical for real use. The main evaluation therefore only considers four words - "bonuses", "explained", "marketers", "strategist" (expanded to 20 during the rebuttal). This is not sufficient for ICLR.
train
[ "Sk9dBhUlG", "HJnrNAtlG", "Sybz8F9eG", "B1rxDNtmG", "r1i_IEtmG", "S1fm8Vt7z", "ByDSg2Ggz", "rJEcBKfxG", "rJcKVpa1f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author", "public" ]
[ "I am highly sympathetic to the goals of this paper, and the authors do a good job of contrasting human learning with current deep learning systems, arguing that the lack of a mechanism for few-shot learning in such systems is a barrier to applying them in realistic scenarios. However, the main evaluation only considers four words - \"bonuses\", \"explained\", \"marketers\", \"strategist\" - with no explanation of how these words were chosen. Can I really draw any meaningful conclusions from such an experimental setup? Even the authors acknowledge, in footnote 1, that, for one of the tests, getting lower perplexity in three out of the four casess \"may just be chance variation, of course\". I wonder why we can't arrive at a similar conclusion for the other results in the paper. At the very least I need convincing that this is a reasonable experimental paradigm.\n\nI don't understand the first method for initializing the word embeddings. How can we use the \"current\" embedding for a word if it's never been seen before? What does \"current\" mean in this context?\n\nI also didn't understand the Latin square setup. Training on ten different permutations of the ten sentences suggests that all ten sentences are being used, so I don't see how this can lead to a few-shot or one-shot scenario.\n\n", "The paper proposes a technique for exploiting prior knowledge to learn embedding representations for new words with minimal data. The authors provide a good motivation for the task and it is also a nice step in the general direction of learning deep nets and other systems with minimal supervision. \n\nThe problem is useful and very relevant to natural language applications, especially considering the widespread use of word embeddings within NLP systems. However, the demonstrated experimental results do not match the claims which seems a little grand. Overall, the empirical results is unsatisfactory. The authors pick a few example words and provide a detailed analysis. This is useful to understand how the test perplexity varies with #training examples for these individual settings. However, it is hardly enough to draw conclusion about the general applicability of the technique or effectiveness of the results. Why were these specific words chosen? If the reason is due to some statistical property (e.g., frequency) observed in the corpus, then why not generalize this idea and demonstrate empirical results for a class of words exhibiting the property. Such an analysis would be useful to understand the effectiveness of the overall approach. Another idea would be to use the one/few-shot learning to learn embeddings and evaluate their quality on a semantic task (as suggested in Section 3.3), but on a larger scale.\n\nThe technical contributions are also not novel. Coupled with the narrow experimentation protocol, it does not make the paper’s contributions or proposed claims convincing.\n", "Paper Summary\n\nFrom just seeing a word used in a sentence, humans can infer a lot about this word by leveraging the surrounding words. Based on this idea, this work tries to obtain a better understanding of words in the one-shot or few-shot setting by leveraging surrounding word. They do this by language modeling sentences which contain rarely seen or never seen words. They evaluated their model using percent change in perplexity on test sentences containing new word by varying the number of training sentences containing this word. 
3 proposed methods to model few-shot words: (1) beginning with a random embedding, (2) beginning with a zero embedding, (3) beginning with the centroid of other words in the sentence. They compare to 2 baseline methods: (1) the centroid of other words in the sentence, and (2) full training including the sparse words. Their results show that learning from centroids of other words can outperform full training on the new words. \n\nExplanation\nThe paper is well written, and the experiments are well explained. It is an interesting paper, and a research topic which is not well studied. The experiments are reasonable. The method seems to work well. \n\nHowever, the method provides a very marginal difference from the previous method in Lazaridou et al. (2017). They just use backprop to learn from this starting position. The main contribution of this work is the evaluation section. \n\nWhy only use the PTB language modeling task? Why not use the task in Gauthier & Mordatch or Hermann et al.? The one task of language modeling shows promising results, but it's not totally convincing. \n\nOne of the biggest caveats is that the experiments are only done on a few words. I'm not sure why more couldn't have been done. This is discussed in section 4.1, but I think some of these differences could have been alleviated if there were more experiments done. Regardless, the experiments on the 8 words that they did choose were well done. \n\nI don't think that section 3.3 (embedding similarity) is particularly useful. \n", "All reviewers highlighted the small number of words we used. Briefly, we made this choice initially a) to allow us to explore the variation among different training sentences and different numbers of training sentences for the same word in more detail, and b) because our original experiments required training a new language model from scratch for each new word, which meant running many new words would require thousands of hours of compute time. However, it is useful to evaluate our approach across many words, and so we have added an experiment to the paper which does so while still maintaining computational feasibility.\n\nIn this experiment, we selected 100 of the ~150 words that appear exactly 20 times in the PTB train corpus (omitting the words we used in prior experiments). Instead of training separate models without each word as we had previously, we trained a single model with NONE of these words. We then tested our one-shot learning technique and the centroid technique on these sentences, and compared to results obtained from \"full training with all words\" -- a model trained with the train sentences for each of the hundred words and the rest of the PTB training corpus. Notice that this comparison is not as precise as the earlier ones -- the \"full training with all words\" model receives about 2.5% more training data overall than any of the one-shot learning models, which means it will both perform better even on other words and will thus have more relevant linguistic structure to learn the new words from. Nevertheless, the comparisons between our technique and the centroid technique are still valid, and the comparison to the full training with all words gives a *worst-case* bound on how poorly one-shot methods will do compared to full training. In these experiments, we saw that our method performed both relatively consistently across different words, and performed consistently better than the centroid method. 
On average, it performed about as well as full training with the word (see the revised paper for full results).\n\nIn order to partially compensate for the added material, we moved the embedding similarity analyses to the supplementary material, per reviewer 3's comment that they were unnecessary.\n\nClarification of some minor details:\n\nInitializing from current embedding: You can imagine that a <new-word> token would be included in the softmax from the beginning, which could then be used as an initialization for any new words encountered. This would likely help the new word embeddings to be well separated from old embeddings (though it ultimately proves to be detrimental). Thanks for pointing out that this was not explained clearly; we have clarified this in the article as well.\n\nLatin square: We actually performed 100 training runs for each word, 10 runs corresponding to taking the first sentence from each of the 10 permutations, 10 runs corresponding to taking the first two sentences, etc. We've added a sentence to the paper that we hope will clarify this.\n", "All reviewers highlighted the small number of words we used. Briefly, we made this choice initially a) to allow us to explore the variation among different training sentences and different numbers of training sentences for the same word in more detail, and b) because our original experiments required training a new language model from scratch for each new word, which meant running many new words would require thousands of hours of compute time. However, it is useful to evaluate our approach across many words, and so we have added an experiment to the paper which does so while still maintaining computational feasibility.\n\nIn this experiment, we selected 100 of the ~150 words that appear exactly 20 times in the PTB train corpus (omitting the words we used in prior experiments). Instead of training separate models without each word as we had previously, we trained a single model with NONE of these words. We then tested our one-shot learning technique and the centroid technique on these sentences, and compared to results obtained from \"full training with all words\" -- a model trained with the train sentences for each of the hundred words and the rest of the PTB training corpus. Notice that this comparison is not as precise as the earlier ones -- the \"full training with all words\" model receives about 2.5% more training data overall than any of the one-shot learning models, which means it will both perform better even on other words and will thus have more relevant linguistic structure to learn the new words from. Nevertheless, the comparisons between our technique and the centroid technique are still valid, and the comparison to the full training with all words gives a *worst-case* bound on how poorly one-shot methods will do compared to full training. In these experiments, we saw that our method performed both relatively consistently across different words, and performed consistently better than the centroid method. On average, it performed about as well as full training with the word (see the revised paper for full results).\n\nIn order to partially compensate for the added material, we moved the embedding similarity analyses to the supplementary material, per reviewer 3's comment that they were unnecessary.\n\nWe agree that it would be exciting to see these methods applied to richer semantic tasks like the grounded tasks that we mentioned in the article, as several reviewers commented. 
However, it seems to us that our results are a useful starting place to demonstrate the method, and we are already straining the limits of the length of this paper.", "All reviewers highlighted the small number of words we used. Briefly, we made this choice initially a) to allow us to explore the variation among different training sentences and different numbers of training sentences for the same word in more detail, and b) because our original experiments required training a new language model from scratch for each new word, which meant running many new words would require thousands of hours of compute time. However, it is useful to evaluate our approach across many words, and so we have added an experiment to the paper which does so while still maintaining computational feasibility.\n\nIn this experiment, we selected 100 of the ~150 words that appear exactly 20 times in the PTB train corpus (omitting the words we used in prior experiments). Instead of training separate models without each word as we had previously, we trained a single model with NONE of these words. We then tested our one-shot learning technique and the centroid technique on these sentences, and compared to results obtained from \"full training with all words\" -- a model trained with the train sentences for each of the hundred words and the rest of the PTB training corpus. Notice that this comparison is not as precise as the earlier ones -- the \"full training with all words\" model receives about 2.5% more training data overall than any of the one-shot learning models, which means it will both perform better even on other words and will thus have more relevant linguistic structure to learn the new words from. Nevertheless, the comparisons between our technique and the centroid technique are still valid, and the comparison to the full training with all words gives a *worst-case* bound on how poorly one-shot methods will do compared to full training. In these experiments, we saw that our method performed both relatively consistently across different words, and performed consistently better than the centroid method. On average, it performed about as well as full training with the word (see the revised paper for full results), even though this is a worst-case bound. We think these results are quite encouraging, and hope they will address some of the concerns raised here.\n\nIn order to partially compensate for the added material, we moved the embedding similarity analyses to the supplementary material, per your comment.\n\nWe agree that it would be exciting to see these methods applied to richer tasks like the grounded tasks that we mentioned in the article, as several reviewers commented. However, it seems to us that our results are a useful starting place to demonstrate the method, and we are already straining the limits of the length of this paper.", "Thanks for the clarification. I fully agree that, while the papers ask similar questions and propose similar approaches, yours is going further in various empirical ways.", "Thank you for the helpful reference! We had not encountered this work previously, and we agree that the two works share some similar features; we will reference it in our final version of the paper. However, we think there are several features that distinguish our work:\n\n* First, their work only showed benefits of learning from definitional sentences, whereas ours demonstrates the ability to benefit from sentences which are not so clearly informative about the target word. 
This is important, because in practice when a sentence containing a new word is encountered, it is unlikely to conveniently be a definition of that word.\n\n* Furthermore, the only metric on which their approach shows improvement is the similarity of the produced embeddings to the \"true\" embedding. This may or may not be meaningful, since our representational similarity analyses suggest that there are dissimilar word embeddings that nevertheless produce similar performance in a complex task.\n\n* We demonstrate behaviorally relevant improvements (that is, our model's ability to do its task in the context of the new word improves). We view this as an important part of exploring whether the representation learned is actually of real use in a language processing task.\n\n* Finally, we conducted more detailed analyses of the behavior and errors produced by our approach, such as the impact on prediction of other words and how this is affected by replay. We think these analyses provide important insights and caveats about our approach that will make it easier to refine and generalize.", "Nice paper, but the motivation and methodology are very similar to the ones presented in:\n\nHerbelot, A. and Baroni, M. 2017. High-risk learning: acquiring new word vectors from tiny data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), Copenhagen, Denmark.\n\nhttp://aclweb.org/anthology/D/D17/D17-1030.pdf\n\nPerhaps you could discuss how your proposal is different?\n" ]
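The procedure debated in the reviews and responses above — centroid initialization followed by backpropagation on only the new word's vector, with the rest of the language model frozen — can be summarized in a few lines. The following is a hypothetical sketch: the `lm(inputs, table)` interface, the optimizer, and the hyperparameters are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def learn_new_word(lm, emb_weight, sentences, new_id, steps=50, lr=0.1):
    # Centroid initialization: average the (frozen) embeddings of the other
    # words in the first training sentence -- one of the baselines above.
    context = [w for w in sentences[0] if w != new_id]
    vec = emb_weight[context].mean(dim=0).detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([vec], lr=lr)

    for _ in range(steps):
        for sent in sentences:  # each sentence is a list of word ids
            inputs = torch.tensor(sent[:-1]).unsqueeze(0)
            targets = torch.tensor(sent[1:]).unsqueeze(0)
            # Splice the single trainable row into an otherwise frozen table,
            # so gradients reach only the new word's vector.
            table = emb_weight.detach().clone()
            table[new_id] = vec
            logits = lm(inputs, table)  # assumed: (1, len, vocab) next-word logits
            loss = F.cross_entropy(logits.view(-1, logits.size(-1)),
                                   targets.view(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return vec.detach()
```

This also makes the meta-review's practicality concern concrete: in the paper's original setup a fresh language model had to be trained from scratch per word, whereas the sketch above only makes sense once a single pretrained, frozen model is shared across all new words.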
[ 4, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkYgAJWCZ", "iclr_2018_rkYgAJWCZ", "iclr_2018_rkYgAJWCZ", "Sk9dBhUlG", "HJnrNAtlG", "Sybz8F9eG", "rJEcBKfxG", "rJcKVpa1f", "iclr_2018_rkYgAJWCZ" ]
iclr_2018_HJw8fAgA-
Learning Dynamic State Abstractions for Model-Based Reinforcement Learning
A key challenge in model-based reinforcement learning (RL) is to synthesize computationally efficient and accurate environment models. We show that carefully designed models that learn predictive and compact state representations, also called state-space models, substantially reduce the computational costs for predicting outcomes of sequences of actions. Extensive experiments establish that state-space models accurately capture the dynamics of Atari games from the Arcade Learning Environment (ALE) from raw pixels. Furthermore, RL agents that use Monte-Carlo rollouts of these models as features for decision making outperform strong model-free baselines on the game MS_PACMAN, demonstrating the benefits of planning using learned dynamic state abstractions.
rejected-papers
There was quite a bit of discussion about this paper but in the end the majority felt that, though the paper is interesting, the results are too limited and more needs to be done for publication. PROS: 1. Good comparison of state space model variations 2. Good writing (perhaps a bit dense in places) 3. Promising results, especially concerning speedup CONS: 1. The evaluation is quite limited
train
[ "HyV_TbIBf", "SyfTyqXSM", "H1KbSmYeM", "BkJknsz-f", "BkCqRjmZz", "B106AfgfG", "HyFH0GgGf", "HkDEpzlGz", "r1FwqfeGG" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Would the author of the comment elaborate on their objection? \n\nThe title is justified in our opinion; we use the term \"dynamic state abstraction\" to emphasize the following:\n- we learn state presentations that are more compact than the the raw observations at a single time step, hence they constitute \"abstractions\".\n- these representations are learned by predicting future observations , hence they capture the dynamics of the environment\n- we experimentally show that these learned state representations (together with the learned transition function) contain sufficient information to accurately predict the future over tens to hundreds of raw frames in non-trivial environments.\n", "Having gone through both this paper and I2A paper which this builds on, I find that the claim that any sort of \"dynamic\" state abstractions is learned to be unjustified. \n\nI find this behaviour of using overly generic titles that don't hold any water to flag plant alarming.", "Summary:\n\nThis paper studies how to learn (hidden)-state-space models of environment dynamics, and integrate them with Imagination-Augmented Agents (I2A). The paper considers single-agent problems and tests on Ms Pacman etc.\n\nThere are several variations of the hidden-state space [ds]SSM model: using det/stochastic latent variables + using det/stochastic decoders. In the stochastic case, learning is done using variational methods. \n\n[ds]SSM is integrated with I2A, which generates rollouts of future states, based on the inferred hidden states from the d/sSSM-VAE model. The rollouts are then fed into the agent's policy / value function.\n\nMain results seem to be:\n1. Experiments on learning the forward model, show that latent forward models work better and faster than naive AR models on several Atari games, and better than fully model-free baselines. \n2. I2A agents with latent codes work better than model-free models or I2A from pixels. Deterministic latent models seem to work better than stochastic ones.\n\nPro:\n- Relatively straightforward idea: learn the forward model on hidden states, rather than raw states.\n- Writing is clear, although a bit dense in places.\n\nCon:\n- Paper only shows training curves for MS Pacman. What about the other games from Table 1?\n- The paper lacks any visualization of the latent codes. What do they represent? Can we e.g. learn a raw-state predictor from the latent codes?\n- Are the latent codes relevant in the stochastic model? See e.g. the discussion in \"Variational Lossy Autoencoder\" (Chen et al. 2016)\n- Experiments are not complete (e.g. for AR, as noted in the paper).\n- The games used are fairly reactive (i.e. do not require significant long-term planning), and so the sequential hidden-state-space model does not have to capture long-term dependencies. It would be nice to see how this technique fares on Montezuma's revenge, for instance.\n\nOverall:\nThe paper proposes a simple idea that seems to work well on reactive 1-agent games. However, the paper could give more insights into *how* this works: e.g. a better qualitative inspection of the learned latent model, and how existing questions surrounding sequential stochastic model affect the proposed method. Also, not all baseline experiments are done, and the impact on training is only evaluated on 1 game. \n\nDetailed:\n-\n", "The paper proposes a method for inferring dynamical models from partial observations, that can later be used in model-based RL algorithms such as I2A. 
The essence of the method is to perform variational inference of the latent state, representing its distribution as Gaussian, and to use it in an ELBO to infer the dynamics of state and observation.\n\nWhile this is an interesting approach, many of the architectural choices involved seem arbitrary and unjustified. This wouldn't be so bad if they were justified by empirical success rather than principled design, but I'm also a bit skeptical of the strength of the results.\n\nA few examples of such architectural choices:\n1. What's the significance of separating the stochastic state transition into a stochastic choice of z and a transition g deterministic in z?\n2. How is the representational power affected by having only the observations depend on z? What's the intuition behind calling this model a VAE, when sSSM is also trained with variational inference?\n3. What is gained by using pool-and-inject layers? By the way, is this a novel component? If so please elaborate; if not please cite.\n\nAs for the strength of the results, in Table 1 the proposed methods don't seem to outperform \"RAR\" (i.e., RNN) in expected value. They do seem to have lower variance, and the authors would do well to underline the importance of this.\nIn Figure 3, it's curious that the model-free baseline remains unnamed, as it also does in the text and appendix. This makes it hard to evaluate whether the significant wins are indicative of the strength of the proposed method.\n\nFinally, a notational point that the authors should really get right is that conditioning on future actions naively changes the distribution of the current state or observation in ways they didn't intend. The authors intended for actions to be used as \"interventions\", i.e. a-causally, and should denote this conditioning by some sort of \"do\" operator.", "The authors provide a deeper exploration of Imagination Agents, by looking more closely at a variety of state-space models. They examine what happens as the representation of state, the update algorithm, and the concept of time are changed. As in the original I2A paper, they experiment with learning how best to take advantage of a learned dynamics model.\n\nTo me, this strain of work is very important. Because no model is perfect, I think the idea of learning how best to use an approximate model will become increasingly important. The original I2A idea of learning-to-interpret models is here extended with the idea of learning-to-query, and a variety of solid variants on a few basic themes are well worked out.\n\nOverall, I think the strongest part of this paper is in its conceptual contributions - I find the work thought-provoking and inspiring. On the negative side, I felt that the experiments were thin, and that the work was not well framed in terms of the literature of state-space identification and planning (there are a zillion ways to plan using a model; couldn't we have compared to at least one of them? Or discussed a few popular ones, and why they aren't likely to work? Since your model is fully differentiable, vanilla MPC would be a natural choice [in other words, instead of learning a rollout policy, do something simple like run MPC on the approximate model, and pass the resulting optimized action trajectory back to the agent as an input feature]).\n\nOf course, while we can always demand more and more experiments, I felt that this paper did a good enough job to merit publication.\n\nMinor quibble: I wasn't sure what to make of the sSSM ideas. 
My understanding is that in any dynamical system model, the belief state update is always a deterministic function of the previous belief state and observation; this suggests to me that the idea of \"state\" here differs from my definition of \"state\". I don't think you should have to sample anything if you've represented your state cleanly.", "We thank the reviewer for the comments. We would like to point out that another important contribution of the paper is learning the rollout policy by backpropagation of policy gradients via continuous relaxation.\n\n\n\"- Paper only shows training curves for MS Pacman. What about the other games from Table 1?\"\n\nIn spite of the 13x speed-up we achieved for the models, the main limitation of the imagination-augmented agent is that it is still expensive compared to the baseline. Therefore, we did not manage to do extensive experiments on the other games, and focused on MS_PACMAN, which is the most challenging one of the 4 games (based on DQN and A3C results). Once we finish developing our simplified experimentation pipeline with online learning of the models, we will perform experiments on a wide range of domains.\n\n\"- Experiments are not complete (e.g. for AR, as noted in the paper).\"\n\nPoint taken. We are currently re-running the experiments to complete the table.\n\n\n\"- The paper lacks any visualization of the latent codes. What do they represent? Can we e.g. learn a raw-state predictor from the latent codes?\"\n\nAlthough very interesting, due to space constraints (and considering that the paper is already quite dense), we did not include an analysis of the latent codes. \n\n\n\"- Are the latent codes relevant in the stochastic model? See e.g. the discussion in 'Variational Lossy Autoencoder' (Chen et al. 2016).\"\n\nWe found the latent variables to be relevant in the sense that they boost performance. As shown in Table 1, a fully deterministic model with the same number of parameters and operations but without latent variables (dSSM-DET) performs worse than the full state-space model (sSSM). Preliminary experiments also suggested that there is a sweet spot for the ratio of deterministic hidden units vs latent variables (at fixed total size) for the state-space model. \n\n\n\"- The games used are fairly reactive (i.e. do not require significant long-term planning), and so the sequential hidden-state-space model does not have to capture long-term dependencies. It would be nice to see how this technique fares on Montezuma's Revenge, for instance.\"\n\nWe agree that more experiments are necessary for fully assessing the benefits and limitations of the I2A, and we are currently working on this. However, our contribution is somewhat orthogonal to hard exploration problems like MONTEZUMAS_REVENGE, and we actually do not expect massive benefits in this particular domain (unless one uses model uncertainty for exploration; see e.g. \"Unifying count-based exploration and intrinsic motivation\", Bellemare NIPS 2016). Also, although MS_PACMAN is almost fully observed and exploration is not hard, it is a difficult domain in which standard model-free agents have underperformed. 
It is not fully reactive, in the sense that planning on the order of tens of steps into the future seems beneficial.", "\"What's the intuition behind calling this model a VAE, when sSSM is also trained with variational inference?\"\n\nThis name was chosen as the observation model of the dSSM-VAE is really a (conditional) convolutional variational auto-encoder. The sSSM is indeed also trained by ELBO maximization, so in this sense it is also a VAE. However, it has additional, very important temporal structure, which we explicitly leverage for efficient inference and parameter optimization. We chose to call it a state-space model to emphasize this structure.\n\n\n\"3. What is gained by using pool-and-inject layers? By the way, is this a novel component? If so please elaborate; if not please cite.\"\n\nThe reviewer is correct: this layer was already used in [Weber et al., NIPS 2017]; we will make this explicit. As stated in the manuscript, this layer makes it possible to capture spatial long-range dependencies in $s_t$. The state transitions are modeled with convolutions. The size of the convolutional filters limits how \"far\" information can propagate spatially in a single state transition. Modelling global aspects of the environment state, e.g. the score / reward in MS_PACMAN, would require very large (and therefore costly) convolutional filters. The global pooling in the pool-and-inject layer fixes this. An alternative would have been e.g. to use large dilated convolutions.\n\n\n\"As for the strength of the results, in Table 1 the proposed methods don't seem to outperform 'RAR' (i.e., RNN) in expected value.\"\n\nSee comment above.\n\n\n\"In Figure 3, it's curious that the model-free baseline remains unnamed, as it also does in the text and appendix. This makes it hard to evaluate whether the significant wins are indicative of the strength of the proposed method.\"\n\nAlthough it is mentioned in the manuscript, we agree that this could be somewhat confusing and it will be clarified in the next revision. The baseline is a re-implementation of the A3C agent from \"Asynchronous Methods for Deep Reinforcement Learning\" (Mnih et al., ICML 2016). This agent achieved state-of-the-art results on ALE, and is an extremely widely applied, general and strong baseline.\n\n\n\"Finally, a notational point that the authors should really get right is that conditioning on future actions naively changes the distribution of the current state or observation in ways they didn't intend. The authors intended for actions to be used as 'interventions', i.e. a-causally, and should denote this conditioning by some sort of 'do' operator.\"\n\nThe reviewer is incorrect about this point. As can be seen from Figure 1, the actions that we condition on do not have any parents in the graphical models (because we deliberately did not include the policy for modelling the actions). Therefore, inference in the mutilated graph stemming from an intervention on the actions is exactly equivalent to the conditional distribution stated on page 3.\nOf course, instead of giving the conditional distribution on page 3, we could have used the do-notation. We decided against this, as it is rarely used (and could therefore be confusing) in the generative modelling and reinforcement learning literature, if it is not absolutely necessary.", "\"This wouldn't be so bad if they were justified by empirical success rather than principled design, but I'm also a bit skeptical of the strength of the results.\"\n\nWe remain convinced that our results are strong. 
We would like to invite the reviewer to provide references that lead her / him to be sceptical of the strength of the results.\n\nRegarding our reinforcement learning results: To the best of our knowledge, we are the first to present model-based RL agents on ALE domains, where the model is learned from raw pixels without any privileged information. Our agents outperform a strong, standard A3C baseline (also see below). \n\nRegarding the unsupervised modelling results: As shown in Table 1, our proposed model achieves the best likelihoods on all games we tried compared to the baselines, while showing an approximately 3x speed-up over an RNN. In practice this means that an imagination-augmented agent can be trained in 4 days instead of, say, 2 weeks. Furthermore, using the jumpy sSSM we can train an I2A in **1 day** at comparable accuracy. Furthermore, to the best of our knowledge, we are the first to present results of stochastic sequence models on the Atari domain. Prior work on Atari was based on deterministic models (with their inherent limitations); prior work on stochastic models was limited to much lower-dimensional sequences.\n\n\n\"While this is an interesting approach, many of the architectural choices involved seem arbitrary and unjustified.\"\n\nAll architectural choices are informed by state-of-the-art generative sequence models cited in the paper. We actually tried to go for the simplest architectures wherever possible, most of which are standard, even in the wider context of deep learning, as detailed below.\n\n\n\"1. What's the significance of separating the stochastic state transition into a stochastic choice of z and a transition g deterministic in z?\"\n\nInstead of being idiosyncratic, this architectural choice is canonical: it directly follows from the standard decomposition of the joint probability $p(z_1, \\ldots, z_T) = \\prod_t p(z_t \\vert z_{<t})$. The sufficient statistics of the distribution over $z_t$ are a (deterministic) function of $z_{<t} = z_1, \\ldots, z_{t-1}$; let's denote them by $h_{t-1}$. Assuming a large enough number of hidden units, an RNN is a general function approximator, and so we can use it to approximate $h_{t-1}$. This is exactly the architecture we use. For a similar line of argumentation, see e.g. Chen et al., \"Variational Lossy Autoencoder\" (ICLR 2017, page 3 top) and references therein. \nFurthermore, this architecture is standard in the literature at least since [Chung et al., NIPS 2015] (e.g. see their Figure 1). Our own preliminary experiments also show empirically that models with this architecture outperform models with purely stochastic hidden units.\n\n\n\"2. How is the representational power affected by having only the observations depend on z?\"\n\nWe actually briefly discuss this difference in A.1.6 (page 13), which we explicitly reference in the main article.", "We thank the reviewer for the comments.\n\n\"... I felt that the experiments were thin, and that the work was not well framed in terms of the literature of state-space identification and planning\"\n\nWe agree that a baseline which uses the learned models together with a classical planning algorithm would be very informative, and we are currently working on implementing MPC on a continuous relaxation of the discrete action model. 
However, we expect this baseline to be very slow, as the cost of backpropagating through the model is high; we expect that having an optimization as an inner loop of the agent will make this at least 10x slower at test time compared to the imagination-augmented agent.\n\n\"Minor quibble: I wasn't sure what to make of the sSSM ideas. My understanding is that in any dynamical system model, the belief state update is always a deterministic function of the previous belief state and observation; this suggests to me that the idea of 'state' here differs from my definition of 'state'. I don't think you should have to sample anything if you've represented your state cleanly.\"\n\nThis is a very interesting and subtle point. It is true that the *ideal* belief-state update is a deterministic function of the previous belief-state and the current observation. Given a simple model, e.g. linear-Gaussian dynamical systems or HMMs, optimal inference can be and actually is done in a deterministic way (i.e. the Kalman filter, the Viterbi algorithm). However, the expressive power of these models is too limited for the ALE domains. In more powerful, non-linear models such as we consider here, we have to approximate inference, and this can be done in multiple ways: trying to approximate the entire belief-state, or using a single sample, as we have done (other approaches also look at multi-sample particle filtering). It's very much an open question what the best approximation strategy is given the same *finite resources* (e.g. a neural network with n layers). We build on the sample-based literature, which has achieved good results in sequence modelling domains such as speech (see the generative modelling references in the main article), but for us this is a decision of algorithmic simplicity / performance, not of dogma.\n" ]
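Since the pool-and-inject layer comes up in both a review question and the response above, here is a minimal sketch of the idea as the response describes it (global pooling whose result is tiled back to the spatial size and concatenated into the convolutional features); the choice of max pooling and the 3x3 output convolution are assumptions for illustration, not details confirmed by the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoolAndInject(nn.Module):
    """Let global information (e.g. the score in MS_PACMAN) propagate in a
    single convolutional state transition: pool each feature map globally,
    tile the pooled vector back to the spatial size, and concatenate
    ("inject") it into the input features."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        b, c, h, w = x.shape
        pooled = F.adaptive_max_pool2d(x, 1)   # (b, c, 1, 1) global pool
        tiled = pooled.expand(b, c, h, w)      # broadcast to every location
        return self.conv(torch.cat([x, tiled], dim=1))
```

This is the cheap alternative the response contrasts with very large filters or large dilated convolutions: one pooling and one concatenation give every spatial position access to a global summary of the state.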
[ -1, -1, 6, 5, 8, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 4, -1, -1, -1, -1 ]
[ "SyfTyqXSM", "iclr_2018_HJw8fAgA-", "iclr_2018_HJw8fAgA-", "iclr_2018_HJw8fAgA-", "iclr_2018_HJw8fAgA-", "H1KbSmYeM", "HkDEpzlGz", "BkJknsz-f", "BkCqRjmZz" ]
iclr_2018_SJky6Ry0W
Learning Independent Causal Mechanisms
Independent causal mechanisms are a central concept in the study of causality with implications for machine learning tasks. In this work we develop an algorithm to recover a set of (inverse) independent mechanisms relating a distribution transformed by the mechanisms to a reference distribution. The approach is fully unsupervised and based on a set of experts that compete for data to specialize and extract the mechanisms. We test and analyze the proposed method on a series of experiments based on image transformations. Each expert successfully maps a subset of the transformed data to the original domain, and the learned mechanisms generalize to other domains. We discuss implications for domain transfer and links to recent trends in generative modeling.
rejected-papers
PROS: 1. All the reviewers thought that the work was interesting and showed promise 2. The paper is relatively well written CONS: 1. Limited experimental evaluation (just MNIST) The reviewers were all really on the fence about this but in the end felt that while the idea was a good one and the authors were responsive in their rebuttal, the experimental evaluation needed more work.
train
[ "rJAS034ez", "S12z02uez", "SkiVnWtxM", "Hy0bIIFQf", "ryQnSLtXM", "rkUzBUKXz", "rJR1r8FQf", "SkzjmUKXG", "HypuI-dxG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper presents a framework to recover a set of independent mechanisms. In order to do so it uses a set of experts each one made out of a GAN.\n\nMy main concern with this work is that I don't see any mechanism in the framework that prevents an expert (or few of them) to win all examples except its own learning capacities. p7 authors have also noticed that several experts fail to specialize and I bet that is the reason why.\nThus, authors should analyze how well we can have all/most experts specialize in a pool vs expert capacity/architecture.\nIt would also be great to integrate a direct regularization mechanism in the cost in order to do so. Like for example a penalty in how many examples a expert has catched.\n\nMoreover, the discrimator D (which is trained to discriminate between real or fake examples) seems to be directly used to tell if an example is throw from the targeted distribution. It is not the same task. How D will handle an example far from fake or real ones ? Why will D answer negatively (or positively) on this example ? \n\n\n", "This paper describes a setting in which a system learns collections of inverse-mapping functions that transform altered inputs to their unaltered \"canonical\" counterparts, while only needing unassociated and separate sets of examples of each at training time. Each inverse map is an \"expert\" E akin to a MoE expert, but instead of using a feed-forward gating on the input, an expert is selected (for training or inference) based on the value of a distribution-modeling function c applied to the output of all experts: The expert with maximum value c(E(x)) is selected. When c is an adversarially trained discriminator network, the experts learn to model the different transformations that map altered images back to unaltered ones. This is demonstrated using MNIST with a small set of synthetic translations and noise.\n\nThe fact that these different inverse maps arise under these conditions is interesting --- and Figure 5 is quite convincing in showing how each expert generalizes. However, I think the experimental conditions are very limited: Only one collection of transformations is studied, and on MNIST digits only. In particular, I found the fact that only one of ten transformations can be applied at a time (as opposed to a series of multiple transforms) to be restrictive. This is touched on in the conclusion, but to me it seems fundamental, as any real-world new example will undergo significantly more complex processes with many different variables all applied at once.\n\nAnother direction I think would be interesting, is how few examples are needed in the canonical distribution? For example, in MNIST, could the canonical distribution P be limited to just one example per digit (or just one example per mode / style of digit, e.g. \"2\" with loop, and without loop)? The different handwriters of the digits, and sampling and scanning process, may themselves constitute in-the-wild transformations that might be inverted to single (or few) canonical examples --- Is this possible with this mechanism?\n\nOverall, it is nice to see the different inverse maps arise naturally in this setting. But I find the single setting limiting, and think the investigation could be pushed further into less restricted settings, a couple of which I mention above.\n\n\n\nOther comments:\n\n- c is first described to be any distribution model, e.g. the autoencoder described on p.5. 
But it seems that using such a fixed, predefined c like the autoencoder may lead to collapse: What is preventing an expert from learning a single constant mode that has high c value? The adversarially trained c doesn't suffer from this, because presumably the discriminator will be able to learn the difference between a single constant mode output and the distribution P. But if this is the case, it seems a critical part of the system, not a simple implementation choice as the text seems to say.\n\n- The single-net baseline is good, but I'd like to get a clearer picture of its results. p.8 says this didn't manage to \"learn more than one inverse mechanism\" --- Does that mean it learns to invert a single mechanism (that is, always translates up, for example, when presented an image)? Or that it learned some mix of transforms that didn't seem to generalize as well? Or does it have some other behavior? Also, I'm not entirely clear on how it was trained wrt c --- is argmax(c(E(x))) always just the single expert? Is c also trained adversarially? And if so, is the approximate identity initialization used?\n", "Summary:\nGiven data from a canonical distribution P and data from distributions that are independent transformations (mechanisms) applied on P, this paper aims to learn 1) those independent transformations; and 2) inverse transformations that map data from transformed distributions to their corresponding canonical distribution.\n\nThis is achieved by training a mixture of experts, where each expert is assumed to model a single inverse transformation. Each expert can be seen as the generator of a conditional GAN. The discriminator is trained to distinguish samples from the canonical distribution P and those transformed distributions.\n\nExperiments on MNIST data show that at the end of training, each expert wins almost all samples from one transformation and no other, which confirms that each expert models a single inverse transformation.\n\nComments:\n1) Besides samples from distributions that are results of applying independent mechanisms, samples from the canonical distribution are also required to learn the model. Are the samples from the canonical distribution always available in practice? Since the canonical samples are needed for training, this problem setup seems not to be totally \"unsupervised\".\n\n2) The authors only run experiments on the MNIST data, where 1) the mechanisms are simulated and relatively simple, and 2) samples from the canonical distribution are also available. Did the authors run experiments on other datasets?\n\n3) This work seems to be related to the work on 1) disentangling factors of variation; and 2) non-linear independent component analysis. Could the authors add discussions to illustrate the difference between the proposed work and those topics?\n\n4) This work is motivated by the objective of causal inference, therefore it might be helpful to add empirical results to show how the proposed method can be used for causal inference.", "We thank all three reviewers for their thorough analysis and extremely insightful comments. Following up on their input, we added new content to the paper, ran new experiments, clarified missing points and adjusted misleading statements.\n\nOn top of smaller edits, these are the major additions:\n\n* A paragraph in the related work section (Sec. 
2) about disentangling factors of variation and non-linear independent component analysis.\n* A paragraph discussing new experiments on sample efficiency for the canonical distribution, where we obtained almost identical results using only 64 examples instead of 30'000 (Sec. 5, end of page 8).\n* We changed the incorrect claim (in Sec. 3, \"Concrete protocol for neural networks\") that standard autoencoders could be used as well for c. However, VAEs do work, confirming that adversarial training is not a necessary component of our method.\n* We ran new experiments for different single-net baselines --- smaller learning rate for the discriminator, with and without identity initialization, larger receptive field --- confirming that it is not straightforward to learn all tasks with a single net (extending the paragraph \"A simple single-net baseline\" in Sec. 5).\n* We ran new experiments for experts with larger capacity, to test how specialization is affected (paragraph \"Specialization occurs also with higher capacity experts.\" in Sec. 5).\n\nWe believe the paper has greatly benefited from the reviewers' questions, and we encourage them to read the others' comments as well. We welcome further feedback and are happy to expand our comments if anything remains unclear. Finally, we kindly ask the reviewers to consider re-evaluating their final scores taking into account the new edits and added material.\n", "] My main concern with this work is that I don't see any mechanism in the framework that prevents an expert (or a few of them) from winning all examples, other than its own learning capacity. \n\nThis is correct. If the experts have unlimited capacity and the data is unlimited, then a single network can learn the whole thing. Having limited capacity and finite data, on the other hand, favors specialization into independent modules, together with the way we set up the method:\n1) the experts start from a similar ground (all initialized approximately as identity), \n2) they compete for data (only the winning expert is trained on a given example),\n3) the mechanisms are independent.\n\nIn practice, as a mechanism starts to specialize on one task, it tends to get worse on the other tasks (at least, worse than other modules initialized to the identity). In our experience, experts usually fail to specialize if there are too many experts for the number of mechanisms present in the dataset, or if they are not initialized to the identity, both of which make perfect sense.\n\n] On p. 7 the authors have also noticed that several experts fail to specialize, and I bet that is the reason why.\n\nOn page 7, the case where several experts fail is the one where they are initialized randomly (not with identity), which explains why they fail. Winning a few examples in the beginning will make the output of this lucky expert look \"better\" (more like MNIST digits) than the (random) outputs of the remaining experts. 
Hence such a \"lucky\" expert will likely continue to win almost all examples.\n\n] Thus, the authors should analyze how well we can have all/most experts in a pool specialize vs. expert capacity/architecture.\n\nWe ran new experiments with experts that have more capacity, both in terms of number of filters --- 128 instead of 32 --- and in terms of size of the overall receptive field, by adding two downsampling (2x2 average pooling) and two upsampling layers (2x2 nearest neighbor).\nFor more filters, the results and the training curves are almost identical to the ones obtained with smaller experts, with 9 or 10 experts specializing in every run.\nFor a larger receptive field there are still only isolated occurrences of up to two experts trying to specialize on up to two tasks each.\n\n] It would also be great to integrate a direct regularization mechanism in the cost in order to do so. Like, for example, a penalty on how many examples an expert has won.\n\nThis would be a good way to incorporate prior knowledge on the tasks. In order to do this one would need to know approximately how many mechanisms are at play and what the prior probability of choosing any of them is. \nIn our setting we assume we do not have such prior knowledge.\n\n] Moreover, the discriminator D (which is trained to discriminate between real and fake examples) seems to be directly used to tell if an example is drawn from the targeted distribution. It is not the same task. How will D handle an example far from both fake and real ones? Why would D answer negatively (or positively) on such an example? \n\nIt is true that in general, even when training standard GANs, the behavior of a discriminator outside of the domain of the real and fake data is undefined, and one should not expect it to be meaningful (not even for the 'perfect' discriminator).\n\nIn our case, the discriminator is trained on all outputs from all experts (see page 6: \"In order to encourage the expert to specialize, the discriminator is also explicitly trained against the outputs of the losing experts\", and the Algorithm, line 5).\n\nFrom the standard i.i.d. assumption, we then conclude that the discriminator can be used directly to judge any example coming from any of the experts. If the i.i.d. assumption is invalid, the discriminator will indeed give a meaningless answer, unless training continues on the non-i.i.d. data as well.\n", "] The fact that these different inverse maps arise under these conditions is interesting --- and Figure 5 is quite convincing in showing how each expert generalizes. However, I think the experimental conditions are very limited: only one collection of transformations is studied, and on MNIST digits only. In particular, I found the fact that only one of ten transformations can be applied at a time (as opposed to a series of multiple transforms) to be restrictive. This is touched on in the conclusion, but to me it seems fundamental, as any real-world new example will undergo significantly more complex processes with many different variables all applied at once.\n\nWe believe the way to approach this will be to consider local mechanisms that can be iterated to generate complex transformations. This will require recurrence, and it may be linked to early work on Lie groups in visual perception. It's an exciting prospect but beyond the scope of this paper. 
Right now, the paper should not be judged as a complete solution to the problem of image transformations - we think it’s an intriguing direction, but not the final story yet.\n\n] Another direction I think would be interesting is: how few examples are needed in the canonical distribution? For example, in MNIST, could the canonical distribution P be limited to just one example per digit (or just one example per mode / style of digit, e.g. \"2\" with loop, and without loop)? The different handwriters of the digits, and the sampling and scanning process, may themselves constitute in-the-wild transformations that might be inverted to single (or few) canonical examples --- Is this possible with this mechanism?\n\nThis is indeed a very interesting question that we have also given some thought to. We believe this will be easier to do once we can combine local mechanisms, hence we have postponed it for the time being.\nIn particular, if there are intrinsic factors of variation (such as handwriting style), the problem again becomes one of disentangling simultaneously present transformations, rather than inverting individual mechanisms.\nConcerning sample efficiency, we followed up with a simple experiment: we still obtain very good results with as few as 64 images for the canonical distribution (instead of 30k) and still 30k transformed images (roughly 3k independent images per mechanism). The images produced by the experts are not as clean, but the accuracy reached by the pre-trained classifier still increases from 40% to 96% and the experts still specialize on one mechanism each. Of course, the discriminator starts to overfit sooner, and the performance decreases if it is trained long enough.\nFor even fewer examples (32), we start to observe overfitting of the discriminator before the experts reach very good performance.\nWe updated the paper with these results about sample sizes.\n\n] Overall, it is nice to see the different inverse maps arise naturally in this setting. But I find the single setting limiting, and think the investigation could be pushed further into less restricted settings, a couple of which I mention above.\n\n] Other comments:\n] \n] - c is first described to be any distribution model, e.g. the autoencoder described on p.5. But it seems that using such a fixed, predefined c like the autoencoder may lead to collapse: What is preventing an expert from learning a single constant mode that has high c value? The adversarially trained c doesn't suffer from this, because presumably the discriminator will be able to learn the difference between a single constant mode output and the distribution P. But if this is the case, it seems a critical part of the system, not a simple implementation choice as the text seems to say.\n \nWe adjusted the statement on page 5, since, following up on the reviewer's comment, we ran experiments with standard autoencoders, confirming the concern of the reviewer. All experts collapsed to output black images (which are usually perfectly reconstructed by an autoencoder). \n\nWe expected VAEs not to suffer from the same problem, and indeed they produced promising results, though still somewhat inferior to an adversarially trained discriminator. Hence, while we can use a distribution model that is not adversarially trained, we agree that the standard autoencoder is not a suitable one, and we changed the text on page 5 accordingly.
p.8 says this didn't manage to \"learn more than one inverse mechanism\" --- Does that mean it learns to invert a single mechanism (that is, always translates up, for example, when presented with an image)? Or that it learned some mix of transforms that didn't seem to generalize as well? Or does it have some other behavior? Also, I'm not entirely clear on how it was trained wrt c --- is argmax(c(E(x))) always just the single expert? Is c also trained adversarially? And if so, is the approximate identity initialization used?\n\nThe single net baseline learns a mix of transformations that do not generalize well: while color inversion is often correctly learned, the other mechanisms are mapped to odd-looking shapes or digit outlines.\nTo answer the specific questions:\n- the argmax in this case is indeed the only net\n- c is still trained adversarially\n- we do use the identity initialization.\n\nFollowing up on the reviewer's comment, we also tried the following extra configurations for the single net baseline:\n- not using the identity initialization\n- reducing the learning rate of the discriminator by a factor of 10\n- adding two downsampling (2x2 average pooling) and two upsampling layers (2x2 nearest neighbor)\nNone of these improved the performance of the single net baseline.\n", "] 1) Besides samples from distributions that are results of applying independent\n] mechanisms, samples from the canonical distribution are also required to learn\n] the model. Are the samples from the canonical distribution always available in\n] practice? Since the canonical samples are needed for training, this problem \n] setup seems not to be totally \"unsupervised\".\n\n\nWe agree that having samples from both the canonical distribution and the transformed distribution helps learn the mechanisms. Note that the examples are not paired to original examples, so the signal is a weak one. Moreover, one could imagine identifying a subset of canonical examples from a larger distribution, i.e., constructing the canonical distribution from a wider distribution containing transformed images.\nFurthermore, when trying to reconstruct a canonical distribution only from the transformed distributions, we found it hard to identify a “good” or unique criterion. For example, if the only transformations are “up” and “up-right”, it’s not clear why there should be a unique location for the “centered” digits.\n\nWe encourage the reviewer to look at the similar comment made by reviewer 2, and our answer with new results using a much more limited number of examples from the canonical distribution (64 instead of 30,000).\n\n] 2) The authors only run experiments on the MNIST data, where 1) the mechanisms are\n] simulated and relatively simple, and 2) samples from the canonical distribution\n] are also available. Did the authors run experiments on other datasets?\n\nWe have so far only considered the MNIST problem.\n\n] 3) This work seems to be related to the work on 1) disentangling factors of\n] variation; and 2) non-linear independent component analysis. Could the authors\n] add discussions to illustrate the difference between the proposed work and\n] those topics?\n\nIndeed, there are close relations to the work on disentangling factors of variation and also non-linear ICA. In our work, causal mechanisms play the role of ‘factors of variation’, which we believe is a fruitful view of this problem. 
We added a discussion about related work on DFV and non-linear ICA to Section 2 of the paper.\nThe causal point of view also builds a bridge to the field of domain adaptation. \nFinally, note that our approach currently recovers inverse mechanisms as independent and modular parts (in our experiments, a separate net for each mechanism), while typically in DFV one is interested in obtaining a joint low-dimensional representation of the data without explicit separate paths for each factor.\n\n\n] 4) This work is motivated by the objective of causal inference; therefore, it\n] might be helpful to add empirical results to show how the proposed method can\n] be used for causal inference.\n\nThe approach is not applicable to standard causal structure learning problems in its current form. We do believe, however, that it can play a role in learning multi-task structural causal models whose structure mimics the mechanistic structure of the data-generating process, so ultimately it will also be linked to structure learning. Currently, the influence goes the other way round: the causal view inspires our machine learning approach.\n", "On page 12, Appendix D, a minus sign was not rendered in Equation 4 and the line above. The correct equations are:\nI(x : y) := K(x) + K(y) - K(x, y)\nI(x : y) = K(y) - K(y | x)\nThe typo has been fixed and won't appear in the revision." ]
[ 6, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJky6Ry0W", "iclr_2018_SJky6Ry0W", "iclr_2018_SJky6Ry0W", "iclr_2018_SJky6Ry0W", "rJAS034ez", "S12z02uez", "S12z02uez", "SkiVnWtxM", "iclr_2018_SJky6Ry0W" ]
iclr_2018_HkepKG-Rb
A Semantic Loss Function for Deep Learning with Symbolic Knowledge
This paper develops a novel methodology for using symbolic knowledge in deep learning. From first principles, we derive a semantic loss function that bridges between neural output vectors and logical constraints. This loss function captures how close the neural network is to satisfying the constraints on its output. An experimental evaluation shows that our semantic loss function effectively guides the learner to achieve (near-)state-of-the-art results on semi-supervised multi-class classification. Moreover, it significantly increases the ability of the neural network to predict structured objects, such as rankings and shortest paths. These discrete concepts are tremendously difficult to learn, and benefit from a tight integration of deep learning and symbolic reasoning methods.
rejected-papers
This one was really on the fence. After some additional rounds of discussion post-rebuttal with the reviewers, I think the general consensus is that it's a good paper and almost there, but not quite ready for acceptance at this time. A detailed list of issues and concerns below. PROS: 1. good idea: an additional loss term that enforces semantic constraints on the network output (like: exactly one output element must be 1). 2. well written generally 3. a nice variety of different experiments CONS: 1. paper organization. The authors start with the axioms they would like a semantic loss function to obey, then provide a general definition, then show it does obey the axioms. The general definition is intractable in a naive implementation. The authors use Boolean circuits to tractably solve the problem, but this isn't discussed enough, and it's unreasonable to expect readers to just give a pass on it without some more background. I personally would prefer an organization that presented the motivation (in English) for the loss definition; then the definition with a description of its pieces and why they are there; then a short discussion of how to implement such a loss in practice using Boolean circuits (or, if this is too much, put it in the appendix); and a pointer to the axiomatization in an appendix. 2. Related to 1, I didn't see anything which discussed the training time of this approach. Given that the semantic loss has to be computed in a more involved way than usual, it's not clear whether it is practical.
train
[ "SkRvF__xf", "ByEQXA5lM", "SJJw0N0eM", "S1cu8ZLmf", "BJYV8WUmG", "Syd2rWUQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "SUMMARY \n\nThe paper proposes a new form of regularization utilizing logical constraints. The semantic loss function is built on the exploitation of symbolic knowledge extracted from data and connecting the logical constraints to the outputs of a neural network. The use of Boolean logic as a constraint provides a secondary regularization term to prevent over-fitting and improve predictions. The benefit of using the function is found primarily with semi-supervised tasks where data is partially unlabelled. The logical constraints provided by the semantic loss function allow for improved classification of unlabeled data.\nOutput constraints for the semantic loss function are represented with one-hot encoding, prefer- ence rankings, and paths in a grid. These three different output constraints are designed to explore different learning purposes. The semantic function was tested on both semi-supervised classifica- tion tasks as well as structure learning. The paper primarily focuses on the one-hot encoding constraint as it is viewed as a capable technique for multi-class classification.\n\nPOSITIVES \n\nIn terms of structure, the paper was written very well. Sufficient background information was con- veyed which helped in understanding the proposed semantic loss function. A thorough breakdown is also carried out on the semantic loss function itself by explaining its axioms which help explain how the outputs of a neural network match a given constraint.\nAs a scientific contribution, I would say results from the experiments were able to justify the proposal of the semantic loss function. The function was able to perform better than most other implementations for semi-supervised learning tasks, and the function was tested on multiple datasets. The paper also made use of testing the function against other notable machine learning approaches, and in most cases the function performed better, but this usually was confined to semi-supervised learning tasks. During supervised learning tasks the function did not perform markedly better than older implementations. Given that, the semantic loss function did prove to be a seemingly simple approach to improving semi-supervised classification tasks.\n• The background section covers the knowledge required in understanding the semantic loss function. The paper also clearly explains the meaning for some of the notation used in the definitions.\n• Experiments which clearly show the benefit of using the semantic loss function. Multiple experiment types were done as well which showed evidence of the broad applicability of the function.\n• In depth description of the definitions, axioms, and propositions of the semantic loss function.\n• A large number of experiments exploring the usefulness of the function for multiple learning tasks, and on multiple datasets.\n\nNEGATIVES \n\nI was not clear if the logical constraints are to be instantiated before learning, i.e. they are defined by hand prior to being implemented in the neural network. This is a pretty important question and drastically changes the nature of the learning process. Beyond that complaint, the paper did not suffer from any critical issues. There were some issues with spelling, and the section titled ’Algorithm’ fails to clearly define a complete algorithm using the semantic loss function. It would have helped to have two algorithms. One defining the pipeline for the semantic loss function, and another showing the implementation of the function in a machine learning framework. 
The semantic loss function found success only in cases where the learning task was semi-supervised, and not in cases of fully supervised learning. This is not a true negative, but an observation on the effectiveness of the function.\n\n- A few typos in the paper.\n- The axioms for the semantic loss function were defined, but there seemed to be a lack of a clear algorithm showing the pipeline implementation of the semantic loss function.\n- While the semantic loss function does improve learning performance in most cases, the improvements are confined to semi-supervised learning tasks, and with the MNIST dataset another methodology, Ladder Nets, was able to outperform the semantic loss function.\n\nRELATED WORK\n\nThe paper proposed that logic constraints applied to the output of neural networks have the capacity to improve semi-supervised classification tasks as well as finding the shortest path. In the introduction, the paper lists Zhiting Hu et al.'s paper, titled Harnessing Deep Neural Networks with Logic Rules, as an example of a similar approach. Hu et al.'s paper utilized logic constraints in conjunction with neural nets as well. A key difference was that Hu et al. applied their network architecture to supervised classification tasks. Since the performance of the current paper's semantic loss function on supervised tasks did not improve upon other methods, it may be beneficial to utilize the research by Hu et al. as a means of direct comparison for supervised learning tasks, and possibly incorporate their methods with the semantic loss function in order to improve upon supervised learning tasks.\n\nCONCLUSION\n\nGiven the success of the semantic loss function with semi-supervised tasks, I would accept this paper. The semantic loss was able to improve learning with respect to the tested datasets, and the paper clearly described the properties of the function. The paper would benefit from including a more concrete algorithm describing the flow of data through a given neural net to the semantic loss function, as well as the process by which the semantic loss function constrains the data based on propositional logic, but in general this complaint is more nit-picking. The semantic loss function and the experiments which tested the function showed clearly that there is a benefit to this research and that there are areas for it to improve.\n", "The authors propose a new loss function that is designed to take into account Boolean constraints involving the variables of a classification problem. This is a nice idea, and certainly relevant. The authors clearly describe their problem, and overall the paper is well presented. The contributions are a loss function derived from a set of axioms, and experiments indicating that this loss function captures some valuable elements of the input. This is a valid contribution, and the paper certainly has some significant strengths.\n\nConcerning the loss function, I find the whole derivation a bit distracting and unnecessary. Here we have some axioms that are not simple when taken together, and that collectively imply a loss function that makes intuitive sense by itself. Well, why not just open the paper with Definition 1, and try to justify this definition on the basis of its properties? The discussion of axioms is just something that will create debate over questionable assumptions. Also, it is frustrating to see some axioms in the main text, and some axioms in the appendix (why this division?). 
\n\nAfter presenting the loss function, the authors consider some applications. They are nicely presented; overall, the gains are promising but not that great when compared to the state of the art --- they suggest that the proposed semantic loss makes sense. However, I find that the proposal is still in search of a \"killer app\". Overall, I find that the whole proposal seems a bit premature and in need of more work on applications (the work on axiomatics is fine as long as it has something to add).\n\nConcerning the text, a few questions/suggestions:\n- Before Lemma 3, \"this allows...\": is the \"this\" including the other axioms in the appendix?\n- In Section 4, line 3: I suppose that the constraint is just creating a problem with a class containing several labels, not really a multi-label classification problem (?).\n- The beginning of Section 4.1 is not very clear. By reading it, I feel that the best way to handle the unlabeled data would be to add a direct penalty term forcing the unlabeled points to receive a label. Is this fair?\n- Page 6: \"a mor methodological\"... should it be \"a more methodical\"?\n- There are problems with capitalization in the references. Also, some references miss page numbers, and some do not even indicate what they are (journal papers, conference papers, arXiv, etc.).\n", "This paper suggests a method for including symbolic knowledge into the learning process. The symbolic knowledge is given as logical constraints which characterize the space of legal solutions. This knowledge is \"injected\" into the learning process by augmenting the loss function with a symbolic-loss term that, in addition to the traditional loss, increases the probability of legal states (which also includes incorrect, yet legal, predictions). \n\nOverall, the idea is interesting, but the paper does not seem ready for publication. The idea of a semantic loss function is appealing and is nicely motivated in the paper; however, the practical aspects of how it can be applied are extremely vague and hard to understand. Specifically, the authors define it over all assignments to the output variables that satisfy the constraints. For any non-trivial prediction problem, this would be at least computationally challenging. The authors discuss it briefly, mentioning a method by Darwiche (2003), but do not offer much intuition or analysis beyond that. Their experiments focus on multiclass classification, which implicitly has a \"one-vs.-all\" constraint, although it's not clear why defining a formal loss function is needed (instead of just taking the argmax of the multiclass net), and even beyond that - why would it result in such significant improvements (when there are only a few annotated data points)? \n\nThe more interesting case is where the loss needs to decompose over the parts of a structural decision, where symbolic knowledge can help constrain the output space. This has been addressed in the literature (e.g., [1], [2]); it's not clear why the authors don't compare to these models, or even attempt any meaningful evaluation.\n\n\n[1] Zhiting Hu, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing. Harnessing deep neural\nnetworks with logic rules. ACL, 2016.\n\n[2] Posterior Regularization for Structured Latent Variable Models. Ganchev et al., 2010.\n", "Thank you for your valuable feedback.\n\nThere is an important point of misunderstanding that we would like to clarify. Our supervised learning experiments actually do show a significant improvement over the underlying baseline MLP. 
The neural network’s ability to predict shortest paths improves from 5% to 28% accuracy. Its ability to predict total rankings improves from 1% to 13% accuracy. In the revised paper, we have attempted to better highlight these results. The column for “coherent loss” is the one to look at. The “incoherent loss” is not affected much by semantic loss, and represents the accuracy of predicting individual edges in the graph, not paths. For this easier problem, the output constraint is irrelevant. We have also added a column highlighting the “constraint accuracy”, for the fraction of test outputs that satisfy the constraint. That accuracy improves very significantly with semantic loss, from 7% to 70% and from 3% to 55%, showing that semantic loss does help the deep net in the supervised setting to learn the concept expressed by the logical constraint.\n\nIn our experiments, the constraints were always provided by the user before learning (they are inherent to the task). Trying to learn the constraints directly from data is an interesting idea for future work, although learning theory tells us this can be quite challenging.\n\nWe agree that a comparison with related work was missing from the initial submission. We have now added an extensive discussion of related work and how it is conceptually different from semantic loss in the revised paper we posted (see Section 6). We have also included 17 new references. \n\nSpecifically for the Hu et al. paper you mention, the key difference with semantic loss is that Hu et al. use fuzzy logic. This has two implications. First, the fuzzy loss function is very sensitive to the syntax of the logical constraint, whereas our loss only depends on the semantics. Second, the logical constraints supported by these fuzzy alternatives are much simpler than the ones we consider. For example, for the Grids experiment in our paper, the constraint is very complex (it doesn’t even have a compact CNF form), and needs to be represented as a logical circuit (see Nishino et al. 2017). We are not aware of a reasonable fuzzy logic encoding. In contrast, other work that uses fuzzy logic to encode constraints works with very simple logical sentences (usually simple implications (X => Y) or Horn clauses). We have also attempted to compare experimentally on the benchmarks of Hu et al. Initial experiments suggested that semantic loss outperforms the loss used by Hu et al. on their evaluation tasks. However, because we found it difficult to exactly reproduce the initialization of Hu et al. and were not able to perform enough tuning, we prefer not to report on that experiment at this time.\n\nIn the revised paper, we have also tried to address your comments about the algorithm section, spelling mistakes, etc.\n", "Thank you for your valuable feedback.\n\nAbout searching for a “killer-app”: our results on semi-supervised learning are either very close to the state of the art (0.5% lower) or exceed it by a significant margin (see Tables 2 and 3). Moreover, our approach is a lot simpler than previous ones: it does not require any special-purpose architecture or customized learner, has only one parameter to tune, and is easy to apply to any network. The fact that you can just add our loss function to an existing architecture and get results as good as state-of-the-art special-purpose techniques makes semi-supervised learning a “killer-app” for semantic loss. 
We believe semantic loss has the potential to become the standard initial-trial method for semi-supervised learning, because of its simplicity and effectiveness. To be fair, our experiments on Path and Preference learning are indeed more academic, but they also show promise on highly symbolic problems that are very hard to solve with deep learning.\n\nWhether to derive the semantic loss from axioms or simply state it is largely a matter of taste. The axiomatic approach is certainly more common in basic mathematics and formal logic. We hope to convince you that deriving the definition from our axioms has the following advantage. When someone proposes an alternative loss function to enforce logical constraints, they will have to violate at least one of our axioms, which will make it clear what the difference is in their assumptions. For example, Axiom 3 will be violated by most loss functions based on fuzzy logic (see related work). Generally, we find this to be a proper scientific way of defining new concepts (versus stating some arbitrary function).\nWe agree that it is inconvenient that the axioms are split over the main text and the appendix. We felt obliged to do so in order to stay within the recommended page limit for ICLR. If the reviewers recommend adding all axioms to the main text, we are happy to make that change.\n \nAddressing your specific questions:\n- Lemma 3 follows from Axioms 1-5 in the main text and Axioms 7-8 in the Appendix. We added an explicit reference to those axiom numbers in the revised paper. Thanks for pointing this out.\n- Section 4, line 3: That sentence was confusing; we rephrased it in the revised paper. Our constraint does not encode a multi-label classification problem; it encodes a multi-class classification problem. For example, in MNIST, exactly one class from the set {0,1,2,...,9} can be assigned to one picture.\n- Section 4.1: You have the right idea. The semantic loss function is like a penalty term, in that it is lowest when the unlabeled point receives exactly one label. For example, in a 3-class classification problem, the semantic loss value of an output [1,0,0] is smaller than the semantic loss of [0.8,0.1,0.1]. It pushes the unlabeled data in a direction of confidently picking a label.\n- Page 6 and references: These have been corrected, thanks!\n", "Thank you for your valuable feedback. \n\nThe proposed semantic loss function is very general and, depending on the constraint you want to enforce, may be computationally challenging. We see this generality as a virtue, and our goal for this paper was to show the feasibility of semantic loss on a variety of constraints. As you point out, one should avoid evaluating semantic loss by enumerating all assignments to the output variables (this is only feasible for the exactly-one constraint). We avoid this by employing state-of-the-art automated reasoning tools that build efficient circuit representations of the constraint and support efficient weighted model counting (SDD circuits). Unfortunately, here we have to refer to the automated logical reasoning literature for the details of this construction (in particular Darwiche 2011). In the next revision of the paper, we will expand our discussion of this issue. Note, however, that the slowdown from adding semantic loss to our experiments was negligible, given an initial overhead to build the circuit. 
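(To see the exactly-one case from the example above concretely, here is a self-contained sketch of the semantic loss computed by direct enumeration, which, as noted, is only feasible for such small constraints. It assumes numpy; the function name is illustrative, not taken from the paper's code.)

```python
import numpy as np

def semantic_loss_exactly_one(p):
    # Weighted model count of the exactly-one constraint: sum over each
    # satisfying assignment (exactly one variable true) of the product of
    # the chosen probability and the complements of all the others.
    p = np.asarray(p, dtype=float)
    wmc = sum(p[i] * np.prod(np.delete(1.0 - p, i)) for i in range(len(p)))
    return -np.log(wmc)

print(semantic_loss_exactly_one([1.0, 0.0, 0.0]))  # -0.0: constraint exactly satisfied
print(semantic_loss_exactly_one([0.8, 0.1, 0.1]))  # ~0.380: less confident, penalized
```

For larger constraints, the same quantity would be computed by weighted model counting on a compiled circuit rather than by enumeration, as the response above explains.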
We would also like to point out that we are the first to bring these logical circuit techniques to bear on deep learning, which explains why these useful reasoning techniques may not yet be known to the deep learning community.\n\nTaking the argmax of the multiclass net helps to classify instances but does not help to learn from unlabeled data, which is the problem we consider. Just taking the argmax for the unsupervised data does not give us any more information because we do not know if our prediction is correct or not. For example in the 3-class classification case, when the neural network outputs a [0.7, 0.5, 0.4], existing loss functions will still push the label to its correct output [1, 0, 0]. However, in the semi-supervised learning, the unsupervised dataset does not have a label. \n\nNevertheless our semantic loss function is defined on unsupervised data and captures the inherent constraints of the unsupervised data, that it must have some label. In the revision of the paper we posted, we added more discussion of how semantic loss helps in semi-supervised learning (See Figure 3). It forces the neural network to confidently choose a label for unlabeled data points. In that sense, semantic loss training is like self-training, except that for self-training, once an unlabeled data point is erroneously labeled, it can no longer recover. We also refer to entropy-based regularization for semi-supervised learning as a related idea, which our loss function generalizes to arbitrary constraints (Grandvalet & Bengio, 2005).\n\nWe agree that a comparison with related work was missing from the initial submission. We have now added an extensive discussion of related work and how it is conceptually different from semantic loss in the revised paper we posted (see Section 6). We have also included 17 new references. \n\nSpecifically for the Hu et al. paper you mention, the key difference with semantic loss is that Hu et al. use fuzzy logic. This has two implications. First, the fuzzy loss function is very sensitive to the syntax of the logical constraint, whereas our loss only depends on the semantics. Second, the logical constraints supported by these fuzzy alternatives are much more simple than the ones we consider. For example, for the Grids experiment in our paper, the constraint is very complex (it doesn’t even have a compact CNF form), and needs to be represented as a logical circuit (see Nishino et al. 2017). We are not aware of a reasonable fuzzy logic encoding. In contrast, other work that uses fuzzy logic to encode constraints works with very simple logical sentences (usually simple implications (X => Y) or Horn clauses). We have also attempted to compare experimentally on the benchmarks of Hu et al. Initial experiments suggested that semantic loss outperforms the loss used by Hu et al. on their evaluation tasks. However, because we found it difficult to exactly reproduce the initialization of Hu et al. and were not able to perform enough tuning, we prefer to not report on that experiment at this time.\n" ]
[ 7, 5, 4, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1 ]
[ "iclr_2018_HkepKG-Rb", "iclr_2018_HkepKG-Rb", "iclr_2018_HkepKG-Rb", "SkRvF__xf", "ByEQXA5lM", "SJJw0N0eM" ]
iclr_2018_HJnQJXbC-
AMPNet: Asynchronous Model-Parallel Training for Dynamic Neural Networks
New types of compute hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon. In particular, models that exploit structured input via complex and instance-dependent control flow are difficult to accelerate using existing algorithms and hardware that typically rely on minibatching. We present an asynchronous model-parallel (AMP) training algorithm that is specifically motivated by training on networks of interconnected devices. Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently, even for small minibatch sizes, resulting in shorter overall training times. Our framework opens the door for scaling up a new class of deep learning models that cannot be efficiently trained today.
rejected-papers
The authors propose a system for asynchronous, model-parallel training, suitable for dynamic neural networks. To summarize the reviewers: PROS: 1. Paper contrasts well with existing work. 2. Positive results on dynamic neural network problems. 3. Well written and clear CONS: 1. Some concern about extrapolations/estimates to hardware other than the CPU used. 2. Comparisons with Dynet seem to suggest that auto-batching results in a dynamic mode aren't very positive. For 1) the AC notes the authors' objections to reviewer 1's views on the value of estimation/extrapolation to non-CPU hardware. However, reviewer 3 voiced a similar concern, and both still feel that more is needed for the experiments to be convincing.
train
[ "B1vCBbYlz", "HJwWcV8gz", "HJKXoRdgf", "HkGay76mG", "BkBqKCvMz", "r1Gy453ZM", "Symd8cB-G", "HyWWIcHbf", "HJlT49r-M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes new direction for asynchronous training. While many synchronous and asynchronous approaches for data parallelism have been proposed and implemented in the past, the space of asynchronous model parallelism hasn't really been explored before. This paper discusses an implementation of this approach and compares the results on dynamic neural networks as compared to existing parallel approaches.\n\nPros:\n- Paper seems to cover and contrast well with the existing approaches and is able to clarify where it differs from existing papers.\n- The new approach seems to show positive results on certain dynamic neural network problems.\n\nCons:\n- Data parallelism is a very commonly used technique for scaling. While the paper mentions support for it, the results are only showed on a toy problem, and it is unclear that it will work well for real problems. It will be great to see more results that use multiple replicas.\n- As the authors mention the messages also encapsulate meta-data or \"state\" as the authors refer to it. This does seem to make their compiler more complex. This doesn't seem to be a requirement for their design and proposal, and it will be good to see explorations to improve on this in the future.\n- Comparisons with Dynet (somewhat hidden away) that offers auto-batching in a dynamic mode aren't very positive.\n\nQuestions: \n- It appears that only a single copy of the parameters is kept, thus it is possible that some of the gradients may be computed with newer values than what the forward computation used. Is this true? Does this cause convergence issues?\n\nOverall it seems like a valuable area for exploration, especially given the growing interest in dynamic neural networks.\n\n[Update] Lowered rating based on other feedback and revisiting empirical results. The ideas are still interesting, but the empirical results are less convincing.", "The paper describes a model-parallel training framework/algorithm that is specialized for new devises including FPGA. Because of the small memory of those devices, model-parallel training is necessary. Most current other frameworks are for model parallelism, so in this sense, the framework proposed by the authors is different and original. The framework includes a few interesting ideas including using intermediate representation (IR) to express static computation graph and execute it as dynamic control flow, combining pipeline model parallelism and data parallelism by splitting or replicating certain layers, and enabling asynchronous training, etc. \n\nSome concerns/questions are \n1) The framework is targeted at devices like FPGA, but the implementation is a multicore CPU SMP. It makes the computational result less convincing. Also, does the implementation use threading or message passing?\n2) Pipeline model parallelism seems need a lot of load balance tuning. The reported speedup results confirm this conjecture. Can the limitation of pipeline model parallelism be improved?\n\nPage 4, in the \"min_update_interval\" paragraph, why \"Small min update interval may increase gradient staleness.\"? I would think it decreases staleness. \n\nThe paper is clearly written and easy to follow. \n\n", "This paper presents AMPNet, that addresses parallel training for dynamic networks. This is accomplished by building a static graph like IR that can serve as a target for compilation for high-level libraries such as tensor flow. 
In the IR, each node of the computation graph is a parallel worker, and synchronization occurs when a sufficient number of gradients have been accumulated. The IR uses constructs such as concat, split, and broadcast, allowing dynamic, instance-dependent control flow decisions. The primary improvement in training performance is from reducing synchronization costs.\n\nComments for the author:\n\nThe paper proposes a solution to the important problem of model-parallel training, especially with dynamic batching, which is increasingly important as we see more complex models where batching is not straightforward. The proposed solution can be effective. However, this is not really evident from the evaluation. Furthermore, the paper can be a dense read for the ICLR audience. I have the following additional concerns:\n\n1) The paper stresses new hardware throughout. The paper also alludes to a \"simulator\" of a 1 TFLOPs FPGA in the conclusion. However, your entire evaluation is over CPU. The said simulator is a bunch of sleep() calls (unless some details are skipped). I would encourage the authors to remove these references, since these new devices have very different hardware behavior. For example, on a real constrained device, you may not enjoy a large L2 cache, which you are benefitting from by doing an entire evaluation over CPUs. Likewise, the vector instruction processing behavior is also very different, since these devices have limited power budgets and may not be able to support AVX-style instructions. Unless an actual simulator like GEM5 is used, a correct representation of what hardware environment is being used is necessary before making claims that this is ideal for emerging hardware.\n\n2) To continue on the hardware front and the evaluation, I feel that simulated hardware is not necessary for this paper to be accepted or appreciated. Personally, I found the evaluation with simulated sleep functions more confusing than helpful. An appropriate evaluation for this paper can be just benefits over CPUs or GPUs. For example, you have a 7 TFLOPS device (e.g. a GPU or a CPU). Existing algorithms extract X TFLOPs of processing power, and using your IR/system one gets Y effective TFLOPs with Y>X. This is all that is required. Currently, looking at your evaluation riddled with hypothetical hardware, it is unclear to me if this is helpful for existing hardware. For example, in Table 1, are the TensorFlow numbers only provided over the 1 TFLOPs device (they correspond to the 1 TFLOPs column for all workloads except for MNIST)? Do you use the parallelism at all in your TensorFlow baseline? Please clarify.\n\n3) How do you compare for dynamic batching with dynamic IR platforms like pytorch? Furthermore, more details about how dynamic batching happens in the benchmarks mentioned in Table 1 would be nice to have. Finally, an emphasis on the novel contributions of the paper will also be appreciated.\n\n4) Finally, the evaluation appears to be sensitive to the two hyper-parameters introduced. Are they dataset-specific? I feel tuning them would be rather cumbersome for every model given how sensitive they are (Figure 5).\n", "Dear reviewers and area chairs,\n\nThank you for all your comments. We have updated the submission with the following changes:\n * The quantitative claims about the hypothetical 1 TFLOPS device have been moved out of the main text and now appear in Appendix C (following AnonReviewer1's suggestion).\n * We have added Fig. 
5 (a) to clarify the difference between pure pipeline parallelism and asynchronous model parallelism and to highlight the fundamental trade-off we are addressing in this paper.\n * We have also included a preliminary comparison against DyNet in Section 6. Hopefully this will serve as an additional data point showing that our asynchronous training works with a network that makes dynamic routing decisions. More details are given in Appendix B.5.", "1) We will move the extrapolation to a 1 TFLOPS device and all other quantitative analysis of potential performance on new hardware to an appendix, following the reviewer's suggestion. Our main contribution is to propose and empirically verify that asynchronous model parallelism works as an alternative to mini-batch-based data parallelism. Although we think it is interesting to analyze how this algorithm might perform on distributed systems of new devices, we are happy to move these quantitative claims to an appendix to make the contributions of the main text clearer.\n\n2) The metric used in Table 1 to demonstrate the benefits of our method is wall clock time to reach a pre-specified validation accuracy on identical hardware. We believe that this is the most relevant metric, because a raw TFLOPs or throughput comparison only makes sense when two approaches achieve the target validation accuracy in the same number of epochs (see, e.g., Goyal et al. [1]; see also [2, p. 30]). Note that we also show the number of epochs required to converge in Table 1.\n\nTo achieve a good wall clock time, the algorithm needs to do two things: (1) process instances quickly (high throughput) and (2) have a high parameter update frequency. Fig 1a and Fig 1b are intended to show two extremes: Fig 1a has low throughput and high update frequency; Fig 1b has higher throughput but lower update frequency. Fig 1c shows that asynchronous model parallelism could achieve the best of both worlds, and it is the purpose of this paper to demonstrate that there is promise for this training algorithm.\n\nNote that we *do* compare synchronous model parallelism (corresponding to Figure 1a) in our experiments (see max_active_keys=1). We can see consistent speed-up in the time to reach target accuracy, which can be explained by increased throughput and only a slight increase in the number of epochs to convergence. Figure 5 explores this trade-off in more detail.\n\n3) We never claimed that \"our paper will work better on all FPGA devices in the future and is superior to pytorch/Dynet in every aspect\". Neural network deployment in a distributed environment is a challenge that we certainly need to do more work on (currently we run on multiple threads). The only point we made was that the dynamic construction of the computational graph (the define-by-run principle) employed in dynamic frameworks does not make it easier to achieve this goal, because any graph optimization/partition/scheduling that can be done dynamically can be implemented when the computational graph is statically available, but not the other way around.\n\n4) min_update_interval is the number of gradients that a parametrized operation computes and aggregates before applying an update to its parameters. A small min_update_interval leads to frequent, noisy updates, and a larger value gives infrequent but less noisy updates. Note that this is analogous to batch size in batch-based training. 
In batch-based training, the optimal batch size needs tuning and is intimately linked with learning rate tuning (less noise in gradient estimates allows increased learning rates). Similarly, some tuning is required with min_update_interval (see Fig. 5).\n\n[Clarification about the need for synchronization]\nWe would like to point out again that we do not require any synchronization for weight updates in the current model-parallel setting. The weights, gradients, and the number of accumulated gradients are stored locally at each worker, and the decision to make an update can be done in a completely decentralized manner. In fact, different parts of a network can be updated with different intervals. We used this flexibility to update the word embedding matrix more slowly than other parts of the network for the Tree RNN.\n\n[Open source software]\nOur work is neither a simulator nor just a back-of-envelope calculation. We have built the intermediate representation, the multi-threaded runtime, and a Python frontend for easy model definition. We will open source our project around the time of paper notification.\n\n[For which systems does it make sense?]\nThe proposed AMP training makes sense for a system composed of many computational devices that can act independently. A good example is many FPGAs with a high-bandwidth interconnect. On the other hand, we cannot expect a single GPU to benefit from the proposed idea because, although a single GPU has many cores, it is challenging to make them act independently.\n\n[1] Goyal et al. (2017) Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour\n[2] Bottou et al. (2017) Optimization Methods for Large-Scale Machine Learning\n", "How do you enforce min_update_interval in the case without data parallelism? Is there a wait or any other synchronization primitive?\n\n1) For this paper to be accepted, it is necessary that the FPGA claims and the use of the word 'simulator' be removed. A conference paper is not the place to make strong claims using back-of-the-envelope calculations. There are other papers, at least in other communities, that are using programmable FPGA boards or real simulators using Gem5 to build model/data-parallel software systems. Hence, I feel the claims need proper evaluation or need to be removed. As I mentioned in my review, if this works fine on existing parallel CPUs or GPUs, that is sufficient for the paper to be accepted.\n\n2) Effective TFLOPs is extensively used to compare performance improvements for many convolution algorithms and in other recent ICLR papers. If you find it misleading, please provide any metric that shows that you actually improve performance over *existing model-parallel systems*. I find it rather strange that a paper about speeding up deep learning computations uses sleep() functions to slow down execution in its evaluation to establish a baseline. \n\nYour response \"effective TFLOPs (..) can be misleading because we can always maximize the device utilization by sacrificing the rate of convergence\" weakens the motivation of the paper as shown in Figure 1. Your response implies that Figure 1(a) can converge faster than Figure 1(c), and Figure 1(c) may not even be desirable.\n\n\"Dynamic frameworks such as PyTorch make it easy for users to write dynamic models but there is no advantage in terms of efficiency, and in particular, it is difficult to deploy dynamic models written in PyTorch in a distributed environment. 
In contrast, our contribution is to express dynamic neural networks using a static intermediate representation which is critical for training these models in a distributed environment.\"\n\n3) I am surprised that you make such strong statements about deploying pytorch while providing no evaluation results to back up your claims. Even though it appears that you are convinced that your paper will work better on all FPGA devices in the future and is superior to pytorch/Dynet in every aspect, a reader needs a scientifically correct and thorough evaluation to be convinced. More importantly, a clear evaluation tells us when this works and when it doesn't. \n\n4) Please provide more intuition on \"min_update_interval has a similar interpretation to the batch size\"? So a very small min_update_interval, i.e., synchronous, has better convergence? Is this dataset-specific or independent of the problem? Unlike batch sizes, which are driven by GPU sizes and memory limits and are rather well known, I get the intuition that this hyperparameter will require multiple runs of experimentation for each dataset, and it may just be faster to run a synchronous version.\n\nI have updated my score based on your response. I am looking forward to a revised version and I will update my score based on how you fix the evaluation and claims as pointed out in 1, 2 & 3.\n", "Thank you for your comments. Please find our response to your questions below:\n\n\"Also, does the implementation use threading or message passing?\" \nThe implementation is based on C++ threads, and the communication between threads is all done by explicit message passing in order to remain faithful to a truly distributed execution.\n\n\"Page 4, in the \"min_update_interval\" paragraph, why \"Small min update interval may increase gradient staleness.\"? I would think it decreases staleness. \"\nThis is because when min_update_interval is small, the updates occur more frequently, so the parameters may change several times between the forward and backward passes of a data instance. Consider the limiting case of 1 update per epoch (i.e., maximum min_update_interval): in this limit there is no gradient staleness, because all the forward and backward messages see the same parameters. Reducing min_update_interval from this limit will monotonically increase gradient staleness. Increasing staleness is not necessarily a problem for the convergence rate, and Fig. 5 shows that tuning min_update_interval can lead to good performance.\n", "Thank you for your careful review.\n\nFirst, there seems to be a potential misunderstanding. In your summary you write \"In the IR each node of the computation graph is a parallel worker, and synchronization occurs when a sufficient number of gradients have been accumulated\", but we would like to point out that there is no need for synchronization, and we do not perform it, because each operation is pinned to its worker (in our model-parallel paradigm, there is only one copy of each operation's parameters, and they are stored in memory directly accessible from the worker hosting the operation). The only exception is where we combine data parallelism with model parallelism using replicas: here, replicas are synchronized at the end of each epoch.\n\nPlease find our response to your questions below:\n\n1) The hypothetical computational capability of 1 TFLOPS is configurable depending on the hardware characteristics. We feel that many new devices are candidates for neural network accelerators. 
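(To illustrate the synchronization-free, per-operation update rule discussed above, here is a minimal Python sketch of the loop run by the worker hosting one parametrized operation; `inbox`, `grad_fn`, and `apply_update` are illustrative names, not the actual C++ runtime API. The update decision is purely local, and staleness arises only because other in-flight instances may straddle an update.)

```python
def worker_loop(params, inbox, grad_fn, apply_update, min_update_interval=4):
    grad_acc, n_grads = None, 0
    for msg in inbox:                       # backward messages arrive asynchronously
        g = grad_fn(params, msg)            # gradient w.r.t. this op's parameters
        grad_acc = g if grad_acc is None else grad_acc + g
        n_grads += 1
        if n_grads >= min_update_interval:  # local decision: no barrier, no lock
            apply_update(params, grad_acc / n_grads)
            grad_acc, n_grads = None, 0
```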
While it will eventually be important to understand the fine details of a chosen hardware's characteristics, it is not our intention to lay out detailed calculations for one specific device. Instead, we want to point out a model class and training algorithm where our CPU-based simulator and back-of-the-envelope calculations show that real value could be added by hardware that significantly differs from batch-based processors like GPUs.\n\n\n2.1) We would like to stress that only comparing the effective flops (which is equivalent to comparing only the throughput for a given model) can be misleading, because we can always maximize the device utilization by sacrificing the rate of convergence. For example, increasing the minibatch size for GPU-based training increases device utilization, but spending time processing large batches to produce a small number of low-variance gradient steps can lead to overall slower convergence than taking many (higher-variance) gradient steps with small minibatches. This is why we focus on the time to reach target validation accuracy in our experiments and report both the throughput and the number of epochs to convergence.\n\n2.2) Sorry for the confusing vertical alignment of TensorFlow in Table 1. TensorFlow uses the same number of threads and is run on the same hardware as our multi-core implementation. We will separate out any references to the 1 TFLOPS device in our revised version.\n\n3) Dynamic frameworks such as PyTorch make it easy for users to write dynamic models, but there is no advantage in terms of efficiency, and in particular, it is difficult to deploy dynamic models written in PyTorch in a distributed environment. In contrast, our contribution is to express dynamic neural networks using a static intermediate representation, which is critical for training these models in a distributed environment.\n\n4) In Figure 5, we have elucidated how the two parameters influence the speed of convergence in terms of throughput and convergence rate. In practice, tuning these parameters is not too difficult: max_active_keys can be set to the number of CPU cores, and min_update_interval has a similar interpretation to the batch size, which is common in other neural network frameworks.\n", "Thank you for your careful review. Here is our response to your questions:\n\n 1. Data parallelism (replicas) can be easily deployed in any model (see Section 5).\n\n 2. State (or metadata) is necessary in the proposed asynchronous setup. For example, take an operation that receives forward propagations from multiple parents (e.g., add or concat); messages from different parents and different training instances may arrive in any order due to asynchrony, and we need to guarantee that the messages are correctly grouped by the instance id or the loop counter (if the operation lies inside a loop) stored in the state. The same is true for the backward phase of an operation that receives messages from multiple children. A global scheduler could be used instead of state in a more centralized system, but that would require more communication. The focus of our paper is to explore a decentralized system that doesn't require a scheduler.\n\n 3. The reviewer is absolutely correct about the possible impact of the staleness of the gradients. 
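(To illustrate the grouping role played by the state in point 2 above, here is a minimal Python sketch of the bookkeeping at a multi-parent operation; all names are illustrative, and the actual runtime does this with C++ message passing rather than Python callbacks.)

```python
from collections import defaultdict

def make_join_node(n_parents, emit):
    # Buffer forward messages by key (instance_id, loop_counter) until the
    # messages from all parents have arrived for the same key, then fire.
    pending = defaultdict(dict)
    def on_message(key, parent_index, payload):
        pending[key][parent_index] = payload
        if len(pending[key]) == n_parents:
            emit(key, [pending[key][i] for i in range(n_parents)])
            del pending[key]
    return on_message
```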
We studied this empirically in the paper (see Fig. 5), and we found that the staleness can be controlled by either reducing max_active_keys (the maximum number of examples that the system can process at any given moment) or increasing the min_update_interval (the number of gradients to accumulate before applying updates). When the staleness was reasonably small, the convergence was minimally affected (e.g., max_active_keys=8 for 8 replicas in Figure 5). Moreover, in our multi-core CPU implementation, we found that in most cases there is no benefit in increasing max_active_keys beyond the number of cores (16 in this example).\n" ]
[ 6, 6, 4, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJnQJXbC-", "iclr_2018_HJnQJXbC-", "iclr_2018_HJnQJXbC-", "iclr_2018_HJnQJXbC-", "r1Gy453ZM", "HyWWIcHbf", "HJwWcV8gz", "HJKXoRdgf", "B1vCBbYlz" ]
iclr_2018_B1CQGfZ0b
Learning to select examples for program synthesis
Program synthesis is a class of regression problems where one seeks a solution, in the form of a source-code program, that maps the inputs to their corresponding outputs exactly. Due to its precise and combinatorial nature, it is commonly formulated as a constraint satisfaction problem, where input-output examples are expressed as constraints, and solved with a constraint solver. A key challenge of this formulation is that of scalability: while constraint solvers work well with few well-chosen examples, constraining the entire set of examples constitutes a significant overhead in both time and memory. In this paper, we address this challenge by constructing a representative subset of examples that is both small and able to constrain the solver sufficiently. We build the subset one example at a time, using a trained discriminator to predict the probability of unchosen input-output examples conditioned on the chosen input-output examples, adding the least probable example to the subset. Experiments on a diagram-drawing domain show that our approach produces subsets of examples that are small and representative for the constraint solver.
rejected-papers
The reviewers largely agreed that the paper presents an interesting idea and has potential, but needs a better empirical evaluation. It seems that the authors largely agree and are working to improve it. PROS: 1. Improving the speed of program synthesis is a useful problem 2. Good treatment of related work, e.g. CEGIS CONS: 1. The approach likely does not scale 2. The architecture is underspecified, making it hard to reproduce 3. Only one domain for evaluation
train
[ "SJC_Polgz", "Bycm6ytgf", "By8NGl0xG", "Syz7_waXf", "BycOZvTXf", "SkzWAUTQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper proposes a method for identifying representative examples for program\nsynthesis to increase the scalability of existing constraint programming\nsolutions. The authors present their approach and evaluate it empirically.\n\nThe proposed approach is interesting, but I feel that the experimental section\ndoes not serve to show its merits for several reasons. First, it does not\ndemonstrate increased scalability. Only 1024 examples are considered, which is\nby no means large. Even then, the authors' approach selects the highest number of\nexamples (figure 4). CEGIS both selects fewer examples and has a shorter median\ntime for complete synthesis. Intuitively, the authors' method should scale\nbetter, but they fail to show this -- a missed opportunity to make the paper\nmuch more compelling. This is especially true as a more challenging benchmark\ncould be created very easily by simply scaling up the image.\n\nSecond, there is no analysis of the representativeness of the found sets of\nconstraints. Given that the results are very close to other approaches, it\nremains unclear whether they are simply due to random variations, or whether the\nproposed approach actually achieves a non-random improvement.\n\nIn addition to my concerns about the experimental evaluation, I have concerns\nabout the general approach. It is unclear to me that machine learning is the\nbest approach for modeling and solving this problem. In particular, the\nselection probability of any particular example could be estimated through a\nheuristic, for example by simply counting the number of neighbouring examples\nthat have a different color, weighted by whether they are in the set of examples\nalready, to assess its \"borderness\", with high values being more important to\nachieve a good program. The border pixels are probably sufficient to learn the\nprogram perfectly, and in fact this may be exactly what the neural net is\nlearning. The above heuristic is obviously specific to the domain, but similar\nheuristics could be easily constructed for other domains. I feel that this is\nsomething the authors should at least compare to in the empirical evaluation.\n\nAnother concern is that the authors' approach assumes that all parameters have\nthe same effect. Even for the example the authors give in section 2, it is\nunclear that this would be true.\n\nThe text says that rand+cegis selects 70% of examples of the proposed approach,\nbut figure 4 seems to suggest that the numbers are very close -- is this for initial\nexamples only?\n\nOverall the paper appears rushed -- the acknowledgements section is left over\nfrom the template and there is a reference to figure \"blah\". There are typos and\ngrammatical mistakes throughout the paper. The reference to \"Model counting\" is\nincomplete.\n\nIn summary, I feel that the paper cannot be accepted in its current form.", "This paper presents a method for choosing a subset of examples on which to run a constraint solver\nin order to solve program synthesis problems. This problem is basically active learning for\nprogramming by example, but the considerations are slightly different from those in standard active\nlearning.
The assumption here is that labels (aka outputs) are easily available for all possible\ninputs, but we don't want to give a constraint solver all the input-output examples, because it will\nslow down the solver's execution.\n\nThe main baseline technique CEGIS (counterexample-guided inductive synthesis) addresses this problem\nby starting with a small set of examples, solving a constraint problem to get a hypothesis program,\nthen looking for \"counterexamples\" where the hypothesis program is incorrect.\n\nThis paper instead proposes to learn a surrogate function for choosing which examples to select. The\npaper isn't presented in exactly these terms, but the idea is to consider a uniform distribution\nover programs and a zero-one likelihood for input-output examples (so observations of I/O examples\njust eliminate inconsistent programs). We can then compute a posterior distribution over programs\nand form a predictive distribution over the output for all the remaining possible inputs. The paper\nsuggests always adding the I/O example that is least likely under this predictive distribution\n(i.e., the one that is most \"surprising\").\n\nForming the predictive distribution explicitly is intractable, so the paper suggests training a\nneural net to map from a subset of inputs to the predictive distribution over outputs. Results show\nthat the approach is a bit faster than CEGIS in a synthetic drawing domain.\n\nThe paper starts off strong. There is a start at an interesting idea here, and I appreciate the\nthorough treatment of the background, including CEGIS and submodularity as a motivation for doing\ngreedy active learning, although I'd also appreciate a discussion of relationships between this approach \nand what is done in the active learning literature. Once getting into the details of the proposed approach, \nthe quality takes a downturn, unfortunately. \n\nMain issues:\n- It's not generally scalable to build a neural network whose size scales with the number\nof possible inputs. I can't see how this approach would be tractable in more standard program\nsynthesis domains where inputs might be lists of arrays or strings, for example. It seems that this\napproach only works due to the peculiarities of the formulation of the only task that is considered,\nin which the program maps a pixel location in 32x32 images to a binary value.\n\n- It's odd to write \"we do not suggest a specific neural network architecture for the\nmiddle layers, one should seelect whichever architecture that is appropriate for the domain at\nhand.\" Not only is it impossible to reproduce a paper without any architectural details, but the\nresult is then that Fig 3 essentially says inputs -> \"magic\" -> outputs. Given that I don't even\nthink the representation of inputs and outputs is practical in general, I don't see what the \ncontribution is here.\n\n- This paper is poor in the reproducibility category. The architecture is never described,\nit is light on details of the training objective, it's not entirely clear what the DSL used in the\nexperiments is (is Figure 1 the DSL used in experiments), and it's not totally clear how the random\nimages were generated (I assume values for the holes in Figure 1 were sampled from some\ndistribution, and then the program was executed to generate the data?).\n\n- Experiments are only presented in one domain, and it has some peculiarities relative to \nmore standard program synthesis tasks (e.g., it's tractable to enumerate all possible inputs).
It'd\nbe stronger if the approach could also be demonstrated in another domain.\n\n- Technical point: it's not clear to me that the training procedure as described is consistent\nwith the desired objective in sec 3.3. Question for the authors: in the limit of infinite training\ndata and model capacity, will the neural network training lead to a model that will reproduce the\nprobabilities in 3.3?\n\nTypos:\n- The paper needs a cleanup pass for grammar, typos, and remnants like \"Figure blah shows our \nneural network architecture\" on page 5.\n\nOverall: There's the start of an interesting idea here, but I don't think the quality is high enough\nto warrant publication at this time.\n", "General-purpose program synthesizers are powerful but often slow, so work that investigates means to speed them up is very much welcome—this paper included. The idea proposed (learning a selection strategy for choosing a subset of synthesis examples) is good. For the most part, the paper is clearly written, with each design decision justified and rigorously specified. The experiments show that the proposed algorithm allows a synthesizer to do a better job of reliably finding a solution in a short amount of time (though the effect is somewhat small).\n\nI do have some serious questions/concerns about this method:\n\nPart of the motivation for this paper is the goal of scaling to very large sets of examples. The proposed neural net setup is an autoencoder whose input/output size is proportional to the size of the program input domain. How large can this be expected to scale (a few thousand)? \n\nThe paper did not specify how often the neural net must be trained. Must it be trained for each new synthesis problem? If so, the training time becomes extremely important (and should be included in the “NN Phase” time measurements in Figure 4). If this takes longer than synthesis, it defeats the purpose of using this method in the first place.\nAlternatively, can the network be trained once for a domain, and then used for every synthesis problem in that domain (i.e. in your experiments, training one net for all possible binary-image-drawing problems)? If so, the training time amortizes to some extent—can you quantify this?\nThese are all points that require discussion which is currently missing from the paper.\n\nI also think that this method really ought to be evaluated on some other domain(s) in addition to binary image drawing. The paper is not an application paper about inferring drawing programs from images; rather, it proposes a general-purpose method for program synthesis example selection. As such, it ought to be evaluated on other types of problems to demonstrate this generality. Nothing about the proposed method (e.g. the neural net setup) is specific to images, so this seems quite readily doable.\n\nOverall: I like the idea this paper proposes, but I have some misgivings about accepting it in its current state.\n\n\n\n\nWhat follows are comments on specific parts of the paper:\n\n\nIn a couple of places early in the paper, you mention that the neural net computes “the probability” of examples. The probability of what? This was totally unclear until fairly deep into Section 3.\n - Page 2: “the neural network computes the probability for other examples not in the subset”\n - Page 3: “the probability of all the examples conditioned on…”\n\nOn a related note, I don’t like the term “Selection Probability” for the quantity it describes.
This quantity is ‘the probability of an input being assigned the correct output.’ That happens to be (as you’ve proven) a good measure by which to select examples for the synthesizer. The first property (correctness) is a more essential property of this quantity, rather than the second (appropriateness as an example selection measure).\n\nPage 5: “Figure blah shows our neural network architecture” - missing reference to Figure 3.\n\nPage 5: “note that we do not suggest a specific neural network architecture for the middle layers, one should select whichever architecture that is appropriate for the domain at hand” - such as? What are some architectures that might be appropriate for different domains? What architecture did you use in your experiments?\n\nThe description of the neural net in Section 3.3 (bottom of page 5) is hard to follow on first read-through. It would be better to lead with some high-level intuition about what the network is supposed to do before diving into the details of how it’s set up. The first sentence on page 6 gives this intuition; this should come much earlier.\n\nPage 5: “a feed-forward auto-encoder with N input neurons…” Previously, N was defined as the size of the input domain. Does this mean that the network can only be trained when a complete set of input-output examples is available (i.e. outputs for all possible inputs in the domain)? Or is it fine to have an incomplete example set?\n\n", "\nOnly 1024 examples are considered, which is by no means large.\n\n=> Indeed this is not large compared to a standard vision task, but, taken all together, the examples can be quite significant for the constraint solver to reason with. We believe what you meant by “not large” is in the sense that the entire _input space_ is quite small, and we do intend to address this problem so that the input-outputs are not total in the dataset, but rather a sample of input-output pairs that lives in a much bigger space.\n\n\nEven then, the authors' approach selects the highest number of\nexamples (figure 4). CEGIS both selects fewer examples and has a shorter median\ntime for complete synthesis. Intuitively, the authors' method should scale\nbetter, but they fail to show this -- a missed opportunity to make the paper\nmuch more compelling. This is especially true as a more challenging benchmark\ncould be created very easily by simply scaling up the image.\n\n=> We tuned some weights and have better results (better median time). \n\nhttps://imgur.com/a/JyZor\n\nIt is true that CEGIS selects the fewest examples, but it does so at the cost of calling a constraint solver each time. So whenever the constraint solver is “stuck”, the next example is not produced for a long time. Our approach selects a whole bunch of examples in a batch at low cost (using the neural network) and often no additional example is required (i.e. no additional solver time is needed to pick more examples). We should scale up the images and see how they compare (originally we had 64x64 images with very good results but it was taking forever to run even 1 instance).\n\n\n\n\nSecond, there is no analysis of the representativeness of the found sets of\nconstraints. Given that the results are very close to other approaches, it\nremains unclear whether they are simply due to random variations, or whether the\nproposed approach actually achieves a non-random improvement.\n\n=> The number of additional CEGIS examples needed is a quantification of this metric.
\n\nhttps://imgur.com/a/JyZor\n\nOur approach selected examples in a way that the synthesizer returned a correct program with 0 or 1 additional CEGIS examples on top. Meaning the original set of examples chosen by the NN forces a set of constraints strong enough that the correctly synthesized program cannot be ambiguous. However, a better metric would be to explicitly measure this ambiguity.\n\n\nThe above heuristic is obviously specific to the domain, but similar\nheuristics could be easily constructed for other domains. I feel that this is\nsomething the authors should at least compare to in the empirical evaluation.\n\n=> We will incorporate the border heuristic as another baseline to compare against (one issue with this heuristic is that all border pixels clearly amount to a lot of input-output examples; do you suggest keeping all of them? Or do you stop collecting at some point, and if so what is a good stopping criterion if you intend to do so without any learning but rely on a heuristic?) We will include experiments from other domain(s) such that it will convince the reader that there will be cases where heuristics are hard to construct.\n\n\nOverall: Quantify the \"representativeness\" of the set of examples better, perhaps explicitly. Incorporate new domains and show that learning to select examples is more reasonable than hacking a heuristic for each domain.", "I'd also appreciate a discussion of relationships between this approach and what is done in the active learning literature.\n\n=> Which work would you see as most similar to our work? I see CEGIS as most closely related to the line of work that asks for labels for the input that lies most \"close to the decision boundary\" for learning an SVM. However, I am in a setting where all labels are already given but are too many to process. If you can give a few pointers/papers on what would be good related work in this space it would be very much appreciated. \n\n\nIt's not generally scalable to build a neural network whose size scales with the number\nof possible inputs. I can't see how this approach would be tractable in more standard program\nsynthesis domains where inputs might be lists of arrays or strings, for example. It seems that this\napproach only works due to the peculiarities of the formulation of the only task that is considered,\nin which the program maps a pixel location in 32x32 images to a binary value.\n\n=> You are right. In the particular experiments we use a conv-net of a 7x7 window size so it would scale to arbitrarily large images (to the point that the constraint synthesizer is the bottleneck). However, in general it is definitely true that such an encoding will not scale. We are working on an RNN architecture that does not take in the entire input space at once.\n\n\n- This paper is poor in the reproducibility category. The architecture is never described,\nit is light on details of the training objective, it's not entirely clear what the DSL used in the\nexperiments is (is Figure 1 the DSL used in experiments), and it's not totally clear how the random\nimages were generated (I assume values for the holes in Figure 1 were sampled from some\ndistribution, and then the program was executed to generate the data?).\n\n=> We'll do a better job next time explaining the architecture and the DSL.
The random images are generated by uniformly sampling integer values (between some range bounds) for the holes in Figure 1, and the draw program is executed to generate a 32x32 image.\n\n\n- Experiments are only presented in one domain, and it has some peculiarities relative to \nmore standard program synthesis tasks (e.g., it's tractable to enumerate all possible inputs). It'd\nbe stronger if the approach could also be demonstrated in another domain.\n\n=> We do intend to take our work to a different domain and have some in mind. However, if you have any domain where you would like to see us try this approach please let us know, it would be very instructive.\n\n\n- Technical point: it's not clear to me that the training procedure as described is consistent\nwith the desired objective in sec 3.3. Question for the authors: in the limit of infinite training\ndata and model capacity, will the neural network training lead to a model that will reproduce the\nprobabilities in 3.3?\n\n=> Yes it will. The neural network in that case would act like a \"soft\" dictionary of counts keeping track of all the instances in which a new input x is mapped to y conditioned on all the past observed input/outputs. Thus, for the same reason that the explicit count formulation approaches the desired probability, the neural network would as well.\n\n\nOverall: Need a better explanation of the neural network architecture; a new domain is needed (with a better architecture that can scale)", "Part of the motivation for this paper is the goal of scaling to very large sets of examples. The proposed neural net setup is an autoencoder whose input/output size is proportional to the size of the program input domain. How large can this be expected to scale (a few thousand)? \n\n=> This is a fair point. We also believe the current architecture is both badly explained and badly constructed for other kinds of tasks. We failed to mention that the particular architecture for the drawing example is a conv-net with a 7x7 window size, so there is an additional independence assumption based on location: pixel values far away from each other are uncorrelated. For that particular task, local information such as the shape of a line or square is already sufficient for picking good examples for synthesis, and scales well potentially to very large images. We also hope to include experiments on a textual domain in the future, which will use a recurrent neural network architecture that processes the input-output pairs sequentially rather than all at once.\n\n\n\nThe paper did not specify how often the neural net must be trained. Must it be trained for each new synthesis problem?\n\n=> It is trained once, as a kind of “compilation”, if you will, for a domain. Once trained it can be used repeatedly without additional training.\n\n\nI also think that this method really ought to be evaluated on some other domain(s) in addition to binary image drawing.\n\n=> Indeed! We really hoped for it too but could not quite get it working in time for the deadline. We agree that a general-purpose paper would benefit from additional domains. \n \n\nOverall, the specific neural network architectures need to be better explained, potentially with a different architecture for a different domain to show that the approach can scale to large input spaces. We will take these suggestions to make the work more solid. Thanks!\n" ]
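Because the reviews and responses above repeatedly contrast the proposed method with CEGIS, a minimal sketch of that baseline loop may help. Here synthesize is a hypothetical wrapper around one constraint-solver call that returns a candidate program as a callable; each iteration of the loop pays one such call, which is exactly the per-example cost the response contrasts with the neural network's batch selection.

```python
def cegis(all_examples, synthesize, initial_examples, max_iterations=100):
    """Counterexample-guided inductive synthesis (illustrative sketch).

    all_examples:     complete list of (input, output) pairs
    synthesize:       hypothetical callable(examples) -> candidate program,
                      i.e. one (expensive) constraint-solver invocation
    initial_examples: small seed set of examples
    """
    chosen = list(initial_examples)
    for _ in range(max_iterations):
        program = synthesize(chosen)                # one solver call per round
        counterexamples = [(x, y) for (x, y) in all_examples
                           if program(x) != y]      # check the full example set
        if not counterexamples:
            return program, chosen                  # consistent with everything
        chosen.append(counterexamples[0])           # grow the example set
    raise RuntimeError("no consistent program found within the budget")
```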
[ 4, 5, 5, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_B1CQGfZ0b", "iclr_2018_B1CQGfZ0b", "iclr_2018_B1CQGfZ0b", "SJC_Polgz", "Bycm6ytgf", "By8NGl0xG" ]
iclr_2018_r1kjEuHpZ
Learning Less-Overlapping Representations
In representation learning (RL), how to make the learned representations easy to interpret and less overfitted to training data are two important but challenging issues. To address these problems, we study a new type of regularization approach that encourages the supports of weight vectors in RL models to have small overlap, by simultaneously promoting near-orthogonality among vectors and sparsity of each vector. We apply the proposed regularizer to two models: neural networks (NNs) and sparse coding (SC), and develop an efficient ADMM-based algorithm for regularized SC. Experiments on various datasets demonstrate that weight vectors learned under our regularizer are more interpretable and have better generalization performance.
rejected-papers
Each of the reviewers had a slightly different set of issues with this paper but here is an attempt at a summary: PROS: 1. Paper is mostly clear and well structured. CONS: 1. Lack of novelty 2. Unsupported claims 3. Questionable methodology (using dropout confounds the goal of the experiment) The authors did not submit a rebuttal.
train
[ "B1taBfmlG", "BJ9J8G_ez", "ByL47G5lM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "*Summary*\nThe paper introduces a matrix regularizer to simultaneously induce both sparsity and (approximate) orthogonality. The definition of the regularizer mostly relies on the previous proposal from Xie et al. 2017b, to which a weighted L1 term is added.\nThe regularizer aims at reducing overlap among the learned matrices, and it is applied to various neural networks and sparse coding (SC) settings.\nMost of the challenges of the paper concentrate on the optimization side.\nThe evaluation of the paper is based on 3 experiments: SC (to illustrate the gain in interpretability and the reduction in overfitting), LSTM (for an NLP task over PTB) and CNN (for a computer vision task over CIFAR-10). \n\nThe paper is overall clear and fairly well structured, but it suffers from several flaws, as discussed next.\n\n*Detailed comments*\n(mostly in linear order)\n\n-The proposed regularization scheme seems closely related to the approach taken in [Zass2007]; a detailed discussion and potential comparison should be provided. In particular, the approach of [Zass2007] would lead to an easier optimization. \n\n-The sparse coding formulation has an extremely heavy parametrization (4 regularization parameters + the optimization parameter for ADMM + the number of columns of W). It seems to me that the resulting approach may not be very practical.\n\n-Sparse coding: More references to previous work are needed, such as references related to alternating schemes and proximal optimization for SC (in Sec. 3); e.g., see [Mairal2010,Jenatton2011] and numerous references therein.\n\n-I would suggest moving the derivations of Sec. 3.1 into an appendix so as not to break the flow for the readers. The derivations look sound.\n\n-Due to the use of ADMM, I think that only W_tilde is sparse (due to the prox update (10)), but W may not be. This point should be discussed. Is a \"manual\" thresholding applied thereafter?\n\n-For equation (25), I would make precise that the columns of U have to be properly ordered to make sure we can only look at those from s=m...d.\n\n-More details about the optimization in the case of the neural networks should be discussed.\n\n-Could another splitting for ADMM, based on the logdet to reuse ideas from [Banerjee2008,Friedman2008], be possible?\n\n-In Table 2, are those 3-decimal statistics significant? Any idea of the variability of those numbers?\n\n-Interpretability: The paper focuses on the gain in interpretability thanks to the regularizer (e.g., Table 1 and 3). But all the proposed settings (SC or neural networks) are such that the parameters are themselves subject to sources of variations, e.g., the initial conditions. How can we make strong conclusions while inspecting the parameters?\n\n-In Figure 2, it seems to me that the final performance metric should also be overlaid. What appears interesting to me is the trade-off between overlap score and final performance metric.\n\n*References*\n\n[Banerjee2008] Banerjee, O.; El Ghaoui, L. & d'Aspremont, A. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, MIT Press, 2008, 9, 485-516\n\n[Friedman2008] Friedman, J.; Hastie, T. & Tibshirani, R. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 2008, 9, 432\n\n[Jenatton2011] Jenatton, R.; Mairal, J.; Obozinski, G. & Bach, F.
Proximal Methods for Hierarchical Sparse Coding. Journal of Machine Learning Research, 2011, 12, 2297-2334\n\n[Mairal2010] Mairal, J.; Bach, F.; Ponce, J. & Sapiro, G. Online learning for matrix factorization and sparse coding. Journal of Machine Learning Research, 2010, 11, 19-60\n\n[Zass2007] Zass, R. & Shashua, A. Nonnegative sparse PCA. Advances in Neural Information Processing Systems, 2007", "The paper studies a regularization method to promote sparsity and reduce the overlap among the supports of the weight vectors in the learned representations. The motivation for using this regularization is to enhance the interpretability of the learned representation and avoid overfitting of complex models. \n\nTo reduce the overlap among the supports of the weight vectors, an existing method (Xie et al, 2017b) encouraging orthogonality is adopted to make the Gram matrix of the weight vectors close to the identity matrix (so that each weight vector has unit norm and any pair of vectors is approximately orthogonal).\n\nNeural networks and sparse coding are considered as two case studies. The alternating algorithm for solving the regularized sparse coding formulation is standard and of less interest. I think the major point is to see how much benefit the regularization can afford for learning deep neural networks. To avoid overfitting, some off-the-shelf methods, e.g., dropout, which can be viewed as a kind of regularization, are commonly used for deep neural networks. Are there any connections between the adopted regularization terms and the existing methods? Will these less-overlapping parameters control the activation of different neurons? I think these are straightforward questions, yet there is not much explanation of these aspects.\n\nFor training neural networks, a simple sub-gradient method is used because of the non-smoothness of the regularization terms. When training with large neural networks, will the sub-gradient method affect the efficiency a lot compared with not using the regularizer? For example, in the image classification problem with ResNet.\n\nIt is better not to use dropout in the experiments (language modeling and image classification), because one of the motivations for using the proposed regularizer is to avoid overfitting, while dropout does the same work and may affect the evaluation of the effectiveness of the regularization.\n", "The paper proposed a new regularization approach that simultaneously encourages the weight vectors (W) to be sparse and orthogonal to each other. The argument is that the sparsity helps to eliminate the irrelevant feature vectors by making the corresponding weights zero. Nearly orthogonal sparse vectors will have zeros at different indexes, which encourages the weight vectors to have small overlap in terms of indices of nonzero entries (called the support). Small overlap in the support of weight vectors aids interpretability, as each weight vector is associated with a unique subset of feature vectors. For example, in the topic model, small overlap encourages each topic to have a unique set of representative words. \n\nThe proposed approach uses an L1 regularizer for enforcing sparsity in W. For enforcing orthogonality between different weight vectors (wi, wj), the log-determinant divergence (LDD) regularization term encourages the Gram Matrix G (Gij = wiTwj) to be close to an identity matrix I. The authors applied and tested the performance of the proposed approach on Neural Network and Sparse Coding (SC) machine learning models.
The authors validated the need for their proposed regularizer through experiments on 4 datasets (3 text and 1 image).\n\nMajor\n* The novelty of the paper is not clear. Neither L1 nor logdet() is a novel regularizer (see the literature on Determinantal Point Processes). With the availability of automatic differentiation, one cannot claim that deriving the gradients is a novelty.\n\n* L1 also encourages diversity, although not as explicitly as logdet. This is also obvious from Fig 2. Perhaps the advantage of diversity is in interpretability, but that is hard to quantify and the authors did not put enough effort into that; we only have small anecdotal results in section 4.3. \n\n* Table 1 is not convincing because one can argue, for example, gun (vec 1) and weapon (vec 4) are collinear. \n\n* In section 4.2, the authors experimented with SC on a text dataset. The overlap score decreases as the strength of regularization increases. The authors didn’t show the effect of increasing the regularization strength on the model accuracy and convergence time. This analysis is important to make sure the decrease in overlap score is not coming at the expense of model accuracy and performance. \n\n* In section 4.4, the increase in test set accuracy and the difference between test and train set accuracy are used to validate the claim that the proposed regularizer helps reduce overfitting. In Table 2, the test accuracy increases between SC and LDD-L1 SC while the train accuracy remains almost the same. Also, the authors didn’t do any cross validation to support their claim. The difference in numbers is too small to support the claim.\n\n* In the section on LSTM for language modeling, LDD-L1 regularization on PytorchLM achieved a perplexity 1.2 lower than without regularization. Although the authors present it as a significant reduction, the lowest perplexity score in Table 3 is significantly lower than this result. It’s not clear how a 1.2 reduction in perplexity is significant and why the method should be preferred while much better models already exist.\n\n* Results of the best perplexity model, Neural Architecture Search + WT V2, with the proposed regularization would also help validate the generalizability claims of the new approach.\n\n* In the CNN for image classification section, details on the increased interpretability of the model, in terms of classification decisions, are missing.\n\n* In Table 4, the proposed LDD-L1 WideResNet does not give the best results. Results of adding the proposed regularization to the best-known method (Pyramid Sep Drop) would be interesting. \n\n* The proposed regularization claims to provide more interpretable representations and a less overfitted model. The given experiments are inadequate to validate the claims.\n\n* More extensive experimentation is required to validate the applicability of the method.\n\n* In SC, aj are the linear coefficients or the coefficient vector of the j-th sample. If A ∈ Rm×n then aj ∈ Rm×1 and j ranges over [1,n] as in equation 6. The notation in the Sparse Coding part of section 2.2 is misleading, as j ranges over [1,m].\n\n* In Related works, the authors mention previous work done on interpreting the results of the machine learning models. Related work on enhancing interpretability and reducing overfitting by using regularization is missing.\n\n\n" ]
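For readers following the discussion of the LDD-L1 regularizer in the reviews above, here is a minimal numpy sketch of the quantity being debated: an L1 term for sparsity plus the log-determinant divergence between the Gram matrix G = W W^T and the identity, which vanishes exactly when the weight vectors are unit-norm and mutually orthogonal. The hyperparameter names and the small ridge term are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ldd_l1_regularizer(W, lam=1.0, gamma=0.1, eps=1e-6):
    """Sketch of the LDD-L1 regularizer described in the reviews.

    W: (d, n) matrix whose d rows are the weight vectors.
    LDD term: tr(G) - log det(G) - d with G = W W^T; equals zero iff G = I.
    L1 term:  promotes sparsity of each weight vector.
    """
    d = W.shape[0]
    G = W @ W.T + eps * np.eye(d)        # small ridge keeps log det finite
    _, logdet = np.linalg.slogdet(G)
    ldd = np.trace(G) - logdet - d
    return lam * ldd + gamma * np.abs(W).sum()
```

Combined with the sparsity term, near-orthogonality of the rows pushes their nonzero supports apart, which is the small-overlap property the paper ties to interpretability.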
[ 5, 4, 3 ]
[ 4, 4, 5 ]
[ "iclr_2018_r1kjEuHpZ", "iclr_2018_r1kjEuHpZ", "iclr_2018_r1kjEuHpZ" ]
iclr_2018_SJdCUMZAW
Data-efficient Deep Reinforcement Learning for Dexterous Manipulation
Grasping an object and precisely stacking it on another is a difficult task for traditional robotic control or hand-engineered approaches. Here we examine the problem in simulation and provide techniques aimed at solving it via deep reinforcement learning. We introduce two straightforward extensions to the Deep Deterministic Policy Gradient algorithm (DDPG), which make it significantly more data-efficient and scalable. Our results show that by making extensive use of off-policy data and replay, it is possible to find high-performance control policies. Further, our results hint that it may soon be feasible to train successful stacking policies by collecting interactions on real robots.
rejected-papers
The reviewers were quite unanimous in their assessment of this paper. PROS: 1. The paper is relatively clear and the approach makes sense 2. The paper presents and evaluates a collection of approaches to speed learning of policies for manipulation tasks. 3. Improving the data efficiency of learning algorithms and enabling learning across multiple robots is important for practical use in robot manipulation. 4. The multi-stage structure of manipulation is nicely exploited in reward shaping and distribution of starting states for training. CONS: 1. Lack of novelty, e.g. w.r.t. Finn et al. in "Deep Spatial Autoencoders for Visuomotor Learning" 2. The techniques of asynchronous update and multiple replay steps may have limited novelty, building closely on previous work and applying it to this new problem. 3. The contribution on reward shaping would benefit from a more detailed description and investigation. 4. There is concern that results may be specific to the chosen task. 5. Experiments using real robots are needed for practical evaluation.
train
[ "SJVDtoHef", "SyFsE_Def", "SkHZuZqxf", "SJEzPorez" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I already reviewed this paper for R:SS 2017. There were no significant updates in this version; see my largely identical detailed comment in \"Official Comment\"\n\nQuality\n======\nThe proposed approaches make sense but it is unclear how task-specific they are.\n\nClarity\n=====\nThe paper reads well. The authors cram 4 ideas into one paper which comes at the cost of clarity of each of them.\n\nOriginality\n=========\nThe ideas on their own are rather incremental.\n\nSignificance\n==========\nIt is unclear how widely applicable the ideas (and their combination) are and whether they would transfer to a real robot experiment. As pointed out above the ideas are not really groundbreaking on their own.\n\nPros and Cons (from the RSS AC which sums up my thoughts nicely)\n============\n+ The paper presents and evaluates a collection of approaches to speed learning of policies for manipulation tasks.\n+ Improving the data efficiency of learning algorithms and enabling learning across multiple robots is important for practical use in robot manipulation.\n+ The multi-stage structure of manipulation is nicely exploited in reward shaping and distribution of starting states for training.\n\n- The techniques of asynchronous update and multiple replay steps may have limited novelty, building closely on previous work and applying it to this new problem.\n- The contribution on reward shaping would benefit from a more detailed description and investigation.\n- There is concern that results may be specific to the chosen task. \n- Experiments using real robots are needed for practical evaluation.\n", "The authors propose to learn to pick up a block and put it on another block using DDPG. A few tricks are described, which I believe already appear in prior work. The discussion of results presented in prior work also has a number of issues. The claim of \"data efficient\" learning is not really accurate, since even with demonstrations, the method requires substantially more experience than prior methods. Overall, it's hard to discern a clear contribution, either experimentally or conceptually, and the excessive claims in the paper are very off-putting. This would perhaps make a reasonable robotics paper if it had a real-world evaluation and if the claims were scoped more realistically, but as-is, I don't think this work is ready for publication.\n\nMore detailed comments:\n\nThe two main contributions -- parallel training and asynchrony -- already appear in the Gu et al. paper. In fact, that paper demonstrates learning entirely in the real world, and substantially more efficiently than described in this paper. The authors don't discuss this at all, except a passing mention of Gu et al.\n\nThe title is not appropriate for this paper. The method is data-efficient compared to what? The results don't look very data efficient: the reported result is something on the order of 160 robot-hours, and 16 robot-hours with demonstration. That's actually dramatically less efficient than prior methods.\n\n\"our results on data efficiency hint that it may soon be feasible to train successful stacking policies by collecting interactions on real robots\": Prior work already shows successful stacking policies on real robots, as well as successful pick-and-place policies and a variety of other skills.
The funny thing is that many of these papers are actually cited by the authors, but they simply pretend that those works don't exist when discussing the results.\n\n\"We assess the feasibility of performing analogous experiments on real robotics hardware\": I assume this is a typo, but the paper does not actually contain any real robotics hardware experiments.\n\n\"To our knowledge our results provide the first demonstration of end-to-end learning for a complex manipulation problem involving multiple freely moving objects\": This was demonstrated by Finn et al. in \"Deep Spatial Autoencoders for Visuomotor Learning,\" with training times that are a tiny fraction of those reported in this paper, and using raw images and real hardware.\n\n\"both rely on access to a well defined and fully observed state space\": This is not true of the Finn et al. paper mentioned above.", "The title is too generic and even a bit misleading. Dexterous manipulation usually refers to more complex skills, like in-hand manipulation or using the fingers to turn an object, and not simple pick-and-place tasks. Reinforcement learning methods are generally aiming to be data-efficient, and the method does not seem designed specifically for dexterous manipulation (which is actually a positive point, as it is more general).\n\nThe paper presents two extensions for DDPG: multiple network updates per physical interaction, and asynchronous updates from multiple robots. As the authors themselves state, these contributions are fairly straightforward and largely based on prior work. The authors do evaluate the methods with different parameter settings to see the effects on learning performance. \n\nThe simulation environment is fairly basic and seems unrealistic. The hand always starts close to the blocks, which are close together, so the inverse kinematics will be close to linear. The blocks are always oriented in the same direction and they can connect easily with no need to squeeze or wiggle them together. The task seems more difficult from the description in the paper, and the authors should describe the environment in more detail.\n\nDoes the robot learn to flip the blocks over such that they can be stacked? The videos show the \nblocks turning over accidentally, but then the robot seems to give up. Having the robot learn to turn the blocks would make for a more challenging task and a better policy.\n\nThe paper’s third contribution is a recipe for constructing shaped reward functions for composite tasks. The method relies on a predefined task structure (reach-grasp-stack) and is very similar to reward shaping already used in many other reinforcement learning for manipulation papers. A comparison of different methods for defining the rewards and a more formal description of the reward generation procedure would improve the impact of this section. The authors should also consider using tasks with longer sequences of actions, e.g., stacking four blocks. \n\nThe fourth and final listed contribution is learning from demonstrated states. Providing the robot with prior knowledge and easier partial tasks will result in faster learning. This result is not surprising. It is not clear though how applicable this approach is to a real robot system. It effectively assumes that the robot can grasp the block and pick it up, such that it can learn the stacking part, while simultaneously still learning how to grasp the block and pick it up.
For testing the real robot applicability, the authors should try having the robot learn the task without simulation resets. \n\nWhat are the actual benefits of using deep learning in this scenario? The authors mention skill representations, such as dynamic motor primitives, which employ significantly more prior knowledge than a deep network. However, as demonstrations of the task are provided, the task is divided into steps, the locations of the objects and fingertips are given, a suitable reward function is provided, and the generalization is only over the object positions, why not train a set of DMPs and optimize them with some additional reinforcement learning? The authors should consider adding a Cartesian DMP policy as a benchmark, as well as discussing the benefits of the proposed approach given the prior knowledge. ", "The paper proposes 4 approaches to speed up deep RL: multiple replay steps, asynchronous updates, reward shaping, and selecting starting states. The various combinations of approaches are evaluated on a combined grasping and stacking task.\nI agree with the ideas related to reward shaping and selecting starting states in general. The other two approaches are rather specific to deep RL but well justified there.\nThe paper is a nice illustration that completely uninformative rewards do not work for complex, sequential tasks, but also that we have to be very careful that rewards do not lead to the agent exploiting the reward definition in undesirable ways.\n\nBoth a strength and a weakness is that the method proposes a whopping 4 approaches to speed up learning. Each of them comes across as somewhat incremental, and the format does not allow the authors to go very much in depth.\nMy main points of criticism:\n- The first 2 contributions are heavily based on prior work (as pointed out by the authors in Sect. 5, but not in the intro and summary). It is not really clear what is novel and what is just showing \"works for this type of problem as well\".\n- The contribution on reward shaping is very interesting, but then the \"meat\" is described in 10 lines which are hard to follow and to generalize to new problems.\n- The contribution on learning from starting states ends up using a pre-trained policy or demonstration. It is not really clear how much this still is RL and how much this is supervised learning. I.e., we could also pre-train the value fct. and policy based on the demonstrations in a supervised fashion. Then it would be interesting to evaluate the performance and see how much RL can additionally improve upon that.\n- I am not convinced this can really be transferred to a real robot as claimed by the authors, or that the ideas are really widely applicable.\n\nQuite a bit of the discussion on RL, robotics, and rendering this combination tractable could be shortened e.g. by referring to Kober, J; Bagnell, D.; Peters, J. (2013). Reinforcement Learning in Robotics: A Survey, International Journal of Robotics Research, 32, 11, pp.1238-1274.\n\nother interesting reference on parallel updates etc.\nW. Caarls and E. Schuitema, “Parallel Online Temporal Difference Learning for Motor Control,” IEEE Transactions on Neural Networks and Learning Systems, vol. 27, pp.
1–12, Jul 2015.\n\n- If you have such a clear task decomposition (and make use of it for the reward shaping), why not learn the parts separately, e.g., in a hierarchical RL setting?\n\nMinor comments\n==============\n- Introduction: you keep talking about end-to-end but never mention what the inputs and outputs are until much later (end-to-end is typically vision+proprioception to torque, here the position and orientation of the blocks are given).\n- Section 4: what is the definition of \"physically plausible simulation\"?\n- Section 4: The observation vector is not entirely clear: 9 DoFs of the robot (position + velocity) is clear. The observations of the blocks not so much: position (3 dim?) + orientation (3 dim?), but then no velocities? And what dimension are the relative distances? Full 6 dim per block? Scalar?\n- Section 5 (multiple mini-batches): so you make this a lot more aggressive, does re-using the samples so often not lead to overfitting?\n- Section 5 (asynchronous DPG): Any explanation why the speed-up of Grasp (8x) is significantly lower than for StackInHand (16x) for the 16 workers?\n- It would be nice to have axis labels in the plots\n- \"hasn't\" => \"has not\"\n\n\n\n" ]
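One of the two DDPG extensions discussed throughout these reviews, performing several replay/network updates per environment interaction, can be summarized in a few lines. The sketch below assumes a Gym-style environment and a hypothetical update_networks(batch) stand-in for one standard DDPG critic/actor update; it illustrates the idea and is not the authors' code.

```python
import random
from collections import deque

def ddpg_with_replay_steps(env, policy, update_networks, num_env_steps,
                           updates_per_step=40, batch_size=64,
                           buffer_size=10**6):
    """Decouple learning from data collection: many updates per env step."""
    buffer = deque(maxlen=buffer_size)
    obs = env.reset()
    for _ in range(num_env_steps):
        action = policy(obs)                      # exploration noise omitted
        next_obs, reward, done, _ = env.step(action)
        buffer.append((obs, action, reward, next_obs, done))
        obs = env.reset() if done else next_obs
        if len(buffer) >= batch_size:
            # the key modification: several replay updates per interaction
            for _ in range(updates_per_step):
                batch = random.sample(buffer, batch_size)
                update_networks(batch)
```

Reviewer 4's overfitting question maps directly onto updates_per_step: the larger it is, the more often each stored transition is reused before fresh data arrives.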
[ 4, 2, 3, -1 ]
[ 4, 5, 4, -1 ]
[ "iclr_2018_SJdCUMZAW", "iclr_2018_SJdCUMZAW", "iclr_2018_SJdCUMZAW", "iclr_2018_SJdCUMZAW" ]
iclr_2018_H1kMMmb0-
Sequential Coordination of Deep Models for Learning Visual Arithmetic
Achieving machine intelligence requires a smooth integration of perception and reasoning, yet models developed to date tend to specialize in one or the other; sophisticated manipulation of symbols acquired from rich perceptual spaces has so far proved elusive. Consider a visual arithmetic task, where the goal is to carry out simple arithmetical algorithms on digits presented under natural conditions (e.g. hand-written, placed randomly). We propose a two-tiered architecture for tackling this kind of problem. The lower tier consists of a heterogeneous collection of information processing modules, which can include pre-trained deep neural networks for locating and extracting characters from the image, as well as modules performing symbolic transformations on the representations extracted by perception. The higher tier consists of a controller, trained using reinforcement learning, which coordinates the modules in order to solve the high-level task. For instance, the controller may learn in what contexts to execute the perceptual networks and what symbolic transformations to apply to their outputs. The resulting model is able to solve a variety of tasks in the visual arithmetic domain, and has several advantages over standard, architecturally homogeneous feedforward networks, including improved sample efficiency.
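The two-tier architecture this abstract describes, a learned controller sequencing pre-trained perception modules and symbolic operations, can be sketched as a simple interpreter loop. All module and action names below are illustrative assumptions; the paper's actual interface is not reproduced here.

```python
def run_interface(image, controller, modules, max_steps=30):
    """Sketch: the controller emits discrete actions that move a fovea,
    invoke a pre-trained perception network, or update symbolic memory."""
    fovea, store, digit, op = (0, 0), 0, None, None
    for _ in range(max_steps):
        action = controller((fovea, store, digit, op))
        if action.startswith("move_"):             # e.g. "move_right"
            fovea = modules["move"](fovea, action)
        elif action == "classify_digit":           # pre-trained digit network
            digit = modules["digit_net"](image, fovea)
        elif action == "classify_op":              # pre-trained operator network
            op = modules["op_net"](image, fovea)
        elif action == "apply_op" and None not in (digit, op):
            store = modules["symbolic"](op, store, digit)
        elif action == "halt":
            break
    return store
```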
rejected-papers
The consensus among the reviewers is that this paper is not quite ready for publication for reasons I will summarize in more detail below. However, I think there are some things that are really nice about this approach, and worth calling out: PROS: 1. The idea of tackling tasks broadly all the way from perception through symbolic reasoning is an important direction. 2. It certainly would be useful to have a "plug and play" framework in which various knowledge sources or skills can be assembled behind a simple interface designed by the ML practitioner to solve a given problem or class of problems. 3. Clearly finding ways to increase sample efficiency -- especially in a deep net approach -- is of great importance practically. 4. The writing is good. CONS: 1. The comparison to feedforward networks needs to be made fair in order to disentangle the benefit of the architecture from the benefit of pre-training the modules. 2. Using the very limited 2x2 grid was too low a bar for the reviewers. The authors aim at a more general, efficient architecture useful for a variety of tasks, and perhaps you didn't want to devote too much time to this particular task, but I think having a slam-dunk example of the power of the approach is really necessary to be convincing. 3. Given the similarity, I think more has to be done to show the intellectual contribution over Zaremba et al, the difference in motivation notwithstanding. One way to do this is to really prove out the increased sample efficiency claim.
train
[ "ByFXl7Def", "BkQXYLhgz", "BJzWfyTlf", "SkqQIMvMf", "rkcJUMwGf", "H1njHMvzG", "SkoIrMwfG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary: This work is a variant of previous work (Zaremba et al. 2016) that enables the use of (noisy) operators that invoke pre-trained neural networks and is trained with Actor-Critic. In this regard it lacks a bit of originality. The quality of the experimental evaluation is not great. The clarity of the paper could be improved upon but is otherwise fine. The existence of previous work (Zaremba et al. 2016) renders this work (including its contributions) not very significant. Relations to prior work are missing. But let's wait for the rebuttal phase. \n\nPros \n-It is confirmed that noisy operators (in the form of neural networks) can be used on the visual arithmetic task\n\nCons\n-Not very novel\n-Experimental evaluation is wanting\n\nThe focus of this paper is on integrating perception and reasoning in a single system. This is done by specifying an interface that consists of a set of discrete operations (some of which involve perception) and memory slots. A parameterized policy that can make use of these operations is trained via Actor-Critic to solve some reasoning tasks (arithmetics in this case). \n\nThe proposed system is a variant of previous work (Zaremba et al. 2016) on the concept of interfaces, and similarly learns a policy that utilizes such an interface to perform reasoning tasks, such as arithmetics. In fact, the only innovation proposed in this paper is to incorporate some actions that invoke a pre-trained neural network to “read” the symbol from an image, as opposed to parsing the symbol directly. However, there is no reason to expect that this would not function in previous work (Zaremba et al. 2016), even when the network is suboptimal (in which case the operator becomes noisy and the policy should adapt accordingly). Another notable difference is that the proposed system is trained with Actor-Critic as opposed to Q-learning, but this is not further elaborated on by the authors. \n\nThe proposed system is evaluated on a visual arithmetics task. The input consists of a 2x2 grid of extended MNIST characters. Each location in the grid then corresponds to the 28 x 28 pixel representation of the digit. Actions include shifting the “fovea” to a different entry of the grid, invoking the digit NN or the operator NN, which parses the current grid entry, and some symbolic operations that operate on the memory. The fact that the input is divided into a 2x2 grid severely limits the novelty of this approach compared to previous work (Zaremba et al. 2016). Instead it would have been interesting to randomly spawn digits and operators in a 56 x 56 image and maintain 4 coordinates that specify a variable-sized grid that glimpses a part of the image. This would make the task severely more difficult, given fixed pre-trained networks. The addition of the salience network is unclear to me in the context of MNIST digits, since any pixel that is greater than 0 is salient? I presume that the LSTM uses this operator to evaluate whether the current entry contains a digit or an operator. If so, wouldn’t simply returning the glimpse be enough?\n\nIn the experiments the proposed system is compared to three CNNs on two different visual arithmetic tasks, one that includes operators as part of the input and one that incorporates operators only in the task description. In all cases the proposed method requires fewer samples to achieve the final performance, although given enough samples all of the CNNs will solve the tasks. This is not surprising as this comparison is rather unfair.
The proposed system incorporates pre-trained modules, whose training samples are not taken into account. On the other hand the CNNs are trained from scratch and do not start with the capability to recognize digits or operators. Combined with the observation that all CNNs are able to solve the task eventually, there is little insight into the method's performance that can be gained from this comparison. \n\nAlthough the visual arithmetics on a 2x2 grid is a toy task, it would at least be nice to evaluate some of the policies that are learned by the LSTM (as done by Zaremba) to see if some intuition can be recovered from there. Proper evaluation on a more complex environment (or at least one that does not assume discrete grids) is much desired. When increasing the complexity (even if by just increasing the grid size) it would be good to compare to a recurrent method (Pyramid-LSTM, Pixel-RNN) as opposed to a standard CNN as it lacks memory capabilities and is clearly at a disadvantage compared to the LSTM.\n\nSome detailed comments are:\n\nThe introduction invokes evidence from neuroscience to argue that the brain is composed of (discrete) modules, without reviewing any of the counter evidence (there may be a lot, given how bold this claim is).\n\nFrom the introduction it is unclear why the visual arithmetic task is important.\n\nSeveral statements including the first sentence lack citations.\n\nThe contribution section is not giving any credit to Zaremba et al. (2016) whereas this work is at best a variant of that approach.\n\nIn the experiment section the role of the saliency detector is unclear.\n\nExperiment details are lacking and should be included.\n\nThe related work section could be more focused on the actual contribution being made.\n\nIt strikes me as odd that in the discussion the authors propose to make the entire system differentiable, since this goes against the motivation for this work.\n\n\nRelation to prior work:\n\np 1: The authors write: \"We also borrow the notion of an interface as proposed in Zaremba et al. (2016). An interface is a designed, task-specific machine that mediates the learning agent’s interaction with the external world, providing the agent with a representation (observation and action spaces) which is intended to be more conducive to learning than the raw representations. In this work we formalize an interface as a separate POMDP I with its own state, observation and action spaces.\" \n\nThis interface terminology for POMDPs was actually introduced in:\n\nJ. Schmidhuber. Reinforcement learning in Markovian and non-Markovian environments. In D. S. Lippman, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, NIPS'3, pages 500-506. San Mateo, CA: Morgan Kaufmann, 1991.\n\np 4: authors write: \"For the policy πθ, we employ a Long Short-Term Memory (LSTM)\" \n\nDo the authors use the (cited) original LSTM of 1997, or do they also use the forget gates (recurrent units with gates) that most people are using now, often called the vanilla LSTM, by Gers et al (2000)?\n\np 4: authors write: \"One obvious point of comparison to the current work is recent research on deep neural networks designed to learn to carry out algorithms on sequences of discrete symbols.
Some of these frameworks, including the Differentiable Forth Interpreter (Riedel and Rocktäschel, 2016) and TerpreT (Gaunt et al., 2016b), achieve this by explicitly generating code, while others, including the Neural Turing Machine (NTM; Graves et al., 2014), Neural Random-Access Machine (NRAM; Kurach et al., 2015), Neural Programmer (NP; Neelakantan et al., 2015), Neural Programmer-Interpreter (NPI; Reed and De Freitas, 2015) and work in Zaremba et al. (2016) on learning algorithms using reinforcement learning, avoid generating code and generally consist of a controller network that learns to perform actions in a (sometimes differentiable) external computational medium in order to carry out an algorithm.\"\n\nHere the original work should be mentioned, on differentiable neural stack machines: \n\nG.Z. Sun and H.H. Chen and C.L. Giles and Y.C. Lee and D. Chen. Connectionist Pushdown Automata that Learn Context-Free Grammars. IJCNN-90, Lawrence Erlbaum, Hillsdale, N.J., p 577, 1990.\n\nMozer, Michael C and Das, Sreerupa. A connectionist symbol manipulator that discovers the structure of context-free languages. Advances in Neural Information Processing Systems (NIPS), p 863-863, 1993.\n\n\n", "The paper presents an interesting model to reuse specialized models trained for perceptual tasks in order to solve more complex reasoning tasks. The proposed model is based on reinforcement learning with an agent that interacts with an environment C, which is the combination of E and I, the external world and the interface, respectively. This abstraction is nicely motivated and contextualized with respect to previous work.\n\nHowever, the paper evaluates the proposed model in artificial tasks with limited reasoning difficulty: the tasks can be solved with simpler baseline models. The paper argues that the advantage of the proposed approach is data efficiency, which seems to be a side effect of having pre-trained modules rather than a clearly superior reasoning capability. The paper discusses other advantages of the model, but these are not tested or evaluated either. A more convincing experimental setup would include complex reasoning tasks, and the evaluation of all the aspects mentioned as benefits: computational time, flexibility of computation, better accuracy, etc.", "Summary: The authors use RL to learn a visual arithmetic task, and are able to do this with a relatively small number of examples, presumably not including the number of examples that were used to pre-train the classifiers that pre-process the images. This appears to be a very straightforward application of existing techniques and networks.\n\nQuality: Given the task that the authors are trying to solve, the approach seems reasonable.\nClarity: The paper appears quite clearly written for the most part.\nOriginality & Significance: Unless I am missing something important, or misunderstanding something, I do not really understand what is significant about this work, and I don't see it as having originality.\n\nNitpick 1: top of Page 5, it says \"Figure ??\" \nNitpick 2: Section 2.3 says \"M means take the product, A means take the sum, etc\". Why choose exactly those terms that obscure the pattern, and then write \"etc\"? In Figure 1, \"X\" could mean multiply, or take the maximum, but by elimination, it means take the maximum. It would have only added a few characters to the paper to specify the notation here, e.g. \"Addition(A), Max (X), Min (N), Multiply (M)\".
If the authors insist on making the reader figure this out by deduction, I recommend they just write \"We leave the symbols-operation mapping as a small puzzle for the reader.\"\n\nThe authors might find the paper \"Visual Learning of Arithmetic Operations\" by Yedid Hoshen and Shmuel Peleg to also be somewhat relevant, although it's different from what they are doing.\n\nSection 3. The story from the figures seems to be that the authors' system beats a CNN when there are very few examples. But the significance of this is not really discussed in any depth other than being mentioned in corresponding places in the text, i.e. it's not really the focal story of the text. \n\nPros: Seems to work OK. Seems like a reasonable application of pre-trained nets to allow solving a different visual problem for which less data might be available.\n\nCons: Unless I am missing an important point, the results are unsurprising, and I am not clear what is novel or significant about them.", "Thank you for the detailed feedback. First, addressing the high-level comments, your major concerns were:\n\n1. Lack of difficulty.\n\nWe agree that it would be somewhat more convincing if the digits were not restricted to a grid. However, the grid-restricted version has precedent in the literature. The paper\n\n Alexander L Gaunt, Marc Brockschmidt, Nate Kushman, and Daniel Tarlow. Differentiable Programs with Neural Libraries. In International Conference on Machine Learning, pages 1213–1222, 2017.\n\n(which is cited in the Related Work section of the new version of the article) tackles a similar domain in which the digits are constrained to a grid in all tasks. It therefore seems unfair to count this too heavily against our work.\n\nThat said, we will work on applying our approach to a modified version of the task wherein the digits are not restricted to lie on a grid.\n\n2. Lack of novelty.\n\nWhile somewhat similar to Zaremba et al., it is our opinion that the addition of perceptual challenges makes our tasks different enough from that work to warrant separate exploration. This will become even more true once we are able to make it work without restricting the digits to a grid, as then the application of a pre-trained network to classify a digit becomes less like a noisy version of \"reading a digit from a cell\" (which is how things are set up in Zaremba et al.), and more like perceiving the world.\n\nMoreover, while the methodology we use may be regarded as a variant of that used by Zaremba et al., both our motivation and experimental manipulations are significantly different. Our motivation is tackling perceptuo-symbolic tasks in a sample-efficient way by providing information processing modules, whereas theirs is to see whether algorithms can be learned by reinforcement learning. On the experimental side, they do not compare against feedforward networks as baselines (probably because they are interested in generalizing to longer sequences than were seen during training, which cannot really be handled by a feedforward network, though it still might be illuminating to see how a feedforward network would do in their setting if the sequences were kept at a fixed size). Additionally, they make no mention of sample efficiency; they provide their controllers with a training set containing 30,000,000 training characters and do not manipulate this number. 
So our work is complementary to theirs in that we explore how many unique training examples are required to induce an algorithm using reinforcement learning, which seems to us like a worthwhile endeavour.\n\n3. Unfair comparison with feedforward networks.\n\nWe will work on making the information processing modules available to feedforward networks in order to make the comparison fairer. See response to reviewer #2 for more discussion on this. Additionally, we will look into including baselines with memory as suggested.", "\nResponding to the detailed comments:\n\n-- The introduction invokes evidence from neuroscience to argue that the brain is composed of (discrete) modules, without reviewing any of the counter evidence (there may be a lot, given how bold this claim is).\n\nWe have removed this claim, as we have other sources of motivation that are likely more convincing, less controversial, and tie in better with our experiments; see third paragraph of the introduction in the new version.\n\n-- From the introduction it is unclear why the visual arithmetic task is important.\n\nIn the introduction we have provided more motivation for this task choice, with automatic grading of math-exam questions as a potential future application of our approach.\n\n-- Several statements including the first sentence lack citations.\n\nWe have added a citation for the first sentence (assuming you meant the first sentence of the introduction).\n\n-- The contribution section is not giving any credit to Zaremba et al. (2016) whereas this work is at best a variant of that approach.\n\nWe will take this into consideration.\n\n-- In the experiment section the role of the saliency detector is unclear.\n-- Experiment details are lacking and should be included.\n\nThe role of the saliency detector is to provide the controller with a low-dimensional representation of the locations of the digits. We have tried using a low-resolution glimpse in place of the saliency detector as suggested, and it does work, but found it to learn more slowly (our hypothesis is that the saliency network is better at providing only information about symbol location, with information about symbol identity stripped away). Additionally, using a network for the saliency detection rather than a simple glimpse provides an additional example of a role for pre-trained networks in our framework, over and above simple classification.\n\nAt any rate, we agree that its role may not be clear; we will work on providing more details on the visual arithmetic interface and the experiments in general in an appendix.\n\n-- The related work section could be more focused on the actual contribution being made.\n\nWe are not sure which way to go on this; could you be more specific?\n\n-- It strikes me as odd that in the discussion the authors propose to make the entire system differentiable, since this goes against the motivation for this work.\n\nAgreed, removed in the new version.\n\nAnd finally, with respect to the suggested citations:\n\n1. The sense in which we and Zaremba et al. use the word \"interface\" is different from the sense in which it is used in the Schmidhuber article. For us, an interface is a designed POMDP that is coupled to the external world and is intended to make learning easier (by providing information processing modules). Zaremba et al. use a similar notion. In the Schmidhuber article, he basically just uses the term \"Non-Markovian Interface\" to refer to any partially observable environment.\n\n2. 
We use the 1997 version of LSTM (default in tensorflow).\n\n3. References on neural stack machines are very useful and will be incorporated, thanks.", "Thank you for the insightful comments. We have posted an updated version of the article which makes our motivation clearer (see the introduction).\n\nAs for your suggestions, we are in the process of running additional experiments which:\n\n1. Make the comparison with the feedforward network fairer by supplying the feedforward network with the pre-trained modules.\n\n2. Demonstrate additional benefits of the sequential setup, including ability to adapt the amount of computation to the difficulty of the example, and improved amenability to curriculum learning compared to feed-forward networks.\n\n3. Tackle more difficult reasoning tasks. It is our hypothesis that our approach will actually do better at more difficult reasoning tasks, and we may be able to find tasks that cannot be learned by feedforward networks at all.", "Thank you for the insightful comments. We have posted an updated version of the article which fixes the nitpicks (see Section 3.1, paragraph labelled “Combined Task” for our fix to nitpick 2, though the puzzle idea was a close second ;) ) and cites the article that you mentioned. As for the higher-level problems, we address them point-by-point.\n\n1. Lack of novelty\n\n* Appearance of lack of novelty may stem from incomplete framing/motivation of our approach in the first draft of the paper. In general we are interested in creating systems that can learn to coordinate information processing modules to solve high-level tasks. The use-case we have in mind involves an ML practitioner who would like to solve a given task, and has at their disposal a collection of information-processing modules that would clearly be useful for the task. The practitioner would like to provide the learning agent with access to those modules (to act as a source of inductive bias, allowing the task to be learned from fewer examples) but does not know how to make those modules available to the learning agent. We propose reinforcement learning as a means of \"injecting\" the modules into a trainable system; this has the benefit that the modules are not required to be differentiable, greatly expanding the space of modules that may be used. We use the visual arithmetic task as an example of this use-case, wherein it is clear that there are information-processing modules that would be helpful, but it is not clear at first blush how to combine those modules into a system that performs the required task, or that can be trained to perform the task, especially since several of the required operations are symbolic and non-differentiable.\n\n We have modified the introduction of the article to be clearer about this motivation, which we feel was somewhat lacking in the original draft. If this clarification doesn't alleviate your concerns about novelty, could you expand on these concerns, perhaps citing specific work?\n\n2. Relating the experiments to the text\n\nThe revised introduction also makes it clear that the benefit that we expect to gain from providing the agent with information-processing modules is improved sample efficiency, which is what the experiments show." ]
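To make the LSTM-variant question raised in this exchange concrete, here is a minimal numpy sketch of a single recurrent step. This is our own illustration (function and variable names are ours, not the paper's): with `forget_gate=False` the cell state is never decayed, as in the 1997 formulation the authors say they use, while `forget_gate=True` gives the Gers et al. (2000) "vanilla" variant the reviewer asks about.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, forget_gate=True):
    # W stacks the pre-activation weights for all four gates: shape (4n, len(x) + n).
    n = h.size
    z = W @ np.concatenate([x, h])
    i = sigmoid(z[:n])                 # input gate
    g = np.tanh(z[n:2 * n])            # candidate cell update
    o = sigmoid(z[2 * n:3 * n])        # output gate
    # The 1997 LSTM has no forget gate: the old cell state is kept as-is.
    f = sigmoid(z[3 * n:4 * n]) if forget_gate else np.ones(n)
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
x, h, c = rng.standard_normal(3), np.zeros(4), np.zeros(4)
W = rng.standard_normal((16, 7))
print(lstm_step(x, h, c, W, forget_gate=False)[0])
```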
[ 4, 3, 2, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_H1kMMmb0-", "iclr_2018_H1kMMmb0-", "iclr_2018_H1kMMmb0-", "ByFXl7Def", "ByFXl7Def", "BkQXYLhgz", "BJzWfyTlf" ]
iclr_2018_BkIkkseAZ
Theoretical properties of the global optimizer of two-layer Neural Network
In this paper, we study the problem of optimizing a two-layer artificial neural network that best fits a training dataset. We look at this problem in the setting where the number of parameters is greater than the number of sampled points. We show that for a wide class of differentiable activation functions (this class involves most nonlinear functions and excludes piecewise linear functions), arbitrary first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular. We essentially show that these non-singular hidden layer matrices satisfy a "good" property for this big class of activation functions. Techniques involved in proving this result inspire us to look at a new algorithmic framework, where in between two gradient steps on the hidden layer, we add a stochastic gradient descent (SGD) step on the output layer. In this new algorithmic framework, we extend our earlier result and show that for all finite iterations the hidden layer satisfies the "good" property mentioned earlier, thereby partially explaining the success of noisy gradient methods and addressing the issue of data independence in our earlier result. Both of these results are easily extended from hidden layers given by a square matrix to those given by a flat matrix. The results are applicable even if the network has more than one hidden layer, provided all inner hidden layers are arbitrary, satisfy non-singularity, all activations are from the given class of differentiable functions, and optimization is only with respect to the outermost hidden layer. Separately, we also study the smoothness properties of the objective function and show that it is actually Lipschitz smooth, i.e., its gradients do not change sharply. We use the smoothness properties to guarantee asymptotic convergence of O(1/number of iterations) to a first-order optimal solution.
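As a reading aid for the claim above, the following self-contained numpy toy (our own restatement, using a squared loss rather than the paper's exact objective) shows the mechanism: when the hidden-layer feature matrix has full row rank, any point where the gradient with respect to the output layer vanishes already has zero residual, i.e., it is globally optimal.

```python
import numpy as np

# Toy illustration (ours, not the paper's code): with a squared loss
# f(theta) = 0.5 * ||Phi @ theta - y||^2 and hidden features Phi = tanh(X W^T),
# a vanishing gradient Phi.T @ (Phi @ theta - y) = 0 plus full row rank of Phi
# forces the residual to zero, so the stationary point is globally optimal.
rng = np.random.default_rng(0)
n, d, k = 50, 20, 60                       # samples, input dim, hidden units (k >= n)
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
W = rng.standard_normal((k, d))            # a fixed, data-independent hidden layer
Phi = np.tanh(X @ W.T)                     # shape (n, k)

print(np.linalg.matrix_rank(Phi))          # = n with probability 1 for this data
theta = np.linalg.lstsq(Phi, y, rcond=None)[0]
print(np.linalg.norm(Phi @ theta - y))     # ~0: first-order optimal and global
```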
rejected-papers
Understanding the quality of the solutions found by gradient descent for optimizing deep nets is certainly an important area of research. The reviewers found several intermediate results to be interesting. At the same time, the reviewers have unanimously pointed out various technical aspects of the paper that are unclear, particularly regarding new contributions relative to recent prior work. As such, at this time, the paper is not ready for ICLR-2018 acceptance.
train
[ "SyQTA31bG", "ByASB9UEG", "BJUmm5_lz", "SJccyr0-M", "rkvgtaimf", "SyqV2dWfz", "rJSis_bfz", "r1XDiO-GG", "r1FKsfwWz", "ryRVo4QeM", "rkQ3TFMxG", "SJMDtxGlz", "rypGH4bgG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer" ]
[ "The paper studies the theoretical properties of the two-layer neural networks. \n\nTo summarize the result, let's use the theta to denote the layer closer to the label, and W to denote the layer closer to the data. \n\nThe paper shows that \na) if W is fixed, then with respect to the randomness of the data, with prob. 1, the Jacobian matrix of the model is full rank\nb) suppose that we run an algorithm with fresh samples, then with respect to the randomness of the k-th sample, we have that with prob. 1, W_k is full rank, and the Jacobian of the model is full rank. \n\nIt's know (essentially from the proof of Carmon and Soudry) that if the Jacobian of the model is full rank for any matrix W w.r.t the randomness of the data, then all stationary points are global. But the paper cannot establish such a result. \n\nThe paper is not very clear, and after figuring out what it's doing, I don't feel it really provides many new things beyond C-S and Xie et al.\n\nThe paper argues that it works for activation beyond relu but result a) is much much weaker than the one with for all quantifier for W. result b) is very sensitive to the exactness of the events (such as W is exactly full rank) --- the events that the paper talks just naturally never happen as long as the density of the random variables doesn't degenerate. \n\nAs the author admitted, the results don't provide any formal guarantees for the convergence to a global minimum. It's also a bit hard for me to find the techniques here provide new ideas that would potentially lead to resolving this question. \n\n--------------------\n\nadditional review after seeing the author's response: \n\nThe author's response pointed out some of the limitation of Soudry and Carmon, and Xie et al's which I agree. However, none of this limitation is addressed by this paper (or addressed in a misleading way to some extent.) The key technical limitation is the dependency of the local minima on the weight parameters. Soudry and Carmon addresses this in a partial way by using the random dropout, which is a super cool idea. Xie et al couldn't address this globally but show that the Jacobian is well conditioned for a class of weights. The paper here doesn't have either and only shows that for a single fixed weight matrix, the Jacobian is well-conditioned. \n\nI don't also see the value of extension to other activation function. To some extent this is not consistent with the empirical observation that relu is very important for deep learning. \n\nRegarding the effect of randomness, since the paper only shows the convergence to a first-order optimal solution, I don't see why randomness is necessary. Gradient descent can converge to a first order optimal solution. (Indeed I have a typo in my previous review regarding \"w.r.t. k-th sample\", which should be \"w.r.t. k-th update\". ) Moreover, to justify the effect of the randomness, the paper should have empirical experiments. \n\nI think the writing of the paper is also misleading in several places. \n", "Thanks for your response and clarification. I think the authors have clarified that their result is assuming a data independent W at each iteration as I had thought. I think this needs to be stated more clearly in the abstract and intro. I was actually confused by \"arbitrary\" in the abstract and the sentence \" hence even if the hidden layer variables are data dependent, we still get required properties\" in the intro. In fact I got the impression that they meant the opposite i.e. 
their results hold even when W does depend on data. I recommend using \"data-independent\" instead of \"arbitrary\" and rephrasing the latter sentence. \n\nWhile I believe this paper leaves much to be desired, I am increasing my rating to 7 as I believe this paper is a step forward in the right direction and has interesting ideas and insights. My overall recommendation is acceptance after addressing the concerns above.\n\n\n\n", "This paper aims to study some of the theoretical properties of the global optima of single-hidden-layer neural networks and also the convergence to such a solution. I think there are some interesting arguments made in the paper e.g. Lemmas 4.1, 5.1, 5.2, and 5.3. However, as I started reading beyond the intro I increasingly got the sense that this paper is somewhat incomplete e.g. while certain claims are made (abstract/intro) the theoretical justifications are rather far from these claims. Of course there is a chance that I might be misunderstanding some things and I am happy to adjust my score based on the discussions here.\n\nDetailed comments:\n1) My main concern is that the abstract and intro claim things that are never proven (or even stated) in the rest of the paper\nExample 1 from abstract: \n“We show that for a wide class of differentiable activation functions (this class involved “almost” all functions which are not piecewise linear), we have that first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular.”\n\nThis is certainly not proven and in fact not formally stated anywhere in the paper. The closest result to this is Lemma 4.1; however, because the optimal solution is data dependent this lemma cannot be used to conclude this. \n\nExample 2 from intro when comparing with other results on page 2:\nThe authors essentially state that they have less restrictive assumptions on the form of the network or assumptions on the data (e.g. do not require Gaussianity). However as explained above the final conclusions are also significantly weaker than this prior literature so it’s a bit of an apples vs oranges comparison.\n\n2) Page 2 minor typos\nWe study training problem -->we study the training problem\nIn the regime training objective--> in the regime the training objective\n\n3) the basic idea argument and derivative calculations in section 3 are identical to section 4 of Soltan...et al\n\n4) Lemma 4.1 is nice, well done! That being said it does not seem easy to make it (1) quantifiable (2) apply to all W. It would also be nice to compare with Soudry et. al.\n\n5) Argument on top of page 6 is incorrect as the global optima is data dependent and hence lemma 4.1 (which is for a fixed matrix) does not apply\n\n6) Section 5 on page 6. Again the stated conclusion here that the iterates do not lead to singular W is much weaker than the claims made early on.\n \n7) I haven’t had time yet to verify correctness of Lemmas 5.1, 5.2, and Lemma 5.3 in detail but if they hold this is a neat argument to side-step invertibility w.r.t. W. Nicely done!\n\n8) What is the difference between Lemma 5.4 and Lemma 6.12 of Soltan...et al \n\n9) Theorem 5.9. Given that the arguments in this paper do not show asymptotic convergence to a point where the gradient vanishes and W is invertible, why is the proposed algorithm better than a simple approach in which gradient descent is applied but a small amount of independent Gaussian noise is injected in every iteration over W. 
By adjusting the noise variance across time one can ensure a result of the kind in Theorem 5.9 (Of course in the absence of a quantifiable version of Lemma 4.1 which can apply to all W, that result will also suffer from the same issues).\n", "I only got access to the paper after the review deadline and did not have a chance to read it until now. Hence the lateness and brevity.\n\nThe paper tackles an important theoretical question, and it offers results that are complementary to existing results (e.g., Soudry et al). However, the paper does not properly relate its results and assumptions to the existing literature. Much explanation is needed in the author reply in order to clear these questions.\n\nThe work should not be evaluated from a practical perspective as it is of a theoretical nature.\n\nI agree with most of the criticism raised by other reviewers. However, I also believe the authors managed to clear essentially all of the criticism in their reply. The paper lacks in clarity as currently written. \n\nThe results are interesting, but more explanation is needed for the main message to be conveyed more clearly. I suggest 7, but the paper has the potential to become an 8 in my eyes in a future resubmission.\n", "Thank you very much for the positive comments. We also appreciate that you went through our replies to other reviewers and took a holistic view of the paper. We understand that the first version of the paper had caused some confusion among reviewers. So we have added a new version in an effort to address their criticism. New comments are in red color in the latest revision to the paper, which was uploaded on 15th Dec 2017.\nWe would greatly appreciate it if you could go through it and give your feedback", "After reading your and other reviewers' comments, there was a realization that the way the paper was written has caused some confusion. We have tried to address those concerns in the revised paper. New explanations are put in red color. The issue you mentioned about the order of quantifiers is mentioned in section 4 with the motivation to look at it even though there is an obvious problem of data dependency. We partially address those concerns in section 5.\nWe also added comparisons to Xie et al. and Soudry et al. More details of this are in section 3 where we make our approach clear and differentiate our approach from theirs. \nPlease go through the revised paper and we will greatly appreciate your feedback.\nAchieving convergence to the global minimum is a challenging problem, especially if we do not make any additional assumptions. But we think that our results will inspire new ideas to tackle this challenge.", "8) What is the difference between Lemma 5.4 and Lemma 6.12 of Soltan...et al \nResponse:\nWe now add some discussion about the difference in the introduction. We also believe you meant Lemma 6.14 of the arxiv version of that paper.\n\n9) Theorem 5.9. Given that the arguments in this paper do not show asymptotic convergence to a point where the gradient vanishes and W is invertible, why is the proposed algorithm better than a simple approach in which gradient descent is applied but a small amount of independent Gaussian noise is injected in every iteration over W. By adjusting the noise variance across time one can ensure a result of the kind in Theorem 5.9\nResponse: \nThis is now Theorem 5.11. We agree that injecting Gaussian noise into W may obtain a similar result, but this approach will slightly modify the problem to be solved. 
On the other hand, we develop an algorithmic framework which alternates between the gradient descent step with respect to W and the SGD procedure for the output layer \\theta. A few SGD steps will help to guarantee the non-singularity of W, but would not modify the original problem since we make sure the original function value decreases. The overall scheme is also inspired by the iterative layer-by-layer training that has been used in practice. We concluded the results by discussing the pluses and minuses of this algorithm and the possibility of extending it to multiple others. \nIn the end, we think that proving global optimality, especially for the highly nonlinear activation functions studied in this paper, is a challenging problem to solve in totality without making any additional assumptions. But at the same time our work should inspire new ideas for tackling this challenge.", "We have added a revision to the paper. After reading the comments of all reviewers, there was a realization that the way the paper was written may have caused some confusion. We tried to address those concerns. Please go through it. New explanations are written in red. We will greatly appreciate your feedback. Specific replies to points you raised:\n1) My main concern is that the abstract and intro claim things that are never proven (or even stated) in the rest of the paper\nExample 1 from abstract: “We show that for a wide class of differentiable activation functions (this class involved “almost” all functions which are not piecewise linear), we have that first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular.” This is certainly not proven and in fact not formally stated anywhere in the paper. The closest result to this is Lemma 4.1; however, because the optimal solution is data dependent this lemma cannot be used to conclude this. \nExample 2 from intro when comparing with other results on page 2: The authors essentially state that they have less restrictive assumptions on the form of the network or assumptions on the data (e.g. do not require Gaussianity). However as explained above the final conclusions are also significantly weaker than this prior literature so it’s a bit of an apples vs oranges comparison. \nResponse:\nWe appreciate the Reviewer’s question. We now try to clarify these issues in the revised version of the paper.\nIn the abstract, we have added the word arbitrary to first-order optimal points to signify that in this case we are looking at data-independent W. Then in the immediately following sentence we make it clear that we extend these results to $W_k$'s found along the trajectory of an algorithm, thereby alleviating some of the concerns. Then in section 5, once we state Lemma 5.3, we have added remarks 5.4 and 5.5 addressing the issue of implementing this algorithmic framework for multiple hidden layers. Note that we use some of the results we proved for arbitrary $W$ while stating remark 5.5. \nWe have also added comparisons to previous results in the Introduction and Section 3. We make sure that the apples vs oranges part of the results is stated in the explanation.\n\n2) Page 2 minor typos\nResponse:\nThe above-mentioned typos are fixed in the new version.\n\n3) the basic idea argument and derivative calculations in section 3 are identical to section 4 of Soltan...et al\nResponse:\nWe are not sure about the basic idea because they also look at second-order conditions. But the derivative is indeed the same.\n\n4) Lemma 4.1 is nice, well done! 
That being said it does not seem easy to make it (1) quantifiable (2) apply to all W. It would also be nice to compare with Soudry et. al.\nResponse:\nThanks for the comments. We have added a comparison to Soudry et al. in section 3. Also a motivation to look at arbitrary W is added at the beginning of section 4. This will relieve some of the readers' concerns.\n\n5) Argument on top of page 6 is incorrect as the global optima is data dependent and hence lemma 4.1\nResponse:\nThis is a similar concern to the previous ones. We have added more material in Section 4 and Section 5 to alleviate these concerns. \n\n7) I haven’t had time yet to verify correctness of Lemmas 5.1, 5.2, and Lemma 5.3 in detail but if they hold this is a neat argument to side-step invertibility w.r.t. W. Nicely done!\nResponse:\nThanks. Also please read the new remarks added at the end of these results. This is to extend the result to training the outermost hidden layer of a multilayer neural network. We use some results in section 4 for arbitrary W to prove/state these remarks.\n \n(Reply continued in next comment)", "The reviewer stated in part b) of the summary that we show \"Jacobian matrix D is full rank w.r.t. k-th sample\". We do not show this. We show that assuming the data is given, then w.r.t. the randomness of the stochastic gradient of theta, all the Jacobian matrices will be full rank. So this is essentially a property of the algorithm and not of the data sample (you can choose whatever distribution you want for the stochastic gradient of theta, whereas the distribution of the data is not something you can choose in practice). There is no new sample generated in every iteration of the algorithm. All the data is sampled in the beginning and used as constant throughout the algorithm. Also, the events are systematically shown to be happening with probability 1.\nThe Soudry et al. paper talks about the Jacobian but they do not give an algorithmic characterization for which this Jacobian property will hold. On the other hand, we give a simple characterization and show that the stochastic gradient algorithm achieves these characterizations. In fact, this algorithm gives robustness w.r.t. the data dependence of the characterization. (This is actually observed in practice: a standard gradient descent does not necessarily converge to a good first-order stationary point but a noisy (stochastic) gradient descent mostly converges to \"good\" first-order points). Our theorem justifies how having noise helps in maintaining the robustness w.r.t. data dependence.\nSoudry et al. look at leaky ReLu with randomisation as \"dropout\". This model fails when there is no dropout, as mentioned in their paper itself. Our whole analysis is for models whose activation is not ReLu, which is of course a different (and bigger) class of activation functions. Moreover, we do not use random dropouts in our activations. We show our results without changing the model but by adding randomization to the algorithmic process of finding W.\nSo in essence we show why having noisy gradients is important and also show that you can actually play with the algorithm rather than the NN model to get these robustness guarantees. Moreover, our algorithms alternate between the output layer and the hidden layer, partly explaining the success of layer-by-layer training. The bigger idea might be to say: \"Random noise in one layer helps in keeping robustness w.r.t. 
data dependence in the other layer.\"\nOn the side, we also analyze the Lipschitz smoothness properties of the optimization problem and show a positive result which only depends on the data. Such a result has great algorithmic value since these constants can be computed and employed in the algorithm. Also note that these Lipschitz constants are not probabilistic. Using this, we establish convergence results to an approximate first-order optimality point which satisfies our characterization. (If it were an exact first-order optimality point then we would be at the global optimum). The core of the problem of proving global optimality is proving that the lowest singular value does not converge to 0 faster than the rate of convergence of the algorithm itself.\nIn this regard, indeed Xie et al. give a positive result but there is a catch. Xie et al. also look at the same Jacobian matrix but here again, they only consider ReLu units. Their results are indeed strong in the sense that they show lower bounds on the smallest singular value of the Jacobian matrix, but for that they need two facts. One is to restrict the activation to ReLu so that they can find the spectrum of the corresponding kernel matrix; the second is a bound on the discrepancy of the weights W. These conditions are strong in the sense that it is difficult to implement an algorithm which will satisfy these conditions. On the other hand, our condition is simple, workable for almost all activations (except ReLu), and can be shown to be true in a simple algorithmic setup, but it does not get us an actual bound on the lowest singular value of the Jacobian matrix. We think both results have their own theoretical pluses and minuses.", "Yes, for global convergence we need to prove that the smallest singular value of the matrix D does not decrease at a rate faster than 1/N (it may go to zero but the rate should not be faster than 1/N). We were unable to show this for the general setting where a simple noise is added to theta. Any suggestions in that direction are most welcome.", "But then these lemmas 5.2 and 5.3 are sensitive in the sense that the resulting matrix W_k can be full rank but it may be very ill-conditioned (e.g., the least singular value can converge to zero very fast?)? I guess this is why you couldn't prove that it converges to a global minimum? ", "In corollary 4.2, matrix W is independent of data. That is indeed a problem we notice and address in section 5. Immediately after lemma 5.1, we discuss the exact problem you are referring to. We use the ideas developed in lemma 4.1/corollary 4.2 to show that even though you assume that the data is fixed, using the randomness of the stochastic theta, one can show that Algorithm 1 is robust and the W_k generated by this algorithm achieves the same guarantees as that of W in corollary 4.2. Robustness is derived solely from the Lebesgue measure on theta.", "What's the order of the quantifier in Corollary 4.2? It seems that the samples are sampled after W is fixed? Then the result doesn't seem to be useful, because the whole point of previous work such as SC, XLS is to prove the uniform result so that W can be chosen after samples are fixed. " ]
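The quantity at issue in this exchange — whether the smallest singular value of the Jacobian-type matrix D decays faster than the 1/N convergence rate — can be watched empirically in a few lines of numpy. The following is our own toy rendition of an alternating noisy scheme under a squared loss, not the paper's actual Algorithm 1; all names and hyperparameters here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 40, 40
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
W = rng.standard_normal((d, d))            # hidden layer
theta = rng.standard_normal(d)             # output layer
lr = 1e-3

for t in range(2001):
    Phi = np.tanh(X @ W.T)                 # the matrix whose conditioning is at issue
    r = Phi @ theta - y                    # residuals of a squared loss
    W -= lr * (((r[:, None] * (1.0 - Phi**2)) * theta).T @ X)   # GD step on W
    g = Phi.T @ r + 0.01 * rng.standard_normal(d)               # noisy step on theta
    theta -= lr * g
    if t % 500 == 0:
        smin = np.linalg.svd(np.tanh(X @ W.T), compute_uv=False)[-1]
        print(t, float(smin), float(0.5 * r @ r))               # track sigma_min vs loss
```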
[ 4, -1, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, -1, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BkIkkseAZ", "rJSis_bfz", "iclr_2018_BkIkkseAZ", "iclr_2018_BkIkkseAZ", "SJccyr0-M", "SyQTA31bG", "r1XDiO-GG", "BJUmm5_lz", "SyQTA31bG", "rkQ3TFMxG", "SJMDtxGlz", "rypGH4bgG", "iclr_2018_BkIkkseAZ" ]
iclr_2018_ryCM8zWRb
Recurrent Neural Networks with Top-k Gains for Session-based Recommendations
RNNs have been shown to be excellent models for sequential data and in particular for session-based user behavior. The use of RNNs provides impressive performance benefits over classical methods in session-based recommendations. In this work we introduce a novel ranking loss function tailored for RNNs in recommendation settings. The better performance of this loss over alternatives, along with further tricks and improvements described in this work, allows us to achieve an overall improvement of up to 35% in terms of MRR and Recall@20 over previous session-based RNN solutions and up to 51% over classical collaborative filtering approaches. Unlike data augmentation-based improvements, our method does not increase training times significantly.
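For readers who want the gist of the loss family in executable form, below is a small numpy reconstruction of a softmax-weighted pairwise ("BPR-max"-style) loss, based on the description in the reviews and responses that follow. It is our paraphrase, not the authors' reference code, and the regularization weight is an assumed placeholder.

```python
import numpy as np

def bpr_max_loss(pos_score, neg_scores, reg=1.0):
    # Softmax weights over the sampled negatives focus the pairwise terms on
    # the most "relevant" (highest-scoring) negatives.
    w = np.exp(neg_scores - neg_scores.max())
    w /= w.sum()
    sig = 1.0 / (1.0 + np.exp(neg_scores - pos_score))   # sigmoid(pos - neg)
    loss = -np.log(np.dot(w, sig) + 1e-12)
    return loss + reg * np.dot(w, neg_scores**2)         # score regularization

print(bpr_max_loss(2.0, np.array([-1.0, 0.5, 1.8])))
```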
rejected-papers
While the use of RNNs for building session-based recommender systems is certainly an important class of applications, the main strength of the paper is to propose and benchmark practical modifications to prior RNN-based systems that lead to performance improvements. The reviewers have pointed out that the writing in the paper needs improvement, modifications are somewhat straightforward and some expected baselines such as comparisons against state of the art matrix-factorization based methods is missing. As such the paper could benefit from a revision and resubmission elsewhere.
val
[ "ryqETl9gG", "S1d1eXqlM", "r1QqAe3lG", "B1Qwgzlff", "B1VZezgff", "Hy35efeMz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This is an interesting paper that analyzes existing loss functions for session-based recommendations. Based on the result of these analysis the authors propose two novel losses functions which add a weighting to existing ranking-based loss functions. These novelties are meant to improve issues related to vanishing gradients of current loss functions. The empirical results on two large-scale datasets are pretty impressive. \n\nI found this paper to be well-written and easy to read. It also provides a nice introduction to some of the recent literature on RNNs for session-based recommendations. \n\nIn terms of impact, while it studies a fairly applied (and narrow) question, it seems like it would be of interest to researchers and practitioners in recommender systems.\n\n\nI have a few comments and questions: \n\n- The results in Figure 3 show that both a good loss function and sampling strategy are required to perform well. This is interesting in the sense that doing the \"right thing\" according to the model (optimizing using all samples) isn't optimal. This is a very empirical observation and it would be insightful to better understand exactly the objective that is being optimized.\n\n- While BPR-max seems to be the strongest performer (Table 2), cross-entropy (XE) with additional samples is close. This further outlines the importance of the sampling method over the exact form of the loss function. \n\n- In ranking-max losses, it seems like \"outliers\" could have a bigger impact. I don't know how useful it is to think about (and it is a bit unclear what an \"outlier\" means in this implicit feedback setting).\n\n\nMinor comments: \n\n- Around Eq. 4 it may be worth being more explicit about the meaning of i and j. \n", "This paper presents a few modifications on top of some earlier work (GRU4Rec, Hidasi et al. 2016) for session-based recommendation using RNN. The first one is to include additional negative samples based on popularity raised to some power between 0 and 1. The second one is to mitigate the vanishing gradient problem for pairwise ranking loss, especially with the increased number of negative samples from the first modification. The basic idea is to weight all the negative examples by their “relevance”, since for the irrelevant negatives the gradients are vanishingly small. Experimentally these modifications prove to be effective compared with the original GRU4Rec paper. \n\nThe writing could have been more clear, especially in terms of notations and definitions. I found myself sometimes having to infer the missing bits. For example, in Eq (4) and (5), and many that follow, the index i and j are not defined (I can infer it from the later part), as well as N_s (which I take it as the number of negative examples). This is just one example, but I hope the authors could carefully check the paper and make sure all the notations/terminologies are properly defined or referred with a citation when first introduced (e.g., pointwise, pairwise, and listwise loss functions). I consider myself very familiar with the RecSys literature, and yet sometimes I cannot follow the paper very well, not to mention the general ICLR audience. \n\nRegarding the two main modifications, I found the negative sampling rather trivial (and I am surprised in Hidasi et al. 
(2016) the negatives are only from the same batch, which seems a huge computational compromise) with much existing work on related topics: Steck (Item popularity and recommendation accuracy, 2011) used the same “popularity to the power between 0 and 1” strategy (they weighted the positives by the inverse popularity to the power). More closely, the negative sampling distribution in word2vec is in fact a unigram raised to the power of 0.75, which is the same as the proposed strategy here. As for the gradient vanishing problem for pairwise ranking loss, it has been previously observed in Rendle & Freudenthaler (Improving Pairwise Learning for Item Recommendation from Implicit Feedback, 2014) for BPR and they proposed an adaptive negative sampling strategy (trying to sample more relevant negatives while still keeping the computational cost low), which is closely related to the ranking-max loss function proposed in this paper. Overall, I don’t think this paper adds much on top of the previous work, and I think a more RecSys-oriented venue might benefit more from the insights presented in this paper. \n\nI also have some high-level comments regarding using RNN for session-based recommendation (this was also my initial reaction after reading Hidasi et al. 2016). As mentioned in this paper, when applying RNN on RecSys datasets with a longer time-span (which means there can be more temporal dynamics in users’ preferences and item popularity), the results are not striking (e.g., Wu et al. 2017) with the proposed methods barely outperforming standard matrix factorization methods. It is puzzling how RNN can work better for the session-based case where a user’s preference can hardly change within such a short period of time. I wonder how a simple matrix factorization approach would work for session-based recommendation (which is an important baseline that is missing): regarding the claim that MF is not suited for the session-based setting because of the absence of the concept of a user, each session can simply be considered as a pseudo-user and approaches like asymmetric matrix factorization (Paterek 2007, Improving regularized singular value decomposition for collaborative filtering) can even eliminate the need for learning user factors. ItemKNN is a pretty weak baseline and I wonder if a scalable version of SLIM (Ning & Karypis 2011, SLIM: Sparse Linear Methods for Top-N Recommender Systems) would give better results. Finally, my general experience with BPR-type pairwise ranking losses is that they are good at optimizing AUC, but not very well-suited for head-heavy metrics (MRR, Recall, etc.). I wonder how the proposed loss would perform compared with more competitive baselines. \n\nRegarding the page limit, given that the paper is currently quite long (12 pages excluding references), I suggest the authors cut down some space. For example, the part about fixing the cross entropy is not very relevant and can totally be put in the appendix. \n\nMinor comment:\n\n1. Section 3.3.1, “Part of the reasons lies in the rare occurrence…”, should r_j >> r_i be the other way around?\n", "
Good performance improvements have been reported for the several datasets to show the effectiveness of the proposed methods.\n\nThe good point of this work is to show that the loss function is important to train a better classifier for the session-based recommendation. This work is of value to the session-based recommendations.\n\nSome minor points:\nI think it may be better if the authors could put the results of RSC15 from Tan (2016) and Chatzis ect. (2017) into table 2 as well.\nAs these work has already been published and should be compared with and reported in the formal table. ", "Thank you for the extensive review. We tried to fix some of the notation issues you mentioned in the paper and uploaded a new version. Regarding the ranking losses we added a note when pointwise or listwise loss ranking losses are first mentioned to clarify them, we believe that much of the ML community is familiar with these terms as they originate from the learning to rank literature and are not specific to the RecSys community. \n\nWhile several negative sampling strategies have been proposed we do believe that ours is quite different from past ones given that we also adapt the loss functions (BPR and TOP1) to the proposed sampling strategies by using a relaxation of the max operator (softmax) to adjust the loss (and gradients). \nNone of the previously proposed sampling schemes use this. Rendle (2014) focuses on finding highly scored negative items for the BPR loss while Steck (2011) weight the sampled items with their popularity. Softmax over the scores in the loss function has, to the best of our knowledge, not been used before. Note also that the massive performance benefits we find in the experiments result from the combined use of the new losses and the proposed sampling strategy. \n\nRegarding the high level comments about RNNs in session-based recommendation, both in the original Hidasi et.al (2016) but also in other works extending upon Hidasi et. al. (2016) e.g. Chatzis et. al. (2017) matrix factorisation approaches (BPR-MF to be precise) have been used as baselines in the way you describe i.e. the individual sessions are treated as users in the matrix. The question that arises is how to then serve recommendations, and in the original Hidasi et.al (2016) the item factors are used to compute similarities with the last clicked item and in the Chatzis et. al. (2017) the item factors of the session (so far) are averaged. Both approaches lead to very poor results even compared to item-knn (and an order of magnitude below RNN), both are reported in the papers. \nNeither asymetric MF approaches (which essentially use averaging or SLIM which uses weighting of the factors(and would need to be rerun at every click) can improve upon these results enough so that MF becomes competitive with RNNs. \n\nIn fact the Wu et. al. (2016) paper (Recurrent Recommender Nets) does something similar in that it uses RNNs to perform a type of matrix factorization whereby two RNNs are used (one on the user side and one on the item side) to predict user/item factors given the items that the user has seen so far and the users that have seen the item so far. \nThey also evaluate with RMSE (which correlates poorly with IR metrics) on the Netflix data, note that the Netflix data is not a session-based dataset and matrix factorization approaches have been shown to perform well there. As a last twist note also that Wu et. al. 
(2016) initialize these RNNs by performing a PMF (to learn the initial factors of the matrix factorization), so it is unclear if and to what extent the RNNs actually learn anything from the data. It is thus no surprise to us that the results in Wu et al. (2016) are quite poor, as the RNNs are used in a rather convoluted way (to predict factors of a matrix factorization instead of the next item) and on data that is not strongly sequential or session-based. \n\nRegarding BPR performance on MRR and Recall, our experience is that it actually performs quite well on these metrics. \n\n", "Thank you for the review, \n\nRegarding the minor comment, \nwe did not put the Tan (2016) and Chatzis (2017) results directly in the table because the experimental protocols are slightly different; we generally avoid presenting results in the same table that are not run under the same experimental setting (test-train splits etc.). Notice though that the relative increase over the baseline GRU that we report is way higher than the results in Chatzis (2017) and equal to Tan (2016) (but with way less computational cost). ", "Thank you very much for the review and the comments. Regarding the decline in performance when performing optimization over all pairs of items, we do believe that this is due to the fact that introducing too many ‘non-relevant’ items in the objective might introduce a negative bias in the learning process. \n\nIn the new version of the paper we made the indexes i and j more explicit. " ]
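As a companion to the sampling discussion above, here is a minimal numpy sketch of popularity-to-a-power negative sampling — the word2vec-style unigram^0.75 scheme the reviewer compares the paper's strategy to. The names and default exponent are our illustrative choices, not the paper's implementation.

```python
import numpy as np

def popularity_sampler(item_counts, alpha=0.75, rng=None):
    # Negatives are drawn proportionally to popularity^alpha, alpha in [0, 1];
    # alpha = 0.75 is the word2vec default referred to in the review above.
    rng = rng or np.random.default_rng()
    p = np.asarray(item_counts, dtype=float) ** alpha
    p /= p.sum()
    return lambda size: rng.choice(len(p), size=size, p=p)

sample = popularity_sampler([1000, 200, 50, 5])
print(sample(10))   # popular items dominate, but less than linearly in count
```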
[ 8, 4, 6, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1 ]
[ "iclr_2018_ryCM8zWRb", "iclr_2018_ryCM8zWRb", "iclr_2018_ryCM8zWRb", "S1d1eXqlM", "r1QqAe3lG", "ryqETl9gG" ]
iclr_2018_r1saNM-RW
Small Coresets to Represent Large Training Data for Support Vector Machines
Support Vector Machines (SVMs) are one of the most popular algorithms for classification and regression analysis. Despite their popularity, even efficient implementations have proven to be computationally expensive to train at a large scale, especially in streaming settings. In this paper, we propose a novel coreset construction algorithm for efficiently generating compact representations of massive data sets to speed up SVM training. A coreset is a weighted subset of the original data points such that SVMs trained on the coreset are provably competitive with those trained on the original (massive) data set. We provide both lower and upper bounds on the number of samples required to obtain accurate approximations to the SVM problem as a function of the complexity of the input data. Our analysis also establishes sufficient conditions for the existence of sufficiently compact and representative coresets for the SVM problem. We empirically evaluate the practical effectiveness of our algorithm on synthetic and real-world data sets.
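To fix ideas about the sensitivity framework referenced throughout the reviews below, here is a generic importance-sampling skeleton in numpy. It assumes sensitivity upper bounds are already given and deliberately omits the paper's SVM-specific sensitivity bounds, which are its actual contribution; treat it as an illustrative sketch only.

```python
import numpy as np

def importance_sample_coreset(sensitivities, m, rng=None):
    # Draw m indices with probability proportional to the sensitivity upper
    # bounds and reweight by the inverse probability, so that weighted sums
    # over the sample are unbiased estimates of sums over the full data.
    rng = rng or np.random.default_rng()
    s = np.asarray(sensitivities, dtype=float)
    p = s / s.sum()
    idx = rng.choice(len(s), size=m, p=p)
    weights = 1.0 / (m * p[idx])
    return idx, weights

idx, w = importance_sample_coreset([0.9, 0.1, 0.1, 0.5, 0.2], m=3)
print(idx, w)   # high-sensitivity points are sampled often, with small weights
```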
rejected-papers
While the paper shows some encouraging results for scaling up SVMs using coreset methods, it has fallen short of making a fully convincing case, particularly given the amount of intense interest in this topic back in the heyday of kernel methods. When it comes to scalability, it has become the norm to benchmark results on far larger datasets using parallelism and specialized hardware in conjunction with algorithmic speedups (e.g., using random feature methods, low-rank approximations such as Nystrom, and other approaches). As such the paper is unlikely to generate much interest in the ICLR community in its current form.
train
[ "BylLVbdNG", "H11vg1wVf", "Byu50uOxf", "BJu6VdYlf", "rkydZLAef", "ByibM2BMz", "B1vBW3SGz", "Sk3SxYn-G", "HJwWeF3bM", "rJoTyFh-G", "SkwxyY2bM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "Thank you for the additional consideration.\n\n1) Regarding the *offline* running time of our algorithm, we include below the response that we had posted earlier regarding the runtime comparisons. In short, our algorithm, unlike prior approaches, can be applied to streaming settings where it may not be possible to store or process all the data at one time, as is common with Big Data applications and for dynamic datasets where samples are inserted/deleted.\n\nPrior response regarding runtime:\n---\nWe want to emphasize that our algorithm introduces a novel way to accelerate SVM training in streaming settings, where traditional SGD-based approaches to approximately-optimal SVM training (e.g., Pegasos) currently cannot handle. Therefore, comparing the *offline* performance of our algorithm (designed to operate in streaming settings) to SGD-based approaches (which cannot operate in streaming settings) may not be the most appropriate comparison. \n---\n\n2) The last graph in Fig. 6 actually contains all of the 4 curves, where uniform sampling, our coreset, and All Data essentially overlap (at either 0 or very close to 0 relative error). This is due to CVM's poor performance on this particular data set combined with the good performance (i.e., relative error very close to 0) of both uniform sampling and our coreset. We will recreate the figure to reflect this overlap of curves more clearly and will also add further explanatory text as requested.\n\n", "I checked the new experimental results. \n\nThe new sampling method provides moderate improvement over the naive uniform sampling (in many cases). \nThe running time part is not so convincing, as in many cases, it is significantly slower than other methods.\nAlso, some text explaining those figures should be helpful.\n\nwhy the last figure in Figure 6 only has 2 curves?\n\nThe new results are certainly helpful. \nBut in my opinion, the paper may not be a clear accept of ICLR.", "This paper studies the approach of coreset for SVM. In particular, it aims at sampling a small set of weighted points such that the loss function over these points provably approximates that over the whole dataset. This is done by applying an existing theoretical framework to the SVM training objective.\n\nThe coreset idea has been applied to SVM in existing work, but this paper uses a new theoretical framework. It also provides lower bound on the sample complexity of the framework for general instances and provides upper bound that is data dependent, shedding light on what kind of data this method is suitable for. \n\nThe main concern I have is about the novelty of the coreset idea applied to SVM. Also, there are some minor issues:\n-- Section 4.2: What's the point of building the coreset if you've got the optimal solution? Indeed one can do divide-and-conquer. But can one begin with an approximation solution? In general, the analysis of the coreset should still hold if one begins with an approximation solution. Also, even when doing divide-and-conquer, the solution obtained in the first line of the algorithm should still be approximate. The authors pointed out that Lemma 7 can be extended to this case, and I hope the proof can be written out explicitly.\n-- section 2, paragraph 4: why SGD-based approaches cannot be trivially extended to streaming settings? \n-- Definition 3: what randomness is the probability with respect to? 
\n-- For experiments: the comparison with CVM should be added.\n", "The paper suggests an importance sampling based Coreset construction for Support Vector Machines (SVM). To understand the results, we need to understand Coresets and importance sampling: \n\nCoreset: In the context of SVMs, a Coreset is a (weighted) subset of the given dataset such that for any linear separator, the cost of the separator with respect to the given dataset X is approximately (there is an error parameter \\eps) the same as the cost with respect to the weighted subset. The main idea is that if one can find a small coreset, then finding the optimal separator (maximum margin etc.) over the coreset might be sufficient. Since the computation is done over a small subset of points, one hopes to gain in terms of the running time.\n\nImportance sampling: This is based on the theory developed in Feldman and Langberg, 2011 (and some of the previous works such as Langberg and Schulman 2010, the reference of which is missing). The idea is to define a quantity called the sensitivity of a data point that captures how important this data point is with respect to contributing to the cost function. Then a subset of data points is sampled based on the sensitivities and each sampled data point is given weight proportional to the inverse of its sampling probability. As per the theory developed in these past works, sampling a subset of size proportional to the sum of sensitivities gives a coreset for the given problem.\n\nSo, the main contribution of the paper is to do all the sensitivity calculations with respect to the SVM problem and then use the importance sampling theory to obtain bounds on the coreset size. One interesting point of this construction is that the Coreset construction involves solving the SVM problem on the given dataset, which may seem like defeating the purpose. However, the authors note that one only needs to compute the Coreset of small batches of the given dataset and then use standard procedures (available in the streaming literature) to combine the Coresets into a single Coreset. This should give significant running time benefits. The paper also compares the results against the simple procedure where a small uniform sample from the dataset is used for computation. \n\n\nEvaluation: \nSignificance: Coresets give significant running time benefits when working with very big datasets. Coreset construction in the context of SVMs is a relevant problem and should be considered significant.\n\nClarity: The paper is reasonably well-written. The problem has been well motivated and all the relevant issues pointed out for the reader. The theoretical results are clearly stated as lemmas and theorems that one can follow without looking at the proofs. \n\nOriginality: The paper uses the previously developed theory of importance sampling. However, the sensitivity calculations in the SVM context are new as per my knowledge. It is nice to know the bounds given in the paper and to understand the theoretical conditions under which we can obtain running time benefits using coresets. \n\nQuality: The paper gives nice theoretical bounds in the context of SVMs. One aspect in which the paper is lacking is the empirical analysis. The paper compares the Coreset construction with simple uniform sampling. 
Since Coreset construction is being sold as a fast alternative to previous methods for training SVMs, it would have been nice to see the running time and cost comparison with other training methods that have been discussed in section 2.\n", "The paper studies the problem of constructing small coresets for SVM.\nA coreset is a small subset of (weighted) points such that the optimal solution for the coreset is also a good approximation for the original point set. The notion of coreset was originally formulated in computational geometry by Agarwal et al.\n(see e.g., [A])\nRecently it has been extended to several clustering problems, linear algebra, and machine learning problems. This paper follows the importance sampling approach first proposed in [B], and generalized by Feldman and Langberg. The key in this approach is to compute the sensitivity of points and bound the total sensitivity for the considered problem (this is also true for the present paper). For SVM, the paper presents a bad instance where the total sensitivity can be as bad as 2^d. Nevertheless,\nthe paper presents interesting upper bounds that depend on the optimal value and variance of the point set. The paper argues that in many data sets, the total sensitivity may be small, yielding a small coreset. This makes sense and may have significant practical implications.\n\nHowever, I have the following reservations about the paper.\n(1) I don't quite understand the CHICKEN and EGG section. Indeed, it is unclear to me \nhow to estimate the optimal value. The whole paragraph is hand-waving. What exactly is merge-and-reduce? From the proof of theorem 9, it appears that the interior point algorithm is run on the entire dataset, with running time O(n^3d). Then there is no point in computing a coreset as the optimal solution is already known.\n\n(2) The running time of the algorithm is not attractive (in both theory and practice).\nIn fact, the experimental result on the running time is a bit weak. It seems that the algorithm is pretty slow (last in Figure 1). \n\n(3) The theoretical novelty is limited. The paper follows the now-standard technique for constructing coresets.\n\nOverall, I don't recommend acceptance.\n\nminor points:\nIt makes sense to cite the following papers where original ideas on constructing coresets were proposed initially.\n\n[A] Geometric Approximation via Coresets. Pankaj K. Agarwal, Sariel Har-Peled, Kasturi R. Varadarajan.\n\n[B] Universal epsilon-approximators for integrals, by Langberg and Schulman.\n\n---------------------------------------------------------\n\nAfter reading the response and the revised text, I understand the chicken-and-egg issue.\nI think the experimental section is still a bit weak (given that there are several very competitive SVM algorithms that the paper didn't compare with).\nI raised my score to 5. \n\n", "Thank you again for your consideration. We have updated our submission with a revised manuscript that includes additional comparisons in the streaming setting and evaluations against competitive algorithms, including Pegasos and Core Vector Machines (CVMs). Please feel free to refer to our latest general response and revision for additional details.", "We wanted to update the reviewers and readers that our latest revision contains additional experimental results that evaluate and compare the performance of our algorithm with that of the state of the art. In particular, our latest revision contains the following additional experimental results:\n\n1) Comparisons with Pegasos (Fig. 
2)\n2) Comparisons with uniform subsampling in the streaming setting where the data points arrive one-by-one (Fig. 5)\n3) Comparisons with Core Vector Machines (CVMs) (Fig. 6).\n\nDue to space constraints and our consideration that our theoretical results may have been of higher interest to the community, we were not able to fit all of these additional results in our original submission. However, in the case that our paper is accepted, we will certainly investigate ways to include these additional results in the final version of our paper.", "Thank you for your comments and feedback. Please find below our item-specific responses. \n\n1) As we also highlighted in our general response and response to AnonReviewer1, our coreset construction method is intended to be used in conjunction with the traditional merge-and-reduce technique, which ensures that our coreset construction algorithm is never run against the full data set. Rather, our coreset construction algorithm takes as input partitions of the original data set (where each set in the partition is of sufficiently small size, see Sec. 4.2). We have also included an extension of Lemma 7 to the case where only an approximately-optimal solution is available (see Lemma 11 in our revision).\n\n2) Gradient-based methods cannot be trivially extended to settings where the data points arrive in a streaming fashion, since seeing a new point results in a complete change of the gradient.\n\n3) Thank you for pointing out this ambiguity. The randomness is with respect to the sampling scheme used by our algorithm to construct the coreset, but we realize in retrospect that this is confusing since there exist deterministic coreset-construction algorithms. We have modified our paper to clarify the definition of a coreset and the (probabilistic) guarantees provided by our algorithm.\n\n4) As we mentioned to AnonReviewer2, we are currently in the process of running additional experiments that evaluate the performance of our algorithm against other algorithms, such as CVM as you mentioned. Our plan is to include the results of these experiments in a later revision to be uploaded before Dec. 20.", "Thank you for your in-depth feedback and consideration of our paper. We included the reference to the original Langberg and Schulman (2010) paper that introduced the concept of sensitivity. We are currently in the process of running additional experiments that evaluate the performance of our algorithm against larger data-sets and compare it to more of the other approaches mentioned in Sec. 2. We plan to finalize these experiments and include the results in our revised version before Dec. 20.", "Thank you for your insightful feedback and references to prior work that originally proposed the notion of constructing coresets. We added these references to the revised version of our paper. More specific responses below:\n\n1) The chicken-and-egg phenomenon commonly arises in coresets-related work, where the optimal or approximately-optimal solution is used to compute approximations of the sensitivity of each point. In our case, we compute the optimal solution to the problem using the Interior Point Method, but as mentioned in Sec. 4.3, an approximately optimal solution can be computed using Pegasos (Shalev-Shwartz et al., 2011). The merge-and-reduce procedure (explained below and cited in our original submission) ensures that our algorithm is never run against the entire data set, but rather small partitions of the data set. 
This implies that by repeatedly running our algorithm on a partition of the data set, consisting of an approximately logarithmic number of points, and merging the resulting coresets, we obtain a coreset for the entire data set. In other words, one needs to run the coreset procedure on only a small subset (or batch) of the input points and then use the standard merge-and-reduce procedure to combine the resulting coresets to form a coreset for the entire data set.\n\nThe merge-and-reduce procedure is a traditional technique in coreset construction dating back to the work of Har-Peled and Mazumdar (2004) (for a recent exposition of this technique, see: Braverman et al. 2016, as we cited in Sec. 4.2 in our submission) that exploits the fact that coresets are composable and reducible, as explained in our general response above. Moreover, the merged coreset can be further reduced by noting that an epsilon-coreset of a, say, delta-coreset is an (epsilon + delta)-coreset. Both the chicken-and-egg phenomenon and the merge-and-reduce technique are covered in detail in the related work we cited in the section (Braverman et al. 2016).\n\nIn light of your feedback, we have modified the text to clarify the exposition of the chicken-and-egg phenomenon and the merge-and-reduce technique.\n\n2) Thank you for bringing up this ambiguity in the reported runtime. We should have highlighted that the running time of the algorithm is approximately linear if the merge-and-reduce procedure above is used and the sufficient conditions on the sum of sensitivities for the existence of small coresets mentioned in the analysis section hold. We have modified our paper accordingly and these changes are reflected in the revised version, namely in Sec. 4.2.\n\nWe want to emphasize that our algorithm introduces a novel way to accelerate SVM training in streaming settings, which traditional SGD-based approaches to approximately-optimal SVM training (e.g., Pegasos) currently cannot handle. Therefore, comparing the *offline* performance of our algorithm (designed to operate in streaming settings) to SGD-based approaches (which cannot operate in streaming settings) may not be the most appropriate comparison. \n\n3) We agree and mention in our original submission that our work builds on the framework for coreset construction introduced by Langberg et al. (2010) and generalized by Feldman et al. (2011). However, as these authors also note, the main challenge in using the coresets framework lies in establishing accurate upper bounds on the sensitivity of each point using analytical and algorithmic techniques. In fact, the novelty in most of the recently published coresets papers lies in the introduction of novel upper bounds on the sensitivity (typically, via bicriteria approximations). In our paper, we not only provide accurate, data-dependent upper bounds on the sensitivity of each point, but also establish lower bounds on the sensitivity, which enables us to classify the set of problem instances for which our algorithm is most suited. \n", "We thank all the reviewers for their useful suggestions and careful consideration of our paper. Your feedback has raised several points we need to clarify prior to providing detailed answers. We understand that there is a range of expertise in this community and will improve our exposition to make sure the paper is broadly accessible to the ICLR community. 
\n\n\tOur submission proposes a coreset-based approach to speeding up SVM training by constructing compact representations of massive data sets. The key idea is that an SVM trained on coresets, i.e., weighted subsets of the original input points, generated by our algorithm is provably competitive with the SVM trained on the full data set. In contrast to SGD-based approaches, e.g., Pegasos, our approach extends to streaming data cases, where the input data set is so large that it may not be possible to store or process all the data at one time, as is common with Big Data applications and for dynamic datasets where samples are inserted/deleted. This new computational model for SVM is enabled by combining our coreset construction algorithm with the merge-and-reduce technique. The merge-and-reduce technique is over a decade old and is now a standard technique. We included references in the paper revision.\n\n\tOur algorithm requires knowledge of the optimal SVM solution in order to generate the coreset. This seemingly paradoxical construction is known as the chicken and the egg phenomenon, which commonly arises in coresets literature, and is resolved by the fact that the original algorithm is *not* intended to be run against the full data set, but rather small partitions of the data set. The merge-and-reduce procedure is a traditional technique in coreset construction dating back to the work of Har-Peled and Mazumdar (2004) (for a recent description of this technique, see: Braverman et al., (2016) as we cited in Sec. 4.2 in our submission) that exploits the fact that coresets are *composable*, i.e., if S_1 is an epsilon-coreset for data set P_1 and S_2 is an epsilon-coreset for data set P_2, then the union S_1 \\cup S_2 is an epsilon-coreset for P_1 \\cup P_2, and *reducible*, i.e., a delta-coreset of an epsilon-coreset is a ((1 + epsilon)*(1 + delta) - 1)-coreset.\n\n\tThus, if the merge-and-reduce technique is used, our algorithm is only run on small subsets of the original data set. The results are then appropriately merged together, which implies that despite the super-linear runtime required to compute the optimal solution, the overall runtime is polylog(n) * d^3 * n, ignoring epsilon-error and delta (probability of failure) factors (details can be found in Sec. 4.2 of our revision). In our original submission, we show how this construction can be further sped up by using an efficient method to obtain a coarse, but near optimal solution, e.g., via Pegasos. We have included an additional lemma in the manuscript to clarify this point and to extend our prior analysis to this case, as requested by AnonReviewer3. We have also added details to sections 4.2 and 4.3 to further clarify the chicken and the egg phenomenon and the merge and reduce technique. " ]
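As an illustrative aside on the sampling scheme debated in the exchange above: a minimal sketch of generic sensitivity-based coreset construction, assuming precomputed sensitivity upper bounds are available (the paper's SVM-specific sensitivity bounds are its actual contribution and are not reproduced here; all names are hypothetical):

```python
import numpy as np

def importance_sampling_coreset(points, sensitivities, m, rng=None):
    # Generic sensitivity-based sampler in the Langberg & Schulman (2010) /
    # Feldman & Langberg (2011) framework. `sensitivities` holds upper
    # bounds s_i on each point's sensitivity; tighter bounds give smaller
    # coresets for the same error guarantee.
    rng = rng if rng is not None else np.random.default_rng()
    points = np.asarray(points)
    s = np.asarray(sensitivities, dtype=float)
    p = s / s.sum()                    # sampling distribution over points
    idx = rng.choice(len(points), size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])       # reweighting keeps the cost unbiased
    return points[idx], weights
```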
[ -1, -1, 5, 7, 5, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 3, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "H11vg1wVf", "B1vBW3SGz", "iclr_2018_r1saNM-RW", "iclr_2018_r1saNM-RW", "iclr_2018_r1saNM-RW", "rkydZLAef", "iclr_2018_r1saNM-RW", "Byu50uOxf", "BJu6VdYlf", "rkydZLAef", "iclr_2018_r1saNM-RW" ]
iclr_2018_H1U_af-0-
Quadrature-based features for kernel approximation
We consider the problem of improving kernel approximation via feature maps. These maps arise as Monte Carlo approximations to integral representations of kernel functions and scale up kernel methods for larger datasets. We propose to use a more efficient numerical integration technique to obtain better estimates of the integrals compared to the state-of-the-art methods. Our approach allows us to use information about the integrand to enhance the approximation and facilitates fast computations. We derive the convergence behavior and conduct an extensive empirical study that supports our hypothesis.
rejected-papers
This an interesting new contribution to construction of random features for approximating kernel functions. While the empirical results look promising, the reviewers have raised concerns about not having insights into why the approach is more effective; the exposition of the quadrature method is difficult to follow; and the connection between the quadrature rules and the random feature map is never explicitly stated. Some comparisons are missing (e.g., QMC methods). As such the paper will benefit from a revision and is not ready for ICLR-2018 acceptance.
train
[ "H1--71dlz", "BkpB7yqxz", "SyGYH-ieG", "BymT16sXz", "SyjTxTsmz", "S1D-gTjmf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper proposes to improve the kernel approximation of random features by using quadratures, in particular, stochastic spherical-radial rules. The quadrature rules have smaller variance given the same number of random features, and experiments show its reconstruction error and classification accuracies are better than existing algorithms.\n\nIt is an interesting paper, but it seems the authors are not aware of some existing works [1, 2] on quadrature for random features. Given these previous works, the contribution and novelty of the paper is limited.\n\n[1] Francis Bach. On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions. JMLR, 2017.\n[2] Tri Dao, Christopher De Sa, Christopher Ré. Gaussian Quadrature for Kernel Features. NIPS 2017", "The authors offer a novel version of the random feature map approach to approximately solving large-scale kernel problems: each feature map evaluates the \"fourier feature\" corresponding to the kernel at a set of randomly sampled quadrature points. This gives an unbiased kernel estimator; they prove a bound its variance and provide experiment evidence that for Gaussian and arc-cos kernels, their suggested qaudrature rule outperforms previous random feature maps in terms of kernel approximation error and in terms of downstream classification and regression tasks. The idea is straightforward, the analysis seems correct, and the experiments suggest the method has superior accuracy compared to prior RFMs for shift-invariant kernels. The work is original, but I would say incremental, and the relevant literature is cited.\n\nThe method seems to give significantly lower kernel approximation errors, but the significance of the performance difference in downstream ML tasks is unclear --- the confidence intervals of the different methods overlap sufficiently to make it questionable whether the relative complexity of this method is worth the effort. Since good performance on downstream tasks is the crucial feature that we want RFMs to have, it is not clear that this method represents a true improvement over the state-of-the-art. The exposition of the quadrature method is difficult to follow, and the connection between the quadrature rules and the random feature map is never explicitly stated: e.g. equation 6 says how the kernel function is approximated as an integral, but does not give the feature map that an ML practitioner should use to get that approximate integral.\n\nIt would have been a good idea to include figures showing the time-accuracy tradeoff of the various methods, which is more important in large-scale ML applications than the kernel approximation error. It is not clear that the method is *not* more expensive in practice than previous methods (Table 1 gives superior asymptotic runtimes, but I would like to see actual run times, as evaluating the feature maps sound relatively complicated compared to other RFMs). 
On a related note, I would also like to have seen this method applied to kernels where the probability density in the Bochner integral was not the Gaussian density (e.g., the Laplacian kernel): the authors suggested that their method works there as well when one uses a Gaussian approximation of the density (which is not clear to me) --- and it may be the case that sampling from their quadrature distribution is faster than sampling from the original non-Gaussian density.", "This paper shows that techniques due to Genz & Monahan (1998) can be used to achieve low kernel approximation error under the framework of random Fourier features.\n\nPros\n\n1. It is new to apply quadrature rules to improve kernel approximation. The only other work I found is\nGaussian Quadrature for Kernel Features NIPS 2017. \nThe work is pretty recent so the authors might not have known of it when submitting the paper. But in either case, it will be good to discuss the connections.\n\n2. The proposed method is shown to outperform a few baselines empirically.\n\nCons\n\n1. I don’t find the theoretical analysis to be very useful. In particular, the theorem shows that the kernel approximation error is O(1/D), which is the same as the original RFF paper. Unless the paper can provide a better characterization of the constants (like the ORF paper), it does not provide much insight into the proposed method. Unlike deep neural networks, since RFF is such a simple model, I think providing precise theoretical understanding is crucial. \n\n2. Approximating an integral is a well-studied topic. I do not find a good discussion of all the possible methods. Why is Genz & Monahan 1998 better than other alternatives such as Monte-Carlo, QMC etc? One argument seems to be “for kernels with specific integrand one can improve on its properties”. But this trick can be used for Monte-Carlo as well. And I do not see the benefit of this trick in the curves.\n\n3. When choosing the orthogonal matrix, I think one obvious choice is to sample a matrix from the Stiefel manifold (the Q matrix of a random Gaussian). This baseline should be added in addition to H and B.\n\n4. A wall-time experiment is needed to justify the speedup.\n\nMinor comments:\n“For kennels with q(w) other than Gaussian… obtain very accurate results with little effort by using Gaussian approximation of q(w)”. What is the citation of this in the kernel approximation context?", "We thank the reviewer for the helpful and thorough feedback.\n\nAbout Gaussian Quadrature for Kernel Features NIPS 2017: \nThank you for pointing out this paper. The authors propose several methods, among them three schemes are data-independent and one is data-dependent. We cannot directly compare our method with the data-dependent one because our method does not use any data to construct mappings, i.e. is data-independent (a brief discussion of the difference between data-dependent and data-independent techniques can be found in the related work section). However, we note that one can apply the proposed data-dependent scheme to our method to learn the weights of the points as well. As a matter of fact, one can use random points and learn the weights for them in the proposed fashion. Thus, we only consider the data-independent approaches.\nDense grid and sparse grid methods are shown to be problematic in the paper. Dense grid is known to suffer heavily from the curse of dimensionality, while sparse grid yields a high error rate. 
The last data-independent approach is subsampling a dense grid according to the distribution on the weights of the points. Unfortunately, the code for the paper is not yet available, but we have reimplemented it to the best of our knowledge and ran experiments to compare with the proposed data-independent subsampled dense grid approach. We tested the subsampled approach on all datasets with the Gaussian kernel and, unfortunately, it showed almost the same performance as random Fourier features (RFF), which has been shown in the paper for the ANOVA kernel as well. We added the figure with the comparison to the Appendix section E.\n\nIn a nutshell, \n1) only the subsampled dense grid method was found eligible for the comparison, \n\n2) it showed higher kernel approximation error than our method across all the datasets. \n\n3) We updated the text of the paper to reflect this comparison and added a brief discussion of the paper to the related work section.\n\nRegarding theoretical analysis: \nIndeed, we did not elaborate much on the convergence, noting only that it shows similar rates to all state-of-the-art MC methods. However, to the best of our knowledge we were the first to highlight the dependence of the kernel approximation quality on the scale of the data.\n\nAbout MC and QMC methods for approximating an integral:\nIndeed, we did not show theoretically that MC or QMC methods are worse than the one we propose; however, we conducted an extensive study that showed that on most of the datasets the proposed quadrature based features approximate the kernel with lower relative error. A theoretical proof remains an open question for further work.\nAnother point about QMC is something that has already been raised in the previous literature (we also noted this in Appendix section D): although QMC provides better asymptotic convergence than MC, it has larger constant factors hidden under O notation and has higher computational complexity along with lower empirical performance.\n\nAbout the other option for an orthogonal matrix,\nthank you for the proposed option. We tried it while preparing the paper; it did show similar/equivalent performance to the ones we used, though it is not sparse or structured like butterflies and, thus, has no computational advantage. That was the reason we did not include it in the paper.\n\nAs for the last con pointed out,\nwe have updated the text to include the walltime experiment in the Appendix section F. The figure we added shows that even our somewhat unoptimized implementation of the proposed method (B) indeed scales as theoretically predicted with larger dimensions thanks to the structured nature of the mapping.\n\nAddressing the minor comment:\nWe apologize for this unsupported claim and have removed it from the text of the paper. Although the Laplacian kernel approximation can be implemented, unfortunately, it would not be accurate enough and one would need to use other quadratures to approximate kernels with densities other than Gaussian.\n\n\n
It is known that if we sample weights from the distribution, the rate of convergence is the square root of (1 / n). In this paper an optimal distribution for weights is derived, which leads to a better rate of convergence. Actually, the distribution depends on the so-called leverage score. However, they operate in Hilbert space and calculating the leverage score requires inverting an integral operator. Such integral operators are infinite-dimensional, so it is hard to compute them in practice. To make their algorithm practical, they propose the following procedure: sample a large number (N) of weights W from the initial distribution; then calculate the density of the optimized distribution on the set of sampled weights; after that, resample a small number of weights from the generated weights W. The complexity of the algorithm is O(N^3).\nThe main objective of the paper is to numerically calculate the integral of a function. The problem is equivalent to generating random features for kernel approximation, but they don’t explicitly study the kernel approximation. So, in the experimental section they do not check the error of approximation of the kernel function. They just take several functions and calculate quadrature errors on them. Thus, the paper does not really provide any practically useful explicit mappings for kernel approximation.\n\n[2] The authors propose several methods, among them three schemes are data-independent and one is data-dependent. We cannot directly compare our method with the data-dependent one because our method does not use the data to construct mappings, i.e. is data-independent (a brief discussion of the difference between data-dependent and data-independent techniques can be found in the related work section). However, we note that one can apply the proposed data-dependent scheme to our method to learn the weights of the points as well. As a matter of fact, one can use random points and learn the weights for them in the proposed fashion. Thus, we only consider the data-independent approaches.\nDense grid and sparse grid methods are shown to be problematic in the paper. Dense grid is known to suffer heavily from the curse of dimensionality, while sparse grid yields a high error rate. The last data-independent approach is subsampling a dense grid according to the distribution on the weights of the points. Unfortunately, the code for the paper is not yet available, but we have reimplemented it to the best of our knowledge and ran experiments to compare with the proposed data-independent subsampled dense grid approach. We tested the subsampled approach on all datasets with the Gaussian kernel and, unfortunately, it showed nearly the same performance as random Fourier features (RFF), which was indeed shown in the paper for the ANOVA kernel as well. We added the figure with the comparison to the Appendix section E.\n\nTo sum things up, \n1) the first paper does not provide practically useful explicit mappings for kernel approximation (due to the complexity O(N^3), where N is the number of features), while \n\n2) the second paper has one data-independent method that is eligible for the comparison. The subsampled dense grid method from [2] showed higher kernel approximation error than our method across all the datasets.\n\n3) We updated the text of the paper to reflect this comparison. We also included a brief discussion of both papers in the related work section.\n", "We deeply appreciate the reviewer's constructive and comprehensive feedback. 
\n\nAbout the performance in downstream ML tasks and explicit mapping:\nWe have updated the paper to include the explicit map construction (Section 4.3), which has little additional complexity compared to state-of-the-art Monte Carlo methods, such as Random Orthogonal Features. While our method, similar to MC methods, only needs matrix multiplication to produce random features, it provides empirically better kernel approximation and in many instances better downstream quality.\n\nWalltime experiments:\nWe have updated the Appendix of the paper to include the actual runtimes, which show that for higher dimensions there is indeed an advantage. The figure we added shows that our somewhat unoptimized implementation of the proposed method (B) indeed scales as theoretically predicted with larger dimensions thanks to the structured nature of the mapping.\n\nKernels with other densities:\nWe are deeply sorry and have removed this unsupported claim from the text of the paper. While the approximation is implementable for the Laplacian kernel, unfortunately, we found it to be not accurate enough; one would need to use other quadratures to approximate kernels with different densities.\n" ]
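For context on the baseline these reviews measure against: a minimal sketch of plain Monte Carlo random Fourier features for the Gaussian kernel, in the style of Rahimi & Recht (2007). This is the standard RFF map, not the paper's quadrature-based feature map:

```python
import numpy as np

def rff_features(X, D, sigma, rng=None):
    # For the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma**2)),
    # Bochner's theorem gives k(x, y) = E[cos(w @ (x - y))] with
    # w ~ N(0, I / sigma^2), so the cos/sin map below yields an unbiased
    # Monte Carlo estimate k(x, y) ~= phi(x) @ phi(y).
    rng = rng if rng is not None else np.random.default_rng()
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], D))
    proj = X @ W
    return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(D)

# Usage: Phi = rff_features(X, D=512, sigma=1.0); K_hat = Phi @ Phi.T
```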
[ 4, 7, 6, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1 ]
[ "iclr_2018_H1U_af-0-", "iclr_2018_H1U_af-0-", "iclr_2018_H1U_af-0-", "SyGYH-ieG", "H1--71dlz", "BkpB7yqxz" ]
iclr_2018_HJBhEMbRb
A Spectral Approach to Generalization and Optimization in Neural Networks
The recent success of deep neural networks stems from their ability to generalize well on real data; however, Zhang et al. have observed that neural networks can easily overfit random labels. This observation demonstrates that with the existing theory, we cannot adequately explain why gradient methods can find generalizable solutions for neural networks. In this work, we use a Fourier-based approach to study the generalization properties of gradient-based methods over 2-layer neural networks with sinusoidal activation functions. We prove that if the underlying distribution of data has nice spectral properties such as bandlimitedness, then the gradient descent method will converge to generalizable local minima. We also establish a Fourier-based generalization bound for bandlimited spaces, which generalizes to other activation functions. Our generalization bound motivates a grouped version of path norms for measuring the complexity of 2-layer neural networks with ReLU activation functions. We demonstrate numerically that regularization of this group path norm results in neural network solutions that can fit true labels without losing test accuracy while not overfitting random labels.
rejected-papers
Understanding the generalization behavior of deep networks is certainly an open problem. While this paper appears to develop some interesting new Fourier-based methods in this direction, the analysis in its current form is too restrictive, with somewhat limited empirical support, to broadly appeal to the ICLR community. Please see the reviews for more details.
train
[ "rkcAW6tJG", "SJk182OlM", "H1Mweadlz", "S1Zg-PTQG", "B1f8kP6QM", "BJg_JQeXz", "ryglRAbMG", "HkpNpRbMf", "SJhIn0bfM", "ByHaYRulf", "HJ815-Lxz", "HJ1KekUef", "ry3-q3Xez", "r1I4--WxG", "rJOFw1u1z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author", "author", "public", "author", "public", "author", "public" ]
[ "Deep neural networks have found great success in various applications. This paper presents a theoretical analysis for 2-layer neural networks (NNs) through a spectral approach. Specifically, the authors develop a Fourier-based generalization bound. Based on this, the authors show that the bandwidth, Fourier l_1 norm and the gradient for local minima of the population risk can be controlled for 2-layer NNs with SINE activation functions. Numerical experimental results are also presented to verify the theory.\n\n(1) The scope is a bit limited. The paper only considers 2-layer NNs. Is there an essential difficulty in extending the result here to NNs with more layers? Also, the analysis for gradient-based method in section 6 is only for squared-error loss, SINE activation and a deterministic target variable. What would happen if Y is random or the activation is ReLU?\n(2) The generalization bound in Corollary 3 is only for the gradient w.r.t. \\alpha_j. Perhaps, an object of more interest is the gradient w.r.t. W. It would be intersting to present some analysis regarding the gradient w.r.t. W.\n(3) It is claimed that the bound is tighter than that obtained using only the Lipschitz property of the activation function. However, no comparison is clearly made. It would be better if the authors could explain this more?\n\nIn summary, the application domain of the theoretical results seems a bit restricted.\n\nMinor comments:\nEq. (1): d\\xi should be dx\nLemma 2: one \\hat{g} should be \\hat{f}", "\nThis work proposes to study the generalization of learning neural networks via the Fourier-based method. It first gives a Fourier-based generalization bound, showing that Rademacher complexity of functions with small bandwidth and Fourier l_1 norm will be small. This leads to generalization for 2-layer networks with appropriate bounded size. For 2-layer networks with sine activation functions, assuming that the data distribution has nice spectral property (ie bounded bandwidth), it shows that the local minimum of the population risk (if with isolated component condition) will have small size, and also shows that the gradient of the empirical risk is close to that of the population risk. Empirical results show that the size of the networks learned on random labels are larger than those learned on true labels, and shows that a regularizer implied by their Fourier-based generalization bound can effectively reduce the generalization gap on random labels. \n\nThe idea of applying the Fourier-based method to generalization is interesting. However, the theoretical results are not very satisfactory. \n-- How do the bounds here compared to those obtained by directly applying Rademacher complexity to the neural network functions? \n-- How to interpret the isolated components condition in Theorem 4? Basically, it means that B(P_X) should be a small constant. What type of distributions of X will be a good example? \n-- It is not easy to put together the conclusions in Section 6.1 and 6.2. Suppose SGD leads to a local minimum of the empirical loss. One can claim that this is an approximate local minimum (ie, small gradient) by Corollary 3. But to apply Theorem 4, one will need a version of Theorem 4 for approximate local minima. Also, one needs to argue that the local minimum obtained by SGD will satisfy the isolated component condition. 
The argument in Section 8.6 is not convincing, ie, there is potentially a large approximation error in (41) and one cannot claim that Lemma 1 and Theorem 4 are still valid without the isolated components condition. \n", "This paper studies the generalization properties of 2-layer neural networks based on Fourier analysis. Studying the generalization property of neural networks is an important problem and Fourier-based analysis is a promising direction, as shown in (Lee et al., 2017). However, I am not satisfied with the results in the current version.\n\n1) The main theoretical results are on the sin activation function instead of the commonly used ReLU function. \n\n2) Even for sin activation functions, the analysis is NOT complete. The authors claimed in the abstract that gradient-based methods will converge to generalizable local minima. However, Corollary 3 is only a concentration bound on the gradient. There is a gap in how this corollary implies generalization. The paragraph below this corollary is only a high level intuition. \n\n\n", "The anonymous review process continues till the end of January. As we stated before, we will share the code on our Github account after the review process is complete. ", "We thank all the reviewers for their detailed comments and feedback which greatly helped us to improve this work. We have posted a revision which includes changes addressing the main comments. In summary,\n\n1) We have clarified the explanation after Corollary 2 about the comparison between the Fourier-based Rademacher complexity bound and the existing Lipschitz-based bounds for 2-layer neural nets (asked by AnonReviewer2 and AnonReviewer3). \n\n2) We have added Remark 1 after Theorem 4 explaining what this result implies when X has a multivariate Gaussian distribution (asked by AnonReviewer3). \n\n3) We have made the explanation after Corollary 3 clearer (asked by AnonReviewer1) by directly applying an approximate version of Theorem 4 included in section 8.6 (asked by AnonReviewer3) to the local minima of the empirical risk.", "Following your previous comments - \"We will make the code publicly available on Github after the anonymous review process is complete.\"\nCan you make the code public?", "Thank you for your feedback. First let us note that analyzing the generalization performance of gradient methods is an open problem, even in the specific case of 2-layer neural nets and squared-error loss function. Our focus in this work is to address this open problem in the special case of 2-layer NNs with sine activation. We have chosen and analyzed this simple neural network model via Fourier analysis, because as shown in our numerical experiments (section 7) this simple model can still easily overfit random labels. Here is our response to the other questions and comments:\n\n1) A possible way of extending our Fourier-based analysis to multi-hidden-layer neural nets is through the analysis of the Fourier representation of composites of sine functions. To apply our Fourier-based analysis of gradient methods to the ReLU function, one possible way is to approximate ReLU by its Fourier series after assuming the input X is properly bounded. For a random Y(x), we need to perform our Fourier analysis for the resulting stochastic process instead of a deterministic function. \n\n2) A sufficient condition for Theorem 4’s bounds is the generalization of the gradients w.r.t. \alpha. Generalization of the gradients only w.r.t. W does not provide a sufficient condition for applying Theorem 4. 
Also, establishing generalization w.r.t. both \\alpha and W leads to slower rates of convergence compared to our result in Corollary 3. \n\n3) For a two layer neural net f(x)=a*phi(Wx) with activation phi, the Lipschitz-based bounds are linear in the square root of W’s norm. Our Corollary 2 improves this dependency to the square root of the log of W’s norm. The improvement holds for bandlimited phi’s such as sine or Gaussian activations. We will make this point clearer in the text.\n\n4) Thanks for pointing out the typos. We will correct them in our draft.", "Thank you for your feedback. Here is our response to the three points raised in this review:\n\n1) The existing Rademacher complexity bounds for neural nets are based on the activation function’s Lipschitz constant. For a two layer neural net f(x)=a*phi(Wx) with activation phi, those Lipschitz-based bounds are linear in the square root of W’s norm. Our Corollary 2 improves this dependency to the square root of the log of W’s norm. The improvement holds for bandlimited phi’s such as sine or Gaussian activations. \n\n Although the ReLU function does not satisfy the above condition, our Fourier-based generalization result is still applicable to ReLU-based networks (Theorem 3) and motivates new capacity norms (group path norms) for these networks. \n\n2) For example, consider a multivariate Gaussian X ~ N( 0 , sigma^2*I). The Fourier transform of P_X in this case, which is exp{ - (sigma*||w||)^2 / 2 }, becomes sufficiently small in Eq. (39) if the components are O( 1/sigma ) apart. Therefore, we need the standard deviation of X to be large enough so that the components are O( 1/sigma ) apart. We will add this example to section 6.\n\n3) For an approximate local minimum where the population gradient is epsilon instead of zero, the upper-bound in Lemma 1 (Eq. (12)) changes by at most epsilon and hence the Fourier L1_norm bound in Theorem 4 changes by at most d*epsilon (d is the size of the hidden layer). \n\n We agree that the isolated components assumption provides a barrier for applying our theory to the general case. However, the condition holds given that Y(x)’s Fourier transform has distant local extrema (at least B(P_X) apart). For example, the condition holds if Y(x)=a^T sin(Wx) where each two different rows w_i,w_j of W satisfy || w_i - w_j || > B(P_X). We will include this discussion in section 6.", "Thanks for your feedback. Here is our response to the two comments:\n\n1) The Fourier-based generalization bound in Theorem 2 is applicable to a general activation function. By applying this generalization result, Theorem 3 motivates new capacity norms (group path norms) for ReLU-based networks. Supporting this, our numerical experiments in sections 7.2,7.3 indicate that regularizing group path norms can close the generalization gap without compromising test accuracy over neural nets with ReLU activation. \n\n In section 6, we apply our Fourier-based generalization result to address the open problem of analyzing generalization performance of gradient methods in the special case of 2-layer neural nets with sine activation. Here we have chosen and analyzed this simple neural network model via Fourier analysis, because as shown in our numerical experiments this simple neural network can still easily overfit random labels.\n\n2) Corollary 3 is not the final generalization bound, but it should be applied together with Theorem 4. 
More specifically, Corollary 3 provides a sufficient condition to apply the bounds in Theorem 4 on the spectral properties of the local minima found, which in turn bounds the generalization error of those local minima. \n\n The generalization result holds for the local minima found by large-batch gradient descent, and the paragraph after Corollary 3 gives high level intuition why we expect the bound to also hold for the local minima found by small-batch gradient descent. We will make this explanation clearer in the text.", "“This class seems to have been defined by reverse engineering the proof. Any interpretations on class F_\\phi_\\lapha?”\n\nF_\\phi_\\lapha is the set of 2-layer neural nets (ReLU-type activation) with bounded group path norm. That’s why in our earlier response we called the generalization result in Theorem 3 “a path norm-based generalization bound.”\n\n“at the cost of loosing accuracy (by 5% in the paper's experiments) on real labels”\n\nAs we stated in our previous response, “due to computational constraints, we could only test a small set of lambda values for each regularization strategy.” By considering larger validation sets for lambda, we can tune lambda to get closer validation accuracy to the original accuracy.\n\n“But many authors share their code via anonymous github accounts.”\n\nWe will check this option. Let us repeat that we will make the code publicly available on our Github account after the anonymous review process is complete.\n\n“From Fig 2b the authors conclude that random labels will have a higher Fourier L1-norm.”\n\nWe have not concluded “random labels will have a higher Fourier L1-norm” from figure 2b. The only conclusions made from figure 2b in our manuscript (section 7.1) are “Figures 2b and 2c confirm that both Fourier L1-norm and bandwidth consistently increase with training” and “This suggests that, as implied by the theory above, regularizing Fourier L1-norm and bandwidth could improve generalizability of the final learned model.” \n\n“I know that proofs in the appendix of the paper requires approximation in L1 sense thats not at all clear in the introduction.”\n\nWe will make this clearer in the text.", "\"Theorem 3 proves a path norm-based generalization bound for ReLU activation by considering the Fourier representation of ReLU function.\" - The theorem is restricted to class of NNs - F_\\phi_\\lapha. This class seems to have been defined by reverse engineering the proof. Any interpretations on class F_\\phi_\\lapha?\n\n\"Yes, but for L2-norm if some lambda coefficient closes the generalization gap for random labels, that lambda will lead to a considerable drop in test accuracy for true labels.\"\nAgreed. However, changing the regularization only to improve the generalization of random labels at the cost of loosing accuracy (by 5% in the paper's experiments) on real labels seems unpractical. \nIdeally one should compare the GE of NNs with path-norm vs l2 on random labels such that the NNs have same accuracy (or GE) on real labels.\n\n\"Our code is much longer than the 5000 character limit of openreview comments\"\nSure. But many authors share their code via anonymous github accounts.\n\n\n\"This is a sufficient condition\" - I see. 
But since we empirically know that small-batch gradient descent outperforms large-batch, this condition might not be the right condition to look at to prove the observation by Keskar et al. mentioned in the introduction.\n\n\n“From Fig 2b the authors conclude that random labels will have a higher path-norm.” \nYou are right, it was a typo. I meant \"Fourier L1-norm\". Thus my comment changes to - From Fig 2b the authors conclude that random labels will have a higher Fourier L1-norm.\n\n\"Hence the function cannot be arbitrarily well approximated by bandlimited functions\" - I meant approximation in the L2 sense (not L1), which is commonly used in Fourier analysis since it has a physical meaning of energy (my conclusions are based on Parseval's theorem). I know that the proofs in the appendix of the paper require approximation in the L1 sense, but that's not at all clear in the introduction.", "1) Simplifying ReLU by a sinusoidal function doesn't seem like a good idea and seems very forced.\n\n2) Comment (2) directly contradicts the numbers in comment (4). 
Refer to my point 4.\n\n3) Since large-batch descent has a larger value of n, Corollary 3 says that gradients via large-batch will have better generalization than gradients via small-batch. Since the whole paper is centered around the theme - \"good generalization ==> better performance\", it does contradict Keskar et al.’s results.\n\n3) Well, the paper initially didn't say anything about any validation set for choosing values of lambda. Can the code be shared anonymously?\n\n4) For true labels, the decrease in training accuracy (.021) and the decrease in generalization error (.007) are comparable. If 0.021 is not 'really compromising accuracy', then 0.007 is not really decreasing the generalization error.\n\nAlso, for random labels the test accuracy will always be around .1 and one could keep decreasing the training accuracy (to 0.1) by increasing the value of lambda in the l_2 norm (all the way up to infinity). Hence I don't see why one should use the norm proposed in this paper over l_2.\n\n5) Your comment is unclear here.\n\nMinor points\n\n1) Technically, a Gaussian is still not a band-limited function (by the definition stated in the paper). Restricting ourselves to Borel measurable functions for now, any function in L_2 can be arbitrarily 'approximated' by band-limited functions.\n\n2,3) But still, what is the interpretation of such an (approximate) assumption?", "The following addresses the points made in the conclusion:\n\n1) Sinusoidal activation is not proposed as a replacement for RELU, but as an analytical simplification to illustrate that gradient-based methods converge to generalizable local minima.\n \n2) Our numerical results in Figures 1B and 3 indicate that group path norms can close the generalization gap for both random and true labels without compromising test accuracy for true labels, while the L2_norm cannot.\n\n3) Corollary 3 only applies to large-batch gradient descent and it does not explain the difference between the generalization performance of small and large batch gradient-descent. Hence it does not contradict Keskar et al.'s results.\n\nHere is the response to the other concerns:\n\n3) We have a typo in this sentence and the word “test” should be replaced with “validation.” To fairly compare different regularization strategies, we tested about 5 lambda values for each strategy and then reported the performance on the test set for the lambda value that resulted in the best performance on the validation set. Good performance here means a low generalization gap with comparable validation accuracy for the true labels.\n\n4) Due to computational constraints, we could only test a small set of lambda values for each regularization strategy. For each strategy, we chose the largest lambda coefficient which did not result in more than a 5% decrease in the validation accuracy for true labels. For the X2-group path norm (Figure 1B3), this resulted in test and train accuracies of 0.519 and 0.632 for the true labels and 0.099 and 0.104 for the random labels. For the L2-norm (Figure 1B2), this resulted in test and train accuracies of 0.540 and 0.659 for the true labels and 0.096 and 0.285 for the random labels. Therefore, in comparison to the L2 norm, the X2-group path norm achieves a smaller generalization gap for both true and random labels (significantly for random labels) without really compromising test accuracy. \n\n5) The point of Figure 2b is to validate our Fourier-based generalization bound, and not to examine path norms, which are irrelevant to the plots in Figure 2. 
Figure 2b demonstrates how the Fourier L1-norm, which is different from the path norm, of a neural net with sine activation changes during training. The hypotheses fitting random and true labels have comparable Fourier L1 norm (slightly larger for random labels), but as shown in Figure 2c the bandwidth achieved when fitting random labels is > 1.5 times larger than the bandwidth achieved when fitting true labels. Our generalization result in Theorem 2 depends on both Fourier L1-norm and bandwidth, which correctly predicts that the generalization risk should be larger for random labels.\n\nResponse to the minor points:\n\n1) The Fourier transform of a Gaussian function has a Gaussian shape. Therefore, a Gaussian function's Fourier transform is concentrated around the origin in a ball with radius inversely proportional to the standard deviation of that Gaussian function and can be arbitrarily well approximated by a bandlimited function. \n\n2,3) This is a technical assumption to obtain the exact value instead of an approximation of a convolution integral. As we have discussed in the Appendix (section 8.6), Theorem 4 (shown through Lemma 1) remains approximately valid even without the isolated components assumption. ", "Summary: The authors propose 1) a new class of activation functions which have bounded bandlimit (in the Fourier domain) 2) and 'nice' l1-norm in the Fourier domain. They evaluate the generalization bound derived by Bartlett and Mendelson (2002) for bandlimited functions and get tighter guarantees than just using Lipschitz continuity. They also extend the analysis to the gradients of the loss functions. They also have a few experiments trying to support their claims.\n\nA few concerns -\n\n1) The authors propose using the sin function as an activation function over ReLu (or sigmoid). However, they haven't directly compared the accuracy of NNs using ReLu with NNs using the sin function. To be more specific, Figure 2 evaluates NNs with sin-activation function using MSE whereas Figure 3 evaluates NNs with ReLu-activation functions using prediction accuracy.\n\n2) In the introduction, Keskar et al. are cited claiming - 'SGD has been empirically shown to outperform large-batch gradient descent'. However, Corollary 3 seems to say that the difference between sample-gradient and population-gradient is smaller for large values of samples and hence large-batch gradient descent should outperform SGD. The previous two statements are in direct contradiction.\n\n3) The experimental section states that - \"We note that while we tested multiple values of lambda for each regularization technique, we always chose lambda the that resulted in the smallest generalization gap with comparable test performance.\"\nSuch techniques are classic examples of how to overfit. The standard practice in the community is to choose the lambda via cross-validation.\n\n4) In figure 1b the authors claim that the path-norm penalty reduces the generalization error (compared to the l2 penalty). However, on close observation I see that using the path-norm penalty also reduced the accuracy (which is not mentioned in the description). It is well known that reducing the size of the class of functions (to optimize over) reduces both training accuracy and generalization error. Hence it is most likely that using the path norm is 'effectively' reducing the class of functions and hence decreasing the generalization error (along with accuracy). 
One could quite possibly achieve the same effect by using the l2 norm with a higher value of lambda.\n\n5) From Fig 2b the authors conclude that random labels will have a higher path-norm. However, this is not very convincing to me as it looks like the path-norm for true labels might be equal to the path-norm for the random labels if the experiments were run for a few more epochs.\n\n\nA few minor points\n1) The introduction claims the Gaussian activation function is bandlimited. Can you explain this?\n2) The assumptions in Lemma 1 are used only to prove the result and hence seem a little artificial. What is the interpretation of such an assumption?\n3) Lemma 1 requires ||w_j|| to be larger than the bandwidth of P_X. Large values of W would make the sin function a highly non-monotonous activation function (with high frequency). I don't think such a highly variable non-monotonous function is a good candidate for an activation function.\n\n\nConclusion: There is a huge disconnect between theoretical conclusions and experimental results in this paper.\n1) The theory proposes using the sin activation function - However, the sin-activation is not directly compared to ReLu in any experiments.\n2) The theory proposes the superiority of the path-norm over the l2 norm - The experiments are inconclusive (point 5).\n3) The paper claims that gradient methods that accurately estimate the population gradient are expected to have better performance - Keskar et al. (https://arxiv.org/pdf/1609.04836.pdf) have the exact opposite conclusion." ]
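For readers following the path-norm thread above: a minimal sketch of a path-norm-style regularizer for a 2-layer ReLU net f(x) = a @ relu(W @ x). With inner=1 this is the standard l1 path norm sum_j |a_j| * ||w_j||_1; the grouped variants the paper regularizes presumably swap in other inner norms (e.g., inner=2 for the 'X2-group' version), which is a guess here, not the paper's exact definition:

```python
import numpy as np

def group_path_norm(W, a, inner=2):
    # Sum over hidden units of |outgoing weight| times an inner norm of
    # the unit's incoming weight vector (one "group" per hidden unit).
    row_norms = np.linalg.norm(W, ord=inner, axis=1)
    return float(np.abs(a) @ row_norms)

# Used as a penalty: total_loss = data_loss + lam * group_path_norm(W, a)
```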
[ 6, 6, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJBhEMbRb", "iclr_2018_HJBhEMbRb", "iclr_2018_HJBhEMbRb", "BJg_JQeXz", "iclr_2018_HJBhEMbRb", "iclr_2018_HJBhEMbRb", "rkcAW6tJG", "SJk182OlM", "H1Mweadlz", "HJ815-Lxz", "HJ1KekUef", "ry3-q3Xez", "r1I4--WxG", "rJOFw1u1z", "iclr_2018_HJBhEMbRb" ]
iclr_2018_SJ8M9yup-
On Optimality Conditions for Auto-Encoder Signal Recovery
Auto-Encoders are unsupervised models that aim to learn patterns from observed data by minimizing a reconstruction cost. The useful representations learned are often found to be sparse and distributed. On the other hand, compressed sensing and sparse coding assume a data generating process, where the observed data is generated from some true latent signal source, and try to recover the corresponding signal from measurements. Looking at auto-encoders from this signal recovery perspective enables us to have a more coherent view of these techniques. In this paper, in particular, we show that the true hidden representation can be approximately recovered if the weight matrices are highly incoherent with unit ℓ2 row length and the bias vector takes a value (approximately) equal to the negative of the data mean. The recovery also becomes more and more accurate as the sparsity in hidden signals increases. Additionally, we also demonstrate empirically that auto-encoders are capable of recovering the data generating dictionary when only data samples are given.
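The recovery claim in this abstract can be illustrated with a small synthetic experiment — a minimal sketch assuming a ReLU encoder and a bias implemented as the negative of the empirical data mean projected through W (one reading of the abstract's bias prescription; constants and the paper's exact thresholding may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k, N = 64, 256, 5, 1000              # input dim, hidden dim, sparsity, samples
W = rng.normal(size=(m, n))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit l2 rows, incoherent w.h.p.

H = np.zeros((N, m))                            # sparse non-negative hidden signals
for t in range(N):
    H[t, rng.choice(m, size=k, replace=False)] = rng.uniform(0.5, 1.0, size=k)
X = H @ W + 0.01 * rng.normal(size=(N, n))      # observations x = W^T h + noise

b = -W @ X.mean(axis=0)                         # bias ~ -(data mean) seen through W
H_hat = np.maximum(X @ W.T + b, 0.0)            # ReLU encoder as signal recovery
print(np.corrcoef(H.ravel(), H_hat.ravel())[0, 1])  # close to 1 when h is sparse
```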
rejected-papers
- The paper is overall difficult to read and would benefit from a revised presentation. - The practical relevance of the recovery conditions and algorithmic consequences of the work is not sufficiently clear or convincing.
train
[ "HySa8MQgz", "B1NjQYOeG", "HkQOWPieM", "r1Nk9t2Xz", "BkNZsY27f", "r1bwcKhQz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "*Summary*\nThe paper studies recovery guarantees within the context of auto-encoders. Assuming a noise-corrupted linear model for the inputs x's, the paper looks at some sufficient properties (e.g., over the generating dictionary denoted by W) to recover the true underlying sparse signals (denoted by h). Several settings of increasing complexity are considered (from binary signals with no noise to noisy continuous signals). Evaluations are carried out on synthetic examples to highlight the theoretical findings.\n\nThe paper is overall difficult to read. Moreover, and importantly, no algorithmic perspectives are presented in the paper, in the sense that we do not know whether practical procedures would lead to W's satisfying the appropriate properties (unlike (not-mentioned) recent results for dictionary learning/ICA; see detailed comments). Also, assumptions are made (e.g., knowledge about expectations of h and x) for which it is unclear to see how practical/limiting they are. Finally (and as further discussed below), the paper does not sufficiently discuss related work.\n\n(note: I have not reviewed the appendix and supplementary material)\n\n*Detailed comments*\n\n-I think there is an insufficient literature review about recent recovery results in the context of sparse coding, dictionary learning and ICA (see some references at the bottom of the review). I think this is all the more important as the paper tries to draw connections with ICA (see Sec. 4.4).\nGiven that the paper positions itself on a theoretical level, detailed comparisons with existing sample complexities obtained in previous work for related models (e.g., sparse coding, dictionary learning and ICA) must be provided.\n\n-To the best of my understanding of the paper, the guarantees are about h_hat and the true h. It therefore seems that the paper's approach is very close to standard sparse inverse problems, up to the difference due to the (non-identity) activation function. If this is indeed the case, the paper should discuss its results when the activation is identity to see whether known results are recovered. \n\n-\"...we consider linear activation s_d because it is a more general case.\": Just after this statement, it is mentioned that non-linear activations are used in practice. Could this statement be therefore clarified?\n\n-Sec. 2 is unclear. For instance, it is not easy to see how one go from (1) to (2). Moreover, the concept of \"AE framework\" is not well defined.\n\n-In the bottom of page 3, why are p_i and (1-p_i) discarded?\n\n-In practice, how can we set the appropriate value of b_i?\n\n-What is the practical sense of being able to have access to E_h[x], E_x[x], and E_h[h]?\n\n-In Proposition 1 and 2, if the noise e is indeed random, it means the right-hand sides are also random variables. Then, what does the probability statement Pr mean on the left-hand side? Is is conditioned on the draw of e? Some clarifications are required.\n\n-Typo page 7: \"...that used to generate the data.\" --> \"... used to generate the data.\"\n-Typo page 9: \"...data are then generate...\" --> \"...data are then generated...\"\n\n-In Sec. 5.3, to match W_hat and W, the Hungarian algorithm can probably be used.\n\n*References*\n\n(Arora2012) Arora, S.; Ge, R.; Moitra, A. & Sachdeva, S. Provable ICA with unknown Gaussian noise, with implications for Gaussian mixtures and autoencoders Advances in Neural Information Processing Systems (NIPS), 2012, 2375-2383\n\n(Arora2013) Arora, S.; Ge, R. & Moitra, A. 
New algorithms for learning incoherent and overcomplete dictionaries. Preprint arXiv:1308.6273, 2013\n\n(Chatterji2017) Chatterji, N. S. & Bartlett, P. L. Alternating minimization for dictionary learning with random initialization. Preprint arXiv:1711.03634, 2017\n\n(Gribonval2015) Gribonval, R.; Jenatton, R. & Bach, F. Sparse and spurious: dictionary learning with noise and outliers. IEEE Transactions on Information Theory, 2015, 61, 6298-6319\n\n(Sun2015) Sun, J.; Qu, Q. & Wright, J. Complete dictionary recovery over the sphere. Sampling Theory and Applications (SampTA), 2015 International Conference on, 2015, 407-410\n", "This paper proposes to analyze auto-encoders under sparsity constraints on an underlying signal to be recovered.\nBased on concentration inequalities, the reconstruction provided for a simple class of functions is guaranteed to be accurate in l1 norm with high probability.\nThe proof techniques are classical, but the results seem novel as far as I know.\nAs an open question, could the results be given for other lp norms, in particular for the infinity norm? Indeed, this is a privileged norm for support recovery.\n\n\n\nPresentation issues:\n- section should be Section when stating for instance \"section 1\". Idem for eq, equation, assumption...\n- bold fonts for vectors are randomly used: some care should be given to harmonizing symbol fonts.\n- equations should be cited with brackets\n\nReferences issues:\n- harmonize citations: if you add the first name for some authors, add it for all references: why write Roland Makhzani and J. Wright?\n\n- Candes -> Cand\\`es\n\n- Consider citing \"Sparse approximate solutions to linear systems\", Natarajan 1995 when mentioning Amaldi and Kann 1998.\n\n\n\nSpecific comments:\npage 1:\n- hasn't -> has not.\n\npage 2:\n- \"activation function\": at this stage s_e and s_d are just functions. What does \"activation\" refer to? Also, the space they act on should be clarified. Idem for b_e and b_d.\n- \"the identity of h in eq. 1 is only well defined in the presence of l1 regularization due to the over-completeness of the dictionary\" : this is implicitly stating the uniqueness of the Lasso. Note that it is well known that there are cases where the Lasso is non-unique. Please, clarify your statement accordingly.\n- for simplicity b_d could be removed here.\n- in (4) it would be more natural to write f_j(h_j) instead of f(h_j)\n- \"has is that to be bounded\"-> is boundedness?\n- what is l_max_j here? moreover the bold letters seem to represent vectors but this should be stated explicitly somewhere.\n\npage 3:\n- what is the precise meaning of \"distributed\" when referring to representation\n- In remark 1: the font has changed weirdly for W and h.\n- \"two class\"->two classes\n- Definition 1: again what is a precise definition of activation function?\n- \"if we set\": bold issue.\n- b should be b_e in Theorem 1, right? Also, please recall the definition of the sigmoid function here. Moreover l_max and mu_h seem useless in this theorem... why refer to them?\n- \"if the rows of the weight matrix is\"-> if the rows of the weight matrix are\n\npage 4:\n- Proposition 1 could be stated as a theorem and Th.1 as a corollary (with e=0). The same is true for Proposition 2, I suspect.\n- Again, l_max and mu_h have no influence here...\n- Please, provide the definition of the ReLU function here.
Is this just x -> x_+ ?\n\npage 6:\n- R^+m -> font issue again.\n- \"are maximally incoherent\": what is the precise meaning of this statement?\n- what is the motivation for Theorem 3? This should be discussed.\n- De-noising -> de-noising\n- the discussion after (15) should be made more precise.\n\npage 7:\n- Figures 1 and 2 should be postponed to page 8.\n- in Eq. (16) one needs to know E_h(x) and E_h_i(h_i), but I suspect these quantities are usually unknown to the practitioner. Can the authors comment on that?\n\npage 8:\n- \"the recovery is denoised through thresholding\": where is this step analyzed?\n\npage 9:\n- figure 3: sparseness-> sparsity; also what is the activation function used here?\n- \"are then generate\"->are then generated\n- \"by greedily select\"->by greedily selecting\n- \"the the\"\n- \"and thus pre-process\"-> and thus pre-processing\n\n\nSupplementary:\npage 1:\n- please define \\sigma, and the simple properties of it used throughout the proof.\n\npage 2:\n- g should be g_j (in eq 27 -> 31)\n- overall this proof relies on ingredients such as the ones used for Hoeffding's inequality.\nMost ingredients could be taken from standard tools on concentration (see for instance Boucheron, Lugosi, Massart: \"Concentration Inequalities: A Nonasymptotic Theory of Independence\", 2013).\nMoreover, some elements should be factorized as they are shared among the next proofs. This should reduce the size of the supplementary dramatically.\n\npage 7:\n- Eq. (99): it should be recalled that W_ii=1 here.\n- the upper bound used on \\mu to get equation 105 seems to be in the wrong order.", "This paper considers the following model of a signal x = W^T h + b, where h is an m-dimensional random sparse vector, W is an m by n matrix, and b is an n-dimensional fixed bias vector. The random vector h follows an iid sparse signal model: each coordinate independently has some probability of being zero, and the remaining probability is distributed among nonzero values according to some reasonable pdf/pmf. The task is to recover h from the observation x via activation functions like Sigmoid or ReLU. For example, \\hat{h} = Sigmoid(W x + b).\n\nThe authors then show that, under the random sparsity model of h, it is possible to upper bound the probability P(||h-\\hat{h}|| > \\delta m) in terms of the parameters of the distribution of h and of W and b. In some cases noise can also be tolerated. In particular, if W is incoherent (columns being near-orthonormal), then the guarantee is stronger. As far as I understood, the proofs make sense - they basically use a Chernoff-bound-type argument.\n\nIt is my impression that a lot of conditions have to be satisfied for the recovery guarantee to be meaningful. I am unsure if real datasets will satisfy so many conditions. Also, the usual objective of autoencoders is to denoise - i.e. recover x, without any access to W. The authors' approach in this vein seems to be only empirical. Some recent works on associative memory also assume the sparse recovery model - connections to this literature would have been of interest. It is also not clear why compressed sensing-type recovery using a single ReLU or Sigmoid would be of interest: are there complexity benefits?\n", "Thanks for your comments.\n\nRegarding the questions: of course, real datasets do not exactly satisfy the assumptions made in the paper. It also depends on how we model the data.
For instance, modeling images at the patch level would be conducive to the assumptions made for real images, while modeling them at the image level may not. To further defend that the assumptions would hold at the patch level, consider that independent component analysis (ICA) is commonly used to model real images at the patch level. As discussed in the paper, ICA is a special case of our model where the signal is further assumed to be sparse for recovery using the mechanism discussed in the paper. It is well known that the sparsity assumption usually holds in practice.\n\nRegarding the use of ReLU and Sigmoid, other non-linearities can be used for signal recovery as well, as long as they satisfy certain criteria.\n", "Thank you for your comments.\n\nWe would like to stress that our goal in this paper was to study the signal recovery properties of auto-encoders rather than studying them from a dictionary learning point of view, which is a separate problem on its own.\n\nRegarding not providing an algorithmic perspective:\nWe are interested in properties that would lead to optimal solutions using auto-encoders from a signal recovery perspective, and thus in better understanding auto-encoders. There is no algorithmic perspective since 1) when the dictionary is known, we can get the solution analytically, and 2) when the dictionary is unknown, we solve it using gradient descent. In the second case, we do not have guarantees for the recovery of the hidden signals; however, we have shown that empirically the recovery is strong using the theories that we developed.\n\nRegarding the linearity of the activation function s_d, we apologize for the confusion; we have reworded it to make it clearer. s_d is the decoding activation function; linear activation is more general as it covers a wider numerical range. The latter sentence meant to say that 1) the activations for both s_d and s_e can be non-linear in practice; and 2) since linear s_d can handle the case for non-linear s_d, the previous statement is still true.\n\nOn page 3, p_i and 1-p_i are constants with respect to the data and therefore it suffices to analyse the other terms.\n\nRegarding how to set the value of b_i, as stated in Theorems 1 and 2, b_i can be set analytically based on the weights and p_i. In practice, since we do not know the sparsity level, we can set it in two ways: 1) treat p_i as hyperparameters, or 2) treat p_i as parameters of the model.\n\nE_h[x] and E_x[x] are both the data mean, and E_h[h] is the mean of the hidden activations.\n\nWe have fixed other minor problems mentioned in your reviews.\n", "Thank you for your diligent reviews and detailed comments.\n\nOur results can be extended trivially to other norms using the standard norm equalities; however, a non-trivial bound for specific norms (e.g., the infinity norm) may need additional work.\n\nWe have harmonized the citations as suggested by the reviewer. The following are clarifications of some of the questions.\n\nWhat does \"activation\" refer to?\nWe use the term activation function in accordance with the terminology commonly used for auto-encoders.\n\nThere are cases where the Lasso is non-unique.\nIndeed, there are cases when the solution to the LASSO is not unique (e.g., when the line y=Dw aligns with |w|=c for some c). We were referring to the general case when the solution to y=Dw is not unique at all when D is over-complete. In these cases, some constraint such as an L1 or L2 penalty is needed to make the solution unique.
\n\nin (4) it would be more natural to write f_j(h_j) instead of f(h_j)\nWe use the notation f(h_j) instead of f_j(h_j) to stress that all units are identical.\n\nwhat is the precise meaning of \"distributed\" when referring to representation\n\"Distributed representation\" is a term used in deep learning implying that multiple hidden units participate together to represent one sample, instead of a single hidden unit corresponding to a single sample. It is an efficient way of encoding patterns.\n\nl_max and mu_h seem useless in this theorem\nl_max and mu_h are required for the data generating distribution BINS defined in our paper; they are the sufficient statistics.\n\nReLU\nReLU is x -> max(0, x).\n\n\"are maximally incoherent\": what is the precise meaning of this statement?\nThe vectors in a matrix are maximally incoherent if the minimum of the angle between every pair of vectors is maximized. This angle is given by the Welch bound.\n\nwhat is the motivation for Theorem 3?\nAs mentioned at the beginning of this subsection, the motivation behind Theorem 3 is to gain some insight into the generated data.\n\nQuantities of E_h(x) and E_h_i(h_i)\nYes, the quantities E_h(x) and E_h_i(h_i) are unknown, but notice that this value is equal to W E_x[x], where x is observed. So as long as W can be recovered, the quantity E_h[h] can be computed.\n\nRegarding \"the recovery is denoised through thresholding\", we did not analyze it. We mention this based on the intuition that the signal is recovered with epsilon error with high probability. So if the signal magnitude is large enough compared to the noise, a simple thresholding should work in separating signal from noise.\n\nIn Figure 3, ReLU is used for the continuous recovery and sigmoid is used for the binary case, as mentioned in Theorem 1 in Section 4.1 and Theorem 2 in Section 4.2.\n" ]
[ 4, 5, 5, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_SJ8M9yup-", "iclr_2018_SJ8M9yup-", "iclr_2018_SJ8M9yup-", "HkQOWPieM", "HySa8MQgz", "B1NjQYOeG" ]
iclr_2018_SJu63o10b
UNSUPERVISED METRIC LEARNING VIA NONLINEAR FEATURE SPACE TRANSFORMATIONS
In this paper, we propose a nonlinear unsupervised metric learning framework to boost the performance of clustering algorithms. Under our framework, nonlinear distance metric learning and manifold embedding are integrated and conducted simultaneously to increase the natural separations among data samples. The metric learning component is implemented through feature space transformations, regulated by a nonlinear deformable model called Coherent Point Drifting (CPD). Driven by CPD, data points can reach a higher level of linear separability, which is subsequently picked up by the manifold embedding component to generate well-separable sample projections for clustering. Experimental results on synthetic and benchmark datasets show the effectiveness of our proposed approach over the state-of-the-art solutions in unsupervised metric learning.
rejected-papers
The paper is well written overall. However, the algorithmic framework has limited novelty, and the reviewers are unanimously unconvinced by the experimental results, which show only marginal improvements on smallish UCI datasets.
train
[ "HJAY2L-ez", "BJfgoy9xz", "rkX795cez" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed an unsupervised metric learning method, which is designed for clustering and cannot be used for other problems. The authors argued that unsupervised metric learning should not be a pre-processing method for the following clustering method due to the lack of any similarity/dissimilarity constraint. Consequently the proposed formulation plugs a certain metric learning objective, which is called CPD given in (1) and (3), into the k-means objective (2). After some linear algebra, it arrives the objective in (10) or its regularized version in (11). In order to solve (11), an alternative optimization is used to iteratively obtain the optimal Y given fixed Psi and obtain optimal Psi given fixed Y. More than one page of space is for proving the convexity of the latter subproblem.\n\nThe paper is overall well-written and I have only 1 question about the clarity: there is a very short Sec. 2.4 saying that \"So far, we developed and applied our proposed CPD-UML under input feature spaces. However, it can be further kernelized to improve the clustering performance for more complicated data.\" I found this quite confusing. Just after (1), it is mentioned that Psi is the weight matrix for the Gaussian kernel functions and g is the Gaussian kernel function; it is also mentioned between (16) and (17) that G is a kernel (Gram) matrix with the Gaussian kernel. This issue of inconsistency should be clarified.\n\nThe main issues of the paper are the motivation and the experiments. While the argument of the authors is partially true, it is quite difficult to distinguish unsupervised metric learning from unsupervised feature/deep learning nowadays. The CPD model is limited in the nonlinearity to me: as shown in (1), the nonlinear function v is nonlinear in x via the so-called empirical kernel map G, and more importantly v is linear in its parameters namely Psi. If we would like to use a nonlinear-in-parameter model such as deep networks for v, the optimization for Y still works but the optimization for Psi can no longer work. This means the proposed learning objective is not model-free.\n\nThe experiments are correct to me, where 3 performance measures of clustering are reported for 10 methods on 6 datasets. However, all the datasets are quite small and thus cannot represent the most reliable comparisons of these methods. Moreover, the computational complexity of the proposed method is not discussed, but I guess it is quite high since both of the alternative subproblems require eigenvalue decomposition or solving a linear system.", "This paper presents a scheme for unsupervised metric learning using coherent point drifting (CPD)-- the core idea is to learn a parametric model of CPD that shifts the input points such that the shifted points lead to better clustering in a K-Means setup. Following the work of Myronenko & Song, 2010, this paper uses a linear parametric model for the drift (in CPD) after mapping the input points to a kernel feature space using an RBF kernel. The CPD model is directly used within the KMeans objective -- the drift parameter matrix and the KMeans cluster assignment matrix are jointly learned using block-coordinate descent (BCD). The paper uses some interesting properties of the CPD model to derive an efficient optimization solver for the BCD subproblems. 
Experiments are provided on UCI datasets and demonstrate some promise.\n\nPros:\n1) The idea of using CPD for unsupervised metric learning is quite interesting.\n2) The exploration into the convexity of the CPD parameter learning -- although straightforward -- is also perhaps interesting.\n3) The experiments show some promise.\n\nCons:\n1) Lacking motivation/intuition\nThe main motivation for the approach, as far as I understand, is to learn cluster boundaries for non-linear data -- where K-Means fails. However, it is unclear to me why one would need to use K-Means for non-linear data; why not use kernelized kmeans? The proposed CPD model is also essentially learning a linear transformation of the kernelized feature space. So, in contrast to kernelized kmeans, what is the advantage of the proposed framework? I see there is an improvement in performance compared to kernelized kmeans; however, intuitively I do not see where that improvement comes from. Perhaps providing some specific examples/scenarios or graphic illustrations would help the reader appreciate the method.\n\n2) Novelty/Significance\nI think the novelty of this paper is perhaps marginal. The main idea is to directly use CPD from a prior work in a KMeans setup. There are a few parameters to be estimated in the joint learning objective, for which a block-coordinate descent strategy is proposed. The derivations are perhaps straightforward. As noted above, it is not clear what the significance of this combination is or how it improves performance. As far as CPD goes, it looks to me like the performance depends heavily on the choice of the Gaussian RBF bandwidth parameter, and it is not clear to me how such a parameter can be selected in an unsupervised setting, when class labels are not available for cross-validation. The paper does not provide any intuitions on this front.\n\n3) Technical details.\nThere are a few important details that I do not quite follow in the paper.\n\na) CPD is originally designed for the point matching problem, and its parametric form (\\Psi) is derived using a different, Tikhonov-regularized regression model as described just above (1). The current paper directly uses this parametric form in a KMeans setup and solves the resultant problem jointly for the CPD parameter and the clustering assignment. However, it is not clear to me how the paper can use the optimal parametric form for Tikhonov regression as the optimum for the clustering problem. Ideally, I would think that when formulating the joint optimization for the clustering problem, the optimal functional v(x) should also be learned/derived for the clustering problem, or some proof should be provided showing the functionals are the same. Without this, I am not convinced that the proposed formulation indeed learns the optimum drifts and the clusters jointly.\n\nb) The subproblem on Y (the assignment matrix) looks like a standard SVD objective. It is not clear why it would be necessary to resort to Ky Fan's theorem for its optimal solution.\n\nc) The paper talks about manifold embedding in the abstract and in Sec. 2.2. However, it appears to be a straightforward dimensionality reduction (PCA) of the data. If not, what is the precise manifold that is described here?\n\nd) In Eq. 9, the definition of Y_c is incorrect and unclear; p was defined earlier as a vector of ones.\n\ne) Although the assignment matrix Y has orthogonal columns, it is a binary matrix. If it is approximated by an orthonormal frame, how do you reduce it to a binary matrix?
Does taking the largest values in each column suffice -- it does not look like it. However, in the paper, Y is relaxed to an orthonormal frame, which is estimated using PCA; the data points are then projected onto this low-dimensional subspace, and then k-means is applied to get the Y matrix. The provided math does not support any of these steps. Thus, the technical exposition is imprecise and the solutions appear rather heuristic.\n\nf) The kernelized variant of the proposed scheme, described in Sec. 2.4, is missing important details. How precisely is the kernelization done? How is CPD extended to that setup, what would the Gaussian kernel G be in that case, and what does \\Psi signify?\n\ng) In Figure 2, it seems that kernel kmeans and the proposed CPD-UML show similar cluster boundaries for low kernel widths. Why are the high kernel widths beneficial?\n\n4) Experiments\nThere is some improvement from the proposed method; however, overall the improvements are marginal. The discussion is missing any analysis of the results: why does it work at times, how much does it improve on kernelized kmeans and why, and what is the advantage over other competitive schemes?\n\nIn summary, while there is minor novelty in connecting two separate ideas (CPD and UML) into a joint UML setup, the paper lacks sufficient motivation for proposing this setup (in contrast to, say, kernelized kmeans), the technical details are unconvincing, and the experiments lack sufficient details or analysis. Thus, I do not think this paper is ready to be accepted in its current form.\n\n\n", "This paper proposes a nonlinear unsupervised metric learning framework. The authors combine Coherent Point Drifting and the k-means approach under the trace minimization framework. However, I am afraid that the novelty and insight of this work are not sufficient for acceptance.\n\nPros:\nThe paper is well written and easy to follow.\n\nCons:\n1 The novelty of this paper is limited.\nThe authors mainly combine Coherent Point Drifting and k-means under the trace minimization framework. The trace minimization is then solved with an EM-like iterative minimization.\nHowever, trace minimization is already well explored and this paper provides little insight. Furthermore, there is no theoretical guarantee on what this iterative minimization approach will converge to.\n\n2 For a method with limited novelty, comprehensive experiments are needed to verify its effectiveness. However, the experimental setting of this paper is biased.\nAn important line of work, namely deep-learning-based clustering, is totally missing.\nComprehensive comparisons with deep-learning-based clustering methods are required.\n" ]
[ 6, 4, 4 ]
[ 4, 5, 4 ]
[ "iclr_2018_SJu63o10b", "iclr_2018_SJu63o10b", "iclr_2018_SJu63o10b" ]
iclr_2018_SJDYgPgCZ
Understanding Local Minima in Neural Networks by Loss Surface Decomposition
To provide principled ways of designing proper Deep Neural Network (DNN) models, it is essential to understand the loss surface of DNNs under realistic assumptions. We introduce several aspects that help in understanding the local minima and overall structure of the loss surface. The parameter domain of the loss surface can be decomposed into regions in which the activation values (zero or one for rectified linear units) are consistent. We found that, in each region, the loss surface has properties similar to those of linear neural networks, where every local minimum is a global minimum. This means that every differentiable local minimum is the global minimum of the corresponding region. We prove this for a neural network with one hidden layer using rectified linear units under realistic assumptions. There are poor regions that lead to poor local minima, and we explain why such regions exist even in overparameterized DNNs.
rejected-papers
The reviewers are unanimous in their opinion that the theoretical results in this paper are of limited novelty and significance. Several parts of the paper are not presented clearly enough. As such the paper is not ready for ICLR-2018 acceptance.
train
[ "HJfKMWtxz", "HJ_5MStgf", "B1kVUQjxM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to study the loss surfaces of neural networks with ReLU activations by viewing the loss surface as a sum of piecewise linear functions at each point in parameter space, i.e. one piecewise linear function per sample. The main result is that every local minimum of the total surface is a global minimum of the region where the ReLU activations corresponding to each sample do not change. \n\nQuality:\n- The paper's claims are correct, however most of the theoretical results follow easily from the definitions and I don't see how they are very useful. What is interesting about other recent theoretical works, which show (subject to various assumptions) that local minima are roughly equivalent to the global minimum, is that they compare local minima across all regions of the parameter space and show they are similar. Here, the results only hold within each local region of the space, and they don't say anything about how the global minima in different regions compare in terms of loss. Knowing that a local minumum is a global minimum of a local region is not very useful because the global minimum of that local region could still be much worse than that of the other regions. \n\n\nClarity:\n- The main claims/results in the paper are not stated very clearly, and the authors are not clear about what the contributions of the paper are or why they are useful. \n\nOriginality:\n- Studying loss surfaces by viewing ReLU networks as piecewise linear functions is by now standard. \n\nSignificance:\n- It is not clear how these results may be applied in practice or open new directions for future theoretical work. \n\n", "The authors propose investigating regions of the the parameter space under which the activations (over the entire training data set) remain unchanged. They conjecture, and then attempt to argue for a simple network, that, over these regions, the loss function exhibits nice properties: all local minima are global minima, all other local optima are saddle points, and the function is neither convex nor concave on these regions. The proof of this statement seems relatively straightforward and appears to be correct. Unfortunately it only applies to a special case. Second, the authors argue that the loss function for their simple network has poor local minima. Finally, the authors conclude with a simple set of experiments exploring the accuracy of random activations. Overall, I found the main idea of the paper relatively straightforward, but the presentation is a bit awkward in places.\n\nI think the work is heading in an interesting direction, but I found it somewhat incremental. It's nice to know that the loss function (squared loss in this case) has these properties, but as there are exponentially many regions corresponding to the different activations, it is unclear what the practical consequences of these theoretical observations are. Could the authors elaborate on this?\n\nAnother question: is it really true that the non-differentiability of the functions involved creates significant issues in practice (not theoretically) - isn't the set of all points with this property of measure zero?\n", "This paper attempts to extend analytical results pertaining to the loss surface of linear networks to a nonlinear network with a single hidden ReLU layer. Unfortunately though, at this point I feel that the theoretical results, which constitute the majority of the paper, are of limited novelty and/or significance. 
However, I still remain very open to counterarguments to this opinion and the points raised below.\n\nFirst, I don't believe that Lemma 2.2 is precisely true, at least as currently stated. In particular, it would appear that L_f could have a differentiable local minimum that is only a saddle point in L_gA. For example, if there is a differentiable valley in L_f that terminates on the boundary of an activation region, then this phenomenon could occur, since a local-minimum-creating boundary in L_f might just lead to a saddle point in L_gA. Regardless, the basic premise of this result is quite straightforward anyway.\n\nTurning to Lemmas 2.3 and 2.4, I don't understand the relevance of these results. Where are they needed later or applied? Additionally, Theorem 2.5 is very related to results already proven for linear networks in earlier work (Kawaguchi, 2016), so there is little novelty here.\n\nThere also seem to be issues with Corollary 2.7, which as an aggregation result can be viewed as the main contribution of the paper. Part (1) of this corollary is obvious. Part (2) depends on Lemma 2.2, which as stated previously may be problematic. Most seriously though, Part (3) only considers critical points (i.e., derivative equal to zero), not local minima occurring at non-differentiable locations. To me this greatly mutes the value of this result, and the contribution of the paper overall, because local minima are *very* likely to occur on the boundary between activation regions at non-differentiable points (e.g. as in Figure 2). I therefore don't understand the utility of only considering the differentiable local minima.\n\nOverall though, the main point that within areas of fixed activation the network behaves much like a linear network (with all local minima also global minima when constrained within each region) is not especially noteworthy, because it provides no pathway for comparing minima from different activation regions, which is the main problem to begin with.\n\nBeyond this, the paper makes a few less-technical observations regarding bad local minima. For example, in Section 3.1 the argument is made that the linear region created when all activations are equal to one will have a local minimum, and this minimum might be suboptimal. However, these arguments pertain to the surrogate function L_gA, and if the minimum of L_gA occurs on the boundary with another activation region, then this solution might not be a local minimum of L_f, the real objective we care about. Am I missing something here?\n\nAs for Section 4.2, the paper needs to do a better job of explaining exactly what is being shown in Table 2. I can maybe guess, but it is not at all clear what the accuracy percentage is referring to, nor precisely how rich and random minima are computed. Also, the assumption that P(a = 1) = P(a = 0) = 0.5 is not very realistic, although admittedly this type of simplification is sometimes adopted in the literature.\n\nMinor comment:\n* Near the beginning of the introduction, it is claimed that \"the vanishing gradient problem has been solved by using rectified linear units.\" This is not actually true, and portends problematic claims later in the paper." ]
[ 4, 5, 4 ]
[ 4, 4, 4 ]
[ "iclr_2018_SJDYgPgCZ", "iclr_2018_SJDYgPgCZ", "iclr_2018_SJDYgPgCZ" ]
iclr_2018_BJgd7m0xRZ
Unsupervised Adversarial Anomaly Detection using One-Class Support Vector Machines
Anomaly detection discovers regular patterns in unlabeled data and identifies the non-conforming data points, which in some cases are the result of malicious attacks by adversaries. Learners such as One-Class Support Vector Machines (OCSVMs) have been successfully used in anomaly detection, yet their performance may degrade significantly in the presence of sophisticated adversaries, who target the algorithm itself by compromising the integrity of the training data. With the rise in the use of machine learning in mission critical day-to-day activities where errors may have significant consequences, it is imperative that machine learning systems are made secure. To address this, we propose a defense mechanism that is based on a contraction of the data, and we test its effectiveness using OCSVMs. The proposed approach introduces a layer of uncertainty on top of the OCSVM learner, making it infeasible for the adversary to guess the specific configuration of the learner. We theoretically analyze the effects of adversarial perturbations on the separating margin of OCSVMs and provide empirical evidence on several benchmark datasets, which shows that by carefully contracting the data in low dimensional spaces, we can successfully identify adversarial samples that would not have been identifiable in the original space. The numerical results show that the proposed method improves OCSVM performance significantly (2-7%).
rejected-papers
The reviewers have unanimously expressed concerns about clarity, novelty, sound theoretical justification and intuitive motivation of the proposed approach.
test
[ "ByZrbWcxG", "BJLMRY2gG", "SkrFXoAef" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a defense against attacks on the security of one-class SVM based anonaly detectors. The core idea is to perform a random projection of the data (which is supposed to decrease the impact from adversarial distortions). The approach is empirically tested on the following data: MNIST, CIFAR, and SVHN.\n\nThe paper is moderately well written and structured. Command of related work is ok, but some relevant refs are missing (e.g., Kloft and Laskov, JMLR 2012). The empirical results actually confirm that indeed the strategy of reducing the dimensionality using random projections reduces the impact from adversarial distortions. This is encouraging. What the paper really lacks in my opinion is a closer analysis of *why* the proposed approach works, i.e., a qualitative empirical analysis (toy experiment?) or theoretical justification. Right now, there is no theoretical justification for the approach, nor even a (in my opinion) convincing movitation/Intuition behind the approach. Also, the attack model should formally introduced.\n\nIn summary, I d like to encourage the authors to further investigate into their approach, but I am not convinced by the manuscript in the current form. It lacks both in sound theoretical justification and intuitive motivation of the approach. The experiments, however, show clearly advantages of the approach (again, here further experiments are necessary, e.g., varying the dose of adversarial points). ", "In this paper, the authors explore how using random projections can be used to make OCSVM robust to adversarially perturbed training data. While the intuition is nice and interesting, the paper is not very clear in describing the attack and the experiments do not appropriately test whether this method actually provides robustness.\n\nDetails:\nhave been successfully in anomaly detection --> have been successfully used in anomaly detection\n\n\"The adversary would select a random subset of anomalies, push them towards the normal data cloud and inject these perturbed points into the training set\" -- This seems backwards. As in the example that follows, if the adversary wants to make anomalies seem normal at test time, it should move normal points outward from the normal point cloud (eg making a 9 look like a weird 7).\n\nAs s_attack increases, the anomaly data points are moved farther away from the normal data cloud, altering the position of the separating hyperplane. -- This seems backwards from Fig 2. From (a) to (b) the red points move closer to the center while in (c) they move further away (why?). The blue points seem to consistently become more dense from (a) to (c).\n\nThe attack model is too rough. It seems that without bounding D, we can make the model arbitrarily bad, no? Assumption 1 alludes to this but doesn't specify what is \"small\"? Also the attack model is described without considering if the adversary knows the learner's algorithm. Even if there is randomness, can the adversary take actions that account for that randomness?\n\nDoes selecting a projection based on compactness remove the randomness?\n\nExperiments -- why/how would you have distorted test data? Making an anomaly seem normal by distorting it is easy.\n\nI don't see experiments comparing having random projections and not. 
This seems to be the fundamental question -- do random projections help in the train_D | test_C case?\n\nThe experiments don't vary the attack much to understand how robust the method is.", "Although the problem addressed in the paper seems interesting, there is a lack of evidence to support some of the arguments that the authors make. Moreover, the paper does not contribute novelty to representation learning; therefore, it is not a good fit for the conference. Detailed critiques are as follows:\n1. The idea proposed by the authors seems quite simple. It just performs random projections 1000 times and chooses the set of projection parameters that results in the highest compactness as the dimensionality reduction model parameter before the one-class SVM.\n2. It says in the experiments part that the authors have used 3 different S_{attack} values, but they only present results for S_{attack} = 0.5. It would be nicer if they included results for all S_{attack} values that they have used in their experiments, which would also give the reader insight into how the anomaly detection performance degrades when the S_attack value changes.\n3. The paper claims that the nonlinear random projection is a defence against the adversary due to its randomness, but there are no results in the paper proving that other, non-random projections are susceptible to an adversary designed to target that projection mechanism while the nonlinear random projection is able to get away with it. PCA, as a non-random projection, would be a nice baseline to compare against.\n4. The paper seems to misuse the term \"False positive rate\" as the y label of Figure 3(d/e/f). The definition of false positive rate is FP/(FP+TN), so if the FPR=1 it means that all negative samples are labeled as positive. So it is surprising to see FPR=1 in Figure 3(d) when the feature dimension is 784 while the F1 score is still high in Figure 3(a). From what I understand, the paper means to present the percentage of adversarial examples that are misclassified instead of all the anomaly examples that get misclassified. The paper should come up with a better term for that evaluation.\n5. The conclusion that the robustness of the learned model w.r.t. integrity attacks increases when the projection dimension becomes lower cannot be drawn from Figure 3(d). More experiments at more dimensionalities are needed to prove that.\n6. In the Appendix B results part, the word 'S_attack' is sometimes mistyped. Also, the values in the \"distorted/distorted\" columns in Table 5 do not match up with the ones in Figure 3(c)." ]
[ 4, 4, 4 ]
[ 4, 4, 4 ]
[ "iclr_2018_BJgd7m0xRZ", "iclr_2018_BJgd7m0xRZ", "iclr_2018_BJgd7m0xRZ" ]
iclr_2018_HyiRazbRb
Demystifying overcomplete nonlinear auto-encoders: fast SGD convergence towards sparse representation from random initialization
Auto-encoders are commonly used for unsupervised representation learning and for pre-training deeper neural networks. When the activation function is linear and the encoding dimension (width of the hidden layer) is smaller than the input dimension, it is well known that the auto-encoder is optimized to learn the principal components of the data distribution (Oja, 1982). However, when the activation is nonlinear and when the width is larger than the input dimension (overcomplete), the auto-encoder behaves differently from PCA, and in fact is known to perform well empirically for sparse coding problems. We provide a theoretical explanation for this empirically observed phenomenon, when the rectified linear unit (ReLU) is adopted as the activation function and the hidden-layer width is set to be large. In this case, we show that, with significant probability, initializing the weight matrix of an auto-encoder by sampling from a spherical Gaussian distribution followed by stochastic gradient descent (SGD) training converges towards the ground-truth representation for a class of sparse dictionary learning models. In addition, we can show that, conditioned on convergence, the expected convergence rate is O(1/t), where t is the number of updates. Our analysis quantifies how increasing the hidden-layer width helps the training performance when random initialization is used, and how the norm of the network weights influences the speed of SGD convergence.
rejected-papers
The reviewers have unanimously expressed strong concerns about the technical correctness of the theoretical results in the paper. The paper should be carefully revised and checked for technical errors. In its current form, the paper is not suitable for acceptance at ICLR 2018.
train
[ "HyutwEMJG", "HkErJ5eez", "SJyMoI5gG", "H1vdfKpXG", "rJwXE9amz", "rkofl9aXM", "H1wvdY6mz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors study the convergence of a procedure for learning\nan autoencoder with a ReLu non-linearity. The procedure is akin\nto stochastic gradient descent, with some parameters updated at\neach iteration in a manner that performs optimization with respect\nto the population risk.\n\nThe autoencoders that they study tie the weights of the decoder to\nthe weights of the encoder, which is a common practice. There\nare no bias terms in the decoder, however. I do not see where they\nmotivate this restriction, and it seems to limit the usefulness of\nthe bias terms in the encoder.\n\nTheir analysis is with respect to a mixture model. This is described\nin the abstract as a sparse dictionary model, which it is, I guess.\nThey assume that the gaussians are very well separated. \n\nThe statement of Theorem says that it concerns Algorithm 1. The\ndescription of Algorithm 1 describes a procedure, with an\naside that describes a \"version used in the analysis\".\n\nThey write in the text that the rows of W^t are projected onto\na ball of radius c in each update, but this is not included\nin the description of Algorithm 1. The statement of Theorem 1\nincludes the condition that all rows of W^t are always equal to\nc, but this may not be consistent with the updates given\nin Algorithm 1. My best guess is that they intend of\nthe rows of W^t to be normalized after each update (which is\ndifferent than projecting onto the ball of radius c). This\naspect of their procedure seems restrict its applicability.\n\nSuccessful initialization looks like a very strong condition to\nme, something that will occur exponentially rarely, as a function\nof d. (See Fact 10 of \"Agnostically learning halfspaces\", by Kalai, et al.)\nFor each row of W^*, the probability that any one row of W^o will be\nclose enough is exponentially small, so exponentially many rows\nare needed for the probability that any row is close enough to\nbe, say, 1/2. I don't see anything in the conditions of Theorem 1\nthat says that n is large relative to d, so it seems like its\nclaim includes the case where k and n are constants, like 5.\nBut, in this case, it seems like the claim of the probability\nof successful initialization cannot be correct when d is large.\n\nIt looks like, after \"successful initialization\", especially\ngiven the strong separation condition, the model as already\n\"got it\". In particular, the effect of the ReLUs seems to\nbe limited in this regime.\n\nI have some other concerns about correctness, but I do not think\nthat the paper can be accepted even if they are unfounded.\n\nThe exposition is uneven. They tell us that W^T is the transpose\nof W, but do not indicate that 1_{a^t (x') > 0} is a componentwise\nindicator function, and that x' 1_{a^t (x') > 0} is its\ncomponentwise product with x' (if this is correct).\n\n\n", "The paper considers training single-hidden-layer auto-encoders, using stochastic gradient descent, for data generated from a noisy sparse dictionary model. The main result shows that under suitable conditions, the algorithm is likely to recover the ground-truth parameters. \n\nAlthough non-convex dictionary learning has been extensively studied for linear models, extending such convergence results to nonlinear models is interesting, and the result (if true) would be quite nice. Unfortunately (and unless I missed something), there appears to be a crucial bug in the argument, which requires that random initialization lead to dictionary elements sufficiently close to the ground truth. 
Specifically, Definition 1 and Lemma 1 give a bound on the success probability, which is exponentially small in the dimension d (as it should be, since it essentially bounds the probability that an O(1)-norm random vector has \\Omega(1) inner product with some fixed unit vector). However, the d exponent disappears when the lemma is used to prove the main theorem (bottom of pg. 10), as well as in the theorem statement, making it seem that the success probability is large. Of course, a result which holds with exponentially small probability is not very interesting. I should also say that I did not check the rest of the proof carefully.\n\nA few relatively more minor issues:\n- The paper makes the strong assumption that the data is generated from a 1-sparse dictionary model. In other words, each data point is simply a randomly-chosen dictionary element, plus zero-mean noise. With this model, dictionary learning is quite easy and could be solved directly by other methods (although I see the value of analyzing specifically the behavior of SGD on auto-encoders). \n- To make things go through, the paper makes a non-trivial assumption on how the bias terms are updated (not quite according to SGD). But unless I'm missing something, a bias term isn't even needed to learn in their model, so wouldn't it be simpler and more natural to just assume that the auto-encoder doesn't have a bias term (i.e., x -> W's(Wx))?\n\n", "This paper shows that an idealized version of stochastic gradient descent converges when learning autoencoders with ReLU non-linearities under strong sparsity assumptions. Convergence rates are also determined. The result is another one in the emerging line of proving convergence guarantees for non-convex optimization problems arising in machine learning, and aims to explain certain phenomena experienced in practice.\n\nThe paper is generally nicely written, providing intuitions, but there are several typos (both in the text and in the math, e.g., missing indices), which should also be corrected.\n\nOn the negative side, while the proof technique in general looks plausible, there seem to be some mistakes in the derivations, which must be corrected before the paper can be accepted. Also, the assumptions in the paper seem quite restrictive, and their implications are not discussed thoroughly. \n\nThe assumptions are the following:\n1. The input data is coming from a mixture distribution, in the form x=w_I + eps, where {w_1,...,w_k} is a collection of unit vectors, I is uniform in {1,...,k}, and eps is some noise (independent for each sample). \n2. The maximum norm of the noise is O(1/k).\n3. The number n of hidden neurons in the autoencoder is Omega(k) (this is not explicitly assumed but is necessary to make the probability of \"incorrect\" initialization small as well as for the results to hold).\n\nUnder these assumptions it is shown that the weights of the autoencoder converge to the centers {w_1,...,w_k} (i.e., for any i the autoencoder has at least one weight converging to w_i). The rate of convergence depends on the coherence of the vectors w_i: the less coherent they are, the faster the convergence is.\n\nFirst, notice that some assumptions are missing from the main statement, as the error probability delta is certainly connected to the probability of incorrect initialization: when n=1<k, the convergence result clearly cannot hold.
This comes from the mistake that in Theorem 3 you state the bound for the probability P(F^\\infty) instead of the conditional probability P(F^\\infty|E_o) (this is present everywhere in the proof). Theorem 3 should also depend on delta_o, which is used in the definition of F^\\infty. \n\nTheorem 2 also seems incorrect. Intuitively, the question is why it cannot happen that two neurons contribute to reproducing a given w_i, and so neither of their weights converges to w_i: e.g., assuming that {w_1,...,w_k,w_1',...,w_k'} form an orthogonal system and the noise is 0, the weight matrix of size n=2k defined as W_{2i-1,*}^T = 1/sqrt{2}(w_i + w'_i) and W_{2i,*}^T=1/sqrt{2}(w_i - w'_i), i \\in [k], with 0 bias can exactly recover any x=w_i (indeed, W_{2j-1,*} x = W_{2j,*} x = 1/sqrt{2}, while the other products are 0, and so W^T W x = W^T W w_j = 1/sqrt{2}(W_{2j-1,*}+W_{2j,*})^T = w_j). Then SGD does not change the weights and hence cannot recover the original weights {w_i}; in particular, it cannot increase the coherence in any step, contradicting Theorem 2. This counterexample can be extended even to the situation when k>d, as--in fact--we only need the existence of a single j such that w_j and w'_j are orthogonal and also orthogonal to the other basis vectors.\n\nThe assumptions are also very strange in the sense that the norm of the noise is bounded by O(1/k); thus, the more modes the input distribution has, the more separable they become. What motivates this scaling? Furthermore, the parameters of the algorithm for which the convergence is claimed heavily depend on the problem parameters, which are not known. How can you instantiate the algorithm then (accepting the ideal definition of b)? What are the consequences?\n\nGiven the above, at this point I cannot recommend the paper for acceptance. However, if the above problems are resolved, I would be very happy to see the paper at the conference.\n\n\nOther comments\n-----------------------\n- Add a short derivation of why the weights of the autoencoder should converge to the w_i.\n- Definition 3: C_j is not defined in the main text.\n- While it is mentioned multiple times that the interesting regime is d<n, this is actually never used, nor needed (personally, I have never seen such an autoencoder--please give some references). What is really needed is n>k, which is natural if one wants to preserve the information, and also k>d for a rich family of distributions.\n- The area of the spherical cap is well understood (up to multiplicative constants), and better bounds than yours are readily available: with a cap of height 1-t, for sqrt{2/d}<t<1, the relative surface of the cap is between P/6 and P/2, where \nP=1/(t \\sqrt{d}) (1-t^2)^{(d-1)/2}; see, e.g., A. Brieden, P. Gritzmann, R. Kannan, V. Klee, L. Lovasz, and M. Simonovits. Deterministic and randomized polynomial-time approximation of radii. Mathematika. A Journal of Pure and Applied Mathematics, 48(1-2):63–105, 2001. \n- The notation section should be brought forward (or referred to the first time the notation is actually used).\n- Instead of unit spherical Gaussian, you could simply say uniform distribution on the unit sphere.\n- While Algorithm 1 is called \"norm-controlled SGD training,\" it does not control the norm at all.\n\n\n", "We are grateful for your careful examination of our paper. We have already corrected several typos in our statements (we have not found mistakes in our proofs) and simplified our analysis and parameters.
Below are some of our clarifications, which we hope can help clear up some doubts you had about our analysis.\n\n1. delta is included in the statement of Theorem 1 (our main theorem); it is in fact not related to the probability of successful initialization, but is a handy parameter that helps us control the probability of martingale convergence in the later stage of the algorithm (addressed in Theorem 3).\n\n2. Regarding your question about Theorem 3, the event F^{\\infty} in fact implies E^o, so with or without conditioning on E^o, the probability is the same.\n\n3. For Theorem 2, it can happen that \"two neurons contribute to reproducing a given w_i\" (in fact, this is exactly what we want by adding more neurons, and it is beneficial as revealed by our analysis). But in your example, it will happen that one neuron will contribute equally to reproducing two different ground-truth dictionary items; this will not happen under our definition of \"active\" neurons by the \"unique firing condition\", which is a prerequisite to proving Theorem 2. \n\nEssentially, the unique firing condition is guaranteed by our definition of successful initialization (the inner product between any dictionary item and at least one of the (normalized) neurons is required to be strictly larger than 1/sqrt(2)).\nSo neurons taking the specific values given in your example are considered effectively \"dead\" in our analysis (g(s)=0).\nYour example perhaps also illustrates why the bias term is beneficial to keep; the bias will serve as a threshold to filter out neurons that are close to the \"decision boundary\" and not specialized to learning a single dictionary item.\n\n4. We can provide some rough intuition as to why the norm of the noise depends inversely on k, i.e., the true number of dictionary items; the coherence-to-noise ratio can be viewed as the \"signal-to-noise\" ratio of our model. Intuitively, noise cannot scale larger than signal (coherence). In sparse dictionary learning models, coherence usually scales inversely with the number of dictionary items. The typical scale is coherence=1/sqrt(k), while in our case it is 1/k, which is admittedly worse. However, previous theoretical guarantees usually need to know the exact value of the coherence (e.g., see Rangamani et al., cited in the updated version of our paper) and thus set the thresholding parameter using this knowledge, while in our case we automatically adjust the bias term using data (which is more practical than the theoretical thresholding method).\n\nAlso, the norm of the noise does not actually have to be bounded in a deterministic sense. Thanks for pointing this out. We added a footnote to explain this in our paper: we can relax this and assume that the noise has, e.g., sub-Gaussian tails. \n\n5. While proving the success of our SGD variant depends on knowing either upper or lower bounds on certain model parameters, we do not \"heavily\" depend on them. This is especially true in the updated version of our paper, where there are only two model parameters, k and \\lambda, left, representing the number of true dictionary items and the incoherence. \nFor k, we only need a loose lower bound (as part of setting our norm parameter). In contrast, for almost all clustering problems, for example, the number of clusters, which corresponds to the number of dictionary items in this case, needs to be known exactly.\nFor \\lambda, we only need an upper bound to set our learning rate parameter.\nThe bias update we use can also be well approximated by sampling the data.
In contrast, e.g., the recent related work of Rangamani et al., who also study ReLU-activated overcomplete autoencoders, set their bias term using exact knowledge of the incoherence.\n\nIn our updated paper, we added a Related work section, and added more discussion motivating why the case n>d is relevant (this is in fact known as \"overcomplete\" dictionary learning in the literature). We hope our added explanations and discussions can help you better appreciate our paper.\n\n", "We thank all reviewers for spending your time reviewing our work and providing insightful feedback. \n\nAdmittedly, our first submission was under a very limited time budget and hence bug-prone. We have corrected several bugs/typos mentioned by reviewers and beyond. \n\nHowever, after repeatedly going through our proofs, we have not found any fatal mistake in the proof. Furthermore, we have devoted a lot of time to greatly simplifying our analysis. As a result:\n\n1. We were able to eliminate some model parameters and hard-to-parse interdependence between parameters. Now, our results are stated in a much more crisp way.\n\n2. Regarding the motivation for studying over-complete dictionary learning, we also added references and discussion in the paper. Moreover, we added a Related work section to discuss recent advances in studying two-layer auto-encoders, and compare our results with existing ones.\n\n3. Regarding your concern about the success probability being exponentially dependent on dimension, we have provided three explanations in our response to Reviewer 2's comments. Since our submission, we have also explored and added a performance guarantee for another form of random initialization, initializing by randomly sampling data points. With data initialization, we are able to provide a much stronger and more realistic guarantee on the success probability: if the network width is at least of order \\Omega(k^3), then with high probability, successful initialization can be guaranteed. In contrast, with Gaussian initialization, we need the network width to grow as \\Omega(k^d).\n\nThank you\n", "We thank Reviewer 3 for your comments. Here are our responses to your questions/doubts.\n\n1. Regarding your concern about the bias: in our case, we especially want to include the bias term in the encoder because, when used together with the ReLU activation, we view it as an automatically adjusted threshold that cuts small (noisy) signals and only lets the strongest signals pass: note that a neuron w_j will only fire if w_j x + b > 0 according to the ReLU activation. So the negative bias -b can be viewed as controlling the firing threshold level (which in turn controls sparsity). We have added these explanations in our updated submission.\n\n2. Regarding your concern about the difference between the bias term used in our analysis and the one stated in Algorithm 1: first, we want to stress that the theoretical quantity we analyze can be estimated from samples (using the empirical version in Algorithm 1). In fact, the empirical version is an unbiased estimator of the theoretical quantity we analyze, just like how the stochastic gradient in SGD is an unbiased estimator of the true gradient. \nSecond, the fact that our bias term can be approximated from data is already an improvement when compared to previous works (we added a Related work section in our updated version).
For example, in another recent work studying ReLU-activated two-layer weight-tied autoencoders (Rangamani et al., 2017), the bias term is fixed to be a function of model parameters, including the incoherence, which is typically not known to an algorithm. \n\n3. Your guess is correct, we forgot to include the normalization step in Algorithm 1, and thanks for pointing out this bug. However, the normalization step (as discussed in the original and current version of our paper) is extremely common in deep learning. We also observed empirically that, if we do not control the norm of the weights, then when training with a moderately large dictionary size, vanilla SGD usually results in NaN weights and the training procedure becomes highly unstable. In deep learning, another common practice is to normalize the gradient (this is widely known as \"gradient clipping\"), which we believe has a similar effect to weight normalization. \n\nIn fact, we see being able to account for the normalization step in our SGD analysis as one of the strengths and interesting points of our paper, because such tricks are very common in practice but lack a theoretical justification.\n\n4. Thanks for pointing out another bug. We did accidentally drop the dependence on dimension in our statement about the success probability of random initialization. Regarding your concern about the success probability of initialization depending exponentially on d, please refer to our detailed response to Reviewer 2.\n", "We thank Reviewer 2 for your comment. As pointed out by Reviewer 3 as well, there was a typo in the statement of the success probability of initialization, where we dropped the exponent in d. \n\nRegarding your concern about the exponential dependence on d, here we would like to provide three observations:\n\n1. In dictionary learning problems, one usually preprocesses the dataset in such a way that we eventually do not deal with very high dimensional data. For example, in its application to image analysis, one common way of preprocessing the dataset is to subsample random small patches from images. Each flattened patch becomes a training example. And these typically have fixed dimensions (determined by the filter size) regardless of how large the original image is.\n\n2. We are examining theoretically the performance of randomly initializing network weights by sampling the data points directly. If there is enough time, we will add the new result, as an alternative way of initialization, to this paper. Notably, the success probability does not depend exponentially on the data dimension in this case. Empirically, in fact, we also observe that initializing with random samples from the dataset works better.\n\n3. Theoretically speaking, at least, the exponentially decaying probability due to increasing dimension can be countered by increasing the network width (also exponentially in dimension). While admittedly this is not what we observe in practice, theoretically this will work according to our current version of analysis.\n\n" ]
[ 2, 3, 2, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_HyiRazbRb", "iclr_2018_HyiRazbRb", "iclr_2018_HyiRazbRb", "SJyMoI5gG", "iclr_2018_HyiRazbRb", "HyutwEMJG", "HkErJ5eez" ]
iclr_2018_rJSr0GZR-
Learning Priors for Adversarial Autoencoders
Most deep latent factor models choose simple priors for reasons of simplicity or tractability, or because it is unclear what prior to use. Recent studies show that the choice of the prior may have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders (AAEs). We introduce the notion of code generators to transform manually selected simple priors into ones that can better characterize the data distribution. Experimental results show that the proposed model can generate images of better quality and learn better disentangled representations than AAEs in both supervised and unsupervised settings. Lastly, we present its ability to do cross-domain translation in a text-to-image synthesis task.
rejected-papers
The paper proposes learning the prior for AAEs by training a code-generator that is seeded by the standard Gaussian distribution and whose output is taken as the prior. The code generator is trained by minimizing the GAN loss between the distribution coming out of the decoder and the real image distribution. The paper also modifies the AAE by replacing the L2 loss in the pixel domain with a "learned similarity metric" loss inspired by earlier work (Larsen et al., 2015). The contribution of the paper is specific to AAEs, which makes the scope narrow. Even there, the benefits of learning the prior using the proposed method are not clear. Experiments make two claims: (i) improved image generation over AAE, (ii) improved "disentanglement". Towards (i), the paper compares images generated by AAE with those generated by their model. However, it is not clear if the improved generation quality is due to the use of a decoder loss on the learned similarity metric (Larsen et al., 2015), due to the use of a GAN loss in the image space (i.e., just having a GAN loss over the decoder's output without a code generator), or due to learning the prior, which is the main contribution of the paper. This has also been hinted at by AnonReviewer1. Hence, it is not clear if the sharper generated images are really due to the learned prior. Towards (ii), the paper uses an InfoGAN-inspired objective to generate class-conditional images. It shows the class-conditional generated images for AAE and the proposed method. Here AAE is also trained on the "learned similarity metric" and augmented with a similar InfoGAN-type objective, so the only difference is in the prior. The authors say the performance of both models is similar on MNIST and SVHN, but on CIFAR their model with the "learned prior" generates images that match the conditioned-upon labels better. However, this claim is also subjective/qualitative, and even if true, it is not clear if this is due to the learned prior or due to the extra GAN discriminator loss in the image space -- in other words, how do the results look for AAE plus a discriminator in the image space, just like in the proposed model but without a code generator? The t-SNE plots for the learned prior are also shown, but only when the InfoGAN loss is added. The same plots are not shown for AAE with an added InfoGAN loss, so it is difficult to know the benefits of learning the code generator as proposed. Overall, I feel the scope of the paper is narrow and the benefits of learning the prior using the method proposed in the paper are not clearly established by the reported experiments. I am hesitant to recommend acceptance to the main conference in its current form.
train
[ "ByzPtktlM", "BkD44d9gM", "ByjrTO5ef", "H1csCkCmz", "ryz-A1CXz", "HkKhMOgEM", "Hk2EakRmf", "SJvH3J0XM", "SkCykNYCb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "This paper proposes an interesting idea--to learn a flexible prior from data by maximizing data likelihood.\n\nIt seems that in the prior improvement stage, what you do is training a GAN with CG+dec as the generator while D_I as the discriminator (since you also update dec at the prior improvement stage). So it can also be regarded as GAN trained with an additional enc and D_c, and additional objective. In my opinion, this may explain why your model can generate sharper images.\n\nThe experiments do demonstrate the power of their model compared to AAE. However, only the qualitative analysis may not persuade me and more thorough analysis is needed.\n\n1. About the latent space for z. The motivation in AAE is to impose aggregated posterior regularization $D(q(z),p(z))$ where $p(z)$ is chosen as a simple one, e.g., Gaussian. I'm curious how the geometry of the latent space will be, when the code generator is introduced. Maybe some visualization like t-sne will be helpful.\n2. Any quantitative analysis? Doing a likelihood analysis like that in the AAE paper will be very informative. \n", "This paper propose a simple extension of the adversarial auto-encoders for (conditional) image generation. The general idea is that instead of using Gaussian prior, the propose algorithm uses a \"code generator\" network to warp the gaussian distribution, such that the internal prior of the latent encoding space is more expressive and complicated. \n\nPros:\n- The proposed idea is simple and easy to implement\n- The results show improvement in terms of visual quality\n\nCons:\n- I agree that the proposed prior should better capture the data distribution. However, incorporating a generic prior over the latent space plays a vital role as regularisation, this helps avoid model collapse. Adding a complicated code generation network brings too much flexibility for the prior part. This makes the prior and posterior learnable, which makes it easier to fool the regularisation discriminator (think about the latent code and prior code collapsed to two different points). As a result, this weakens the regularisation over the latent encoder space. \n- The above mentioned could be verified through qualitative results. As shown in Fig. 5. I believe this is a result due to the fact that the adversarial loss in the regularisation phase does not a significant influence there. \n- I have some doubts over why AAE works so poorly when the latent dimension is 2000. How to make sure it's not a problem of implementation or the model wasn't trapped into a bad local optima / saddle points. Could you justify this?\n- Contributions; this paper propose an improvement over a existing model. However, neither the idea/insights it brought can be applied onto other generative models, nor the improvement bring a significant improvement over the-state-of-the-arts. I am wondering what the community will learn from this paper, or what the author would like to claim as significant contributions. ", "Recently some interesting work on a role of prior in deep generative models has been presented. The choice of prior may have an impact on the expressiveness of the model [Hoffman and Johnson, 2016]. A few existing work presents methods for learning priors from data for variational autoencoders [Goyal et al., 2017][Tomczak and Welling, 2017]. 
The work, \"VAE with a VampPrior,\" [Tomczak and Welling, 2017] is missing in references.\n\nThe current work focuses on adversarial autoencoder (AAE) and introduces a code generator network to transform a simple prior into one that together with the generator can better fit the data distribution. Adversarial loss is used to train the code generator network, allowing the output of the network could be any distribution. I think the method is quite simple but interesting approach to improve AAEs without hurting the reconstruction. The paper is well written and is easy to read. The method is well described. However, what is missing in this paper is an analysis of learned priors, which help us to better understand its behavior. \n\nThe model is evaluated qualitatively only. What about quantitative evaluation? \n\n", "Dear Thanh Tung Hoang,\n\n\"AAE with code generator can produce much better images but suffer from mode collapse. It seems that the improvement in the image quality is due to the fact that the network has remembered some of the input. In other words, the mode collapse problem makes generated images look better. I would love to see the result without mode collapse problem. For example, you could try Wasserstein GAN which suffer less from mode collapse problem. I am also interested in the learned prior distribution. If you could provide some analysis on the learned prior then your paper could be much better.\"\n\nSince receiving the review comments, we have improved our model in several significant ways, including\n1)\tIntroducing a pair of more capable encoder and decoder with ResNets. (See appendix for the implementation details)\n2)\tEmploying a learned similarity metric in place of the default squared error in data space to improve the convergence of the decoder. (See Section 3 Learning Priors for the reasons)\n3)\tIntroducing the variational technique in InfoGAN for training the decoder and code generator when it is necessary to generate images conditionally on an input variable, as in our supervised and unsupervised learning tasks. (See Section 3 Learning Priors for the reasons)\n\nWith these changes, our model can now produce much better images without incurring obvious mode collapse.\n\nWe have re-written extensively the entire manuscript, presenting more experimental results and analyses. ", "Dear Reviewer 1,\n\n\"This paper proposes an interesting idea--to learn a flexible prior from data by maximizing data likelihood. It seems that in the prior improvement stage, what you do is training a GAN with CG+dec as the generator while D_I as the discriminator (since you also update dec at the prior improvement stage). So it can also be regarded as GAN trained with an additional enc and D_c, and additional objective. In my opinion, this may explain why your model can generate sharper images. \nThe experiments do demonstrate the power of their model compared to AAE. However, only the qualitative analysis may not persuade me and more thorough analysis is needed. \"\n\nThanks for your suggestions. We have provided more analysis results including comparison of inception scores and visualization of learned code space in the revised manuscript. \n\n\"1. About the latent space for z. The motivation in AAE is to impose aggregated posterior regularization $D(q(z),p(z))$ where $p(z)$ is chosen as a simple one, e.g., Gaussian. I'm curious how the geometry of the latent space will be, when the code generator is introduced. Maybe some visualization like t-sne will be helpful. \n\n2. 
Any quantitative analysis? Doing a likelihood analysis like that in the AAE paper will be very informative. \"\n\nThanks for your suggestion. For quantitative evaluation, we have compared the inception score of the proposed method with other generative models in Table I. We have also visualized the learned priors with t-SNE in Figs. 9 and 12 for the supervised and unsupervised learning tasks. The text in Section 4.2.1 and Section 4.2.2 has been modified accordingly to include the discussions (see the last paragraphs in these sections). \n\nIn addition, since receiving the review comments, we have improved our model in several significant ways, including \n1) Introducing a more capable encoder and decoder pair with ResNets. (See appendix for the implementation details) \n2) Employing a learned similarity metric in place of the default squared error in data space to improve the convergence of the decoder. (See Section 3 Learning The Prior for the reasons) \n3) Introducing the variational technique in InfoGAN for training the decoder and code generator when it is necessary to generate images conditionally on an input variable, as in our supervised and unsupervised learning tasks. (See Section 3 Learning The Prior for the reasons) With these changes, our model can now produce much better images without incurring obvious mode collapse.\n\nWe have extensively re-written the entire manuscript, presenting more experimental results and analyses as requested. \n\n", "Dear Reviewer 3,\n\n\"- Contributions: this paper proposes an improvement over an existing model. However, neither can the ideas/insights it brings be applied to other generative models, nor does it bring a significant improvement over the state of the art. I am wondering what the community will learn from this paper, or what the authors would like to claim as significant contributions. \"\n\nThanks for your comments. \n\nWith the changes we have made so far, we believe our contributions include\n1)\tWe replace the simple prior with a learned prior by training the code generator to output latent variables that will minimize an adversarial loss in data space.\n2)\tWe employ a learned similarity metric (Larsen et al., 2015) in place of the default squared error in data space for training the autoencoder.\n3)\tWe maximize the mutual information between part of the code generator input and the decoder output for supervised and unsupervised training using a variational technique introduced in InfoGAN (Chen et al., 2016).\n\nExtensive experiments confirm its effectiveness in generating better-quality images and learning better disentangled representations than AAE in both supervised and unsupervised settings, particularly on complicated datasets. In addition, to the best of our knowledge, this is one of the first few works that attempt to introduce a learned prior for AAE.\n\nWe have extensively re-written the entire manuscript, presenting more experimental results and analyses as requested. ", "Dear Reviewer 3,\n\n\"This paper proposes a simple extension of adversarial auto-encoders for (conditional) image generation. The general idea is that instead of using a Gaussian prior, the proposed algorithm uses a \"code generator\" network to warp the Gaussian distribution, such that the internal prior of the latent encoding space is more expressive and complicated. 
\n\nPros: \n- The proposed idea is simple and easy to implement \n- The results show improvement in terms of visual quality \n\nCons: \n- I agree that the proposed prior should better capture the data distribution. However, incorporating a generic prior over the latent space plays a vital role as regularisation, which helps avoid mode collapse. \nAdding a complicated code generation network brings too much flexibility to the prior part. This makes both the prior and the posterior learnable, which makes it easier to fool the regularisation discriminator (think about the latent code and prior code collapsing to two different points). As a result, this weakens the regularisation over the latent encoder space. \n- The above could be verified through qualitative results, as shown in Fig. 5. I believe this is because the adversarial loss in the regularisation phase does not have a significant influence there. \"\n\nThanks for your comments. I agree that generic priors may help avoid mode collapse. However, they also risk overly regularizing the model, consequently decreasing its expressiveness. \n\nThis work, like a few other similar attempts for VAEs, aims to learn a prior through a code generation network so that the resulting model can better explain the data distribution. Unlike the prior works, which are mostly based on maximizing the data log-likelihood, ours tries to learn the prior by minimizing an adversarial loss in data space. \n\nSince receiving the review comments, we have improved our model in several significant ways, including\n1)\tIntroducing a more capable encoder and decoder pair with ResNets. (See appendix for the implementation details)\n2)\tEmploying a learned similarity metric in place of the default squared error in data space to improve the convergence of the decoder. (See Section 3 Learning The Prior for the reasons)\n3)\tIntroducing the variational technique in InfoGAN for training the decoder and code generator when it is necessary to generate images conditionally on an input variable, as in our supervised and unsupervised learning tasks. (See Section 3 Learning The Prior for the reasons)\n\nWith these changes, our model can now produce much better images without incurring obvious mode collapse. Furthermore, as shown in our visualization of the latent code space in supervised and unsupervised tasks (see Figs 9 and 12), the code generator does exert a regularization effect while producing better images. \n\n\"- I have some doubts over why AAE works so poorly when the latent dimension is 2000. How can one make sure it is not an implementation problem, or that the model was not trapped in bad local optima / saddle points? Could you justify this?\"\n\nThanks for pointing this out. We have implemented a more capable encoder and decoder pair with ResNets. AAE now performs reasonably well (see Figs. 5 and 6). But still, when the latent dimension is increased to 100-D or 2000-D, the simple Gaussian prior may overly regularize the model. Imagine that the latent codes generated by the encoder may occupy only a tiny portion of the high-dimensional code space specified by the prior. In this case, the limited training data can hardly ensure that every random sample drawn from the prior would produce a good decoded image.\n", "Dear Reviewer 2, \n\n\"Recently, some interesting work on the role of the prior in deep generative models has been presented. The choice of prior may have an impact on the expressiveness of the model [Hoffman and Johnson, 2016]. 
A few existing works present methods for learning priors from data for variational autoencoders [Goyal et al., 2017][Tomczak and Welling, 2017]. The work, \"VAE with a VampPrior,\" [Tomczak and Welling, 2017] is missing from the references. \"\n\nThanks for your suggestion. We have cited this work in the Introduction and provided a description in Related Work. \n\n\"The current work focuses on the adversarial autoencoder (AAE) and introduces a code generator network to transform a simple prior into one that, together with the generator, can better fit the data distribution. An adversarial loss is used to train the code generator network, allowing the output of the network to be any distribution. I think the method is a quite simple but interesting approach to improving AAEs without hurting the reconstruction. The paper is well written and is easy to read. The method is well described. However, what is missing in this paper is an analysis of the learned priors, which would help us better understand their behavior. The model is evaluated qualitatively only. What about quantitative evaluation? \"\n\nThanks for your suggestion. For quantitative evaluation, we have compared the inception score of the proposed method with other generative models in Table I. We have also visualized the learned priors with t-SNE in Figs. 9 and 12 for the supervised and unsupervised learning tasks. The text in Section 4.2.1 and Section 4.2.2 has been modified accordingly to include the discussions (see the last paragraphs in these sections).\n\nIn addition, since receiving the review comments, we have improved our model in several significant ways, including \n1) Introducing a more capable encoder and decoder pair with ResNets. (See appendix for the implementation details) \n2) Employing a learned similarity metric in place of the default squared error in data space to improve the convergence of the decoder. (See Section 3 Learning The Prior for the reasons) \n3) Introducing the variational technique in InfoGAN for training the decoder and code generator when it is necessary to generate images conditionally on an input variable, as in our supervised and unsupervised learning tasks. (See Section 3 Learning The Prior for the reasons) With these changes, our model can now produce much better images without incurring obvious mode collapse.\n\nWe have extensively re-written the entire manuscript, presenting more experimental results and analyses as requested. \n", "AAE with a code generator can produce much better images but suffers from mode collapse. It seems that the improvement in the image quality is due to the fact that the network has remembered some of the input. In other words, the mode collapse problem makes generated images look better. I would love to see the result without the mode collapse problem. For example, you could try Wasserstein GAN, which suffers less from mode collapse. I am also interested in the learned prior distribution. If you could provide some analysis on the learned prior then your paper could be much better." ]
[ 6, 5, 6, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJSr0GZR-", "iclr_2018_rJSr0GZR-", "iclr_2018_rJSr0GZR-", "SkCykNYCb", "ByzPtktlM", "Hk2EakRmf", "BkD44d9gM", "ByjrTO5ef", "iclr_2018_rJSr0GZR-" ]
iclr_2018_HyI6s40a-
Towards Safe Deep Learning: Unsupervised Defense Against Generic Adversarial Attacks
Recent advances in adversarial Deep Learning (DL) have opened up a new and largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems. We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model. The proposed PCL methodology is unsupervised, meaning that no adversarial sample is leveraged to build/train the parallel checkpointing learners. We formalize the goal of preventing adversarial attacks as an optimization problem to minimize the rarely observed regions in the latent feature space spanned by a DL network. To solve the aforementioned minimization problem, a set of complementary but disjoint checkpointing modules are trained and leveraged to validate the victim model execution in parallel. Each checkpointing learner explicitly characterizes the geometry of the input data and the corresponding high-level data abstractions within a particular DL layer. As such, the adversary is required to simultaneously deceive all the defender modules in order to succeed. We extensively evaluate the performance of the PCL methodology against state-of-the-art attack scenarios, including Fast-Gradient-Sign (FGS), Jacobian Saliency Map Attack (JSMA), DeepFool, and the Carlini & Wagner L2 algorithm. Extensive proof-of-concept evaluations on various data collections, including MNIST, CIFAR10, and ImageNet, corroborate the effectiveness of our proposed defense mechanism against adversarial samples.
rejected-papers
The paper proposes a method to detect and correct adversarial examples at the input stage (using a sparse-coding-based model) and/or at a hidden layer (using a GMM). These detector/corrector models are trained using only the natural examples. While the proposed method is interesting and has some novelty with respect to the specific models used for detection/correction (i.e., sparse coding and GMMs), there are crucial gaps in the empirical studies: - It does not compare with a highly relevant prior work, MagNet (Meng and Chen, 2017), which also detects and corrects adversarial examples by modeling the distribution of the natural examples - The attacks used in the evaluations do not consider the setting where the existence (and architecture) of the defender models is known to the attacker - It does not evaluate the method on the stronger PGD attack (also known as iterative FGSM)
train
[ "rJ7exuPgM", "SJHkBN_xM", "HkkqUpk-f", "BJ9P82pmz", "Hklf_pYzf", "rJL6vTYMf", "H1R6UTKzM", "rkSaXaKMz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper present a method for detecting adversarial examples in a deep learning classification setting. The idea is to characterize the latent feature space (a function of inputs) as observed vs unobserved, and use a module to fit a 'cluster-aware' loss that aims to cluster similar classes tighter in the latent space. \n\nQuestions/Comments:\n\n- How is the checkpointing module represented? Which parameters are fit using the fine-tuning loss described on page 3? \n\n- What is the rationale for setting the gamma (concentration?) parameters to .01? Is that a general suggestion or a data-set specific recommendation?\n\n- Are the checkpointing modules designed to only detect adversarial examples? Or is it designed to still classify adversarial examples in a robust way?\n\nClarity: I had trouble understanding some of this paper. It would be nice to have a succinct summary of how all of the pieces presented fit together, e.g. the original victim network, fine-tuning loss, per-class dictionary learning w/ OMP. \n\nTechnical: It is hard to tell how some of the components of this approach are technically justified. \n\nNovel: I am not familiar enough with adversarial deep learning to assess novelty or impact. ", "Summary:\n The paper presents an unsupervised method for detecting adversarial examples of neural networks. The method includes two independent components: an ‘input defender’ which tried to inspect the input, and a ‘latent defender’ trying to inspect a hidden representation. Both are based on the claim that adversarial examples lie outside a certain sub-space occupied by the natural image examples, and modeling this sub-space hence enables their detection. The input defender is based on sparse coding, and the latent defender on modeling the latent activity as a mixture of Gaussians. Experiments are presented on MInst, Cifar10, and ImageNet.\n \n-\tIntroduction: The motivation for detecting adversarial examples is not stated clearly enough. How can such examples be used by a malicious agent to cause damage to a system? Sketching some such scenarios would help the reader understand why the issue is practically important. I was not convinced it is. \nPage 4: \n-\tStep 3 of the algorithm is not clear:\no\tHow exactly does HDDA model the data (formally) and how does it estimate the parameters? In the current version, the paper does not explain the HDDA formalism and learning algorithm, which is a main building block in the proposed system (as it provides the density score used for adversarial examples detection). Hence the paper cannot be read as a standalone document. I went on to read the relevant HDDA paper, but it is also not clear which of the model variants presented there is used in this paper.\no\tWhat is the relation between the model learned at stage 2 (the centers c^i) and the model learnt by HDDA? Are they completely different models? Or are the C^I used when learning the HDDA model (and how)? \nIf these are separate models, how are they used in conjunction to give a final density score? If I understand correctly, only the HDDA model is used to get the final score, and the C^i are only used to make the \\phy(x) representation more class-seperable. Is that right?\n-\tFigure 4, b and c: it is not clear what the (x,y,z) measurements plotted in these 3D drawings are (what are the axis).\nPage 5:\n-\tSection 2: the risk analysis is done in a standard Bayesian way and leads to a ratio of PDFs in equation 5. 
However, this form is not appropriate for the case presented in this paper, since the method presented only models one of these PDFs (specifically p(x | W1); there is no generative model of p(x|W2)).\n-\tThe authors claim in the last sentence of the section that p(x|W2) is equivalent to 1-p(x|W1), but this is not true: these are two continuous densities, they do not sum to 1, and a model of p(x|W2) is not available (as far as I understand the method).\nPage 6:\n-\tHow is equation (7) optimized?\n-\tWhich patches are extracted from images, for training and at inference time? Are these patches a dense coverage of the image? Sparsely sampled? Densely sampled with overlaps?\n-\tIt's not clear enough what exactly the ‘PSNR’ value used for the adversarial example detection is, and what exactly ‘profile the PSNR of legitimate samples within each class’ means. A formal definition of PSNR and ‘profiling’ is missing (does profiling simply mean finding a threshold for filtering?)\nPage 7:\n-\tFigure 7 is not very informative. Given the ROC curves in Figure 8 and Table 1, it is redundant. \n\nPage 8:\n-\tThe results in general indicate that the method is much better than chance, but it is not clear if it is practical, because the false alarm rates for high detection are quite high. For example, on ImageNet, 14.2% of the innocent images are mistakenly rejected as malicious to get a 90% detection rate. I do not think this working point is useful for a real application.\n-\tGiven the high false alarm rate, it is surprising that experiments with multiple checkpoints are not presented (specifically as this case of multiple checkpoints is discussed explicitly in previous sections of the paper). Experiments with multiple checkpoints are clearly required to complete the picture regarding the empirical performance of this method.\n-\tThe experiments show that essentially, the latent defenders are stronger than the input defender in most cases. However, an ablation study of the latent defender is missing: Specifically, it is not clear a) how much stage 2 (model refinement with clusters) contributes to the accuracy (how does the model do without it?), and b) how important the HDDA and the specific variant used (which is not clear) are: is it important to model the Gaussians using a sub-space? Of which dimension?\n\nOverall:\nPros:\n-\t A nice idea with some novelty, based on a non-trivial observation\n-\tThe experimental results show the idea holds some promise\nCons\n-\tThe method is not presented clearly enough: the main component modeling the network activity is not explained (the HDDA module used)\n-\tThe results presented show that the method is probably not suitable for a practical application yet (high false alarm rate for a good detection rate)\n-\tExperimental results are partial: results are not presented for multiple defenders, no ablation experiments\n\n\nAfter revision:\nSome of my comments were addressed, and some were not.\nSpecifically, results were presented for multiple defenders and some ablation experiments were highlighted.\nThings not addressed:\n - The risk analysis is still not relevant. The authors removed a clearly flawed sentence, but the analysis still assumes that two densities (of 'good' and 'bad' examples) are modeled, while in the work presented only one of them is. Hence this analysis does not add anything to the paper - it states a general case which does not fit the current scenario and its relation to the work is not clear. 
It would have been better to omit it and use the space to describe HDDA and the specific variant used in this work, as this is the main tool making the distinction.\n\nI believe the paper should be accepted.\n", "This paper proposes an unsupervised method, called Parallel Checkpointing Learners (PCL), to detect and defend against adversarial examples. The main idea is essentially learning the manifold of the data distribution and using Gaussian mixture models (GMMs) and dictionary learning to train a \"reformer\" (without seeing adversarial examples) to detect and correct adversarial examples. With PCL, one can use a hypothesis testing framework to analyze the detection rate and false alarm rate of different neural networks against adversarial attacks. Although the motivation is well grounded, there are two major issues with this work: (i) limited novelty - the idea of an unsupervised manifold projection method has been proposed in previous work; and (ii) insufficient attack evaluations - the defender performance is evaluated against weak attacks or attacks with improper parameters. The details are as follows.\n\n1. Limited novelty and performance comparison - the idea of an unsupervised manifold projection method has been proposed and well studied in \"MagNet: a Two-Pronged Defense against Adversarial Examples\", which appeared in May 2017. Instead of the GMMs and dictionary learning in PCL, MagNet trains autoencoders for defense and provides sufficient experiments to support its defense capability. On the other hand, the authors of this paper seem not to be aware of this pioneering work and claim \"To the best of our knowledge, our proposed PCL methodology is the first unsupervised countermeasure that is able to detect DL adversarial samples generated by the existing state-of-the-art attacks\", which is obviously not true. More importantly, MagNet is able to defend against adversarial examples very well (almost 100% success) no matter whether the adversarial examples are close to the information manifold or not. As a result, the resulting ROC and AUC scores are expected to be better than PCL's. In addition, the authors of MagNet also compared their performance in white-box (attacker knowing the reformer), gray-box (having multiple independent reformers), and black-box (attacker not knowing the reformer) scenarios, whereas this paper only considers the last case.\n\n2. Insufficient attack evaluations - the attacks used in this paper to evaluate the performance of PCL are either weak (no longer state-of-the-art) or incorrectly implemented. For FGSM, the iterative version proposed by (Kurakin, ICLR 2017) should be used. JSMA and DeepFool are not considered strong attacks now (see Carlini's bypassing-10-detection-methods paper). The Carlini-Wagner attack is still strong, but the authors only use 40 iterations (it should be at least 500) and set the confidence to 0, which is known to produce non-transferable adversarial examples. In comparison, MagNet has been shown to be effective against different confidence parameters. \n\nIn summary, this paper has limited novelty, incremental contributions, and lacks convincing experimental results due to weak attack implementation. \n \n\n\n", "The reviewer appreciates the authors' efforts in replying to the review comments and updating the submitted paper. However, the authors are strongly suggested to take a close read of MagNet and of Carlini and Wagner's attack implementation to support the claimed contributions. First of all, MagNet is not simply a manifold projection method. 
Before entering the reformer for manifold projection, it also uses a detector to reject the adversarial example if its statistical divergence from the training data is large. By adjusting the threshold of the detector in MagNet, it can yield a ROC and AUC analysis similar to that proposed in this paper. In fact, I think the main difference between this paper and MagNet is in the approach to image reconstruction but not in methodology - MagNet uses an auto-encoder and this paper uses dictionary learning. It would be great if we could see an actual defense comparison of these two methods. Secondly, the range of the confidence parameter selected for the CW L2 attack in the table is not representative. Based on the evaluation in their S&P paper, the confidence should be selected from the range between 0 and 60 (or 100) instead of the 0 to 1 reported in this paper.", "Thank you for your detailed comments. Please find our responses to your questions/comments below.\n\nIntroduction:\nFor safety-critical applications (e.g., unmanned vehicles and drones), artificial intelligence and machine learning agents will not be trusted until we obtain a better understanding of the adversarial space and how to thwart such attacks. In particular, consider a traffic sign classifier used in self-driving cars. In this example, an adversary can carefully add an imperceptible perturbation to a legitimate “stop” sign sample and fool the DL model into classifying it as a “yield” sign, thus jeopardizing the safety of the vehicle. As such, it is highly important to reject risky adversarial samples to ensure the integrity of DL models used in autonomous systems. We modified the second paragraph of the introduction section to further elaborate on the importance of adversarial sample detection.\n\nPage 4:\no The HDDA algorithm is used in conjunction with the data realignment (step 2 in Figure 2) to learn the density score used for adversarial sample detection. In particular, the data realignment is used to condense data points belonging to each class, and HDDA is leveraged to find the corresponding mean and the conditional covariance matrix of a Gaussian Mixture Model that best describes the data points within each class as an ensemble of lower dimensional sub-spaces. We apologize for relying too heavily on the cited reference. We modified Paragraph 1 on page 4 to elaborate more on the HDDA algorithm. Please refer to the slide deck of the cited paper (http://lear.inrialpes.fr/~bouveyron/work/presentation_ASMDA05.pdf) for a succinct summary of the HDDA algorithm (e.g., slide 13).\n-----------------------------------------------------------------------------------\no Yes, you are right! The Ci are only used for data realignment and to make the \\phi(x) representation more class-separable. The HDDA is used to learn the mean and conditional covariance matrix of a Gaussian Mixture Model that best describes the data points within each class.\n-----------------------------------------------------------------------------------\no Thank you for your note. The three axes show the first three eigenvectors corresponding to the PCA of the data. We used the first three dimensions for visualization purposes only. The data points belong to higher-dimensional spaces. We have modified the figure caption to avoid confusion.\n\nPage 5:\nThank you for your comment. Yes, p(x|W1) and p(x|W2) do not sum to 1. We removed the sentence to avoid confusion.\n\nPage 6:\nWe modified the 2nd and 3rd paragraphs of Section 3 to address your comments. 
In particular, our dictionary learning approach is based upon the least angle regression (LAR) method (http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.MiniBatchDictionaryLearning.html) to solve the lasso problem defined in Eq. 7. During the training phase, 150,000 randomly selected patches of training data are used to learn the dictionary. During the inference phase, in contrast, all of the non-overlapping patches within the image are denoised by the dictionary to form the whole reconstructed image and to compute the PSNR.\n\nWe revised the paper (Paragraph 4 of Section 3) to provide the formal definition of PSNR and profiling. In summary, the classic Peak Signal-to-Noise Ratio (PSNR) is defined as PSNR = 20 log10(MAX_I) - 10 log10(MSE), where the mean square error (MSE) is defined as the L2 difference between the input image and the reconstructed image based on the dictionary (||Z_i - D_i V_i||_2). MAX_I is the maximum possible pixel value of the image (usually 255). Profiling does indeed mean finding a threshold for filtering; we added the text to the paper to resolve confusion. \n\nPage 7:\nYou are right. However, we have included Figure 7 to clearly demonstrate the relation between the security parameter, the probability of true detection (True positive), and the probability of false alarm (False positive) for a broad audience and to avoid any possible confusion. ", "Thank you for your detailed comments. Please find our responses to your questions/comments below.\n\n- Each checkpointing module can be represented as a function \\phi(x) along with the centers Ci for different classes. Here, x is the input and \\phi(.) is the stack of neural network layers up to the corresponding checkpoint layer. The training of each checkpoint module involves learning \\phi() (the parameters of the defender neural network) and the corresponding centers in the checkpoint layer. Note that the parameters of the defender module are different from those of the victim model. In other words, each defender module has its own trainable parameters.\n\n- Gamma is one DL hyperparameter among others that should be tuned depending on the application data. We empirically found 0.01 to be effective across the various benchmarks presented in the paper. However, in general, the user can easily change this hyperparameter in our API depending on her application.\n\n- We have chosen not to use the checkpointing modules as classifiers to focus more on understanding the adversarial space, by constructing PDF estimations for legitimate samples. The checkpoint modules are only leveraged to detect adversarial samples that significantly deviate from the underlying PDF. \n\nClarity:\nAdversarial and legitimate samples differ in certain statistical properties. Adversarial samples are particularly crafted by finding the rarely explored dimensions in an L∞-ball of radius ε. In the PCL methodology, samples whose features lie in the unlikely subspaces are identified and rejected as risky samples. Our conjecture is that a general ML model equipped with side information about the density distribution of the input data as well as the distribution of the latent feature vectors is effectively more robust against adversarial samples.\n\nWe formalize the goal of preventing adversarial attacks as an optimization problem to minimize the rarely observed regions in the latent feature space spanned by an ML model. 
To solve the aforementioned minimization problem, a set of complementary but disjoint checkpoint modules are trained to capture the Probability Density Function (PDF) of the model based on legitimate data samples. The checkpointing modules explicitly characterize the geometry of the input data and the corresponding high-level data abstractions (PDF) within an ML model. In a neural network, for example, each PCL module checkpoints a certain intermediate hidden layer (Figure 1c). The complement of the space characterized by the PDF is marked as the rarely observed region, enabling statistical tests to determine the validity of new samples. Once such characterizations are obtained, statistical testing is used at runtime to determine the legitimacy of new data samples. The defender modules evaluate the input sample probability in parallel with the victim model and raise alarm flags for data points that lie within the rarely explored regions. \n\nThe outputs of the PCL modules (checkpoints) are aggregated into a single output node (the red neuron in Figure 1c) that quantitatively measures the reliability (confidence) of the victim model prediction. For any input sample, the new neuron outputs a confidence rate in the unit interval [0, 1], with 0 and 1 indicating highly risky and safe samples, respectively. The extra neuron incorporates a “don’t know” class into the model: samples for which the prediction confidence is below a certain threshold are treated as risky inputs. The threshold is determined based on the safety sensitivity of the application for which the ML model is employed. \n\nThe per-class dictionaries with OMP reconstruction are further used to assess the Peak Signal-to-Noise Ratio (PSNR) of incoming samples in the input space and automatically filter out samples that incur a low PSNR, as demonstrated in Figure 6. The latent defenders (PDF), in contrast, are particularly leveraged to detect adversarial samples with a low perturbation in the input space but a high divergence from the actual distribution in the latent feature space.", "Page 8:\no We agree with the reviewer that minimizing the false alarm rate is an important concern. As shown through our experiments, the false alarm rate varies in different attack scenarios and benchmarks. We have observed that checkpointing some layers in a deep architecture performs better in terms of minimizing false alarms while achieving a particular detection accuracy. As such, we need to give more weight to those critical checkpoint modules when aggregating the results to decide whether to accept or reject an incoming sample. We modified Section 5 of the paper to include such experiments. In particular, the newly added Figure 10 illustrates the impact of using multiple (three) checkpoints simultaneously in the MNIST benchmark. As demonstrated, using multiple parallel checkpoints with negative correlation can significantly reduce the false positive rate while achieving the same level of detection for adversarial samples. \n\nThe space spanned by deep neural networks is far from being completely understood, and we cannot claim to fully understand it. Our results are a preliminary step towards a more formal characterization of the adversarial space in the context of deep learning. Investigating the robustness of checkpoint modules at different layers against various attacks is an interesting topic of research. In particular, we are looking into automated techniques to select mixing coefficients for multiple redundancies as future work. 
\n\no The cluster refinement is necessary to statistically separate the PDF distributions of adversarial and legitimate samples. The effect of clustering is visualized in Figures 3-b and 3-c, where the histogram of the distance of sample points to the centers (means) of the Gaussian Mixture PDFs is shown before and after refinement. The overlap between the two distributions (as in 3-b) injects detection errors: there might be samples that are in fact legitimate, but the detector treats them as adversarial samples. Therefore, the cluster refinement reduces the false alarms. \n\nPlease refer to our answer to your second question for details regarding the HDDA method. ", "Response to comment 1.\n\nThank you for your comments. However, we strongly disagree with the reviewer’s assessment and conclusion. We have modified the paper (Paragraph 3 of Section 6) to include MagNet as one of the prior works. We would like to highlight the following two points for clarification. \n\n(I) We disagree with the reviewer regarding the robustness of MagNet and particularly the ROC and AUC score of that approach. MagNet and similar works rely on the manifold learned by ML agents to “reform” adversarial samples and correct the wrong decision made by the ML agent, e.g., by de-noising samples near the manifold. As shown by Carlini and Wagner in (https://arxiv.org/pdf/1711.08478.pdf), manifold projection methods including MagNet are not robust to adversarial samples. In particular, it has been shown that MagNet only works for very small perturbation values and totally fails (less than 1% detection rate) if the distortion used to generate the adversarial sample is increased by approximately 30% in the worst-case scenario. This performance is even worse than a random prediction (the diagonal line) in our reported ROC curves.\n\nOur proposed PCL methodology is fundamentally different from manifold projection methods such as MagNet. Our conjecture is that the vulnerability of ML models to adversarial samples originates from the existence of rarely explored sub-spaces in each feature map. Due to the curse of dimensionality in modern applications, it is often not practical to fully cover the underlying high-dimensional space spanned by modern ML applications. What we can do, instead, is to construct statistical modules that can quantitatively “detect” whether or not a certain sample comes from the subspaces that were exposed to the ML agent. To ensure robustness against adversarial samples, we argue that ML models should be capable of rejecting samples that are likely to come from the unexplored regions. Unlike manifold projection methods (e.g., MagNet) that particularly rely on data manifolds and remain oblivious to the density of the training samples, our proposed PCL defense methodology learns the probability density function (pdf) of legitimate samples in the latent feature space and raises question marks for risky samples based on the estimated density and an application-specific security parameter. Note that PCL is a detection methodology and is not used to reform data samples.\n\n(II) The ROC curves provided in Figure 8 and Table 1 show the performance of our proposed PCL methodology in the best-case and worst-case scenarios by sweeping various thresholds that can be selected by a defender depending on the attack and/or application data. 
As stated in Paragraph 4 of the Introduction section, we assume a white-box attack model in which the attacker knows everything about the victim model, including its architecture, learning algorithm, and parameters. \n\nResponse to comment 2.\n\nThe reviewer is incorrect in the reading of our work. Our aim in including both older and state-of-the-art attacks is to empirically confirm that our unsupervised PCL methodology can generalize well across a wide range of attacks and is not customized for only one attack scenario. As for implementation, we use the well-known CleverHans adversarial attack library. The reviewer claims our choice of parameters for Carlini’s attack (which the reviewer also admits is a strong attack) is not correct. We initially chose the default values of the attack based on the original paper. Nevertheless, we have repeated our experiments with the reviewer’s suggested parameters. In particular, for the Carlini attack, we have changed the number of iterations to 500 and created adversarial samples with different confidence parameters in the range 0-99% per the reviewer's suggestion (updated results are available in Table 1 and Figure 8; the new parameter sets can be seen in Table 3). As demonstrated, the performance of our defender is consistent with the results reported previously. This is also consistent with our claim that PCL is rather a generic unsupervised mechanism to detect adversarial samples based on the data distribution in the latent space and is robust against the mechanics of the attack. \n" ]
[ 5, 7, 3, -1, -1, -1, -1, -1 ]
[ 3, 3, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyI6s40a-", "iclr_2018_HyI6s40a-", "iclr_2018_HyI6s40a-", "rkSaXaKMz", "SJHkBN_xM", "rJ7exuPgM", "SJHkBN_xM", "HkkqUpk-f" ]
iclr_2018_ryZERzWCZ
The Information-Autoencoding Family: A Lagrangian Perspective on Latent Variable Generative Modeling
A variety of learning objectives have been recently proposed for training generative models. We show that many of them, including InfoGAN, ALI/BiGAN, ALICE, CycleGAN, VAE, β-VAE, adversarial autoencoders, AVB, and InfoVAE, are Lagrangian duals of the same primal optimization problem. This generalization reveals the implicit modeling trade-offs between flexibility and computational requirements being made by these models. Furthermore, we characterize the class of all objectives that can be optimized under certain computational constraints. Finally, we show how this new Lagrangian perspective can explain undesirable behavior of existing methods and provide new principled solutions.
rejected-papers
The paper provides a constrained mutual information objective function whose Lagrangian dual covers several existing generative models. However, reviewers are not convinced of the significance or usefulness of the proposed unifying framework (at least from the way the results are currently presented in the paper). The authors have not taken any steps towards revising the paper to address these concerns. The presentation needs to be improved to bring out the significance/utility of the proposed unifying framework.
val
[ "S1ufxZqlG", "SkugmHtgf", "BJ8bKuOlM", "B1A1_t67z", "SJ2PZA-XM", "BycXcabXz", "rkckcaWXM", "HJqtmElmf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "EDIT: I have read the authors' rebuttals and other reviews. My opinion has not been changed. I recommend the authors significantly revise their work, streamlining the narrative and making clear what problems and solutions they solve. While I enjoy the perspective of unifying various paths, it's unclear what insights come from a simple reorganization. For example, what new objectives come out? Or given this abstraction, what new perspectives or analysis is offered?\n\n---\n\nThe authors propose an objective whose Lagrangian dual admits a variety of modern objectives from variational auto-encoders and generative adversarial networks. They describe tradeoffs between flexibility and computation in this objective leading to different approaches. Unfortunately, I'm not sure what specific contributions come out, and the paper seems to meander in derivations and remarks that I didn't understand what the point was.\n\nFirst, it's not clear what this proposed generalization offers. It's a very nuanced and not insightful construction (eq. 3) and with a specific choice of a weighted sum of mutual informations subject to a combinatorial number of divergence measure constraints, each possibly held in expectation (eq. 5) to satisfy the chosen subclass of VAEs and GANs; and with or without likelihoods (eq. 7). What specific insights come from this that isn't possible without the proposed generalization?\n\nIt's also not clear with many GAN algorithms that reasoning with their divergence measure in the limit of infinite capacity discriminators is even meaningful (e.g., Arora et al., 2017; Fedus et al., 2017). It's only true for consistent objectives such as MMD-GANs.\n\nSection 4 seems most pointed in explaining potential insights. However, it only introduces hyperparameters and possible combinatorial choices with no particular guidance in mind. For example, there are no experiments demonstrating the usefulness of this approach except for a toy mixture of Gaussians and binarized MNIST, explaining what is already known with the beta-VAE and infoGAN. It would be useful if the authors could make the paper overall more coherent and targeted to answer specific problems in the literature rather than try to encompass all of them.\n\nMisc\n+ The \"feature marginal\" is also known as the aggregate posterior (Makhzani et al., 2015) and average encoding distribution (Hoffman and Johnson, 2016); also see Tomczak and Welling (2017).", "Update after rebuttal\n==========\nThanks for your response on my questions. The stated usefulness of the method unfortunately do not answer my worry about the significance. It remains unclear to me how much \"real\" difference the presented results would make to advance the existing work on generative models. Also, the authors did not promised any major changes in the final version in this direction, which is why I have reduced my score.\n\nI do believe that this work could be useful and should be resubmitted. There are two main things to improve. First, the paper need more work on improving the clarity. Second, more work needs to be added to show that the paper will make a real difference to advance/improve existing methods.\n\n==========\nBefore rebuttal\n==========\nThis paper proposes an optimization problem whose Lagrangian duals contain many existing objective functions for generative models. Using this framework, the paper tries to generalize the optimization problems by defining computationally-tractable family which can be expressed in terms of existing objective functions. 
\n\nThe paper has interesting elements and the results are original. The main issue is that the significance is unclear. The writing in Section 3 is unclear for me, which further made it challenging to understand the consequences of the theorems presented in that section. \n\nHere is a big-picture question that I would like to know answer for. Do the results of sec 3 help us identify a more useful/computationally tractable model than exiting approaches? Clarification on this will help me evaluate the significance of the paper.\n\nI have three main clarification points. First, what is the importance of T1, T2, and T3 classes defined in Def. 7, i.e., why are these classes useful in solving some problems? Second, is the opposite relationship in Theorem 1, 2, and 3 true as well, e.g., is every linear combination of beta-ELBO and VMI is equivalent to a likelihood-based computable-objective of KL info-encoding family? Is the same true for other theorems?\n\nThird, the objective of section 3 is to show that \"only some choices of lambda lead to a dual with a tractable equivalent form\". Could you rewrite the theorems so that they truly reflect this, rather than stating something which only indirectly imply the main claim of the paper.\n\nSome small comments:\n- Eq. 4. It might help to define MI to remind readers.\n- After Eq. 7, please add a proof (may be in the Appendix). It is not that straightforward to see this. Also, I suppose you are saying Eq. 3 but with f from Eq. 4.\n- Line after Eq. 8, D_i is \"one\" of the following... Is it always the same D_i for all i or it could be different? Make this more clear to avoid confusion.\n- Last line in Para after Eq. 15, \"This neutrality corresponds to the observations made in..\" It might be useful to add a line explaining that particular \"observation\"\n- Def. 7, the names did not make much sense to me. You can add a line explaining why this name is chosen.\n- Def. 8, the last equation is unclear. Does the first equivalence impy the next one? \n- Writing in Sec. 3.3 can be improved. e.g., \"all linear operations on log prob.\" is very unclear, \"stated computational constraints\" which constraints?\n", "Thank you for the feedback, I have read it.\n\nI do think that developing unifying frameworks is important. But not all unifying perspective is interesting; rather, a good unifying perspective should identify the behaviour of existing algorithms and inspire new algorithms.\n\nIn this perspective, the proposed framework might be useful, but as noted in the original review, the presentation is not clear, and it's not convincing to me that the MI framework is indeed useful in the sense I described above.\n\nI think probably the issue is the lack of good evaluation methods for generative models. Test-LL has no causal relationship to the quality of the generated data. So does MI. So I don't think the argument of preferring MI over MLE is convincing.\n\nSo in summary, I will still keep my original score. 
I think the paper will be accepted by other venues if the presentation is improved and the advantage of the MI perspective is more explicitly demonstrated.\n\n==== original review ====\n\nThank you for an interesting read.\n\nThe paper presented a unifying framework for many existing generative modelling techniques, by first considering constrained optimisation problem of mutual information, then addressing the problem using Lagrange multipliers.\n\nI see the technical contribution to be the three theorems, in the sense that it gives a closure of all possible objective functions (if using the KL divergences). This can be useful: I'm tired of reading papers which just add some extra \"regularisation terms\" and claim they work. I did not check every equation of the proof, but it seems correct to me.\n\nHowever, an imperfection is, the paper did not provide a convincing explanation on why their view should be preferred compared to the original papers' intuition. For example in VAE case, why this mutual information view is better than the traditional view of approximate MLE, where q is known to be the approximate posterior? A better explanation on this (and similarly for say infoGAN/infoVAE) will significantly improve the paper.\n\nContinuing on the above point, why in section 4 you turn to discuss relationship between mutual information and test-LL? How does that relate to the main point you want to present in the paper, which is to prefer MI interpretation if I understand it correctly?\n\nTerm usage: we usually *maximize* the ELBO and *minimise* the variational free-energy (VFE). ", "Thank you for your comment. The solution proposed in our paper is to bound the mutual information rather than direct optimization of the Lagrangian multipliers. Direct maximization would lead to maximizing it to infinity for infeasible problems. Our experiments show that bounding the mutual information can solve the problem: as soon as mutual information reaches the preset bound, log likelihood starts to improve.", "Thank you for your feedback.\n\nCould you add experiments that optimises the Lagrange multiplier as well? It would help strengthen the paper.", "We thank the reviewers for their time and valuable feedback. \n\n“However, an imperfection is, the paper did not provide a convincing explanation on why their view should be preferred compared to the original papers' intuition.” For example in VAE case, why this mutual information view is better than the traditional view of approximate MLE, where q is known to be the approximate posterior? A better explanation on this (and similarly for say infoGAN/infoVAE) will significantly improve the paper. Continuing on the above point, why in section 4 you turn to discuss relationship between mutual information and test-LL? How does that relate to the main point you want to present in the paper, which is to prefer MI interpretation if I understand it correctly?“\n\nOur view (optimize mutual information under distribution matching constraint) provides several understandings traditional perspectives do not provide. First, several attributes of an objective are revealed by the Lagrangian form: information preference, possible optimization methods (likelihood based or likelihood free), closure (most generic form) of model family, etc. In addition Section 4 proceeds to demonstrate two applications where the Lagrangian perspective reveal problems/features that are difficult to identify from traditional perspectives. 
\n\n\n1.Correct optimization of the Lagrangian dual requires maximization over the Lagrangian parameters. However, all existing methods use fixed (arbitrarily chosen) Lagrangian parameters. We show failure cases where this does not correctly optimize the primal problem. For example, when the primal objective is information maximization under constraints of distributional consistency, optimization with fixed Lagrangian parameters can maximize mutual information indefinitely without ever encouraging distributional consistency. As a result, data fit (distributional consistency) may even get worse during training (for example, resulting in lower test log likelihood) as mutual information is maximized. We show that this also happens in practice. \n\n2.The Lagrangian perspective allows us to explicitly weight (“price”) different (conflicting) terms in the objective. For example, suppose the input x has more dimensions than the feature space z. Then for the same per-dimension loss, the input space is weighted more than the latent space (because it has more dimensions). We show in the paper that increasing the weight on matching marginals on z can solve the problem and leads to better performance. In general, we can write out the desired preference in Lagrangian form, and then convert it into a familiar model and optimization method (in our example, this corresponds to InfoVAE with a specific hyper-parameter choice.)\n", "We thank the reviewers for their time and valuable feedback. \n\n“The main issue is that the significance is unclear.”\n\nBeyond providing an organizational principle for learning objectives (highlighting their information maximization/minimization properties and trade-offs between computational requirements and flexibility) our new perspective is useful for several reasons (Sections 3 and 4):\n\n1. We are able to characterize **all** learning objectives that can be optimized under given computational constraints (likelihood based optimization; unary likelihood free optimization; binary likelihood free optimization) providing a “closure” result. Even though we do not introduce a new learning objective, we show that (slightly generalized versions) of ten (already known) “base classes” encompass all possible objectives in each category. Therefore, in a certain sense, we show that there do not exist “new” objectives under our stated assumptions on how objectives can be constructed. \n\n2. We show that several known problems are revealed by the Lagrangian perspective and hold across the entire model family: \n\na. Correct optimization of the Lagrangian dual requires maximization over the Lagrangian parameters. However, all existing methods use fixed (arbitrarily chosen) Lagrangian parameters. We show failure cases where this does not correctly optimize the primal problem. For example, when the primal objective is information maximization under constraints of distributional consistency, optimization with fixed Lagrangian parameters can maximize mutual information indefinitely without ever encouraging distributional consistency. As a result, data fit (distributional consistency) may even get worse during training (for example, resulting in lower test log likelihood) as mutual information is maximized. We show that this also happens in practice. \n\nb. The Lagrangian perspective allows us to explicitly weight (“price”) different (conflicting) terms in the objective. For example, suppose the input x has more dimensions than the feature space z. 
Then for the same per-dimension loss, the input space is weighted more than the latent space (because it has more dimensions). We show in the paper that increasing the weight on matching marginals on z can solve the problem and leads to better performance. In general, we can write out the desired preference in Lagrangian form, and then convert it into a familiar model and optimization method (in our example, this corresponds to InfoVAE with a specific hyper-parameter choice.)\n\n“What is the importance of T1, T2, and T3 classes defined in Def. 7, i.e., why are these classes useful in solving some problems?”\n\nIt has been observed experimentally that T1 T2 and T3 are increasingly more challenging in terms of optimization stability, sensitivity to hyper-parameters, and outcome of optimization (Arjovsky et al., 2017). In particular, T1 (likelihood based, e.g. VAE) is highly stable and converges quickly, while T2/T3 methods (such as GANs) suffer from issues such as optimization stability, non-convergence. T3 is slightly more challenging than T2 because BiGAN/ALI (Dumoulin et al., 2016a; Donahue et al., 2016) tend to suffer from inaccurate inference. \n\n“Is the opposite relationship in Theorem 1, 2, and 3 true as well, e.g., is every linear combination of beta-ELBO and VMI is equivalent to a likelihood-based computable-objective of KL info-encoding family? Is the same true for other theorems?”\n\n\nYes the opposite relationship is true as well. The existing objectives enumerated in Theorem 1, 2, 3 are exactly equivalent to T1/T2/T3 computably objectives respectively.\n\n“Third, the objective of section 3 is to show that only some choices of lambda lead to a dual with a tractable equivalent form. Could you rewrite the theorems so that they truly reflect this, rather than stating something which only indirectly imply the main claim of the paper.”\n\nThe statement we supported with Theorems 1/2/3 is: only some parameters choices lead to objectives in each computability (T1/T2/T3) classes (easy vs hard to optimize). For example, only parameters choices that correspond to beta-VAE/VMI can have a likelihood-based computable equivalent form. Most objectives cannot be equivalently transformed to become a likelihood based computable objective. We have revised the paper to make the statement more clear. \n\n“Some small comments”\n\nThank you. We have revised the writing according to the advice. \n", "We thank the reviewers for their time and valuable feedback. \n\n“It would be useful if the authors could make the paper overall more coherent and targeted to answer specific problems in the literature rather than try to encompass all of them.”\n\nWe respectfully disagree. We strongly believe that identifying connections between existing methods and developing general frameworks and theories that encompass as many existing methods as possible is a fundamental scientific goal. Machine learning research is not only about developing new methods and beating benchmarks, but also achieving a deeper understanding of the strengths, weaknesses, and relationships of existing techniques. \n\n\n“What specific insights come from this that isn't possible without the proposed generalization?”\n\nBeyond providing an organizational principle for learning objectives (highlighting their information maximization/minimization properties and trade-offs between computational requirements and flexibility) our new perspective is useful for several reasons:\n\n1. 
We are able to characterize **all** learning objectives that can be optimized under given computational constraints (likelihood based optimization; unary likelihood free optimization; binary likelihood free optimization) providing a “closure” result. Even though we do not introduce a new learning objective, we show that (slightly generalized versions) of ten (already known) “base classes” encompass all possible objectives in each category. Therefore, in a certain sense, we show that there do not exist “new” objectives under our stated assumptions on how objectives can be constructed. \n\n2. We show that several problems are revealed by the Lagrangian perspective and hold across the entire model family: \n\na. Correct optimization of the Lagrangian dual requires maximization over the Lagrangian parameters. However, all existing methods use fixed (arbitrarily chosen) Lagrangian parameters. We show failure cases where this does not correctly optimize the primal problem. For example, when the primal objective is information maximization under constraints of distributional consistency, optimization with fixed Lagrangian parameters can maximize mutual information indefinitely without ever encouraging distributional consistency. As a result, data fit (distributional consistency) may even get worse during training (for example, resulting in lower test log likelihood) as mutual information is maximized. We show that this also happens in practice. \n\nb. The Lagrangian perspective allows us to explicitly weight (“price”) different (conflicting) terms in the objective. For example, suppose the input x has more dimensions than the feature space z. Then for the same per-dimension loss, the input space is weighted more than the latent space (because it has more dimensions). We show in the paper that increasing the weight on matching marginals on z can solve the problem and leads to better performance. In general, we can write out the desired preference in Lagrangian form, and then convert it into a familiar model and optimization method (in our example, this corresponds to InfoVAE with a specific hyper-parameter choice.)\n" ]
[ 4, 5, 6, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryZERzWCZ", "iclr_2018_ryZERzWCZ", "iclr_2018_ryZERzWCZ", "SJ2PZA-XM", "BycXcabXz", "BJ8bKuOlM", "SkugmHtgf", "S1ufxZqlG" ]
iclr_2018_H1wt9x-RW
Interpretable and Pedagogical Examples
Teachers intentionally pick the most informative examples to show their students. However, if the teacher and student are neural networks, the examples that the teacher network learns to give, although effective at teaching the student, are typically uninterpretable. We show that training the student and teacher iteratively, rather than jointly, can produce interpretable teaching strategies. We evaluate interpretability by (1) measuring the similarity of the teacher's emergent strategies to intuitive strategies in each domain and (2) conducting human experiments to evaluate how effective the teacher's strategies are at teaching humans. We show that the teacher network learns to select or generate interpretable, pedagogical examples to teach rule-based, probabilistic, boolean, and hierarchical concepts.
rejected-papers
The paper proposes iterative training strategies for learning teacher and student models. They show how iterative training can lead to interpretable strategies over joint training on multiple datasets. All the reviewers felt the idea was interesting, although, one of the reviewers had concerns about the experimentation. However, there is a BIG problem with this submission. The author names appear in the manuscript thus disregarding anonymity.
train
[ "Bk8IzGblG", "Hk2pegqlf", "r1RtXlk-f", "B1U8nvY7G", "H1Et5DFmf", "BJYBqDFmG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This is a well written paper on a compelling topic: how to train \"an automated teacher\" to use intuitive strategies that would also apply to humans. \n\nThe introduction is fairly strong, but this reviewer wishes that the authors would have come up with an intuitive example that illustrates why the strategy \"1) train S on random exs; 2) train T to pick exs for S\" makes sense. Such an example would dramatically improve the paper's readability.\n\nThe paper appears to be original, and the related work section is quite extensive.\n\nA second significant improvement would be to add an in-depth running example in section 3, so that the authors could illustrate why the BR strategy makes sense (Algorithm 2).", "The authors define a novel method for creating a pair of models, a student and a teacher model, that are co-trained in a manner such that the teacher provides useful examples to the student to communicate a concept that is interpretable to people. They do this by adapting a technique from computational cognitive science called rational pedagogy. Rather than jointly optimize the student and teacher (as done previously), they have form a coupled relation between the student and teacher where each is providing a best response to the other. The authors demonstrate that their method provides interpretable samples for teaching in commonly used psychological domains and conduct human experiments to argue it can be used to teach people in a better manner than random teaching. \n\nUnderstanding how to make complex models interpretable is an extremely important problem in ML for a number of reasons (e.g., AI ethics, explainable AI). The approach proposed by the authors is an excellent first step in this direction, and they provide a convincing argument for why a previous approach (joint optimization) did not work. It is an interesting approach that builds on computational cognitive science research and the authors provide strong evidence their method creates interpretable examples. They second part of their article, where they test the examples created by their models using behavioral experiments was less convincing. This is because they used the wrong statistical tests for analyzing the studies and it is unclear whether their results would stand with proper tests (I hope they will! – it seems clear that random samples will be harder to learn from eventually, but I also hoped there was a stronger baseline.).\n\nFor analysis, the authors use t-tests directly on KL-divergence and accuracy scores; however, this is inappropriate (see Jaeger, 2008; Categorical data analysis: Away from ANOVAs (transformation or not) and towards logit mixed models. Journal of Memory and Language, 59(4), 434-446.). This is especially applicable to the accuracy score results and the authors should reanalyze their data following the paper referenced above. With respect to KL-divergence, a G-test can be used (see https://en.wikipedia.org/wiki/G-test#Relation_to_Kullback.E2.80.93Leibler_divergence). I suspect the results will still be meaningful, but the appropriate analysis is essential to be able to interpret the human results.\n\nAlso, a related article: One article testing rational pedagogy in more ML contexts and using it to train ML models that is\nHo, M. K., Littman, M., MacGlashan, J., Cushman, F., & Austerweil, J. L. (NIPS 2016). Showing versus Doing. 
Teaching by Demonstration.\n\nFor future work, it would be nice to show that the technique works for finding interpretable examples in more complex deep learning networks, which motivated the current push for explainable AI in the first place.", "This paper looks at a specific aspect of the learning-to-teach problem, where the learner is assumed to have a teacher that selects training examples for the student according to a strategy. The teacher's strategy should also be learned from data. In this case the authors look at finding interpretable teaching strategies. The authors define the \"good\" strategies as similar to intuitive strategies (based on human intuition about the structure of the domain) or strategies that are effective for teaching humans. \nThe suggested method follow an iterative process in which the student and teacher are interchangeably used. At each iteration the teacher generates examples based on the students current concept. \n\nI found it very difficult to follow the claims in the paper. Why is it assumed that human intuition is necessarily good? The experiments do not answer these questions, but are designed to show that the suggested approach follows human intuition. There are not enough details to get a good grasp of the suggested method and the different choices for it, and similarly the experiments are not described in a very convincing way. Specifically - the domains picked seem very contrived, there actual results are not reported, the size of the data seems minimal so it's not clear what is actually learned.\nHow would you analyze the teaching strategy in realistic cases, where there is no simple intuitive strategy? This would be more convincing.", "Thank you for the kind words overall and the helpful remarks around the statistical tests for the human experiments. We redid the analysis of the boolean concepts human experiment (the accuracy score) following the methodology in [Jaeger, 2008] and found the result to still be highly significant (p < 0.001). \n\nOur goal in the bimodal concepts human experiment (scored with KL divergence) was to test whether humans given examples from the teacher (Group 1) learn the bimodal distributions better than humans given random examples (Group 2). We thought a natural way to quantify this would be to calculate for each concept the KL divergence between a subject’s estimated distribution and the true ground-truth distribution. We then attempted to see if there was a significant difference between the KL divergences in Group 1 and Group 2 using a t-test.\n\nWe did not see how this translates to a G-test because we want to compare the KL divergences between the two groups, rather than testing whether the human distributions are different from the theoretical, true distribution.\n\nAs you mentioned, proper statistical tests are crucial to interpret our results. We realized another obstacle to easily interpreting our results is the interpretability of the measure. It is difficult for anyone to interpret what the KL divergence really means in this context.\n\nSo, we devised a more direct measure that also lent itself to clear statistical analysis. We emphasize that we did not change any of the data in the experiment, we merely reanalyzed it with a less obfuscated measure than KL divergence.\n\nIn the experiment, subjects were shown five test lines for each concept, two of which had a high probability of being in the concept and three of which had a very low probability of being in the concept. 
They were asked to rate on a scale from 1 to 5 how likely it was that a line was in the concept. Rather, than calculating KL divergence, we calculate an accuracy score where a success is defined as rating the high-probability lines > 3 and the low-probability lines <= 3.\n\nUsing an accuracy measure leads to much more interpretable analysis. Here are the statistics for the teacher and random group:\n\nGroup 1 (Teacher exs):\nMean accuracy: 0.183, Min accuracy: 0, Max accuracy: 0.7, Standard deviation: 0.190\n\nGroup 2 (Random exs):\nMean accuracy: 0.083, Min accuracy: 0, Max accuracy: 0.2, Standard deviation: 0.058\n\nUsing this measure makes it more clear how well people are actually doing. It is a hard task because they are tested on getting the entire distribution correct. And as you can see, neither group does very well. As mentioned in the paper, we believe a reason for the low accuracy is that humans have a different, incorrect prior over the structure of concepts. In particular, it seems like many believed that the concepts were unimodal distributions, rather than bimodal.\n\nHowever, the teacher group is better on average (18% versus 8%), and has much more variance, with the best getting 70% of the questions right, whereas the best under random examples is only 20%. \n\nThese differences were highly significant (p < 0.001), as calculated using the methodology from [Jaeger, 2008]. We also now share identical analytical methodology across our two experiments.\n\nWe have uploaded a revised version of the paper with these modifications in Section 5. Thank you again for the information regarding the statistical tests. It is important to us to ensure that our analysis is solid, and we welcome any further suggestions.", "Thank you for the kind words overall and the suggestion with regards to the exposition. We realized that although we gave intuition for why the joint strategy would not produce interpretable results in the introduction, we did not give much intuition for why the best response strategy would produce interpretable results. The intuition is essentially that step one allows the student to learn the relation between examples and concepts, and then in step two, the teacher exploits this to give informative examples. Since the teacher is exploiting the relationship between concepts and examples, the examples look interpretable because they are still grounded to the concepts.\n\nFor example, for the case with the rectangles, training on random examples in step one allows the student learns that the perimeter of the rectangle must contain all examples given to it. It learns a strategy of guessing approximately the smallest rectangle that will encompass all examples. Because the student expects examples to be within the rectangle, then in step two, the teacher automatically learns to give examples that are within the rectangle, without being constrained to. Thus, the teacher learns to pick the most informative examples that are still within the rectangle, which are the corners of the rectangle.\n\nWe have incorporated this extra exposition into the updated version of the paper, in the introduction and Section 3.", "We wish to clarify that the goal of our work is to investigate the scientific question, “What would lead to human interpretable teaching strategies in neural networks?”. We focus on interpretability because it is, as pointed out by the other reviewers, important for a variety of problems [1,2,3]. 
The hypothesis that we aim to test in the paper is that training the student and teacher in an iterative fashion will lead to more interpretable strategies than training the student and teacher jointly.\n\nIn order to evaluate “interpretability”, we need to operationalize it. Unfortunately, there is no consensus on what “interpretable” means [4,5], so instead, we devise two metrics that we hope together are robust in capturing what we mean by “interpretable”. The two metrics are (1) how close the teacher strategy is to human-designed strategies in each domain (2) how effective the strategies are for teaching humans.\n\nWe believe that the reviewer’s statement “there actual results are not reported” is based on a misunderstanding of what our goal is. We do report on the above metrics, and based on these two metrics find support for our hypothesis.\n\n[1] NIPS Interpretable Machine Learning Symposium (2017) http://interpretable.ml/\n[2] DARPA Explainable AI Program https://www.darpa.mil/program/explainable-artificial-intelligence\n[3] ICML Tutorial on Interpretable Machine Learning (2017) http://people.csail.mit.edu/beenkim/icml_tutorial.html\n[3] Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning.\n[4] Lipton, Z. C. (2016). The mythos of model interpretability." ]
[ 8, 8, 4, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1 ]
[ "iclr_2018_H1wt9x-RW", "iclr_2018_H1wt9x-RW", "iclr_2018_H1wt9x-RW", "Hk2pegqlf", "Bk8IzGblG", "r1RtXlk-f" ]
iclr_2018_B13EC5u6W
Thinking like a machine — generating visual rationales through latent space optimization
Interpretability and small labelled datasets are key issues in the practical application of deep learning, particularly in areas such as medicine. In this paper, we present a semi-supervised technique that addresses both these issues simultaneously. We learn dense representations from large unlabelled image datasets, then use those representations to both learn classifiers from small labeled sets and generate visual rationales explaining the predictions. Using chest radiography diagnosis as a motivating application, we show our method has good generalization ability by learning to represent our chest radiography dataset while training a classifier on an separate set from a different institution. Our method identifies heart failure and other thoracic diseases. For each prediction, we generate visual rationales for positive classifications by optimizing a latent representation to minimize the probability of disease while constrained by a similarity measure in image space. Decoding the resultant latent representation produces an image without apparent disease. The difference between the original and the altered image forms an interpretable visual rationale for the algorithm's prediction. Our method simultaneously produces visual rationales that compare favourably to previous techniques and a classifier that outperforms the current state-of-the-art.
rejected-papers
The paper proposes a semi-supervised method to make deep learning more interpretable and at the same time be accurate on small datasets. The main idea is to learn dense representations from unlabelled data and then use those for building classifiers on small datasets as well as generate visual explanations. The idea is interesting, however, as one reviewer points out the presentation is poor. For instance, Table 2 is not understandable. Given the high standards of ICLR this cannot be ignored especially given the fact that the authors had the benefit of updating the paper which is a luxury for conference submissions.
test
[ "BJvBh8sEz", "HJ8fKmLVz", "SkjR3ZUNM", "r1MKh-INz", "rJIBvHSEz", "ByWcetHlG", "rygt6qdxM", "SkMSOTOlG", "S1Hw58XEz", "H1Qslp1XG", "BJY8gayXz", "r1QHxTJXM", "SJ2XQTJQf" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thank you for your reply - as per your request Table 2 has been updated to include some of our response to Reviewer 1 as a caption to help understand the table's contents. \n\n", "Thank you for updating the paper. I am satisfied with the changes.\n\nHowever, and as noted by the other reviewers, the description of the newly added Table 2 in the paper is very unclear. I needed to read your response to reviewer 1 to understand what was being presented. Please update the paper with a comprehensible description (ideally in the caption to Table 2). ", "Thank you for your comments - Reviewer 1 has also requested further explanation of Table 2 which we have detailed in our response to their comment below. ", "Thank you for your response to our comments - our reply to your concerns are as follows: \n\n\"The authors say they have \"included a blinded survey of domain experts in radiology [...] to address the concerns that readers may not be able to evaluate the images in Fig 4.\"\"\n\nDomain experts (radiologists) were consulted throughout our project which formed the driving force for us to produce these interpretable visual rationales. These visual rationales are intended to help domain experts build confidence in the model by demonstrating that features identified by the model correlate with features in the medical literature. \n\nIn order to show this, simply asking domain experts what features are being identified by the model is not useful - for instance the fact that Reviewer A identified that 35 out of 50 visual rationales were consistent with that of heart failure is not particularly informative without something to compare to. \n\nHence, we demonstrate that when using a model that is incorrectly trained (i.e. one that does not split its training and test data and hence overfits), generated visual rationales show less useful features, and in fact spurious features not normally used in radiograph interpretation (e.g. pacemakers). The lack of features is likely to trigger suspicion in the end user that the model may in fact be incorrectly trained.\n100 images with accompanying visual rationales were reviewed by domain experts. 50 images had their visual rationales generated by a correctly trained model and 50 by an incorrectly trained model. All images were predicted positive by their respective models. Two reviewers were sought (Reviewers A and B) who saw these 100 images twice in a randomized order, resulting in the columns A1, A2, B1 and B2. \n\nHence, out of the 50 radiographs predicted positive for heart failure in the correctly trained model, reviewer A identified 35 with cardiomegaly in their first run and 34 in the second run through. Each reviewer rated each image twice to demonstrate the intra as well as inter observer differences, which allows comparison between the visual rationales produced by the correctly and incorrectly trained models. The results, while not statistically analysed, shows that the incorrectly trained model demonstrates less recognizable features - suggesting that this may be a useful tool for end users to help decide if the model can be relied upon. \n\nWe are encouraged that you see the merit in our idea and hope that our explanation helps in the understanding of this table. \n\n\n\n\"The paper still does not properly separate the underlying theoretical idea, and specific implementational details in section 2 (Methods).\"\n\nOur paper is presented in a format that seeks reproducibility and hence follows a chronological development of each component. 
As you have pointed out, a drawback of this approach is that we do not properly separate the underlying theoretical idea from the specific implementation details, and domain specific details such as DCGANs are introduced prior to the key idea of the paper. We agree that more exposition on the underlying idea would be beneficial and further experiments could be conducted on different specific implementations in future work. \n", "Dear authors, \n\nthanks for your changes. I think overall the paper improved. \n\nThe newly added Table 2 however is entirely ununderstandable.This definitely needs a better caption and possibly more description in the text.", "* This paper models images with a latent code representation, and then tries to modify the latent code to minimize changes in image space, while changing the classification label. As the authors indicate, it lies in the space of algorithms looking to modify the image while changing the label (e.g. LIME etc).\n\n* This is quite an interesting paper with a sensible goal. It seems like the method could be more informative than the other methods. However, there are quite a number of problems, as explained below.\n\n* The explanation of eqs 1 and 2 is quite poor. \\alpha in (1) seems to be \\gamma in Alg 1 (line 5). \"L_target is a target objective which can be a negative class probability ..\" this assumes that the example is a positive class. Could we not also apply this to negative examples?\n\n\"or in the case of heart failure, predicted BNP level\" -- this doesn't make sense to me -- surely it would be necessary to target an adjusted BNP level? Also specific details should be reserved until a general explanation of the problem has been made.\n\n* The trade-off parameter \\gamma is a \"fiddle factor\" -- how was this set for the lung image and MNIST examples? Were these values different?\n\n* In typical ICLR style the authors use a deep network to learn the encoder and decoder networks. It would be v interesting (and provide a good baseline) to use a shallow network (i.e. PCA) instead, and elucidate what advantages the deep network brings.\n\n* The example of 4/9 misclassification seems very specific. Does this method also work on say 2s and 3s? Why have you not reported results for these kinds of tasks?\n\n* Fig 2: better to show each original and reconstructed image close by (e.g. above below or side-by-side).\n\nThe reconstructions show poor detail relative to the originals. This loss of detail could be a limitation.\n\n* A serious problem with the method is that we are asked to evaluate it in terms of images like Fig 4 or Fig 8. A serious study would involve domain experts and ascertain if Fig 4 conforms with what they are looking for.\n\n* The references section is highly inadequate -- no venues of publication are given. If these are arXiv give the proper ref. Others are published in conferences etc, e.g. Goodfellow et al is in Advances in Neural Information Processing Systems 27, 2014.\n\n* Overall: the paper contains an interesting idea, but given the deficiencies raised above I judge that it falls below the ICLR threshold.\n\n* Text:\n\nsec 2 para 4. 
\"reconstruction loss on the validation set was similar to the reconstruction loss on the validation set.\" ??\n\n* p 3 bottom -- give size of dataset\n\n* p 5 AUC curve -> ROC curve\n\n* p 6 Fig 4 use text over each image to better specify the details given in the caption.\n\n\n\n", "The main contribution of the paper is a method that provides visual explanations of classification decisions. The proposed method uses \n - a generator trained in a GAN setup\n - an autoencoder to obtain a latent space representation\n - a method inspired by adversarial sample generation to obtain a generated image from another class - which can then be compared to the original image (or rather the reconstruction of it). \nThe method is evaluated on a medical images dataset and some additional demonstration on MNIST is provided.\n\n\n - The paper proposes a (I believe) novel method to obtain visual explanations. The results are visually compelling although most results are shown on a medical dataset - which I feel is very hard for most readers to follow. The MNIST explanations help a lot. It would be great if the authors could come up with an additional way to demonstrate their method to the non-medical reader.\n\n - The paper shows that the results are plausible using a neat trick. The authors train their system with the testdata included which leads to very different visualizations. It would be great if this analysis could be performed for MNIST as well.\n\n\nFrom the related work, it would be nice to mention that generative models (p(x|c)) also often allow for explaining their decisions, e.g. the work by Lake and Tenenbaum on probabilistic program induction.\nAlso, there is the work by Hendricks et al on Generating Visual Explanations. This should probably also be referenced.\n\nminor comments: \n- some figures with just two parts are labeled \"from left to right\" - it would be better to just write left: ... right: ...\n- figure 2: do these images correspond to each other? If yes, it would be good to show them pairwise.\n- figure 5: please explain why the saliency map is relevant. This looks very noisy and non-interesting.\n\n", "The authors address two important issues: semi-supervised learning from relatively few labelled training examples in the presence of many unlabelled examples, and visual rationale generation: explaining the outputs of the classifiier by overlaing a visual rationale on the original image. This focus is mainly on medical image classification but the approach could potentially be useful in many more areas. The main idea is to train a GAN on the unlabeled examples to create a mapping from a lower-dimensional space in which the input features are approximately Gaussian, to the space of images, and then to train an encoder to map the original images into this space minimizing reconstruction error with the GAN weights fixed. The encoder is then used as a feature extractor for classification and regression of targets (e.g. heard disease). The visual rationales are generated by optimizing the encoded representation to simultaneously reconstruct an image close to the original and to minimize the probability of the target class. This gives an image that is similar to the original but with features that caused the classification of the disease removed. The resulting image can be subtracted from the original encoding to highlight problematic areas. 
The approach is evaluated on an in-house dataset and a public NIH dataset, demonstrating good performance, and illustrative visual rationales are also given for MNIST.\n\nThe idea in the paper is, to my knowledge, novel, and represents a good step toward the important task of generating interpretable visual rationales. There are a few limitations, e.g. the difficulty of evaluating the rationales, and the fact that the resolution is fixed to 128x128 (which means discarding many pixels collected via ionizing radiation), but these are readily acknowledged by the authors in the conclusion.\n\nComments:\n1) There are a few details missing, like the batch sizes used for training (it is difficult to relate epochs to iterations without this). Also, the number of hidden units in the 2 layer MLP from para 5 in Sec 2.\n2) It would be good to include PSNR/MSE figures for the reconstruction task (fig 2) to have an objective measure of error.\n3) Sec 2 para 4: \"the reconstruction loss on the validation set was similar to the reconstruction loss on the validation set\" -- perhaps you could be a little more precise here. E.g. learning curves would be useful.\n4) Sec 2 para 5: \"paired with a BNP blood test that is correlated with heart failure\" I suspect many readers of ICLR, like myself, will not be well versed in this test, correlation with HF, diagnostic capacity, etc., so a little further explanation would be helpful here. The term \"correlated\" is a bit too broad, and it is difficult for a non-expert to know exactly how correlated this is. It is also a little confusing that you begin this paragraph saying that you are doing a classification task, but then it seems like a regression task which may be postprocessed to give a classification. Anyway, a clearer explanation would be helpful. Also, if this test is diagnostic, why use X-rays for diagnosis in the first place?\n5) I would have liked to have seen some indicative times on how long the optimization takes to generate a visual rationale, as this would have practical implications.\n6) Sec 2 para 7: \"L_target is a target objective which can be a negative class probability or in the case of heart failure, predicted BNP level\" -- for predicted BNP level, are you treating this as a probability and using cross entropy here, or \nmean squared error?\n7) As always, it would be illustrative if you could include some examples of failure cases, which would be helpful both in suggesting ways of improving the proposed technique, and in providing insight into where it may fail in practical situations.", "I have read the responses to all reviewers, and the revised paper.\n\nI think there are still two significant problems with the paper\n\n1) The paper still does not properly separate the underlying theoretical idea, and specific implementational details in section 2 (Methods).\n\nThe key idea, that L_{target}(z) should depend on the *flipped* or *transformed* target is not articulated at all clearly. We have on p 3 only \"L_{target}(z) is a target objective which can be a class probability or a regression target\". This is inadequate.\n\nThe basic idea of the paper should be properly specified first, then details like the \"DCGAN ... with Scaled Exponential Linear Units\" (and a whole load more deep net alchemy) need to come later.\n\nWhy does this matter? 
-- because if the idea is to be applicable to other domains, all the domain-specific detail obliterates the key idea of the paper.\n\n2) The authors say they have \"included a blinded survey of domain experts in radiology [...] to address the concerns that readers may not be able to evaluate the images in Fig 4.\"\n\nThis is welcome, but Table 2 is COMPLETELY INCOMPREHENSIBLE. What are A1, A2, B1, B2? What are the 4 row labels? How do these relate to the difference images (as per Fig 5)? The authors need to EXPLAIN in the text what is going on in this table, and how this \"clearly shows that the contaiminated classifier indeed produces less interpretable visual rationales\". I am also not sure that I really care about the contaminated classifier -- what I want to know is how the domain experts were able to use the difference image to aid their interpretations.\n\nI do note the positive scores of the other reviewers. I believe there is a good idea in this paper, but I still feel it is not explained properly, nor is the important domain expert evidence properly explained. To me it is still below threshold.", "Thank you for your review and comments. \n\n1) \"The explanation of eqs 1 and 2 is quite poor. \\alpha in (1) seems to be \\gamma in Alg 1 (line 5). \"L_target is a target objective which can be a negative class probability ..\" this assumes that the example is a positive class. Could we not also apply this to negative examples?\"\n\nThank you for pointing out the errors - textual details in Alg 1 and Eqs 1 and 2 have been fixed. This method can equally be applied to negative class, one need only flip the sign of L_target to achieve this. \n\n2) \"or in the case of heart failure, predicted BNP level -- this doesn't make sense to me -- surely it would be necessary to target an adjusted BNP level? Also specific details should be reserved until a general explanation of the problem has been made.\"\n\nWe have removed the specific details at this stage of the paper. \n\n3) \"The trade-off parameter \\gamma is a \"fiddle factor\" -- how was this set for the lung image and MNIST examples? Were these values different?\"\n\nThe trade-off parameter \\gamma is indeed a ‘fiddle factor’ which was determined by the percentage of classes that were successfully switched while optimizing the latent space. As MNIST for instance is an easier problem than classifying heart failure, the classifier is more confident in predicting classes. The parameter gamma attempts to capture this by allowing more of the image to change in order to change the prediction of the classifier. In future work we hope to be able to derive a method of estimating gamma from the uncertainty of the predicted class probabilities but currently without an objective way of assessing these visual rationales we are unable to do so. \n\n4) \"In typical ICLR style the authors use a deep network to learn the encoder and decoder networks. It would be v interesting (and provide a good baseline) to use a shallow network (i.e. PCA) instead, and elucidate what advantages the deep network brings.\"\n\nAs mentioned in the original paper, we did not test other methods of encoding and decoding images, for instance variational autoencoders or as suggested, shallower methods such as PCAs. 
However since the first draft of the paper, we have tried vanilla autoencoders as well as VAEs which fail to demonstrate the same ability to reconstruct images to the level of detail required - and we believe that PCA would run into similar obstacles.\n\n5) \"The example of 4/9 misclassification seems very specific. Does this method also work on say 2s and 3s? Why have you not reported results for these kinds of tasks?\"\n\nThis method also works for different number sets, including 2 and 3, however with differing rates of success. We have included a set of 3s to 2s in the updated version of our paper to illustrate this. As mentioned in the reply to Reviewer 3, this type of failure is observed more in digits that are less similar to each other, such as from converting from the digits 3 to 2, as simply removing the lower curve of the digit may not always result in a centered \"two\" digit. This precludes the simple interpretation that we are able to attribute to the 9 to 4 task. \n\n6) \"Fig 2: better to show each original and reconstructed image close by (e.g. above below or side-by-side). The reconstructions show poor detail relative to the originals. This loss of detail could be a limitation.\"\n\nFigure 2 has been updated with your suggestion that the reconstructions be presented side by side for easier evaluation. You are correct in that the loss of detail could be a limitation - in fact we chose the training method we used (pretraining a GAN as the decoder part of an autoencoder) to preserve as much detail as possible (at the time of writing). The loss of detail means that our model is unable to explain predictions based on finer detail and we hope that future advances in generative learning will help overcome this. \n\n7) \"A serious problem with the method is that we are asked to evaluate it in terms of images like Fig 4 or Fig 8. A serious study would involve domain experts and ascertain if Fig 4 conforms with what they are looking for.\"\n\nWe have included a blinded survey of domain experts in radiology in our revised paper to address the concern that readers may not be able to evaluate the images in Fig 4. This clearly demonstrates that the contaminated classifier produces visual rationales with fewer relevant features. \n\n8) \"The references section is highly inadequate -- no venues of publication are given. If these are arXiv give the proper ref. Others are published in conferences etc, e.g. Goodfellow et al is in Advances in Neural Information Processing Systems 27, 2014.\"\n\nOur references have been updated to include venues of publication as far as possible. \n", "Thank you for your review and comments. We were unaware of the work by Hendricks et al on Generating Visual Explanations and have sought to reference this in our discussion. \n\nIn response to your comments: \n\n1) \"Some figures with just two parts are labeled \"from left to right\" - it would be better to just write left: ... right: …\"\n2) \"Figure 2: do these images correspond to each other? If yes, it would be good to show them pairwise.\"\n\nWe have rewritten our figure caption labels and also rearranged Figure 2 to demonstrate the original and reconstructed images pairwise for ease of comparison. \n\n2) \"Figure 5: please explain why the saliency map is relevant. This looks very noisy and non-interesting”\n\n In Figure 5, the saliency map is indeed noisy and this serves to illustrate the deficiencies of the saliency map compared to the visual rationale generated using our method. 
We have added a statement in our paper to reflect this. \n", "Thank you for the comments and your review. Your description of our process is accurate. We have addressed each of your comments.\n\n1) “There are a few details missing, like the batch sizes used for training (is it difficult to relate epochs to iterations without this). Also, the number of hidden units in the 2 layer MLP from para 5 in sec 2” \n\nIn this updated version, we have included batch sizes and the number of hidden units in our methods section.\n\n2) “It would be good to include PSNR/MSE figures for the reconstruction task (fig 2) to have an objective measure of error” \n3) “Sec 2 para 4: the reconstruction loss on the validation set was similar to the reconstruction loss on the validation set -- perhaps you could be a little more precise here. E.g. learning curves would be useful.\"\n\n\nWe have included additional figures showing the Laplacian loss functions for training and testing sets as well as corresponding MSE figures. This illustrates our point that when the decoder is fixed, overfitting for the autoencoder is not observed. \n\n4) \"Sec 2 para 5: paired with a BNP blood test that is correlated with heart failure\" I suspect many readers of ICLR, like myself, will not be well versed in this test, correlation with HF, diagnostic capacity, etc., so a little further explanation would be helpful here. The term \"correlated\" is a bit too broad, and it is difficult for a non-expert to know exactly how correlated this is. It is also a little confusing that you begin this paragraph saying that you are doing a classification task, but then it seems like a regression task which may be postprocessed to give a classification. Anyway, a clearer explanation would be helpful. Also, if this test is diagnostic, why use X-rays for diagnosis in the first place?\"\n\nWe have updated the BNP section to clarifying some important points that you've brought up. Even in the medical literature, the diagnosis of heart failure is not well defined and usually relies on a mix of patient symptoms, BNP results, and radiology. Whilst not readily available in every hospital services, BNP serves as an objective measure to diagnose heart failure and is being increasingly used by clinicians. Hence these are useful to predict as they represent an objective label for the chest X-ray, whereas current deep learning methods tend to utilize radiologist reports of the X-ray image which can often omit diagnoses that were deemed irrelevant by the radiologist. \n\nBNP levels are continuous and hence we train our network as a regression task, however we evaluate this using AUC as clinicians are often interested specifically if BNP levels are over a laboratory-defined threshold, and AUC is often the metric used in the medical literature for comparing the diagnostic capacities of different tests. Lastly, BNP tests are not available in all laboratories and may take a while to return while chest X-ray images are easily available although tricky to interpret, even for medical doctors, as outlined in Kennedy et al (2011).\n\n5) \"I would have liked to have seen some indicative times on how long the optimization takes to generate a visual rationale, as this would have practical implications.\"\n\nIndicative times have been added in our results section as well. 
Times may vary depending on the confidence of the classifier as inputs that do not lie close to the target class may take more steps to convert or in fact may fail to convert if the maximum number of steps have been completed. \n\n6) \"Sec 2 para 7: L_target is a target objective which can be a negative class probability or in the case of heart failure, predicted BNP level -- for predicted BNP level, are you treating this as a probability and using cross entropy here, or mean squared error?\"\n\nFor predicted BNP level we are using mean squared error - as the network was trained on the regression task of predicting the BNP level\n\n\n7) \"As always, it would be illustrative if you could include some examples of failure cases, which would be helpful both in suggesting ways of improving the proposed technique, and in providing insight into where it may fail in practical situations.\"\n\nWe have included (also based on the suggestions of Reviewer 1) other examples on MNIST - in particular changing the predicted class from 3 to 2. This is a significantly harder task as most digits are centered in the MNIST dataset and hence we cannot simply remove the bottom curve of the 3 to convert it to a 2, as we can with a 9 to a 4. This generates several failure cases where the algorithm instead converts the 3 into something else, or fails to convert it at all. \n", "We would like to thank all the reviewers for their comments and contributions. The paper has been modified with several suggestions from all reviewers included, namely: \n\n* Added domain expert ratings for visual rationales produced by correctly and incorrectly trained algorithms\n* Added figure for autoencoder training v.s. validation loss functions \n* Symbols fixed in equations and algorithms\n* References updated with correct publication venues \n* Textual and spelling errors corrected\n* Added section explaining the choice of BNP as the label for chest X-rays and the real world applications of this\n* Edited figure 2 so that original and reconstructed images are displayed pairwise\n* Add indicative times in Results (Sec 3) \n* Added ChestX-ray8 dataset size\n* Added batch sizes and hidden units for classification MLP\n* Added references to ‘Generating Visual Explanations’ in Discussion \n* Edited caption and accompanying text for Figure 5 to explain why the saliency map is more relevant \n* Included additional MNIST examples as well as failure cases.\n" ]
[ -1, -1, -1, -1, -1, 4, 8, 7, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 3, 2, 4, -1, -1, -1, -1, -1 ]
[ "HJ8fKmLVz", "r1QHxTJXM", "rJIBvHSEz", "S1Hw58XEz", "BJY8gayXz", "iclr_2018_B13EC5u6W", "iclr_2018_B13EC5u6W", "iclr_2018_B13EC5u6W", "H1Qslp1XG", "ByWcetHlG", "rygt6qdxM", "SkMSOTOlG", "iclr_2018_B13EC5u6W" ]
iclr_2018_ByYPLJA6W
Distribution Regression Network
We introduce our Distribution Regression Network (DRN), which performs regression from input probability distributions to output probability distributions. Compared to existing methods, DRN learns with fewer model parameters and easily extends to multiple input and multiple output distributions. On synthetic and real-world datasets, DRN performs similarly to or better than the state of the art. Furthermore, DRN generalizes the conventional multilayer perceptron (MLP). In the framework of MLP, each node encodes a real number, whereas in DRN, each node encodes a probability distribution.
rejected-papers
The paper proposes a method to map input probability distributions to output probability distributions with few parameters. The authors show the efficacy of their method on synthetic and real stock data. After revision they appear to have added another dataset; however, it is not analyzed as carefully as the stock data. More rigorous experimentation needs to be done to justify the method.
train
[ "B10sNZ9gM", "ByjbpsngM", "B1LsdVTlf", "HJtDnjDGG", "HJSgXjgGz", "SkS3fjgfG", "SJs8MoxfM", "Sym4GjlGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The paper considers distribution to distribution regression with MLPs. The authors use an energy function based approach. They test on a few problems, showing similar performance to other distribution to distribution alternatives, but requiring fewer parameters.\n\nThis seems to be a nice treatment of distribution to distribution regression with neural networks. The approach is methodological similar to using expected likelihood kernels. While similar performance is achieved with fewer parameters, it would be more enlightening to consider accuracy vs runtime instead of accuracy vs parameters. That’s what we really care about. In a sense, because this problem has been considered several times in slightly different model classes, there really ought to be a pretty strong empirical investigation. In the discussion, it says \n“For future work, a possible study is to investigate what classes of problems DRN can solve.” It feels like in the present work there should have been an investigation about what classes of problems the DRN can solve. Its practical utility is questionable. It’s not clear how much value there is adding yet another distribution to distribution regression approach, this time with neural networks, without some pretty strong motivation (which seems to be lacking), as well as experiments. In the introduction, it would also improve the paper to outline clear points of methodological novelty. \n", "Summary:\n\nThis paper presents a new network architecture for learning a regression of probability distributions.\n\nThe distribution output from a given node is defined in terms of a learned conditional probability function, and the output distributions of its input nodes. The conditional probability function is an unnormalized distribution with the same form as the Boltzman distribution, and distributions are approximated from point estimates by discretizing the finite support into predefined equal-sized bins. By letting the conditional distribution between nodes be unnormalized, and using an energy function that incorporates child nodes independently, the approach admits efficient computation that does not need to model the interaction between the distributions output by nodes at a given level.\n\nUnder these dynamics and discretization, the chain rule can be used to derive a matrix of gradients at each node that denotes the derivative of the discretized output distribution with respect to the current node's discretized distribution. These gradients are in turn used to calculate updates for the network parameters with respect to the Jensen Shannon divergence between the predicted distribution and a target distribution.\n\nThe approach is evaluated on three tasks, two synthetic and one real world. The baselines are the state of the art triple basis estimator (3BE) or a standard MLP that represents the output distribution using a softmax over quantiles. On both of the synthetic tasks --- which involve predicting gaussians --- the proposed approach can fit the data reasonably using far fewer parameters than the baselines, although 3BE does achieve better overall performance. On a real world task that involves predicting a distribution of future stock market prices from multiple input stock marked distributions, the proposed approach significantly outperforms both baselines. 
However, this experiment uses 3BE outside of its intended use case --- which is for a single input distribution --- so it's not entirely clear how well the very simple proposed model is doing.\n\nNotes to authors:\n\nI'm not familiar with 3BE but the fact that it is used outside of its intended use case for the stock data is worrying. How does 3BE perform at predicting the FTSE distribution at time t + k from the FTSE distribution at time t only? Do the multiple input distributions actually help?\n\nYou use a kernel density estimate with a Gaussian kernel function to estimate the stock market pdf, but then you apply your network directly to this estimate. What would happen if you built more complex networks using the kernel values themselves as inputs?\n\nCould you also run experiments on the real-world datasets used by the 3BE paper?\n\nWhat is the structure of the DRN that uses > 10^3 parameters (from Fig. 4)? The width of the network is bounded by the two input distributions, so is this network just incredibly deep? Also, is it reasonable to assume that both the DRN and MLP are overfitting the toy task when they have access to an order of magnitude more parameters than datapoints?\n\nIt would be nice if section 2.4 was expanded to actually define the cost gradients for the network parameters, either in line or in an appendix.", "This is an intriguing paper on running regressions on probability distributions: i.e. a target distribution is expressed as a function of input distributions. A well-written manuscript, though the introduction could have motivated the problem a little better (i.e. why would we want to do this). The novelty in the paper is implementing such a regression in a layered network. The paper shows how the densities at each node are computed (and normalised). Optimisation by back propagation and discretization of the densities to carry out numerical integration are well explained and easy to follow. The paper uses three problems to illustrate the idea -- a synthetic dataset, a mean-reverting stochastic process and a prediction problem on stock indices. \nI have only two reservations about this paper. First, the illustration on the stock index data -- it seems to me, returns on individual constituent stocks of an index are used as samples of the return on the index itself. But this cannot be true when the index is a weighted sum of the constituent assets. Secondly, it is not clear to me why one would force a kernel density estimate on the asset returns and then bin the density into 100 bins for numerical reasons -- does the smoothing that results from this give any advantage over a histogram of the returns in 100 bins?\n ", "We have completed our experiment on the real-world cell dataset used by the LSE paper (Oliva et al., 2013) and have added it to Section 4 of our paper. Similar to the findings in the stock dataset, DRN achieved the highest log-likelihood compared to MLP and 3BE, using fewer model parameters.", "We appreciate your feedback and suggestions. We address your concerns in the following points:\n\n1) While similar performance is achieved with fewer parameters, it would be more enlightening to consider accuracy vs runtime instead of accuracy vs parameters.\n\nThank you for the suggestion; we have included a comparison of the runtimes in Appendix C. DRN’s runtime is competitive compared to the other methods. 
MLP has the fastest prediction time, followed by DRN and then 3BE.\n\n2) It feels like in the present work there should have been an investigation about what classes of problems the DRN can solve.\n\nIn this paper, we have shown DRN to work well for various forms of univariate distributions (e.g. unimodal and bimodal distributions, symmetric and asymmetric distributions), and for a variety of mappings as described in the synthetic dataset (peak splitting, peak shifting, peak spreading, peak coalescence). For future work, we look to extend DRN to other classes of the distribution regression task (e.g. distribution-to-real regression, distribution classification) and to handle multivariate distributions. We have improved on our explanation of future work in the conclusion.\n\n3) It’s not clear how much value there is in adding yet another distribution-to-distribution regression approach, this time with neural networks, without some pretty strong motivation (which seems to be lacking), as well as experiments.\n\nWe appreciate your concern about the value of our proposed method in comparison with the existing works. We address this in the following, and have included the comparisons in the revised manuscript. As far as we know, these are the works related to distribution-to-distribution regression which we have cited in our paper:\n1.\tLinear Smoother Estimator (LSE) by Oliva et al., 2013\n2.\tTriple-Basis Estimator (3BE) by Oliva et al., 2015\n3.\tExtrapolating the Distribution Dynamics (EDD) by Lampert, 2015\n4.\tMultilayer perceptron (MLP): has not been used for distribution-to-distribution regression in the literature; we have adapted it for this task\n\nLSE is an instance-based learning method which does not scale well with data size, whereas the inference time of our proposed method is independent of training size. EDD addresses a specific task of predicting the future state of a time-varying probability distribution, and it is unclear how it can be used for a more general case of regressing between distributions of different objects.\n\nIn addition, LSE and EDD are designed for single input, single output regressions and their effectiveness on multiple input, multiple output distributions is unclear. In comparison, our proposed Distribution Regression Network’s (DRN) network architecture gives it the flexibility to handle an arbitrary number of input and output distributions.\n\nThe Distribution Regression Network is able to learn the regression mappings with fewer model parameters compared to 3BE and MLP. In 3BE, the number of model parameters scales linearly with the number of projection coefficients of the distributions and the number of Random Kitchen Sink features. From our experiments, DRN is able to achieve similar or better regression performance using fewer parameters than 3BE.\n\nThough both DRN and MLP are network-based methods, they encode the distribution very differently – in DRN, each node encodes a distribution, while in MLP, each node encodes a real number corresponding to a bin of the distribution. By generalizing each node to encode a distribution, each distribution is treated as a single object which is then processed by the connecting weight. Thus, the propagation behavior in DRN is much richer, enabling DRN to represent the regression mappings with fewer parameters. \n\n4) In the introduction, it would also improve the paper to outline clear points of methodological novelty.\n\nThank you for the suggestion. Our methodological novelty is as discussed in the earlier comparison with other methods. 
We have added points of methodological novelty in the introduction.", "Thank you for your comments and suggestions. Here are our replies:\n\n1) However, this experiment uses 3BE outside of its intended use case --- which is for a single input distribution --- so it's not entirely clear how well the very simple proposed model is doing. \n\nWe would like to clarify that the authors of 3BE performed a regression on multiple input functions for one of their experiments (see the joint motion prediction experiment in Oliva et al., 2015). We followed their method to extend to multiple input distributions by concatenating the basis coefficients from the input distributions. Hence, this is not considered using 3BE outside of its intended use case. We have updated our manuscript to explain this more clearly, in Section 3.3, paragraph 4.\n\n2) For the stock dataset, we reported that 3BE’s predicted distributions have means that are near zero for all test data. We found out this was because we followed the 3BE paper and mistakenly applied the cosine basis estimator directly on the return samples without any pre-processing. The stock return values are small (range of [-0.015, 0.015]) and centered around zero, and cosine basis functions are symmetric with respect to the y-axis. Hence, all of the estimated distributions are centered around zero. We have corrected this by rescaling the returns to the range [0, 1] before applying the cosine basis projection. The new results are shown in our revision. While 3BE’s accuracy has improved and the predicted distribution means are more reasonable (see Fig. 7), overall 3BE’s accuracy still lags behind DRN and MLP.\n\n3) How does 3BE perform at predicting the FTSE distribution at time t + k from the FTSE distribution at time t only? Do the multiple input distributions actually help? \n\nFor a single input distribution, 3BE’s performance is still lagging behind the other methods. We observe that having multiple inputs improved the accuracy for DRN, but not for the other two methods.\n\nLog-likelihood on test set for next-day prediction:\nDRN single input: 474.37 +- 0.01\nDRN multiple input: 474.43 +- 0.01\nMLP single input: 471.81 +- 0.09\nMLP multiple input: 471.50 +- 0.08\n3BE single input: 467.46 +- 0.93\n3BE multiple input: 466.76 +- 0.73\n\n4) What would happen if you built more complex networks using the kernel values themselves as inputs? \n\nWe would like to clarify what you meant by ‘kernel values’ in your comments. Do you mean to feed the companies’ returns directly into a more complex network without first estimating the distribution?\n\n5) Could you also run experiments on the real-world datasets used by the 3BE paper?\n\nThe 3BE paper experiments on multivariate distributions. However, our current model is designed for univariate distributions; multivariate distributions are planned for future work. The LSE paper (Oliva et al., 2013) experimented on a dataset of time series microscopy images of cells. Each image frame contains a number of cells. For each time frame, given the distribution of the long-axis length of cells, they predict the distribution of short-axis length. We are currently running experiments on a similar dataset and will add it to our paper.\n\n6) What is the structure of the DRN that uses > 10^3 parameters (from Fig. 4)? 
The width of the network is bounded by the two input distributions, so is this network just incredibly deep?\n\nWe would like to clarify that for the first synthetic dataset, each input and output distribution is a weighted sum of two Gaussians and not simply a Gaussian distribution, as illustrated in Fig. 3(b). Hence there is one input node and one output node connected by hidden layers of arbitrary width. All layers are fully-connected. We have included the network structure for DRN in Appendix B.\n\n7) Also, is it reasonable to assume that both the DRN and MLP are overfitting the toy task when they have access to an order of magnitude more parameters than datapoints?\n\nFor DRN and MLP, there is no significant overfitting, as the gaps between train and test loss are small. We have included the training losses in Fig. 4(b).\n\n8) It would be nice if section 2.4 was expanded to actually define the cost gradients for the network parameters, either in line or in an appendix.\n\nThank you for the suggestion; we have expanded the cost gradient derivations and included them in Appendix A.", "We appreciate your succinct summary of our paper and your comments. We have added stronger motivations for the distribution-to-distribution task and a better explanation of our model’s novelty. Please refer to the introduction of the revised paper.\n\n1) it seems to me, returns on individual constituent stocks of an index are used as samples of the return on the index itself. But this cannot be true when the index is a weighted sum of the constituent assets.\n\nThe stock index is a single number defined by the weighted sum of constituent stock prices. In this paper, we are not concerned with the return of the stock index, nor do we form a distribution of it. Instead, we work on the distribution of the stock returns of constituent companies of the index, with individual company returns used as samples. We have provided a more accurate description of the dataset in the revised paper, in Section 3.3.\n\n2) it is not clear to me why one would force a kernel density estimate on the asset returns and then bin the density into 100 bins for numerical reasons -- does the smoothing that results from this give any advantage over a histogram of the returns in 100 bins?\n\nWe use a kernel density estimate on the return samples as it provides a principled way to account for the uncertainty in sample measurements, where the kernel width correlates to the extent of uncertainty. In the case of measuring stock returns, we use the closing price to estimate the daily stock price of a company. Furthermore, a practical reason is that using histogram binning may result in empty bins, especially when the number of samples is small. It also causes discontinuities in the estimation of the probability distribution.", "Thank you for your constructive comments. We have replied to the individual reviews separately and revised our paper accordingly.\n\nWe would like to highlight that we have previously misrepresented 3BE’s performance on the stock dataset. We reported that 3BE’s predicted distributions have means that are near zero for all test data. We found out this was because we followed the 3BE paper and mistakenly applied the cosine basis estimator directly on the return samples without any pre-processing. The stock return values are small (range of [-0.015, 0.015]) and centered around zero, and cosine basis functions are symmetric with respect to the y-axis. Hence, all of the estimated distributions are centered around zero. 
We have corrected this by rescaling the returns to the range [0, 1] before applying the cosine basis projection. The new results are shown in our revision. While 3BE’s accuracy has improved and the predicted distribution means are more reasonable (see Fig. 7), overall 3BE’s accuracy still lags behind DRN and MLP.\n\nWe have also conducted more runs over different random seeds for all three methods on the stock dataset to obtain lower standard errors." ]
[ 5, 7, 7, -1, -1, -1, -1, -1 ]
[ 4, 2, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByYPLJA6W", "iclr_2018_ByYPLJA6W", "iclr_2018_ByYPLJA6W", "SkS3fjgfG", "B10sNZ9gM", "ByjbpsngM", "B1LsdVTlf", "iclr_2018_ByYPLJA6W" ]
iclr_2018_ry9tUX_6-
Entropy-SGD optimizes the prior of a PAC-Bayes bound: Data-dependent PAC-Bayes priors via differential privacy
We show that Entropy-SGD (Chaudhari et al., 2017), when viewed as a learning algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior) classifier, i.e., a randomized classifier obtained by a risk-sensitive perturbation of the weights of a learned classifier. Entropy-SGD works by optimizing the bound’s prior, violating the hypothesis of the PAC-Bayes theorem that the prior is chosen independently of the data. Indeed, available implementations of Entropy-SGD rapidly obtain zero training error on random labels and the same holds of the Gibbs posterior. In order to obtain a valid generalization bound, we show that an ε-differentially private prior yields a valid PAC-Bayes bound, a straightforward consequence of results connecting generalization with differential privacy. Using stochastic gradient Langevin dynamics (SGLD) to approximate the well-known exponential release mechanism, we observe that generalization error on MNIST (measured on held out data) falls within the (empirically nonvacuous) bounds computed under the assumption that SGLD produces perfect samples. In particular, Entropy-SGLD can be configured to yield relatively tight generalization bounds and still fit real labels, although these same settings do not obtain state-of-the-art performance.
rejected-papers
The paper proposes a new analysis of the optimization method called entropy-sgd which seemingly leads to more robust neural network classifiers. This is a very important problem if successful. The reviewers are on the fence with this paper. On the one hand they appreciate the direction and theoretical contribution, while on the other they feel the assumptions are not clearly elucidated or justified. This is important for such a paper. The author responses have not helped in alleviating these concerns. As one of the reviewers points out, the writing needs a massive overhaul. I would suggest the authors clearly state their assumptions and corresponding justifications in future submissions of this work.
train
[ "Skza1ggrG", "Bk1HygxSM", "ryq2cm9xG", "r1dNqr9xf", "Hy0bdarZG", "Hk9z76OfG", "BkbCG6dzM", "SJMcMTOMz", "rJXPfp_GM" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "We revised our paper considerably over a month ago. We have since had a long back and forth conversation with AnonReviewer3 discussing the privacy approximation, which seems to have addressed their misgivings. \n\nWe would much appreciate it if you could update your reviews and/or score. ", "Dear AnonReviewer1,\n\nWe have addressed all your concerns. We've also had a lengthy conversation with AnonReviewer3 around the privacy approximation. That reviewer appears to be now convinced of the reasonableness of our approximation. \n\nWe would very much appreciate if you could update your reviews/scores.\n", "This paper connects Entropy-SGD with PAC-Bayes learning. It shows that maximizing the local entropy during the execution of Entropy-SGD essentially minimize a PAC-Bayes bound on the risk of the Gibbs posterior. Despite this connection, Entropy-SGD could lead to dependence between prior and data and thus violate the requirement of PAC-Bayes theorem. The paper then proposes to use a differentially private prior to get a valid PAC-Bayes bound with SGLD. Experiments on MNIST shows such algorithm does generalize better.\n\nLinking Entropy-SGD to PAC-Bayes learning and making use of differential privacy to improve generalization is quite interesting. However, I'm not sure if the ideas and techniques used to solve the problem are novel enough.\nIt would be better if the presentation of the paper is improved. The result in Section 4 can be presented in a theorem, and any related analysis can be put into the proof. Section 5 about previous work on differentially private posterior sampling and stability could follow other preliminaries in Section 2. The figures are a bit hard to read. Adding sub-captions and re-scaling y-axis might help.\n", "1) I would like to ask for the clarification regarding the generalization guarantees. The original Entropy-SGD paper shows improved generalization over SGD using uniform stability, however the analysis of the authors rely on an unrealistic assumption regarding the eigenvalues of the Hessian (they are assumed to be away from zero, which is not true at least at local minima of interest). What is the enabling technique in this submission that avoids taking this assumption? (to clarify: the analysis is all-together different in both papers, however this aspect of the analysis is not fully clear to me).\n2) It is unclear to me what are the unrealistic assumptions made in the paper. Please, list them all in one place in the paper and discuss in details.\n", "Brief summary:\n Assume any neural net model with weights w. Assume a prior P on the weights. PAC-Bayes risk bound show that for ALL other distributions Q on the weights, the the sample risk (w.r.t to the samples in the data set) and expected risk (w.r.t distribution generating samples) of the random classifier chosen according to Q, averaged over Q, are close by a fudge factor that is KL divergence of P and Q scaled by m^{-1} + some constant.\n\nNow, the authors first show that optimizing the objective of the Entropy SGD algorithm is equivalent to optimizing the empiricial risk term + fudge term over all data dependent priors P and the best Q for that prior. However, PAC-Bayes bound holds only when P is NOT dependent on the data. So the authors invoke results from differential privacy to show that as long as the prior choosing mechanism in the optimization algorithm is differentially private with respect to data, differentially private priors can be substituted for valid PAC-Bayes bounds rectifying the issue. 
They show that when Entropy-SGD is implemented with pure Gibbs sampling steps (as in Algorithm 3), the bounds hold.\n\nA weakness that remains is that the Gibbs sampling step in Entropy-SGD (as in Algorithm 3 in the appendix) is actually approximated by samples from SGLD, which converges to this Gibbs distribution only when run for infinitely many steps. The authors leave this hole unsolved. But under the very strong sampling assumption, the bound holds. The authors do some experiments with MNIST to demonstrate that their bounds are not trivial. \n\nStrengths:\n The simple connection between the PAC-Bayes bound and the Entropy-SGD objective is the first novelty. Invoking results from differential privacy to fix the issue of validity of the PAC-Bayes bound is the second novelty. Although technically the paper is not very deep, leveraging existing results (with strong assumptions) to show generalization properties of Entropy-SGD is good.\n\nWeakness:\n a) Obvious issue: the analysis assumes the strong Gibbs sampling step.\n b) Experimental results are OK. I see that the bounds computed are non-vacuous -- but can the authors clarify what exactly they seek to justify? \n c) Typos: \n Page 4 footnote \"the local entropy should not be <with>..\" -- \"with\" is missing.\n Eq 14 typo -- r(h) instead of e(h) \n Definition A.2 in appendix -- must have S and S' in the inequality; both seem to be S.\n\nd) Most important clarification: The way Thm 5.1, 5.2 and the exact Gibbs sampling step connect with each other to produce Thm 6.1 is in Thm B.1. How do multiple calls on the same data sample not degrade the loss? An explanation is needed, because the whole process of optimization in TRAIN with many steps is the final 'data-dependent prior-choosing mechanism' that has to be shown to be differentially private. Can the authors argue why the number of iterations of this does not matter at all? If I run this long enough, and if I get several w's in the process (like step 8 repeated many times in Algorithm 3), I should have more leakage about the data sample S, intuitively, right?\n\ne) The paper is unclear in many places. The intro could be better written to highlight the connection, at the expression level, between the PAC-Bayes bound and the Entropy-SGD objective, and the subsequent fix using a differentially private prior-choosing mechanism to make the connection provably correct. Why are all the algorithms on which the theorems in the paper are claimed relegated to the appendix?\n\nFinal decision: I waver between 6 and 7 actually. However, I am willing to upgrade to 7 if the authors can provide sound arguments addressing my above concerns.", "This comment summarizes the major changes we made to the document while addressing the reviewers' comments. We have also crafted responses to each individual reviewer.\n\nWe took all of the reviewers’ comments seriously and made extensive edits to the article. Some of the major changes include:\n\n1. stating our main results as theorems and writing up the analysis in the form of a proof. This should make our contributions clearer to readers. These results include: i) the connection between Entropy-SGD optimization and PAC-Bayes prior optimization, ii) our differentially private PAC-Bayes bound, iii) our privacy analysis for the data-dependent prior.\n\n2. giving a single unified description of the Entropy-SGD and Entropy-SGLD algorithms, so the difference is obvious.\n\n3. rewriting our differential privacy analysis, to make it easier for the reader to understand our assumptions/approximations.\n\n4. 
adding experiments comparing SGLD and Entropy-SGD at different levels of thermal noise, which highlights the role of thermal noise in generalization and the difference between empirical risk minimization and local entropy maximization.\n\n5. discussing the relationship between our differentially private PAC-Bayes priors and data-distribution-dependent priors. \n", "Thank you for your feedback. \n\nYou raise two issues regarding novelty and clarity/presentation. We will address these in turn.\n\nRegarding novelty. We have recently presented this work to experts at a PAC-Bayes workshop. They expressed great interest in our results using differential privacy, and we have fielded a number of requests for preprints. Our private data-dependent priors can be viewed as a new type of data-distribution-dependent prior. The classical technique for dealing with data-distribution-dependent priors is due to Catoni and Lever et al., but these techniques have only been applied to Gibbs distributions, whereas our approach offers much more flexibility. We now explain this connection more carefully in the related work section. We believe that our approach opens up the avenue to more advanced uses of stable, data-dependent priors. \n\nBeyond connecting PAC-Bayes theory and privacy, our work makes a number of other contributions: \n- We reveal the importance of the thermal noise to the generalization performance of Entropy-SGD, and tie this parameter to stability/privacy. We also make a detailed study of the role of thermal noise in overfitting on MNIST, not only for Entropy-SGD, but also for SGLD and Entropy-SGLD.\n- We identify the deep connection between Entropy-SGD and PAC-Bayes bounds, which guides us to new ways to improve the generalization performance of Entropy-SGD. Our modifications lead to new learning algorithms that do not overfit, yet still have very good risk.\n- We obtain risk/generalization bounds for neural networks that, up to our privacy approximation, are much tighter than any bounds previously published for MNIST.\n\nRegarding clarity/presentation. We have rewritten several sections in the paper using your feedback as a guideline. Our connection between Entropy-SGD and PAC-Bayes priors is now stated as a theorem and our argument is now structured as a proof. Our derivations concerning privacy are now also organized into a theorem in Section 5. Indeed, Section 5 has been reworked from the ground up to have a much clearer logical structure. We have reproduced all figures with larger fonts and careful attention to readability. The organization of Figure 1 now makes it immediately clear which figures are on true or random labels, and which algorithms are being compared.", "Thank you for your questions. We'll address both in turn, paraphrasing each as we understood it. We close with two remarks about uniform stability.\n\n1. In our paper we repeat the statement by Chaudhari et al. that their analysis has some violated assumptions about curvature. How does our result sidestep this issue with the curvature?\n\nOur PAC-Bayes bound is tight provided that the KL(Q||P) term is small. In our case, P is a Gaussian whose mean is differentially private. Q is then the corresponding Gibbs posterior. Whether the empirical risk surface near the mean of P is exactly flat or nearly flat does not matter. In both cases, Q and P will be nearly identical and KL(Q||P) will be very small. This is what we find empirically.\n\n2. 
What are the \"unrealistic assumptions\" you refer to?\n\nThere is one approximation made in our paper: our \"privacy approximation\". We now discuss this approximation in the Section 1, Introduction; Section 5, Data-dependent PAC-Bayes priors via differential privacy; Section 6, Numerical results on MNIST, and Section 7, Discussion. \n\nOur approximation is as follows (Section 1 and especially 5 give these details): The gold standard way to minimize a bounded function f is the exponential mechanism, namely generating a sample from the distribution with density exp(- c*f) where c > 0 is a constant. The bound on f and c determine the privacy. However, if f is high-dimensional and nonconvex, then exact sampling can be intractable. SGLD is a way to get an approximate sample, and it is know that the longer you run SGLD, the better the approximation. We approximate the exponential mechanism (i.e., an exact sample), with an approximate sample from SGLD, and calculate the privacy as if we got an exact sample. Differential privacy is a worst case framework and so we might not notice this approximation on \"nice\" data. An adversary might be able to exploit our approximation if they could carefully craft the data distribution. In the text, we point out that our bounds may be optimistic as a result, but they behave in a way that the theory predicts, and so we can still learn something from studying them.\n\nFinally, we'll make two remarks about the uniform stability of SGD and Entropy-SGD.\n\nFirst, the stability analysis in Chaudhari et al.'s Entropy-SGD paper does not account for the thermal noise required to get reasonable empirical results. Once you add in the amount of thermal noise they were advocating in their experiments, their results flips: Entropy-SGD is less stable. Our results actually point to using less thermal noise in order to get good generalization at the cost of excess empirical risk.\n\nSecond, in the now well-known \"Rethinking generalization\" paper by Zhang, et al 2017, the authors, which include Hardt and Recht themselves, say that the uniform stability result cannot explain the difference between the performance on random and true labels, because stability does not care about the labels. The uniform stability bounds degrade to vacuous bouds after several passes through the data. The same issues are relevant to the stability analyses of Entropy-SGD.\n", "Thank you for the comments and pinpointing several typos. \n\nWe have made an extensive rewrite to address the weaknesses you identified. We will respond to each of them, but in a different order.\n\n(e) and (c) Regarding clarity/presentation and typos.\n\nWe have rewritten and rearranged much of the paper to improve the logical structure. We have also addressed all the typos. \n\n- Entropy-SGD and Entropy-SGLD are now presented in the main body of the paper as a single combined algorithm, with the one difference highlighted.\n- Our analysis of the idealized exponential mechanism (what you refer to as gibbs sampling) is now presented as Theorem 5.5, and its relationship to Entropy-SGLD is clearly laid out in the same section. We also discuss our privacy approximation here in depth.\n- Our result relating Entropy-SGD and PAC-Bayes bound optimization are now presented as Theorem 4.1. \n- Our argument establishing the differentially-private PAC-Bayes bound is now structured as a proof.\n\n\n(d) and (a). Regarding strong gibb sampling (i.e., the exponential mechanism and our \"privacy approximation\" regarding SGLD). 
\n\nWe have updated this part of the paper considerably, and the logical structure is much improved. The material is now entirely in the main body. We highlight some aspects of the argument here:\n\n- Note that we only use a SINGLE sample produced by SGLD (namely the last one). This last sample is what is used as the prior mean to produce the resulting Gibbs posterior classifier. When we plot the learning curves, the bounds are the bounds that would hold if we stopped SGLD at that iteration. \n- The fact that we only use one sample is the reason why we think it is reasonable to approximate the privacy of SGLD by that of its limiting invariant distribution (i.e., the exponential mechanism). Since we are far from the worst case with MNIST, we expect not to see much difference. There is likely a worst-case distribution where our bounds would end up being badly violated.\n- Typical analyses of SGLD don't try to deal with the fact that it begins to mix. So they make a step-by-step analysis, where information is leaked at every stage. Because they do this, there is no reason not to release the whole trajectory. However, in an analysis that took advantage of mixing (very hard!), they would NOT release the whole trajectory (or at least, they certainly wouldn't release the early parts).\n- In our experiments where we run SGLD for 1000s of epochs (!) on random noise, we see zero overfitting when we set the thermal noise to the settings suggested by theory.\n\n(b) Regarding the goal of the experiments.\n\nWe have significantly revised the section describing our numerical experiments. We feel that the motivation for our experiments is much clearer now. Here are some particular points we wanted to highlight:\n\nPAC-Bayes bounds are data-dependent, and so it is an empirical question whether they are useful or not, and how they compare to previously established bounds. On top of this, we are using private data-dependent priors and a differentially private PAC-Bayes theorem, and so it is an empirical question whether a sufficiently private optimization finds a decent prior. (Generalization bounds require a very high degree of privacy!) One way to think about the quantity tau/m (which determines the privacy along with our loss bound) is that, when tau/m < 1, it specifies what fraction of your data you \"throw away\" while doing your sampling in order to not learn \"too much\" about your data itself, rather than the distribution underlying it. We have to \"throw away\" quite a bit of data while privately optimizing our PAC-Bayes prior. And so it is an empirical question whether we can find anything useful still. Indeed, we do. We can also study our privacy approximation regarding SGLD empirically. If the privacy/stability of SGLD degraded over time, we might have seen overfitting occur on very long runs. In fact, we don't see this, even after 1000s of epochs! The private versions of the algorithms we tested reach some level of performance and stay there. \n" ]
[ -1, -1, 6, 6, 6, -1, -1, -1, -1 ]
[ -1, -1, 3, 3, 3, -1, -1, -1, -1 ]
[ "ryq2cm9xG", "r1dNqr9xf", "iclr_2018_ry9tUX_6-", "iclr_2018_ry9tUX_6-", "iclr_2018_ry9tUX_6-", "iclr_2018_ry9tUX_6-", "ryq2cm9xG", "r1dNqr9xf", "Hy0bdarZG" ]
iclr_2018_HJUOHGWRb
Contextual Explanation Networks
We introduce contextual explanation networks (CENs)---a class of models that learn to predict by generating and leveraging intermediate explanations. CENs are deep networks that generate parameters for context-specific probabilistic graphical models which are further used for prediction and play the role of explanations. Contrary to the existing post-hoc model-explanation tools, CENs learn to predict and to explain jointly. Our approach offers two major advantages: (i) for each prediction, valid instance-specific explanations are generated with no computational overhead and (ii) prediction via explanation acts as a regularization and boosts performance in low-resource settings. We prove that local approximations to the decision boundary of our networks are consistent with the generated explanations. Our results on image and text classification and survival analysis tasks demonstrate that CENs are competitive with the state-of-the-art while offering additional insights behind each prediction, valuable for decision support.
rejected-papers
The paper proposes a method to learn and explain simultaneously. The explanations are generated as part of the learning and in some sense come for free. It also goes the other way, in that the explanations also help performance in simpler settings. Reviewers found the paper easy to follow and the idea has some value; however, the related work is sparse, and consequently the comparison to existing state-of-the-art explanation methods is also sparse. These are nontrivial concerns which should have been addressed in the main article, not hidden away in the supplement.
test
[ "H1wsCJjez", "Bk-6h6Txz", "B1E57a-ZG", "SkESQBOMz", "ryOeXB_Mf", "r1SpfB_fG", "ryKwMH_Mz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "the paper is clearly written; it works on a popular idea of combining graphical models and neural nets.\n\nthis work could benefit from differentiating more from previous literature.\n\none key component is interpretability, which comes from the use of graphical models. the authors claim that the previous art directly integrate neural networks into the graphical models as components, which renders the models uninterpretable. however, it is unclear, following the same logic, why the proposed method has interpretability. after all, how to go from the context to the parameters of the graphical models is still uninterpretable. specifically, it is helpful to pinpoint what is special in this model that makes it interpretable, compared to works like Gao, Y., Archer, E. W., Paninski, L., & Cunningham, J. P. (2016). NIPS or Johnson, M., Duvenaud, D. K., Wiltschko, A., Adams, R. P., & Datta, S. R. (2016). NIPS. also, is there any methodological advancement essential to CENs? \n\nthe other idea is to go context specific. this idea has been present in language modeling, for example, amortized embedding models like M. Rudolph, F. Ruiz, S. Athey, and D. Blei (2017). NIPS and L. Liu, F. Ruiz, S. Athey, and D. Blei. (2017). NIPS. application to medical data is interesting. but it could be helpful for the readers to understand if the idea in this work is fundamentally different from these previous ideas from amortized inference.\n\na final thing. a common challenge with composing graphical models and neural networks (in interpretable or uninterpretable ways) is that the neural networks will usually eat up all the representational power. the variance captured by graphical models becomes negligible. to this end, the power of graphical models for interpretability is limited. interpretability in this case is not much different from fitting only a neural network, taking the penultimate layer to the output as \"context specific features\" can claim that we are composing a linear model with a neural network, and the linear model is interpretable. it would be interesting to be clear about how the authors get around this issue.", "The article \"Contextual Explanation Networks\" introduces the class of models which learn the intermediate explanations in order to make final predictions. The contexts can be learned by, in principle, any model including neural networks, while the final predictions are supposed to be made by some simple models like linear ones. The probabilistic model allows for the simultaneous training of explanation and prediction parts as opposed to some recent post-hoc methods.\n\nThe experimental part of the paper considers variety of experiments, including classification on MNIST, CIFAR-10, IMDB and also some experiments on survival analysis. I should note, that the quality of the algorithm is in general similar to other methods considered (as expected). However, while in some cases the CEN algorithm is slightly better, in other cases it appears to sufficiently loose, see for example left part of Figure 3(b) for MNIST data set. It would be interesting to know the explanation. Also, it would be interesting to have more examples of qualitative analysis to see, that the learned explanations are really useful. I am a bit worried, that while we have interpretability with respect to intermediate features, these features theirselves might be very hard to interpret.\n\nTo sum up, I think that the general idea looks very natural and the results are quite supportive. 
However, I don't feel confident enough in this area of research to make a strong conclusion about the quality of the paper.", "The paper proposes an interesting combination of neural nets and graphical models by using a deep neural net to predict the parameters of a graphical model. When the nets are trained on contexts \"C\" (e.g. satellite images associated with a neighborhood) related to an input \"X\" (e.g. categorical features describing the neighborhood); and a graphical model relates \"X\" to targets \"Y\" (e.g. binary variable encoding poverty level of the neighborhood), then the proposed combination can produce interpretable explanations for its predictions.\nThis approach compares favorably with post-hoc explanation methods like LIME in experiments conducted on images (CIFAR-10, MNIST), text (IMDB reviews) and time series (Satellite dataset). The paper is clearly written and might inspire follow-up work in other applications. The description of related work is sparse (beyond a derivation of an equivalence with LIME in some settings, explained in the appendix).\nThe experiments study interesting effects: what happens when the model relating X and Y is degraded (e.g. by introducing noise into X, or sub-selecting X). The paper can be substantially improved by studying the effect of dictionary size and sparsity regularization more thoroughly.\n", "First, we appreciate your pointers to the existing relevant literature that we overlooked / weren't aware of at the time of submission. We have included these and other recent related work and elaborated on the differences between CENs and previous literature in Section 2 (please also see our general comment on the major changes above).\n\nRegarding interpretability of CENs\n------------------------------------------------\nWe agree that the interpretability of going from the context to parameters of a graphical model in CEN is a valid concern. That said, there is no silver bullet -- if the data is complex, the model that can accurately represent such data will end up being complex in one way or another. Using neural nets as components of a graphical model (e.g., neural potential functions) results in a powerful model. However, to understand patterns of the relationships between variables of interest learned by such a model would require “digging” into each neural component separately.\n\nCENs, on the other hand, manage this complexity by explicitly localizing it in the conditional p(\\theta | C) -- once we condition on the context of interest, we get an explanation which we can understand by simply inspecting its parameters (see footnote 1 on page 2). In this sense, CENs are akin to modular meta-learning approaches (where one constructs models that generate other models) rather than “monolithic” deep graphical models.\n\nRegarding the novelty of our approach, we wish to emphasize that, to the best of our knowledge, we are the first to propose using deep networks for generating parameters for simple graphical models which are then used for prediction and inference. We have elaborated on the differences and similarities in Section 2 in the revised version.\n\nContext representation & amortized inference\n----------------------------------------------------------------\nCENs assume that the representation of the context is given and fixed; learning context embeddings along with predictions is beyond the scope of this work. 
But thank you for pointing out relevant work -- in future work, it would be interesting to extend CENs to scenarios where context embeddings [1] are learned jointly with the model, or to borrow ideas from context selection [2] to improve interpretability of the map from contexts to explanations.\n\nAmortized inference is unnecessary for the types of CENs considered in this work because p(\\theta | C) takes a fairly simple form. If one wishes to have a more hierarchical or structured representation of p(\\theta | C), ideas from [3] would be very useful.\n\nRepresentational power\n---------------------------------\nNeural networks “eating up” the entire representational power is a valid concern. Dictionary and sparsity constraints were chosen to protect us from such scenarios: by constraining the dictionary size and imposing a small sparsity penalty on its atoms, we force CEN to select explanations from a restricted class of models, and hence implicitly control the representational power available to the neural part of the model.\n\nWe also wish to emphasize a critical point: the context encoder in CEN does not output “context-specific features” as you point out. Instead, it outputs *parameters* for a graphical model which is then used on top of features X (where X is fixed, not learned). The predictive power of CEN necessarily depends on the quality of X and the class of graphical models that are used as explanations (see Appendix A for a detailed discussion).\n\n---\n[1] M. Rudolph, F. Ruiz, S. Mandt, and D. Blei (2016). NIPS\n[2] L. Liu, F. Ruiz, S. Athey, and D. Blei. (2017). NIPS\n[3] M. Rudolph, F. Ruiz, S. Athey, and D. Blei (2017). NIPS", "We agree that the qualitative results given in the main text may seem limited, so we have included almost 2 pages of discussion of the additional qualitative results in Appendix F.2 (please also see our general comment on the major changes above).\n\nRegarding your comment on Figure 3b, we are not sure we follow what exactly you mean by that. Figure 3b (left) showcases the decrease in the training error at the early stage of training for a baseline CNN (blue) and two CEN models (green and red) that constructed explanations on different features. CEN-hog attains lower training error faster than the other two models. Performance on the held-out test set is given in Table 1. We wish to emphasize that, generally, CENs closely match the performance of the vanilla deep nets when the data is abundant and perform better when the data is scarce.", "We have elaborated on the contrast between our approach and the related work (please see our general comment on the major changes above).\n\nDictionary size\n--------------------\nRegarding the effect of the dictionary size, Figure 3a answers this question: when the dictionary size is very small, CENs behave almost like the linear models (in the extreme case when the dictionary size is 1, CEN becomes equivalent to a single linear model). Larger dictionaries allow for more flexibility so that CENs approach or surpass the performance of the vanilla deep networks.\n\nSparsity regularization\n------------------------------\nWe found that the results were quite stable and not very sensitive to the sparsity regularization hyperparameter. Adding a small sparsity penalty on the dictionary (between 1e-6 and 1e-3) helped to avoid overfitting for very large dictionary sizes -- the model learned to use only a few explanations from the dictionary for prediction while shrinking the rest of the dictionary to zeros. 
For instance, on the Satellite data, we set the dictionary size to an arbitrary number (16 or 32) and the model learned to always select between only 2 explanations (M1 and M2) and kept using those for prediction.\n\nWe have elaborated on these two points in Section 4.1.1 in the revised version.", "We would like to thank all the reviewers for their time and valuable feedback.\n\nWe have updated our submission to address the reviewers’ concerns and suggestions. Here we detail the major changes that have been made to the manuscript. We answer specific questions raised in the reviews by separately replying to each of them.\n\nRelated work\n------------------\nWe have extended the related work (Section 2) and (i) elaborated on the key differences between the existing work and contextual explanation networks, (ii) included the related recent work suggested by the reviewers. Additionally, we have extended the conclusion (Section 6) with a brief discussion of the limitations of our method and potentially interesting avenues for future work, again connecting CENs to the existing literature.\n\nQualitative analysis\n--------------------------\nQualitative results for MNIST and IMDB datasets were originally included in Figures 9, 10, 11, 12, 13 in the appendix. In the updated version, we further added a detailed discussion of each of these qualitative results in Appendix F.2. To keep the length of the main text under a reasonable page limit, we restricted the qualitative analysis in the main text to only one of the applications." ]
[ 6, 6, 6, -1, -1, -1, -1 ]
[ 5, 2, 3, -1, -1, -1, -1 ]
[ "iclr_2018_HJUOHGWRb", "iclr_2018_HJUOHGWRb", "iclr_2018_HJUOHGWRb", "H1wsCJjez", "Bk-6h6Txz", "B1E57a-ZG", "iclr_2018_HJUOHGWRb" ]
iclr_2018_HyPpD0g0Z
Grouping-By-ID: Guarding Against Adversarial Domain Shifts
When training a deep neural network for supervised image classification, one can broadly distinguish between two types of latent features of images that will drive the classification of class Y. Following the notation of Gong et al. (2016), we can divide features broadly into the classes of (i) “core” or “conditionally invariant” features X^ci whose distribution P(X^ci | Y) does not change substantially across domains and (ii) “style” or “orthogonal” features X^orth whose distribution P(X^orth | Y) can change substantially across domains. These latter orthogonal features would generally include features such as position, rotation, image quality or brightness but also more complex ones like hair color or posture for images of persons. We try to guard against future adversarial domain shifts by ideally just using the “conditionally invariant” features for classification. In contrast to previous work, we assume that the domain itself is not observed and hence a latent variable. We can hence not directly see the distributional change of features across different domains. We do assume, however, that we can sometimes observe a so-called identifier or ID variable. We might know, for example, that two images show the same person, with ID referring to the identity of the person. In data augmentation, we generate several images from the same original image, with ID referring to the relevant original image. The method requires only a small fraction of images to have an ID variable. We provide a causal framework for the problem by adding the ID variable to the model of Gong et al. (2016). However, we are interested in settings where we cannot observe the domain directly and we treat domain as a latent variable. If two or more samples share the same class and identifier, (Y, ID)=(y,i), then we treat those samples as counterfactuals under different style interventions on the orthogonal or style features. Using this grouping-by-ID approach, we regularize the network to provide near constant output across samples that share the same ID by penalizing with an appropriate graph Laplacian. This is shown to substantially improve performance in settings where domains change in terms of image quality, brightness, color changes, and more complex changes such as changes in movement and posture. We show links to questions of interpretability, fairness and transfer learning.
rejected-papers
The paper proposes a method to robustify neural networks, which is an important problem. It uses ideas from causality to create a model that would depend only on "stable" features, ignoring the easy-to-manipulate ones. The paper has some interesting ideas; however, the main concern is regarding insufficient comparison to existing literature. One of the reviewers also has concerns regarding the novelty of the approach.
train
[ "rJqVoyclf", "SkKuc-Kef", "BkDbv9qlM", "HyyaEhBbG", "SJ0jX3Bbz", "rJmAGnrbG", "ryWwGhrWf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper discusses ways to guard against adversarial domain shifts with so-called counterfactual regularization. The main idea is that in several datasets there are many instances of images for the same object/person, and that taking this into account by learning a classifier that is invariant to the superficial changes (or “style” features, e.g. hair color, lighting, rotation etc.) can improve the robustness and prediction accuracy. The authors show the benefit of this approach, as opposed to the naive way of just using all images without any grouping, in several toy experimental settings.\n\nAlthough I really wanted to like the paper, I have several concerns. First and most importantly, the paper is not citing several important related work. Especially, I have the impression that the paper is focusing on a very similar setting (causally) to the one considered in [Gong et al. 2016] (http://proceedings.mlr.press/v48/gong16.html), as can be seen from Fig. 1. Although not focusing on classification directly, this paper also tries to a function T(X) such that P(Y|T(X)) is invariant to domain change. Moreover, in that paper, the authors assume that even the distribution of the class can be changed in the different domains (or interventions in this paper).\nBesides, there are also other less related papers, e.g. http://proceedings.mlr.press/v28/zhang13d.pdf, https://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/view/10052/0, https://arxiv.org/abs/1707.09724, (or potentially https://arxiv.org/abs/1507.05333 and https://arxiv.org/abs/1707.06422), that I think may be mentioned for a more complete picture. Since there is some related work, it may be also worth to compare with it, or use the same datasets.\n\nI’m also not very happy with the term “counterfactual”. As the authors mention in footnote, this is not the correct use of the term, since counterfactual means “against the fact”. For example, a counterfactual query is “we gave the patient a drug and the patient died, what would have happened if we didn’t give the drug?” In this case, these are just different interventions on possibly the same object. I’m not sure that in the practical applications one can assure that the noise variables stay the same, which, as the authors correctly mention, would make it a bit closer to counterfactuals. It may sound pedantic, but I don’t understand why use the wrong and confusing terminology for no specific reason, also because in practice the paper reduces to the simple idea of finding a classifier that doesn’t vary too much in the different images of the single object.\n\n**EDIT**: I was satisfied with the clarifications from the authors and I appreciated the changes that they did with respect to the related work and terminology, so I changed my evaluation from a 5 (marginally below threshold) to a 7 (good paper, accept).", "Proposal is to restrict the feasible parameters to ones that have produce a function with small variance over pre-defined groups of images that should be classified the same. As authors note, this constraint can be converted into a KKT style penalty with KKT multiplier lambda. Thus this is very similar to other regularizers that increase smoothness of the function, such as total variation or a graph Laplacian defined with graph edges connecting the examples in each group, as well as manifold regularization (see e.g. Belkin, Niyogi et al. JMLR). Heck, in practie ridge regularization will also do something similar for many function classes. 
\n\nThe experiments didn't compare to any similar smoothness regularization (and my preference would have been a comparison to a graph Laplacian or total variation on graphs formed by the same clustered examples). It's also not clear how important it is that they hand-define the groups over which to minimize variance, or if just generally adding smoothness regularization would have achieved the same results. That made it hard to get excited about the results in a vacuum. \n\nWould this proposed strategy have thwarted the Russian tank legend problem? Would it have fixed the Google gorilla problem? Why or why not?\n\nOverall, I found the writing a bit bombastic for a strategy that seems to require the user to hand-define groups/clusters of examples. \n\nPage 2: calling additional instances of the same person “counterfactual observations” didn’t seem consistent with the usual definition of that term… maybe I am just missing the semantic link here, but this isn't how we usually use the term counterfactual in my corner of the field.\n\nRe: “one creates additional samples by modifying…” it would be nice to cite more of the early work doing this. I believe the first work of this sort was Scholkopf’s; he called it “virtual examples” and I’m pretty sure he specifically did it for rotated MNIST images (and if not exactly that, it was implied). I think the right citation is “Incorporating invariances in support vector learning machines”, Scholkopf, Burges, Vapnik 1996, but also see Decoste & Scholkopf 2002, “Training invariant support vector machines.” ", "This paper aims at robust image classification against adversarial domain shifts. In the used model, there are two types of latent features, \"core\" features and \"style\" features, and the goal is achieved by avoiding using the changing style features. The proposed method, which makes use of grouping information, seems reasonable and useful. \n\nIt is nice that the authors use \"counterfactual regularization\". But I failed to see a clear, new contribution of using this causal regularization, compared to some of the previous methods to achieve invariance (e.g., relative to translation or rotation). For examples of such methods, one may see the paper \"Transform Invariant Auto-encoder\" (by Matsuo et al.) and references therein.\n\nThe data-generating process for the considered model, given in Figure 2, seems to be consistent with Figure 1 of the paper \"Domain Adaptation with Conditional Transferable Components\" (by Gong et al.). Perhaps the authors can draw the connection between their work and Gong et al.'s work and the related work discussed in that paper.\n\nBelow are some more detailed comments. In the Introduction, it would be nice if the authors clarified the statement \"Their high predictive accuracy might suggest that the extracted latent features and learned representations resemble the characteristics our human cognition uses for the task at hand.\" Why do the features human cognition uses give an optimal predictive accuracy? On page 2, the authors claimed that \"These are arguably one reason why deep learning requires large sample sizes as large sample size is clearly not per se a guarantee that the confounding effect will become weaker.\" Could the authors give more detail on this? A reference would be appreciated. ", "Thank you for your helpful comments and for sharing your concerns. \nWe think your concerns are partially based on sloppy writing on our part. We changed the version of the manuscript accordingly. \n \n1. 
Comparison to ridge regularization \n\nIt was hidden in the supplementary material, but the baseline of the pooled estimator is already computed with a ridge penalty. We have now made this more explicit in Section 4.4.1.\n\n2. Comparison to a different smoothness regularization (the preferred one being a graph Laplacian or total variation on graphs formed by the same clustered examples)\n\nAgain, this is due to our writing, for which we apologize. In fact, the proposed estimator is exactly equal to the regularization you propose, namely a smoothness regularization when using the graph Laplacian. We have clarified this in Section 4.5 and also mention it in the abstract now.\n\nThe underlying graph is as you propose: all examples that share the same identifier ID are fully connected so that there are n connectivity components in the graph. All edges have a unit weight. Using this graph Laplacian is then equivalent to penalizing the variance across all connected examples that we mentioned in the first version of the text. In most examples we only observe a few pairs of samples that share the same identifier and the graph is then the empty graph with the exception of a few isolated edges. \n\nOur proposed estimator is indeed very simple. The simplicity could be argued to be a strength or weakness. Our point to make in this paper is that (a) we motivate why this specific form of penalty makes sense in a causal context, (b) we derive some theoretical results for logistic regression where we can show the right properties of the estimator for adversarial domain changes and (c) show that the approach works very well empirically when trying to protect against domain changes in a variety of settings. \n\nFurthermore, in the supplement “B.2.1 Counterfactual settings 2 and 3” we compare grouping by the same object vs. grouping different objects and show that grouping by the same object is empirically much more desirable (as expected). Specifically, in Figures B.4 and B.5 we compare if we use as groupings: \n(1) the same picture under different brightness settings (this perhaps makes the problem a bit too simple)\n(2) pictures of the same person (with varying brightness but also varying backgrounds, postures etc.)\n(3) as comparison: pictures of different people\nSetting 2 is what we have in mind as a realistic application scenario and it can be seen that it performs better than using Setting 3, where we group pictures of different people. Setting 1 produces the best results but then we have in this setting more or less pre-specified brightness as the style variable we want to achieve invariance against.\n\n3. Russian tank legend / Google gorilla problem.\n\nWe explicitly mimic the Russian tank legend problem (image quality) in Section 5.2 and show that counterfactual regularization performs much better than the standard approach. It is interesting to note for this example that it is enough to have two images of a Russian tank (if translating the setting of 5.2 back to the Russian tank problem) in bad and very bad quality. There does not need to be a picture of a Russian tank in good quality in the database (although it would not hurt). Having both these examples of the same tank in bad and even worse quality connected as ‘counterfactuals’ (if we can still use the term) leads to automatic exclusion of image quality as a feature for classification.\n\nThe Google gorilla problem is more involved. However, we do show an example where the style features correspond to color in Section 5.3. \n\n4. 
Hand-define groups/clusters of examples. \n\nTo bring the paper more in line with the notation in Gong et al., we have now explicitly included the identifier ID that is used for grouping in the causal graph. The identifier ID relates to the person in an image or a specific animal, a specific house or different objects. The core estimator needs information about such an identifier variable ID. If the information is present in a dataset, we would not call it hand-picked, even though we acknowledge that not all datasets contain such information. Assuming that we have an identifier ID saves us, on the other hand, from having to know the domain D explicitly as in Gong et al., so the approaches are complementary in this sense. \n\n5. Terminology\n\nThanks. The same concern was raised by the second reviewer. We have tried to explain the reasoning in the new version (and in the answer above to the second reviewer). Having said this, we would be happy to delete the term counterfactual if it is seen as too “bombastic” or too confusing. \n\n6. Virtual examples \n\nThank you for the references, which are now included.", "Thanks for the helpful comments. \n\n1. Relationship to “Domain Adaptation with Conditional Transferable Components” by Gong et al.\n\nThanks for the reference, which was also highlighted by the first reviewer. It is inexcusable that we omitted the reference. However, please see the answer to point 2 of the first reviewer. Domain D is latent in our approach while it is observed in Gong et al. In contrast we have to observe the identifier ID that is used for the grouping of samples. The approaches thus share the same goal but work on different datasets (we could not run the core estimator on their datasets as the identifier ID is missing and --vice versa-- the approach of Gong et al. would not work on our examples as there is no explicit domain variable in the data). \n\n2. Terminology\n\nThanks for the comment and concern regarding the use of the term “counterfactual”. We acknowledge that the term is used in a perhaps non-standard way. We certainly did not want to confuse and have rewritten Section 4.4 accordingly. \n\nIn a standard medical example, let Z be the health outcome and T the treatment under confounders U (either observed or not). A counterfactual is then for example Z(T=0), the health outcome if no treatment is taken, if in truth treatment has been taken (T=1) and we observed Z(T=1) in the Neyman-Rubin potential outcome notation. We think you criticised that our notation deviates from this standard setting. Let us explain why we used the notation.\n\nWe now highlight in Section 4.4 that in the model of Figure 2, the core or “conditionally invariant” features (if using the terminology of Gong) are functions of the class Y and identifier ID only, while the image X=X(Y,ID,Delta) is a function of class Y, identifier ID, and the style interventions Delta.\nIf we fix (Y,ID)=(y,id), we can observe an image X(y,id,Delta) under different style interventions Delta. In this sense, the images X(y,id,Delta_1), X(y,id,Delta_2),... observed for a fixed (Y,ID)=(y,id) form counterfactuals, just as Z(T=0) and Z(T=1) would form counterfactuals in the medical setting if we fix all confounders for (T,Z). The treatment interventions T are thus set equal to the style interventions Delta in the image setting here. 
The image setting is clearly different from the medical example in two ways.\n(i) We would like style interventions Delta in general to have no appreciable effect on the predicted label, while in a medical setting we would like treatment T to have an effect as large as possible. (ii) The images X(y,id,Delta) under different style interventions Delta are observable, but the different health outcomes Z(T) for different treatments T are in general not all observable as we cannot fix the confounders in practice. We thought this difference is perhaps interesting to note in this context, even if it might appear trivial. We certainly did not want to sow confusion or use the term for no specific reason. We deleted the term “counterfactual” from the title already and would be happy to use different terminology altogether if you still feel that the term is too misleading in this context. ", "The first sentence was not meant to claim that human cognition yields optimal predictive accuracy but we can see the misunderstanding it can cause. We deleted it in the new version. \nLastly, more details regarding the following sentence were requested: “These are arguably one reason why deep learning requires large sample sizes as large sample sizes tend to ensure that the effect of the confounding factors averages out (although a large sample size is clearly not per se a guarantee that the confounding effect will become weaker).” If one only considers a small sample of images, there will be many confounding factors that might be picked up by the classifier. For instance, consider the example of classifying images of dogs versus cats. Now, suppose a dog image always shows a dog on a green lawn while cats are always shown indoors. Thus, the green lawn would be picked up as being predictive for the class “dog”. As the sample size increases and the images come from a perhaps more diverse set of sources, more dog and cat images are collected in more diverse environments such that the confounding effect averages out. However, in some cases the data collection might introduce a systematic, persistent confounding: In the Russian tank example, even millions of images of Russian resp. American tanks would not have helped, provided that the quality of American tank images had always been much better than that of Russian images. As an example, in ImageNet almost all instances of the class “rugby ball” are images showing a rugby ball together with a rugby field. Even though ImageNet is a large dataset, confounding effects can still be present. ", "Thank you for the helpful feedback. We would like to address and clarify the following points. \n\n1. Relationship to “Transform Invariant Auto-encoder” by Matsuo et al.\n\nWe have included the reference in the new version. Besides their work being on autoencoders (which is clearly related to our classification setting), one crucial difference, in our view, is that the style variable is pre-defined in Matsuo et al. and they only show the instance of shift invariance. In our manuscript, the style variable could be background of an image, color, image quality or any combination of these and, most importantly, the style variable is not pre-defined. Instead we use the grouping variable (called ID in the new version for more succinct notation--see Figure 2) to exclude these style features. \nEven if the notion of transformation were made much wider in Matsuo et al. 
in the cost term in Section III.A, another crucial difference is that our work is able to achieve invariance in the presence of confounding, which is not discussed at all in Matsuo et al. The situation arises if the distribution of the style features differs conditional on the class label. As such, the situation is more naturally dealt with in a classification framework as the class label Y does not even typically appear in an auto-encoder setting. \n\nA bit more detail: in the autoencoder setting considered by Matsuo et al., the transform invariant autoencoder generates a “typical spatial subpattern”. In the presence of confounding, there is no guarantee that the transform invariant autoencoder will be able to achieve the desired invariance as the “typical spatial subpattern” will be subject to the same bias as the input data, arising through the confounding. Concretely, consider applying the transform invariant autoencoder to the stickmen dataset from Section 5.1 in our work. We have essentially two groups of images in the training data: moving children and non-moving adults (with very few non-moving children and moving adults). In the absence of a classification setting and class label Y we cannot resolve the confounding: should we sum over all children and then over all adults in (2) in Matsuo et al. (hence ignoring movement) or sum over all moving and then all non-moving pictures (hence ignoring age)? Even if we decide (more or less arbitrarily in the absence of a class label) on the former: while the transform variance term would ensure that, say, two input images of children map to the same output image, the restoration error term would reproduce the bias stemming from the confounding. That is, images of children would still be associated with large movements in the reconstruction while adults would still be associated with small or no movement. If one then tried to classify “adult vs. child” from the pooled reconstructed images, the performance is going to be very similar compared to using the original images---that is, the learned estimator would include `movement’ in its representation and therefore, it would not be robust to adversarial domain shifts, arising through interventions on the style feature `movement’. \n\nFinally, we would also like to highlight that we contribute a theoretical analysis to show robustness to adversarial domain shifts.\n\n2. Relationship to “Domain Adaptation with Conditional Transferable Components” by Gong et al.\n\nThanks for the reference, which was also highlighted by the second referee. It is inexcusable that we left this reference out in the first version. The settings in both papers are very similar and we have used the revision to make the notation as similar as possible (see for example Figure 2 in the new version). This also led us to include the grouping or identifier variable ID explicitly to make the differences from related approaches more succinct. We discuss the relationship now extensively in the new version. While the goal is similar or even identical, the main difference is that we use a different data basis for the estimators:\nIn Gong et al. the domain variable D can be observed but no equivalent of our identifier ID is available (ID is latent). \nHere we assume domain D is latent but we can observe the identifier ID (at least for a small fraction of samples), where the identifier ID can for example be the person in an image (while the class Y might be whether the person is wearing glasses or not). 
\nAs a result, our methodologies are quite different even if they share the same goal. The differences are discussed in Sections 3 and 4.2 in the new version. " ]
[ 7, 4, 5, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_HyPpD0g0Z", "iclr_2018_HyPpD0g0Z", "iclr_2018_HyPpD0g0Z", "SkKuc-Kef", "rJqVoyclf", "BkDbv9qlM", "BkDbv9qlM" ]
iclr_2018_H1xJjlbAZ
INTERPRETATION OF NEURAL NETWORKS IS FRAGILE
In order for machine learning to be deployed and trusted in many applications, it is crucial to be able to reliably explain why the machine learning algorithm makes certain predictions. For example, if an algorithm classifies a given pathology image to be a malignant tumor, then the doctor may need to know which parts of the image led the algorithm to this classification. How to interpret black-box predictors is thus an important and active area of research. A fundamental question is: how much can we trust the interpretation itself? In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations. We systematically characterize the fragility of the interpretations generated by several widely-used feature-importance interpretation methods (saliency maps, integrated gradient, and DeepLIFT) on ImageNet and CIFAR-10. Our experiments show that even small random perturbation can change the feature importance and new systematic perturbations can lead to dramatically different interpretations without changing the label. We extend these results to show that interpretations based on exemplars (e.g. influence functions) are similarly fragile. Our analysis of the geometry of the Hessian matrix gives insight on why fragility could be a fundamental challenge to the current interpretation approaches.
rejected-papers
The paper tries to show that many of the state-of-the-art interpretability methods are brittle and do not provide consistent, stable explanations. The authors show this by perturbing (even randomly) the inputs so that the differences are imperceptible to a human observer, but the interpretability methods provide completely different explanations. Although the output class is maintained before and after the perturbation, it is not clear to me or the reviewers why one shouldn't have different explanations. The difference in explanations can be attributed to the fragility of the learned models (highly non-smooth decision boundaries) rather than the explanation methods. This is a critical point and has to come out more clearly in the paper.
test
[ "HJIRZCbeG", "S1Z4Lgqez", "Sk-uqZ9xG", "rkjVKPJGG", "H1XWYv1GM", "SkDAOvkfz", "HktDzGqgf", "BkgXE1qef", "SJBmRuQxG", "S1UARBXgG", "HkJSs4Xlf", "Sk7hGWhkM", "SJtR6e3kz", "BJhbOjiJG", "HyNgEDoJM", "SylgNFXJf", "ryEWYqGJM", "HJAUq8CA-", "r1g2UGCRZ", "HyA39DpR-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "author", "public", "author", "public", "author", "public", "public", "author", "public", "author", "public" ]
[ "The authors study cases where interpretation of deep learning predictions is extremely fragile. They systematically characterize the fragility of several widely-used feature-importance interpretation methods. In general, questioning the reliability of the visualization techniques is interesting. Regarding the technical details, the reviewer has the following comments: \n\n- What's the limitation of this attack method?\n\n- How reliable are the interpretations? \n\n- The authors use spearman's rank order correlation and Top-k intersection as metrics for interpretation similarity. \n\n- Understanding whether influence functions provide meaningful explanations is very important and challenging problem in medical imaging applications. The authors showed that across the test images, they were able to perturb the ordering of the training image influences. I am wondering how this will be used and evaluated in medical imaging setting. \n", "The key observation is that it is possible to generate adversarial perturbations wherein the behavior of feature importance methods (e.g. simple gradient method (Simonyan et al, 2013), integrated gradient (Sundararajan et al, 2017), and DeepLIFT ( Shrikumar et al, 2016) ) have large variation while predicting same output. Thus the authors claim that one has to be careful about using feature importance maps.\n\nPro: The paper raises an interesting point about the stability of feature importance maps generated by gradient based schemes.\n\nCons:\nThe main problem I have with the paper is that there is no precise definition of what constitutes the stable feature importance map. The examples in the paper seem to be cherry picked to illustrate dramatic effects. The experimental protocol used does not provide enough information of the variability of the salience maps shown around small perturbations of adversarial inputs. The paper would benefit from more systematic experimentation and a better definition of what authors believe are important attributes of stability of human interpretability of neural net behavior.", "The paper shows that interpretations for DNN decisions, e.g. computed by methods such as sensitivity analysis or DeepLift, are fragile: Visually (to a human) inperceptibly different image cause greatly different explanations (and also to an extent different classifier outputs). The authors perturb input images and create explanations using different methods. Even though the image is inperceptibly different to a human observer, the authors observe large changes in the heatmaps visualizing the explanation maps. This is true even for random perturbations. \n\nThe images have been modified wrt. to some noise, such that they deviate from the natural statistics for images of that kind. Since the explanation algorithms investigated in this papers merely react to the interactions of the model to the input and thus are unsupervised processes in nature, the explanation methods merely show the model's reaction to the change.\nFor one, the model itself reacts to the perturbation, which can be measured by the (considarbly) increased class probability. Since the prediction score is given in probabilty values, the reviewer assumes the final layer of the model is a SoftMax activation. 
In order to see a change in the softmax output score, especially if the already dominant prediction score is further increased, a lot of change has to happen to the outputs of the layer serving as input to the SoftMax layer.\n\nIt can thus be expected that the input- and class-specific explanations change as well, to a not insignificant extent. For the considered methods, the explanation maps mirror the model's reaction to the input. They are thus not meaningless, but are a measure of the model's reaction rather than an independent process. The excellent Figure 2 supports this point. It is not the interpretation itself that is fragile, but the model.\nAdding a small delta to the sample x shifts its position in data space, completely altering the prediction rule applied by the model due to the change in proximity to another section of the decision hyperplane. The fragility of DNN models to marginally perturbed inputs is itself well known. \nThis is especially true for adversarial perturbations, which have been used as test cases in this work. The explanation methods are expected to highlight highly important areas in an image, which have been targeted by these perturbation approaches.\n\nThe authors give an example of an adversary manipulating the input in order to draw the activation to specific features to produce confusing/malignant explanation maps. In a setting of model verification, the explanation via heatmaps is exactly what one wants to have: if a tiny change to the image causes a large change to the prediction (and explanation), we can visualize the instability of the model, not the explanation method.\nFurther, targeted perturbations do not show the fragility of explanation methods, but rather that the methods actually find what is important to the model. It can be expected that after a change to these parts of the input, the model will decide differently, albeit coming to the same conclusion (in terms of predicted class membership), which is reflected in the explanation map computed for the perturbed input.\n\nFurther remarks:\nIt would be interesting to see the size and position of the center of mass attacks in the appendix. The reviewer closely follows and is experienced with various explanation methods, their application and the quality of the expected explanations. The reviewer is therefore surprised by the poor quality and lack of structure in the maps obtained from the DeepLift method. Can bugs and suboptimal configurations be ruled out during the experiments? The DeepLift explanations are almost as noisy as the ones obtained for Sensitivity Analysis (i.e. the gradient at the input point). However, recent work (e.g. Samek et al., IEEE TNNLS, 2017 or Montavon et al., Digital Signal Processing, 2017) showed that decomposition-based methods (such as DeepLift) provide less noisy explanations than Sensitivity Analysis.\n\nHave the authors considered training the net with small random perturbations added to the samples, to compare the \"vanilla\" model to the more robust one, which has seen noisy samples, and comparing explanations? 
\nWhy not train (finetune) the considered models using softplus activations instead of exchanging activation nodes?\nAppendix B: Heatmaps through the different stages of perturbation should be normalized using a common factor, not individually, in order to better reflect the change in the explanation.\n\nConclusion:\nThe paper follows an interesting approach, but ultimately takes the wrong viewpoint:\nThe authors try to attribute fragility to explanation methods, which visualize/measure the reaction of the model to the perturbed inputs. A major rework should be considered.", "Thank you for the review and feedback. The main contribution of our paper is to systematically demonstrate for the first time that interpretations of neural networks are fragile to attacks. This is an important topic and your questions raise interesting future research directions. \n\n1. The limitation of the attack method is a very interesting research direction. The attacks that we designed in our paper are all white-box attacks that need to know the NN model. Our next question to answer would be the dangers of interpretation attacks in the black-box setting without access to the model.\n\n2. The reviewer asks how reliable the interpretation methods are. Although these methods are widely used (e.g. Quang and Xie 17, Kelly and Reshev 17), there is not a unified definition of reliability that has been investigated and it is an active area of research (Doshi-Velez & Kim, 2017). One of the contributions of our work is to systematically compare the robustness of the interpretations generated by these different methods. Our work shows that it is possible to change regions of high saliency through careful perturbations of the test images (see Fig. 2). So the methods are correctly identifying new interpretations, but these interpretations disagree with human notions of what part of an image is most related to interpretation. \n\n3. As the reviewer mentions, we defined metrics (rank correlation and top-k intersection, and also a center shift metric in Appendix D) to compare the interpretations of two different images. In order to make it clear how these metrics correspond to intuitive notions of stability, we have included a new figure, Figure 3, and a new appendix, Appendix C, which provides an example of how rank correlations and top-k intersection change as randomly sampled validation images are adversarially perturbed. \n\n4. We agree that the medical case is one of the most important problems for the application of influence functions and one way to evaluate the perturbations is to look at the concordance with human studies. \n", "Thank you for the review and feedback. Here, we address the points made in the review as well as describe changes to the original submission to incorporate the reviewer’s feedback.\n\nOur paper proposes a precise definition of what it means for an interpretation to be fragile. As we stated in the abstract, our definition is “two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations”. A stable feature map is one that is not fragile by this definition. We have included additional discussions of our definitions in the Our Contributions section to clarify any question. Moreover, we proposed two clear metrics, rank correlation and top-K intersect, to quantify exactly how different two interpretations are. 
We have also included a new figure, Figure 3, and a new appendix, Appendix C, which provides an example of how rank correlations and top-k intersection change as randomly sampled validation images are adversarially perturbed.\n\nThe examples in Figure 1 are representative of how interpretations can be attacked by our perturbations. We have released our code at [https://goo.gl/6usSEk] and the reviewer can verify for him/herself that our attacks are reproducible and consistent for ImageNet and CIFAR10. Moreover, our experiments on ImageNet and CIFAR10 do systematically support that the interpretations are fragile by our definitions (Figs 4, 5 in the main text and Figs 12, 13 in the Appendix). \n\nCould you please let us know if you have any more questions regarding the paper or if there are specific experiments that you’d like to see? We’d like to engage in a dialogue until we resolve all of your questions. \n", "Thank you for the thorough review and feedback. Our conclusions are actually exactly the same as yours! Our main contribution is to show that the interpretation (e.g. the saliency map of an image) can be significantly perturbed by adversarial attacks; we do not claim that the interpretation method (e.g. DeepLift) is broken by the attacks. This completely agrees with your point. \n\nOur operational definition that an interpretation is fragile is “two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations”. All of our experiments support our conclusion that interpretations can be adversarially attacked by this definition. We used this definition because it's analogous to the fragility notion used in the broader adversarial attack and ML security community. We do not claim in the paper that the interpretation method itself is broken by our attacks. This is a subtle and important distinction. We have added a new paragraph in the Conclusion section to clarify this:\n\n“Our results demonstrate that the \\emph{interpretations} (e.g. saliency maps) are vulnerable to perturbations, but this does not imply that the \\emph{interpretation methods} are broken by the perturbations. This is a subtle but important distinction. Methods such as saliency measure the infinitesimal sensitivity of the neural network at a particular input $\\xb$. After a perturbation, the input has changed to $\\tilde{\\xb} = \\xb + \\deltb$, and the saliency now measures the sensitivity at the perturbed input. The saliency \\emph{correctly} captures the infinitesimal sensitivity at the two inputs; it's doing what it is supposed to do. The fact that the two resulting saliency maps are very different is fundamentally due to the network itself being fragile to such perturbations, as we illustrate with Fig.~\\ref{fig:concept}.”\n\nYou asked about the quality of our DeepLIFT saliency maps. We implemented DeepLIFT with the rescale rule as described by the original authors [https://arxiv.org/abs/1704.02685]. We have released the code for our implementation at [https://goo.gl/6usSEk]. We have checked the code several times and we also closely work with the lab developing DeepLIFT in the area of interpretability.\n\nRegarding your remark that heat-maps should be normalized using a common factor: 
it’s important to notice that the measures used for comparing the original and perturbed saliency maps (top-K intersection and rank order correlation) are independent of the normalizing scheme.\n\nYou suggested a training strategy to make networks more robust to such adversarial examples, and also suggested retraining a network using softplus activations. These are very helpful suggestions that we will consider as future directions.\n\nDoes this help to address your question? We believe that our conclusions are the same as yours. Please let us know if the new discussions we’ve added to the revision clarify this confusion. The key contribution of our work is to extend adversarial attacks to interpretations for the first time, and this raises interesting security questions given how important interpretations are. Please let us know if there are additional analyses you’d like to see. We hope to engage in a dialogue until we can resolve all of your concerns. \n", "Thanks for your comment.\n\nOur operational definition of fragility is that the interpretation is fragile if, given a fixed network, two similar inputs with the same prediction have very different ‘interpretations’. All of our results support the claim that interpretations are fragile by this metric. \n\nIt seems like you are basically questioning whether this is a good metric of fragility and you might have a different metric in mind. We think it’s certainly interesting to explore other metrics, and our paper opens the door for that. There are two main motivations for our metric/definition of fragility. \n\n1. Our definition is well-motivated by practice. Suppose you have a pathology image that is predicted to be a malignant tumor, as an example. The clinician might then use some saliency map to interpret which part of the image is the most informative. The reliability of this interpretation exactly maps onto our fragility metric: the input image always has measurement errors and our fragility metric quantifies whether the same parts of the image would light up as informative across different measurement errors. This motivates defining fragility as we do. \n\n2. Our definition is consistent with other notions of fragility to adversarial attacks. \n\nYou said that the perturbation to the input changes the function. Yes, that’s exactly why the interpretation is fragile by our metric. \n", "The paper describes an interesting effect but it attributes the observation incorrectly to the interpretability methods.\nTherefore the title and conclusions drawn in the paper are not correct.\n\nTo be precise, the paper claims that interpretability methods are fragile and that the experiments support this.\nTo claim this it is necessary to have two aspects.\nA) A change is made such that the network is explained in a different way.\nB) This change should be controlled such that we are guaranteed that the explanation MUST be the same.\nIn this paper effect (A) is present, however part (B) is NOT present since the function the network computes changes.\nIf the network makes a different computation it is possible that the explanation needs to change as well.\nTherefore the paper does not show that interpretation is fragile.\n\nTo understand why point (B) is missing we have to look at what is done and what is being interpreted.\n\nIt is shown that it is possible to imperceptibly change an image such that the gradient w.r.t. 
the input changes, but without changing the classifier’s output.\nSince most interpretability methods are based on the gradient, this changes their interpretation.\nThis is where the paper makes a mistake by ignoring the meaning of the gradient or saliency map (as defined by Simonyan).\n\nAs noted in the Saliency Map paper by Simonyan, the gradient of the output (logit) w.r.t. the input \nis a local linear approximation of the network’s function in the neighborhood of the datapoint, i.e. a Taylor series according to said paper.\nFor relu and max-pooling networks this approximation is actually exact given correct treatment of the bias.\nNow when the gradient changes after an input change, this shows that the network computes a different function.\n\nThe approach in this paper assures that the most probable class does not change. \nIt does not control how this decision was made.\nSince the function might be different, the explanation should not necessarily be the same. \nIt might very well be possible that the network’s decision needs to be interpreted differently now.\nIntuitively, this can be understood by realizing that for many problems the same decision can be made based on different subsets of evidence.\n\nSince there is no certainty that the decision was made in an identical manner for the original datapoint and the modified one,\nit is NOT possible to conclude that interpretability methods are fragile.\n\nAdditionally, the theoretical analysis also makes an incorrect claim. \nThe paper states that a logistic regression network is susceptible to the adversarial attack on the interpretability.\nThis is not correct.\nIn the case that the logistic model (two or multi-class) is analyzed from the logit (as is done commonly in the papers by Bach et al., Simonyan et al., ...) the gradient w.r.t. the input is always the weight vector. \nIn a two-class setting where a sigmoid is added, the gradient becomes a scaled version of the weight vector. \nFor this reason interpretability is not fragile here either.\nTherefore at least a multi-class logistic regression is needed and the methods have to be applied to the output post-softmax.\nPlease note that this is not how the original papers describe how to use these methods.\n\nTo conclude, I think the manuscript shows an interesting effect but it draws the wrong conclusions.\nThe fact that the network’s function can change drastically due to a small change in the input without affecting the prediction is surprising.\nThis is an effect similar to the original observation of adversarial examples.\nHowever it does NOT show fragility of interpretability maps.\nFor this reason, I believe that the paper needs to be updated before it can be accepted.\nThe observed effect does warrant further investigation and is an interesting finding.", "1) \"All three saliency methods, including the simple gradient method, have drawbacks and are unstable\"\n Thus the conclusion \"Interpretation of Neural networks is fragile\" is based on unstable metrics and hence it requires much more evidence (stable metrics) than shown in the paper.\n\n2) \"most of them led to very small changes in the prediction confidence\"\n Thus Figure 1 is not a good representative of the experimental results (since 1.a, 1.c have a huge change in their prediction confidence)\n\n3) Sure. 
So the authors agree that the statement \"importance of individual features or training examples is highly fragile to even small random perturbations to the input image\" is, after all, over-emphasized. As Figure 3 illustrates, for small perturbations L_\\infty = 1, random perturbations are around 3-10 times weaker than adversarial perturbations.\n\n4) Why was Spearman ranking chosen over L2? Can you share results for other distance measures anonymously? Many other submissions in ICLR do this.\n\n5) Sure.", "Thanks for your interest. \n\nAll three saliency methods, including the simple gradient method, have drawbacks and are fragile. The best way to see this is the systematic experiments in Fig. 3. \n\nOne of our findings is that, in general, perturbations can substantially change the interpretation (saliency) without changing the predicted label. In the systematic experiments of Sec. 4, ALL of the perturbations preserved the original label, and most of them led to very small changes in the prediction confidence. \n\nOur results show that our targeted perturbations are much more effective than random perturbations. We also demonstrate that the saliency of individual features (e.g. particular pixels) is fragile to even random perturbations. Fig. 3 top row shows that there is a ~80% turnover in the top features under random perturbation. These two results are complementary. \n\nYes, we have used other similarity metrics such as intersection of the top features, L2 distance, etc. and the same results hold. \n\nAs is common for most of the ICLR submissions, we plan to make all the code public soon. ", "Dear Authors,\n\n1) In Figure 8, the three saliency maps differ from each other. This implies that at least 2 of the saliency maps are incorrect/irrelevant to this problem (maybe all 3 are). \n To corroborate the above point:\n a) In Figure 9, the DeepLIFT saliency map differs from the other two \n b) In Figure 7, the Integrated saliency map differs from the other two. \n From this, one would think that the simple gradient method is the most reliable of the three methods, but Section 2.1 contradictorily states that (Shrikumar, Sundararajan) the \"simple gradient method\" has drawbacks.\n\n2) In Figure 1, the confidence of the output of the network has increased in all three of your examples. Is this a general phenomenon or are these hand-picked examples?\n\n3) The introduction stresses equally on\n a) \"importance of individual features or training examples is highly fragile to even small random perturbations to the input image\". \n b) \"we show how targeted perturbations can lead to different global interpretations\".\n However, surprisingly Figure 1 doesn't have any examples based on the random perturbation attack. \n On further searching, I was able to find an example in the appendix, where the changes in the saliency map due to random perturbation are nothing compared to the changes in the saliency map due to attacked perturbation. I feel the case for the random perturbation attack is over-emphasized.\n\n4) In section 2.2, each test image has a vector v of size |train_images|, where the element v[i] is the importance of training image i for the classification of the test image. For high dimensional vectors v1, v2, the Spearman rank correlation is very prone to small noise.\nTo overcome this, the most 'natural' way to quantify the similarity between two different test images would be to take a dot product of their corresponding vectors v1 and v2. 
One could also use other similarity measures such as 2 - d(v1,v2), where d(v1,v2) is a distance metric designed to overcome the noise in high dimensions.\n\n5) I was not able to find the public code of the experiments, even though this paper heavily relies on the experiments. Is the code public?\n\nThanks", "We will release our code soon to show how the algorithm is implemented. The gradients can be implemented in TensorFlow as we described in the paper.", "Thanks for your reply.\n\nI think you would agree that to apply your attack algorithm in the paper, you cannot avoid calculating the above gradient in the title, i.e. the gradient of the saliency map w.r.t. the input. However, this expression itself is not legal. Notice that the first gradient inside is actually the saliency map. We can calculate it because we know the forward pass. However, for the gradient outside, we will have a problem because now we don't know what the gradient for the pooling layers is. I don't think Softplus will solve this issue. \n\nI was wondering if you could release your code. Maybe you have a smart way to get around this based on some math derivations. \n\nThank you again.", "The interpretation attacks in our paper do work for NNs with pooling layers. In fact, our experimental results on ImageNet (Figs 1 and 3) were all produced for SqueezeNet, which has max-pooling and average pooling layers. The reason this works is as follows: when computing the attack direction, we replace ReLU with Softplus so we have non-zero second gradients. Max-pooling simply picks out a particular neuron to pass through, so it will have non-zero second order gradients as long as the activation has a non-zero second gradient (except on a set of measure zero). The second order chain rule has two terms, one of which is zero as the second order gradient of max-pooling is zero. The other term is non-zero as long as the activation functions of the network have non-zero second order gradients. For more clarification please look at Faà di Bruno's formula for the second order derivative. (You can empirically confirm it by building a simple network in TensorFlow with softplus activation and a maxpool layer and taking the second order gradient.)\nNote that we use this Softplus network only for the purpose of finding an attack direction; we still compute saliency with respect to the original ReLU network, as we discussed in the paper. This means that our attacks can be applied to any ReLU network with or without pooling. \nYou are correct that the main message of our paper is that the reliability of neural network interpretation is fragile, and we developed approaches to systematically quantify this effect for the first time. We expect that there are many ideas to improve upon the specific attacks we proposed, and this opens up an interesting research direction.", "To apply the attack method, we cannot avoid calculating the gradient of the saliency map w.r.t. the input. However, notice that the saliency map itself is a gradient of the maxlogit w.r.t. the input. The gradient for a neural network is usually calculated by using backpropagation, which depends on the forward pass. So, for any network with pooling layers, the gradient of the saliency map w.r.t. the input is not calculable, because the pooling layers, when you calculate the gradient for the second time, are not differentiable. 
In the paper, you only point out that for the widely used activation function ReLU, in order to avoid the zero gradient problem, we can replace it with a softplus activation. \n\nOne possible way to solve this problem is to stop the gradient of D w.r.t. the input x at the saliency map level. However, this fix doesn't make sense mathematically, though it might still be an attack method. \n\nConsidering that most networks will contain pooling layers, I think the application of this attack method is limited. But questioning the reliability of the visualization techniques is a good point in general. ", "My question is more about identifying important features rather than finding influential training images. I wonder if feature importance methods produce similarly good results on training and test images? ", "Generally speaking, we find similar performance between interpretability on training images and test images. Of course, if a training image is used at test time, influence functions will return the training image itself as the most influential image.", "Thanks for your thorough reply. Another question: regarding the subjective measure of interpretation, do feature importance methods perform similarly on training and test data?", "Very good question. \n To have a sense of the reliability of saliency methods, one can argue with both subjective and objective measures. The most prevalent objective measure in the literature has been weakly-supervised object localization, which is basically trying to localize the classified object using the most salient input dimensions (pixels). Simonyan et al. discussed this measure in their original work for the simple gradient method and reported less than 50% error. The DeepLIFT and Integrated Gradients methods have not yet been applied to the task of localization, to the best of our knowledge.\n As a subjective comment, however, I have to say that all three of the mentioned feature importance methods are successful (nearly all the time) in pointing to the region of important pixels (in other words, the region of the image containing the classified object); what makes them different in performance is how noisy the saliency map is. In other words, how many non-important pixels (subjectively) are pointed out by the feature importance method to be important, or how many missing important pixels one can detect in the salient part of the heat map. It could be said that Integrated Gradients results in the best subjective saliency map and DeepLIFT also has acceptable performance. Examples of both can be found in https://github.com/ankurtaly/Integrated-Gradients and https://github.com/kundajelab/deeplift, respectively. Also, the recent \"SmoothGrad: removing noise by adding noise\" paper has tried to solve the problem of noisy saliency maps and they have reported convincing results.\n Understanding whether influence functions provide meaningful explanations is more challenging since training examples can influence the prediction of a test image in different ways. For example, a very similar-looking training image from the same class may help the classifier identify the test image, but so can a very different-looking training image from a different class. As such, we find that the training examples that are identified as the most helpful by influence functions are not generally the same as those images that are visually similar to the test image. 
However, regardless of the meaningfulness of influence functions on a particular test image, we show that across the test images, we are able to perturb the ordering of the training image influences.", "I was wondering how reliable the interpretations are, i.e., for what fraction of original images do they generate a \"meaningful\" interpretation? " ]
[ 6, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1xJjlbAZ", "iclr_2018_H1xJjlbAZ", "iclr_2018_H1xJjlbAZ", "HJIRZCbeG", "S1Z4Lgqez", "Sk-uqZ9xG", "BkgXE1qef", "iclr_2018_H1xJjlbAZ", "S1UARBXgG", "HkJSs4Xlf", "iclr_2018_H1xJjlbAZ", "SJtR6e3kz", "BJhbOjiJG", "HyNgEDoJM", "iclr_2018_H1xJjlbAZ", "ryEWYqGJM", "HJAUq8CA-", "r1g2UGCRZ", "HyA39DpR-", "iclr_2018_H1xJjlbAZ" ]
iclr_2018_S1EzRgb0W
Explaining the Mistakes of Neural Networks with Latent Sympathetic Examples
Neural networks make mistakes. The reason why a mistake is made often remains a mystery. As such, neural networks are often considered a black box. It would be useful to have a method that can give an explanation that is intuitive to a user as to why an image is misclassified. In this paper we develop a method for explaining the mistakes of a classifier model by visually showing what must be added to an image such that it is correctly classified. Our work combines the fields of adversarial examples, generative modeling and a correction technique based on difference target propagation to create a technique that produces explanations of why an image is misclassified. In this paper we explain our method and demonstrate it on MNIST and CelebA. This approach could aid in demystifying neural networks for a user.
rejected-papers
The paper proposes a way to find why a classifier misclassified a certain instance. It tries to find perturbations in the input space to identify the appropriate reasons for the misclassification. The reviewers feel that the idea is interesting; however, it is insufficiently evaluated. Even for the datasets they do evaluate, not enough examples of success are provided. In fact, for CelebA the results are far from flattering.
train
[ "BJncxx9gf", "HJ3Gw8clz", "B1IHpaWWM", "BkKZxuaQz", "S1an1uaQz", "HyhKJ_pXM", "rJ072Pa7f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes a method for explaining the classification mistakes of neural networks. For a misclassified image, gradient descent is used to find the minimal change to the input image so that it will be correctly classified. \n\nMy understanding is that the proposed method does not explain why a classifier makes mistakes. Instead, it is about: what can be added/removed from the input image so that it can be correctly classified. Strictly speaking, \"Explaining the Decisions of Neural Networks\" is not the most relevant title for the proposed method. \n\nBased on my understanding about what the paper proposes, I am not sure how useful this method is, from the application point of view. It is unclear to me how this method can shed light to the mistakes of a classifier. \n\nThe technical aspects of the paper are straight forward optimization problem, with a sensible formulation and gradient descent optimization problem. There is nothing extraordinary about the proposed technique. \n\nThe method assumes the availability of a generative model, VAE. The implicit assumption is that this VAE performs well, and it raises a concern about the application domains where VAE does not work well. In this case, would the visualization reflect the shortcoming of VAE or the mistake of the classifier? \n", "In this paper, the authors aim to better understand the classification of neural networks. The authors explore the latent space of a variational auto encoder and consider the perturbations of the latent space in order to obtain the correct classification. They evaluate their method on CelebA and MNIST datasets.\n\nPros:\n\n 1) The paper explores an alternate methodology that uses perturbation in latent spaces to better understand neural networks \n2) It takes inspiration from adversarial examples and uses the explicit classifier loss to better perturb the $z$ in the latent space\n3) The method is quite simple and captures the essence of the problem well\n\nCons:\nThe main drawback of the paper is it claims to understand working of neural networks, however, actually what the authors end up doing are perturbations of the encoded latent space. This would evidently not explain why a deep network generates misclassifications for instance understanding the failure modes of ResNet or DenseNet cannot be obtained through this method. Other drawbacks include:\n\n1) They do not show how their method would perform against standard adversarial attack techniques, since by explaining a neural network they should be able to guard against attacks, or at-least explain why they work well. \n2) The paper reports results on 2 datasets, out of which on 1 of them it does not perform well and gets stuck in a local minima therefore implying that it is not able to capture the diversity in the data well. \n\n3) The authors provide limited evaluation on a few attributes of CelebA. Extensive evaluation that would show on a larger scale with more attributes is not performed.\n\n4) The authors also have claimed that the added parts should be interpretable and visible. However, the perturbations of the latent space would yield small $\\epsilon$ variation in the image and it need not actually explain why the modification is yielding a correct classification, the same way an imperceptible adversarial attack yields a misclassification. Therefore there is no guarantee that the added parts would be interpretable. 
What would be more reasonable to claim is that the latent transformations that yield correct classifications are projected into the original image space. Some of these yield interpretations that are semantically meaningful and some of these do not yield semantically meaningful interpretations.\n\n5) Solving misclassification does not seem to equate with explaining the neural network, but rather only suggests where it makes mistakes. That is not equal to an explanation of how it is making a classification decision. That would rather be done by using the same input and perturbing the weights of the classifier network. \n\nIn conclusion, the paper in its current form provides a direction in terms of using latent space exploration to understand classification errors and corrections to them in terms of perturbations of the latent space. However, these are not conclusive yet and actually verifying this would need a more thorough evaluation.", "Summary: The authors propose a method for explaining why neural networks make mistakes by learning how to modify an image with a mistaken classification to make it a correct classification. They do this by perturbing the image in an encoded latent space and then reconstructing the perturbed image. The explanation is the difference between the reconstructed perturbed encoded image and the reconstructed original encoded image.\n\nThe title is too general as this paper only offers an explanation in the area of image classification, which, by itself, is still interesting. \n\nA method for explaining the results of neural networks is still open ended, and visually, to the human eye, this paper does offer an explanation of why the 8 is misclassified. However, if this works very well for MNIST, more examples should be given. This single example is interesting but not sufficient to illustrate the success of this method. \n\nThe examples from CelebA are interesting but inconclusive. For example, why should adding blue to the glasses fix the misclassification? If the explanation is simply visual for a human, then this explanation does not suffice. And the two examples with one receding and the other not receding hairlines look like their correct classifications could be flipped.\n\nRegarding epsilon, it is unclear what a small Euclidean distance for epsilon is without more examples. It would also help to see how the Euclidean distance changes along the path. But also it is not clear why we care about the size of epsilon, but rather the size of the perturbation that must be made to the original image, which is what is defined in the paper as the explanation. \n\nSince it is the encoded image that is perturbed, and this is what causes the perturbations to be selective to particular features of the image, an analysis of which features in the encoded space are modified would greatly help the interpretability of this explanation. The fact that perturbations are made in the latent space, and that this perturbation gets reflected in particular areas in the reconstructed image, is the most interesting part of this work. More discussion around this would greatly enhance the paper, especially since the technical tools of this method are not very strong.\n\nPros: Interesting explanation; visually selects certain parts of the image relevant to classification rather than obscure pixels\n\nCons: No discussion or analysis of the latent space where perturbations occur. Only one easy example from MNIST shown and the examples on CelebA are not great. 
No way (suggested) to use this tool outside of image recognition. ", "Hello AnonReviewer1,\n\nJust a notification that we have given our reply at the top of the page! In one comment for all reviewers.\n\nThis is done to keep this forum tidy!\n\nKind regards", "Hello AnonReviewer3,\n\nJust a notification that we have given our reply at the top of the page! In one comment for all reviewers.\n\nThis is done to keep this forum tidy!\n\nKind regards", "Hello AnonReviewer2,\n\nJust a notification that we have given our reply at the top of the page! In one comment for all reviewers.\n\nThis is done to keep this forum tidy!\n\nKind regards", "Dear Anonymous Reviewers,\n\nThank you for taking the time to respond to our paper. We have taken your points for improvement into consideration and made changes to the paper. \n\nWhat we have done is the following. Firstly, we have changed the title to refer more specifically to explaining why input is misclassified by a neural network. The title is now: ‘Explaining the Mistakes of Neural Networks Using Latent Sympathetic Examples’. We have also reformulated the paper to communicate this intent more clearly. Secondly, we have included more examples on MNIST. \nThirdly, we have changed some of the examples on CelebA to ones that are more illustrative of the technique’s ability to perturb images. We have changed the normalization on the perturbations to make it more salient what is changed. We have also made a connection to heatmapping methods that exist in the prior literature.\nFourthly, the reason why epsilon must stay as small as possible is a) to reduce the reconstruction error of the generative model and b) to generate only those features that are absolutely necessary to perturb the image to the correct class. For instance, assume we have an image of a person that has a mustache but is misclassified as not having a mustache. If we use our method to go to the absolute minimum error, we may end up with an entirely new face projected over the original image and not just the area of the mustache. Though technically correct, this would not yield meaningful explanations, since one can always make a face more mustache-like by continuing to project the prototypical mustached face on it. Therefore it is important that epsilon be encouraged to stay small. We have explained this more clearly in the paper.\nFifthly, we attempt to explain which features are often perturbed in the latent space by including an example. We have done an analysis of why certain males were misclassified as female. We found that males with long hair were often misclassified and that the hair was perturbed to get them to be correctly classified. This also reveals a novel application: discovering non-trivial dataset weaknesses, or biases. In this case our method points to there being too few males with long hair in the dataset. \n\nAnother concern that was raised was that we do not show how our method would perform against a standard adversarial attack. It is not our intent to adversarially attack, nor to guard against adversarial attacks. Normal heatmapping methods would not allow you to do so either. The goal of our method is to locate potential areas in the input image that caused a misclassification, as opposed to the unconstrained setting (Simonyan 2013), where the input image can be perturbed in any possible way (Ancona et al., 2017). 
This would mean that the perturbations very rarely have any semantically meaningful footprint, since the perturbations would be small changes in pixel values all over the image. Because we are interested in those perturbations that could lead to reasonable misclassifications, we place constraints on them. There is no guarantee that these perturbations are meaningful; however, experimental results show us that this is often the case.\n\nThe issue of performance was raised. Specifically, the issue of local minima. To clarify, there are several causes of poor performance: 1) The generative model is not strong enough. 2) Local minima. 3) Attributes that are hard to demarcate for annotators. As the size of the latent space increases, the local minima problem becomes less prevalent because there are simply more ways to ‘go down’ in the error space. This is made clear by the differing success rates using a VAE with a 2-D versus a 10-D latent space (Section 3.2.1). \n\nFinally, the issue of the performance of the generative model is also raised. We use a VAEGAN for the CelebA data, which is more powerful than a VAE. In order to check whether the generative model is to blame, one can simply compare the reconstruction error to the perturbation and see whether the generative model was indeed capable of capturing the data well. \n\nWe hope that we have addressed all of your questions, and improved where improvement was possible. Thank you again for your feedback.\n\nKind regards,\n\n\nAnonymous.\n" ]
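The perturbation procedure debated in this thread is compact enough to sketch. The following is a minimal illustration of the general idea, not the authors' implementation: `enc`, `dec`, and `clf` are hypothetical names for a pretrained, frozen encoder, decoder, and classifier, and `penalty` is an assumed hyperparameter that encourages the latent offset to stay small, as the author response describes.

```python
import torch
import torch.nn.functional as F

def latent_sympathetic_example(x, target, enc, dec, clf,
                               penalty=0.1, lr=0.05, steps=200):
    """Search for a small latent offset eps so that dec(z + eps) is
    classified as `target`. enc/dec/clf are assumed pretrained and frozen."""
    with torch.no_grad():
        z = enc(x)                       # encode once; only eps is optimized
    eps = torch.zeros_like(z, requires_grad=True)
    opt = torch.optim.Adam([eps], lr=lr)
    for _ in range(steps):
        x_pert = dec(z + eps)            # reconstruction of the perturbed code
        loss = F.cross_entropy(clf(x_pert), target) + penalty * eps.norm() ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
    # the "explanation" is the difference between the two reconstructions
    explanation = (dec(z + eps) - dec(z)).detach()
    return explanation, eps.detach()
```

The norm penalty mirrors the authors' stated design choice: without it, the optimizer is free to project an entirely new class prototype over the image rather than changing only the features needed for the target class.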
[ 4, 4, 6, -1, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_S1EzRgb0W", "iclr_2018_S1EzRgb0W", "iclr_2018_S1EzRgb0W", "BJncxx9gf", "HJ3Gw8clz", "B1IHpaWWM", "iclr_2018_S1EzRgb0W" ]
iclr_2018_r1Oen--RW
The (Un)reliability of saliency methods
Saliency methods aim to explain the predictions of deep neural networks. These methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction. We use a simple and common pre-processing step ---adding a mean shift to the input data--- to show that a transformation with no effect on the model can cause numerous methods to attribute incorrectly. We define input invariance as the requirement that a saliency method mirror the sensitivity of the model with respect to transformations of the input. We show, through several examples, that saliency methods that do not satisfy an input invariance property are unreliable and can lead to misleading and inaccurate attribution.
rejected-papers
This paper showcases how saliency methods are brittle and cannot be trusted to produce robust explanations. The authors define a property called input invariance that they claim all reliable explanation methods must possess. The reviewers have concerns regarding the motivation of this property, i.e., why it is needed; this is not clear from the exposition. Moreover, even after having the opportunity to update the manuscript, the authors did not address this issue beyond providing a generic response.
train
[ "BJ6e_ttgf", "HyVpmntgG", "B1nPks1-f", "BkVa3BTQz", "By_xnrTQf", "S14SnSpmM", "rk1tiS67M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The scope of the paper is interesting i.e. taking a closer look at saliency methods in view of explaining deep learning neural networks. The authors state that saliency methods that do not satisfy an input invariance property can be misleading.\n\nOn the other hand the paper can be improved in my opinion in different aspects:\n- it would be good to be more precise on the type of invariance (e.g. translation invariance, rotation invariance etc.) or is the paper only about invariance to mean shifts? I suggest to explain in the introduction which type of invariances have been considered in the area of deep learning and then position the paper relative to it.\n- in the introduction the authors talk about an \"invariance axiom\": it was difficult to see where in the paper this axiom is precisely stated.\n- While in section 2.1 specifies a 3-layer MLP as the considered deep learning network. It is not clear why CNN haven't been used here (especially because the examples are on MNIST images), while in section 3.1 this is mentioned. \n- I think that the conclusion with respect to invariances could also depend on the choice of the activation function. Therefore the authors should from the beginning make more clear to which class of deep learning networks the study and conclusions apply.\n- From section 3.1 it becomes rather unclear which parts of the paper relate to the literature and which parts relate to section 2.1. Also the new findings or recommendations are not clear.\n\n\n\n\n\n", "The authors explore how different methods of visualizing network decisions (saliency methods) react to mean shifts of the input data by comparing them on two networks that are build to compensate for this mean shift. With the emergence of more and more saliency methods, the authors contribute an interesting idea to a very important and relevant discussion.\n\nHowever, I'm missing a more general and principled discussion. The question that the authors address is how different saliency methods react to transformations of the input data. 
Since the authors make sure that their two models compensate for these transformations, the difference in saliency can only be due to underlying assumptions about the input data made by the saliency methods, and therefore the discussion boils down to which invariance properties are justified for which kind of input -- it is not by chance that the attribution methods that work are exactly those that extract statistics from the input data and therefore compensate for the input transformation: IG with a black reference point and Pattern Attribution.\nThe mean shift explored by the authors assumes that there is no special point in the input space (especially that zero is not a special point).\nHowever, since images usually are considered bounded by 0 and 1 (or 255), there are in fact two special points (as a side note, in Figure 2 left column, the two inputs look very different which might be due to the fact that it is not at all obvious how to visualize \"image\" input that does not adhere to the common image input structure).\nWould the authors argue that scaling the input with a positive factor should also lead to invariant saliency methods?\nWhat about scaling with a negative factor?\nI would argue that if the input has a certain structure, then the saliency method should be allowed to make use of this structure.\n\nMinor points:\n\nUnderstanding the two models in section 3 is a bit hard since the main point (both networks share the weights and biases except for the bias of the first layer) is only stated in 2.1\n", "Saliency methods are effective tools for interpreting the computation performed by DNNs, but evaluating the quality of interpretations given by saliency methods is often largely heuristic. Previous work has tried to address this shortcoming by proposing that saliency methods should satisfy \"implementation invariance\", which says that models that compute the same function should be assigned the same interpretation. This paper builds on this work by proposing and studying \"input invariance\", a specific kind of implementation invariance between two DNNs that compute identical functions but where the input is preprocessed in different ways. Then, they examine whether a number of existing saliency methods satisfy this property.\n\nThe property of \"implementation invariance\" proposed in prior work seems poorly motivated, since the entire point of interpretations is that they should explain the computation performed by a specific network. Even if two DNNs compute the same function, they may do so using very different computations, in which case it seems natural that their interpretations should be different. Nevertheless, I can believe that the narrower property of input invariance should hold for saliency methods.\n\nA much more important concern I have is that the proposed input invariance property is not well motivated. A standard preprocessing step for DNNs is to normalize the training data, for example, by subtracting the mean and dividing by the standard deviation. Similarly, for image data, pixel values are typically normalized to [0,1]. Assuming inputs are transformed in this way, the input invariance property (for mean shift) is always trivially satisfied. The paper does not justify why we should consider networks where the training data is not normalized in such a way.\n\nEven if the input is not normalized, the failures they find in existing saliency methods are typically rather trivial. 
For example, for the gradient times input method, they are simply noting that the interpretation is translated by the gradient times the mean shift. The paper does not discuss why this shift matters. It is not at all clear to me that the quality of the interpretation is adversely affected by these shifts.\n\nI believe the notion that saliency methods should be invariant to input transformations may be promising, but more interesting transformations must be considered -- as far as I can tell, the property of invariance to linear transformations of the input does not provide any interesting insight into the correctness of saliency methods.\n", "Quote: \n“- it would be good to be more precise on the type of invariance (e.g. translation invariance, rotation invariance etc.) or is the paper only about invariance to mean shifts? I suggest explaining in the introduction which types of invariance have been considered in the area of deep learning and then positioning the paper relative to them.”\n\nAnswer:\nWe agree with the reviewer that we could have been more explicit. Our broad motivation is determining failure points by formulating unit tests of commonly used saliency methods. We propose that one criterion methods should fulfill in order to be reliable is input invariance (II). We benchmark methods by considering one possible transformation, a mean shift of the input. The reviewer is correct that additional transformations of the inputs should also be considered, but that is beyond the current scope.\n\n\n\nQuote:\n “- in the introduction the authors talk about an \"invariance axiom\": it was difficult to see where in the paper this axiom is precisely stated.”\n\nAnswer:\nThe term axiom was used in the same vein as prior work (integrated gradients) to articulate a desirable property. Upon reflection on the reviewer's feedback, we agree that there may be more suitable terminology. A rephrasing consistent with our original intent is that input invariance is a desideratum of interpretability methods.\n\n\n\nQuote: \n “- Section 2.1 specifies a 3-layer MLP as the considered deep learning network. It is not clear why CNNs haven't been used here (especially because the examples are on MNIST images), while in section 3.1 this is mentioned.”\n\n\nAnswer:\nWe argue attribution methods should work for all architectures. Therefore the validity of a test does not depend on the architecture chosen. This also implies that a test might not be possible for all architectures.\n\n\n\nQuote: \n“- I think that the conclusion with respect to invariances could also depend on the choice of the activation function. Therefore the authors should from the beginning make more clear to which class of deep learning networks the study and conclusions apply.”\n\nAnswer:\nThe given test does not depend on the activation function since from the first linear operation all activations are identical. However, we do agree with the reviewer that in future, more advanced tests, the activation function could play a role.\n\n\n\nQuote:\n “- From section 3.1 it becomes rather unclear which parts of the paper relate to the literature and which parts relate to section 2.1. Also the new findings or recommendations are not clear.”\n\nAnswer:\nWe acknowledge the reviewer's style feedback and agree that we could be more clear. 
To clarify, the key contributions of this manuscript, which we will state explicitly in a future version, are:\n- determining whether commonly used saliency methods attribute reliably, by considering the input invariance of said methods.\n- Our first recommendation is that new methods be benchmarked against this test.\n- Our second recommendation is that this serves as a starting point for considering additional failure points. By extending our repertoire of tests we can develop more robust interpretability methods. This is crucial for an emerging field where the lack of ground truth means there is no known way of measuring success. If we can reliably determine failure cases and fix these, we can weigh how to use these methods going forward.\n", "Quote:\n “A much more important concern I have is that the proposed input invariance property is not well motivated. A standard preprocessing step for DNNs is to normalize the training data, for example, by subtracting the mean and dividing by the standard deviation. Similarly, for image data, pixel values are typically normalized to [0,1].”\n\nAnswer:\nIn response to the statement that input invariance is not well motivated:\nOur stance is that interpretability methods should pass our test regardless of the pre-processing of the data. The onus should not be on the researcher to ensure correct preprocessing in order to guarantee a reliable explanation.\nWe argue that even among popular architectures like ResNet and Inception there is no standard image normalization procedure (e.g., some are [0,1] whereas others are [-1,1]).\n\n\n\nQuote: \n“Even if the input is not normalized, the failures they find in existing saliency methods are typically rather trivial. For example, for the gradient times input method, they are simply noting that the interpretation is translated by the gradient times the mean shift. The paper does not discuss why this shift matters. It is not at all clear to me that the quality of the interpretation is adversely affected by these shifts.”\n\nAnswer:\nThe failures are regarded by the reviewer as insubstantial given that the explanation is still interpretable. We acknowledge that we chose a simple transformation to illustrate a simple point of failure. This is sufficient to show that a point of failure exists. Furthermore, the motivation for interpretability research is to explain model predictions for data we do not yet fully understand. Illustrating that many methods fail in the simple case is therefore valuable.\n\n\n\nQuote: \n“I believe the notion that saliency methods should be invariant to input transformations may be promising, but more interesting transformations must be considered -- as far as I can tell, the property of invariance to linear transformations of the input does not provide any interesting insight into the correctness of saliency methods.”\n\nAnswer:\nWe agree with the reviewer that future work on additional input transformations is needed. However, we justify our approach by the logic that a single transformation is sufficient to demonstrate a method is unreliable. Our contribution is to formulate a unit test that can detect a specific failure point and allows us to proactively improve existing methods. Note that our unit test does not account for all possible failure points, and thus more research is needed to consider whether the methods deemed to be reliable remain so. 
We invite other researchers to design additional unit tests.\n", "Quote: \n“However, I'm missing a more general and principled discussion. The question that the authors address is how different saliency methods react to transformations of the input data. Since the authors make sure that their two models compensate for these transformations, the difference in saliency can only be due to underlying assumptions about the input data made by the saliency methods, and therefore the discussion boils down to which invariance properties are justified for which kind of input -- it is not by chance that the attribution methods that work are exactly those that extract statistics from the input data and therefore compensate for the input transformation: IG with a black reference point and Pattern Attribution.”\n\nAnswer: \nThe reviewer is correct in stating that we designed the experiment in such a way that the transformation does not affect the model prediction or weights. This allows for a principled evaluation of input invariance, which is the contribution of this manuscript. \n\n\n\n\nQuote: \n“The mean shift explored by the authors assumes that there is no special point in the input space (especially that zero is not a special point).\nHowever, since images usually are considered bounded by 0 and 1 (or 255), there are in fact two special points (as a side note, in Figure 2 left column, the two inputs look very different which might be due to the fact that it is not at all obvious how to visualize \"image\" input that does not adhere to the common image input structure).”\n\nAnswer:\nArguably, even zero is not a special point, since mean shifts of the data can be compensated for by the biases of the first layer in the network. The encoding of 0 to 1 is common but not necessarily special; furthermore, Inception uses an encoding of [-1,1] and ResNet uses zero mean. Therefore it is unclear whether a reference point of 0 or 1 given a [0,1] encoding is more special than using the mean of the image. The question of what is a good reference (i.e. special point) is relevant and an open research question for both Deep Taylor Decomposition and Integrated Gradients. \nFor the Deep-Taylor decomposition, PatternAttribution proposes a learned reference point that is invariant to the mean vector shift since it is based on covariances. It is not yet clear how to do this for IG, and we encourage the community to solve this open problem.\n\n\n\n\nQuote: \n“Would the authors argue that scaling the input with a positive factor should also lead to invariant saliency methods?\nWhat about scaling with a negative factor?\nI would argue that if the input has a certain structure, then the saliency method should be allowed to make use of this structure.”\n\nAnswer:\nIf we scale the image with a positive factor, the weights would be required to compensate for this change to ensure the activations remain the same. The same holds for scaling with a negative factor. For this reason, all attributions (though not the signal or gradients) would remain intact. \nThe problem is not that the input has a specific structure and that the saliency method picks up on this structure. The issue is that we included a mean shift, which the network compensates for effectively. This mean shift does not contain class information, yet it dominates the attribution. \n
This is a young field and developing tests for reliable methods can help our field to become (even) more rigorous and scientific. \n\nOur work is based on the following observations:\n\n- Interpretability methods should be reliable. We define reliability as being insensitive to factors that do not affect the decision making process learnt by the model. \n\n- The utility of these methods, particularly in sensitive domains like health care, depends upon demonstrating reliability regardless of the model architecture and data preprocessing chosen.\n\n- There is no ground truth for what a model finds important which has led to a large number of methods with surprisingly different outcomes. Benchmarking the “quality” of saliency methods is a difficult and unsolved problem. \n\n- Our framework for measuring the utility of saliency methods is to determine points of failure in reliability. A single point of failure is sufficient to show that a method is unreliable; an analogue to proving by counterexample.\n\n- Determining failure points allows us to proactively improve existing methods. We invite other researchers to find additional failure points. Determining where methods fall short is a crucial step in choosing appropriate methods for given tasks and improving these methods.\n\n\nBased on the points above we argue that our contribution is important because we demonstrate that a simple, commonly used transformation, causes many (recently published) saliency methods to fail. It is necessary to initiate this conversation because visually determining points of failure is far from trivial in high dimensional data and in modalities other than images such as audio and word vectors.\n\nProving that many methods are unreliable using a very simple transformation case is a starting point for the community to develop more reliable methods. It is akin to a unit test which does not guarantee that your code solves the correct problem but highlights when your code clearly does not solve the problem. In this paper we formulate a single “unit test” allows us to identify points of failure and develop robust methods in the future. \n" ]
[ 5, 4, 4, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_r1Oen--RW", "iclr_2018_r1Oen--RW", "iclr_2018_r1Oen--RW", "BJ6e_ttgf", "B1nPks1-f", "HyVpmntgG", "iclr_2018_r1Oen--RW" ]
iclr_2018_SJPpHzW0-
Influence-Directed Explanations for Deep Convolutional Networks
We study the problem of explaining a rich class of behavioral properties of deep neural networks. Our influence-directed explanations approach this problem by peering inside the network to identify neurons with high influence on the property of interest using an axiomatically justified influence measure, and then providing an interpretation for the concepts these neurons represent. We evaluate our approach by training convolutional neural networks on the Pubfig, ImageNet, and Diabetic Retinopathy datasets. Our evaluation demonstrates that influence-directed explanations (1) localize features used by the network, (2) isolate features distinguishing related instances, (3) help extract the essence of what the network learned about the class, and (4) assist in debugging misclassifications.
rejected-papers
The paper defines a new measure of influence and uses it to highlight important features. The definition is new; however, the reviewers have concerns regarding its significance and novelty, and a thorough empirical comparison to the existing literature is missing.
train
[ "ryI_k_PeM", "r1ieZZcxf", "HyPSFK2gf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Notions of \"influence\" have become popular recently, and these notions try to understand how the output of a classifier or a learning algorithm is influenced by its training set. In this paper, the authors propose a way to measure influence that satisfies certain axioms. This notion of influence may be used to identify what part of the input is most influential for the output of a particular neuron in a deep neural network. Using a number of examples, the authors show that this notion of influence seems useful and may yield non-trivial results.\n\nMy main criticism of this paper is the definition of influence. It is easy to see that sometimes, influence of $x_i$ in a function $f(x_1, \\dots, x_n)$ will turn out to be 0, simply because the integral in equation (1) is 0. However, this does not mean that $x_i$ is irrelevant to the the output f. This is not a desirable property for any notion of influence. A better definition would have been taking the absolute value of the partial derivative of f wrt x_i, or square of the same. This will ensure that equation (1) will always lead to a positive number as the influence, and 0 influence will indeed imply x_i is completely irrelevant to the output of f. These alternate notions do not satisfy Axiom 1, and possibly Axiom 5. But it is likely that tweaking the axioms will fix the issue. The authors should have at least mentioned why they preferred to use df/dx instead of |df/dx| or (df/dx)^2, since the latter clearly make more intuitive sense.\n\nThe examples in section 3 are quite thorough, but I feel the basic idea of measuring influence by equation (1) is not on solid footing. ", "SUMMARY \n========\nThis paper proposes to measure the \"influence\" of single neurons w.r.t. to a quantity of interest represented by another neuron, typically w.r.t. to an output neuron for a class of interest, by simply taking the gradient of the corresponding output neuron w.r.t to the considered neuron. This gradient is used as is, given a single input instance, or else, gradients are averaged over several input instances. \nIn the latter case the averaging is described by an ad-hoc distribution of interest P which is introduced in the definition of the influence measure, however in the present work only two types of averages are practically used: either the average is performed over all instances belonging to one class, or over all input instances.\n\nIn other words, standard gradient backpropagation values (or average of them) are used as a proxy to quantify the importance of neurons (these neurons being within hidden layers or at the input layer), and are intended to better explain the classification, or sometimes even misclassification, performed by the network.\n\nThe proposed importance measure is theoretically justified by stating a few properties (called axioms) an importance measure should generally verify, and then showing the proposed measure fullfills these requirements.\n\nEmpirically the proposed measure is used to inspect the classification of a few input instances, to extract \"class-expert\" neurons, and to find a preprocessing bug in one model. The only comparison to a related work method is done qualitatively on one image visualization, where the proposed method is compared to Integrated Gradients [Sundararajan et al. 2017].\n\nWEAKNESSES\n==========\nThe similarity and differences between the proposed method and related work is not made clear. 
For example, in the case of a single input instance, and when the quantity of interest is one output neuron corresponding to one class, the proposed measure is identical to the image-specific class saliency of [Simonyan et al. 2014].\nThe difference to Integrated Gradients [Sundararajan et al. 2017] at the end of Section 1.1 is also not clearly formulated: why is the constraint on distribution marginality weaker here?\nAn important class of explanation methods, namely decomposition-based methods (e.g. LRP, Excitation Backprop, Deep Taylor Decomposition), is not mentioned. Recent work (Montavon et al., Digital Signal Processing, 2017) discusses the advantages of decomposition-based methods over gradient-based approaches. Thus, the authors should clearly state the advantages/disadvantages of the proposed gradient-based method over decomposition-based techniques.\n\nConcerning the theoretical justification:\nIt is not clear how Axiom 2 ensures that the proposed measure only depends on points within the input data manifold. This is indeed an important issue, since otherwise the gradients in equation (1) might be averaged completely outside the data manifold and the influence measure thus be unrelated to the data and problem the neural network was trained on. Also the notation used in Axiom 5 is very confusing. Moreover it seems this axiom is not even used in the proof of Theorem 2.\n\nConcerning the experiments:\nThe experimental setup, especially in Section 3.3.1, is not well defined: on which layer of the network is the mask applied? What is the \"quantity of interest\": shouldn't it be an output neuron value rather than h|i (as stated at the beginning of the fourth paragraph of Section 3.3.1)?\nThe proposed method should be quantitatively compared with other explanation techniques (e.g. by iteratively perturbing the most relevant pixels and tracking the performance drop, see Samek et al., IEEE TNNLS, 2017).\nThe last example of explaining the bug is not very convincing, since the observation that class 2 distinctive features are very small in the image space, and thus might have been erased through Gaussian blur, is not directly related to the influence measure and could also have been made independently of it.\n\nCONCLUSION\n==========\nOverall this work does not introduce any new importance measure for neurons; it merely formalizes the use of standard backpropagation gradients as an influence measure.\nUsing gradients as an importance measure was already done in previous work (e.g. [Simonyan et al. 2014]). Though taking the average of gradients over several input instances is new, this information might not be of great help for practical applications.\nRecent work also showed that raw gradients are less informative than decomposition-based quantities for explaining the classification decisions made by a neural network.", "The authors extend the traditional approach of examining the gradient in order to understand which features/units are the most relevant to a given class.\n\nTheir extension proposes to measure the influence over a set of images by adding up influences over individual images. They also propose measuring influence for the classification decision restricted to two classes, by taking the difference of two class activations as the objective.\n\nThey provide an axiomatic treatment which shows that this gradient-based approach has desirable qualities.\n\nOverall it's not clear what this paper adds to the existing body of work:\n1. 
the axiomatic treatment takes up the bulk of the paper, but does not motivate any significantly new method\n2. from the experimental evaluation it's not clear the results are better than existing work, e.g., Yosinski http://yosinski.com/deepvis" ]
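The alternatives the first review argues for are easiest to compare side by side. The following is my sketch of the three candidate measures using the notation suggested by the reviews (P is the distribution of interest over which gradients are averaged; the paper's equation (1) is said to correspond to the first form):

```latex
\chi_i(f) = \int \frac{\partial f}{\partial x_i}(x)\, dP(x), \qquad
\chi_i^{\mathrm{abs}}(f) = \int \left| \frac{\partial f}{\partial x_i}(x) \right| dP(x), \qquad
\chi_i^{\mathrm{sq}}(f) = \int \left( \frac{\partial f}{\partial x_i}(x) \right)^{2} dP(x)
```

The first can vanish through cancellation of positive and negative gradients even when x_i matters; the latter two are zero only if the partial derivative vanishes P-almost everywhere, which is the reviewer's point.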
[ 5, 4, 4 ]
[ 3, 5, 3 ]
[ "iclr_2018_SJPpHzW0-", "iclr_2018_SJPpHzW0-", "iclr_2018_SJPpHzW0-" ]
iclr_2018_rJhR_pxCZ
Interpretable Classification via Supervised Variational Autoencoders and Differentiable Decision Trees
As deep learning-based classifiers are increasingly adopted in real-world applications, the importance of understanding how a particular label is chosen grows. Single decision trees are an example of a simple, interpretable classifier, but are unsuitable for use with complex, high-dimensional data. On the other hand, the variational autoencoder (VAE) is designed to learn a factored, low-dimensional representation of data, but typically encodes high-likelihood data in an intrinsically non-separable way. We introduce the differentiable decision tree (DDT) as a modular component of deep networks and a simple, differentiable loss function that allows for end-to-end optimization of a deep network to compress high-dimensional data for classification by a single decision tree. We also explore the power of labeled data in a supervised VAE (SVAE) with a Gaussian mixture prior, which leverages label information to produce a high-quality generative model with improved bounds on log-likelihood. We combine the SVAE with the DDT to get our classifier+VAE (C+VAE), which is competitive in both classification error and log-likelihood, despite optimizing both simultaneously and using a very simple encoder/decoder architecture.
rejected-papers
The paper proposes a new model called the differentiable decision tree, which aims to capture the benefits of decision trees and VAEs. The authors evaluate the method only on the MNIST dataset. The reviewers thus rightly complain that the evaluation is insufficient, and one also questions the technical novelty.
train
[ "SJCjWdiJG", "H1v-LprxG", "HktiHfugG", "Sy9KgIP-G", "rJ09X3Tlf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "\nSummary\n\nThis paper proposes a hybrid model (C+VAE)---a variational autoencoder (VAE) composed with a differentiable decision tree (DDT)---and an accompanying training scheme. Firstly, the prior is specified as a mixture distribution with one component per class (SVAE). During training, the ELBO’s KL term uses the component that corresponds to the known label. Secondly, the DDT’s leaves are parametrized with the encoder distribution q(z|x), and thus gradient information flows back through the DDT into the posterior approximations in order to make them more discriminative. Lastly, the VAE and DDT are trained together by alternating optimization of each component (plus a ridge penalty on the decoder means). Experiments are performed on MNIST, demonstrating tree classification performance, (supervised) neg. log likelihood performance, and latent space interpretability via the DDT. \n\n\nEvaluation\n\nPros: Giving the VAE discriminative capabilities is an interesting line of research, and this paper provides another take on tree-based VAEs, which are challenging to define given the discrete nature of the former and continuous nature of the latter. Thus, I applaud the authors for combining the two in a way that admits efficient training. Moreover, I like the qualitative experiment (Figure 2) in which the tree is used to vary a latent dimension to change the digit’s class. I can see this being used for dataset augmentation or adversarial example generation, for instance.\n\nCons: An indefensible flaw in the work is that the model is evaluated on only MNIST. As there is no strong theory in the paper, this limited experimental evaluation is reason enough for rejection. Yet, moreover, the negative log likelihood comparison (Table 2) is not an informative comparison, as it speaks only to the power of adding supervision. Lastly, I do not think the interpretability provided by the decision tree is as great as the authors seem to claim. Decision trees provide rich and interpretable structure only when each input feature has clear semantics. However, in this case, the latent space is being used as input to the tree. As the decision tree, then, is merely learning hard, class-based partitioning rules for the latent space, I do not see how the tree is representing anything especially revealing. Taking Figure 2 as an example (which I do like the end result of), I could generate similar results with a black-box classifier by using gradients to perturb the latent ‘4’ mean into a latent ‘7’ mean (a la DeepDream). I could then identify the influential dimension(s) by taking the largest absolute values in the gradient vector. Maybe there is another use case in which a decision tree is superior; I’m just saying Section 4.3 doesn’t convince me to the extent that was promised earlier in the paper (and by the title).\n\nComment: It's easier to make a latent variable model interpretable when the latent variables are given clear semantics in the model definition, in my opinion. Otherwise, the semantics of the latent space become too entangled. Could you, somehow, force the tree to encode an identifiable attribute at each node, which would then force that attribute to be encoded in a certain dimension of latent space? \n", "The paper tries to build an interpretable and accurate classifier via stacking a supervised VAE (SVAE) and a differentiable decision tree (DTT). The problem is important and interesting. The authors list the contributions of each part but it seems that only the final contribution, i.e. 
analysis of the interpretability, is interesting and should be further extended and emphasized. Here are the detailed comments.\n\n1. I think Table 2 does not make sense at all. This is not only because the authors use the label information but also because the authors compare different quantities. The previous methods evaluate log p(x) while the proposed method evaluates log p(x, y), which should be much lower, as the proposed method potentially trains a separate model for each class of x for evaluation.\n\n2. The generation results of the SVAE shown in Figure 7 in Appendix A seem strange, as the diversity of the samples is much less than that of samples from the vanilla VAEs. Could the authors explain this mode collapse phenomenon? \n\n3. The results in Table 1 are not interesting. It is most useful to interpret the state-of-the-art classifier, while the results of the proposed methods are far from the state-of-the-art even on such a simple dataset as MNIST.\n\n4. The most interesting results of this paper are shown in Figure 1. However, I think the results on the interpretability should be further extended. Several questions are as follows: \n\nWhy are other dimensions not as interpretable as dimension 21?\n\nCan we also interpret a VAE given labels by varying each dimension of the latent variables without jointly training a DDT? I personally think some of the dimensions of the latent variables of the vanilla VAEs can also be interpreted via interpolation in each dimension. \n\nCan these results be generalized to other datasets, consisting of natural images? \n\nOverall, this paper is below the acceptance threshold.\n ", "This paper addresses a method of building an interpretable model for classification, where two key ingredients are (1) a supervised variational autoencoder and (2) a differentiable decision tree. Recently, one important line of research has been to build interpretable models which have more modeling capacity while maintaining interpretability, over existing models such as linear models or decision trees. In this sense, the current work is timely research. A few contributions are claimed in this paper: (1) a differentiable decision tree which allows for gradient-based optimization; (2) a supervised VAE where a class-specific Gaussian prior is used for the probabilistic decoder in the VAE; (3) the combination of these two models. Regarding the differentiable decision tree, I am not an expert in decision trees. However, I understand that there has been various work on probabilistic decision trees, Bayesian decision trees, and Mondrian trees. More literature survey might be needed to pinpoint what's new and what's common with previous work. Regarding the supervised VAE, the term \"supervised VAE\" is misleading. To me, the current model is nothing but a VAE with a class-specific Gaussian prior. (3) Regarding the combination of the supervised VAE and the DDT, it would be much better to show us a graphical illustration of the model to improve the readability. I see the encoder is common to both the decoder and the DDT. However, it is not clear how the DDT is coupled with the encoder. It seems that the DDT takes the output of the encoder as input, but the output of the DDT is not coupled with the VAE. ", "We greatly appreciate the detailed feedback from the reviewers, and will look into refocusing our paper on the interpretability aspects.\n\nWe updated the pdf to fix the bug mentioned in our earlier comment, but made no other changes at this time, pending the refocusing described above. 
\n", "We recently discovered a numerical error of calculation of KL-divergence, which impacted final calculation of log-likelihood of our models SVAE and C+VAE. Our updated bounds for log-likelihood are -102.77 for SVAE and -110.12 for C+VAE. (Classification results were unchanged.)\n\nIn the new version we plan to upload soon, we also updated the discussion to reflect that, while our models no longer greatly improve over more complex, state-of-the-art models in terms of log-likelihood, SVAE still improves over an unmodified VAE (which uses the same encoder-decoder pair that we use), and C+VAE is comparable to an unmodified VAE when simultaneously optimizing for both classification and generative performance.\n" ]
[ 3, 4, 5, -1, -1 ]
[ 5, 4, 4, -1, -1 ]
[ "iclr_2018_rJhR_pxCZ", "iclr_2018_rJhR_pxCZ", "iclr_2018_rJhR_pxCZ", "iclr_2018_rJhR_pxCZ", "iclr_2018_rJhR_pxCZ" ]
iclr_2018_B1ydPgTpW
Predicting Auction Price of Vehicle License Plate with Deep Recurrent Neural Network
In Chinese societies, superstition is of paramount importance, and vehicle license plates with desirable numbers can fetch very high prices in auctions. Unlike other valuable items, license plates are not allocated an estimated price before auction. I propose that the task of predicting plate prices can be viewed as a natural language processing (NLP) task, as the value depends on the meaning of each individual character on the plate and its semantics. I construct a deep recurrent neural network (RNN) to predict the prices of vehicle license plates in Hong Kong, based on the characters on a plate. I demonstrate the importance of having a deep network and of retraining. Evaluated on 13 years of historical auction prices, the deep RNN's predictions can explain over 80 percent of price variations, outperforming previous models by a significant margin. I also demonstrate how the model can be extended to become a search engine for plates and to provide estimates of the expected price distribution.
rejected-papers
Reviewers concur that the paper and the application area are interesting but that the approaches are not sufficiently novel to justify presentation at ICLR.
val
[ "r1ffKGS4M", "rkMjOyqlM", "BJT4Wx5ez", "Bk3B7T5gf", "ryXwsKp7M", "S1xQ8F67M" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "Thank you for your detailed comments and suggestions. The following are improvements I have made:\n- The odd reference in the introduction was in response to a referee's inquiry in a previous submission. It has been removed. The introduction has also been shortened.\n- Citation has been added for Akita et al.\n- The construction of the character embeddings has been clarified. The lookup table maps each character into a different vector. The dimension of the vector is a hyperparameter. The elements of each vector are initialized with random values and learnt through training.\n- I have tried both both LSTM and GRU and find that they provided minimal improvement while lengthening the training time. The most likely reason is that plates are limited to at most 6 characters, which limit the impact of long-term dependency. Detailed comparison will be added in a future version of the paper.\n- Improvements to cross validation procedure and a new section on price distribution have been added.\n", "The author(s) proposed to use a deep bidirectional recurrent neural network to estimate the auction price of license plates based on the sequence of letters and digits. The method uses a learnable character embedding to transform the data, but is an end-to-end approach. The analysis of squared error for the price regression shows a clear advantage of the method over previous models that used hand crafted features. \nHere are my concerns:\n1) As the price shows a high skewness in Fig. 1, it may make more sense to use relative difference instead of absolute difference of predicted and actual auction price in evaluating/training each model. That is, making an error of $100 for a plate that is priced $1000 has a huge difference in meaning to that for a plate priced as $10,000. \n\n2) The time-series data seems to have a temporal trend which makes retraining beneficial as suggested by authors in section 7.2. If so, the evaluation setting of dividing data into three *random* sets of training, validation, and test, in 5.3 doesn't seem to be the right and most appropriate choice. It should however, be divided into sets corresponding to non-overlapping time intervals to avoid the model use of temporal information in making the prediction. ", "Summary: The authors take two pages to describe the data they eventually analyze - Chinese license plates (sections 1,2), with the aim of predicting auction price based on the \"luckiness\" of the license plate number. The authors mentions other papers that use NN's to predict prices, contrasting them with the proposed model by saying they are usually shallow not deep, and only focus on numerical data not strings. Then the paper goes on to present the model which is just a vanilla RNN, with standard practices like batch normalization and dropout. The proposed pipeline converts each character to an embedding with the only sentence of description being \"Each character is converted by a lookup table to a vector representation, known as character embedding.\" Specifics of the data, RNN training, and the results as well as the stability of the network to hyperparameters is also examined. Finally they find a \"a feature vector for each plate by summing up the output of the last recurrent layer overtime.\" and the use knn on these features to find other plates that are grouped together to try to explain how the RNN predicts the prices of the plates. 
In section 7, the RNN is combined with a handcrafted-feature model he criticized in an earlier section for being too simple, to create an ensemble model that predicts the prices marginally better. \n\nSpecific Comments on Sections: \nComments: Sec 1,2\nIn these sections the author has somewhat odd references to specific economists that seem a little off topic, and spends, in my opinion, a little too much time setting up this specific data.\n\nSec 3\nThe author does not mention the following reference: \"Deep learning for stock prediction using numerical and textual information\" by Akita et al., which does incorporate non-numerical info to predict stock prices with deep networks.\n\nSec 4\nWhat are the characters embedded with? This is important to specify. Is it Word2vec or something else? What does the lookup table consist of? References should be added to the relevant methods. \n\nSec 5\nI feel like there are many regression models that could have been tried here with word2vec embeddings that would have been an interesting comparison. LSTMs as well could have been a point of comparison. \n\nSec 6\nNothing too insightful is said about the RNN model. \n\nSec 7\nThe ensembling was a strange extension, especially with the Woo model, given that the other MLP architecture gave way better results in their table.\n\nOverall: This is a unique NLP problem, and it seems to make a lot of sense to apply an RNN here, considering that word2vec is an RNN. However, comparisons are lacking and the paper is not presented very scientifically. The lack of comparisons made it feel like the author cherry-picked the RNN to outperform other approaches that obviously would not do well.\n", "The authors present a deep neural network that evaluates plate numbers. The relevance of this problem is that there are auctions for plate numbers in Hong Kong, and predicting their value is a sensible activity in that context. I find that the description of the applied problem is quite interesting; in fact overall the paper is well written and very easy to follow. There are some typos and grammatical problems (indicated below), but nothing really serious.\n\nSo, the paper is relevant and well presented. However, I find that the proposed solution is an application of existing techniques, so it lacks novelty and originality. Even though the significance of the work is apparent given the good results of the proposed neural network, I believe that such material is more appropriate for a focused applied meeting. However, even for that sort of setting I think the paper requires some additional work, as some final parts of the paper have not been tested yet (the interesting part of explanations). Hence I don't think the submission is ready for publication at this moment.\n\nConcerning the text, some questions/suggestions:\n- Abstract, line 1: I suppose \"In the Chinese society...\"--- are there many Chinese societies?\n- The references are not properly formatted; they should appear as (XXX YYY) but appear as XXX (YYY) in many cases, mixed with the main text. \n- Footnote 1, line 2: \"an exchange\".\n- Page 2, line 12: \"prices. Among\".\n- Please add commas/periods at the end of equations.\n- There are problems with capitalization in the references. ", "Thank you for your thoughtful comments and suggestions. \n\nTaking your comments into consideration, the major improvement the revised paper has made is the use of a Mixture Density Model to estimate the distribution of realized price for a given predicted price. 
This is particularly useful for pricing rare, high-value plates, which have little historical data to speak of due to the lack of similar plates in the records.\n\nI have also corrected all textual and formatting mistakes. The only exception is \"Chinese societies\"---in my view, there are indeed multiple Chinese societies. Take for example China, Taiwan and Hong Kong. Even though they share the same written language, they each have their own culture and dialects that are distinct from the others. ", "Thank you for your comments and suggestions.\n\n1. The use of log prices addresses this issue specifically. A $100 error for a $1000 plate increases the cost function exactly as much as a $100K error for a $1M plate.\n\n2. The revised paper now includes the statistics from a sequential split of the data. Specifically, the oldest 64% of the data was used to train the model, the middle 16% was used for validation and the most recent 20% was used for testing. As explained in the paper, new plates were issued alphabetically by the government over time. Due to the lack of comparable plates in the training data, predicting the price of these new plates is very difficult for any model. Even so, the RNN model still maintains its performance lead over the other models. \n\nThe performance penalty largely disappears if the most recent data is used for training and the oldest for testing. This is because the government routinely auctions off plates that had been returned. " ]
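The log-price point in the author's first reply is worth making concrete. Under a squared error on log prices, only the relative error enters the loss:

```latex
\left( \log \hat{p} - \log p \right)^2 = \left( \log \frac{\hat{p}}{p} \right)^2
```

So predicting $1,100 for a $1,000 plate and $1.1M for a $1M plate both contribute (log 1.1)^2 to the loss, which is exactly the 10%-error equivalence the reply claims.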
[ -1, 6, 4, 4, -1, -1 ]
[ -1, 5, 4, 4, -1, -1 ]
[ "BJT4Wx5ez", "iclr_2018_B1ydPgTpW", "iclr_2018_B1ydPgTpW", "iclr_2018_B1ydPgTpW", "Bk3B7T5gf", "rkMjOyqlM" ]
iclr_2018_HJcjQTJ0W
PrivyNet: A Flexible Framework for Privacy-Preserving Deep Neural Network Training
Massive amounts of data exist on users' local platforms, which usually cannot support deep neural network (DNN) training due to computation and storage resource constraints. Cloud-based training schemes provide beneficial services but suffer from potential privacy risks due to excessive user data collection. To enable cloud-based DNN training while simultaneously protecting data privacy, we propose to leverage the intermediate representations of the data, which is achieved by splitting the DNNs and deploying them separately onto local platforms and the cloud. The local neural network (NN) is used to generate the feature representations. To avoid local training and protect data privacy, the local NN is derived from pre-trained NNs. The cloud NN is then trained based on the extracted intermediate representations for the target learning task. We validate the idea of DNN splitting by characterizing the dependency of privacy loss and classification accuracy on the local NN topology for a convolutional NN (CNN) based image classification task. Based on the characterization, we further propose PrivyNet to determine the local NN topology, which optimizes the accuracy of the target learning task under constraints on privacy loss, local computation, and storage. The efficiency and effectiveness of PrivyNet are demonstrated with the CIFAR-10 dataset.
rejected-papers
The reviews are marginal. I concur with the two less favorable reviews that the proposed privacy metrics are not strong enough to guarantee meaningful privacy protection.
train
[ "HyAGOnOgM", "ryOaYRdez", "SJKRQb5lz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. This is an interesting paper - introduces useful concepts such as the formulation of the utility and privacy loss functions with respect to the learning paradigm\n2. From the initial part of the paper, it seems that the proposed PrivyNet is supposed to be a meta-learning framework to split a DNN in order to improve privacy while maintaining a certain accuracy level\n3. However, the main issue is that the meta-learning mechanism is a bit ad-hoc and empirical - therefore not sure how seamless and user-friendly it will be in general, it seems it needs empirical studies for every new application - this basically involves generation of a pareto front and then choose pareto-optimal points based on the user's requirements, but it is unclear how a privy net construction based on some data set considered from the internet has the ability to transfer and help in maintaining privacy in another type of data set, e.g., social media pictures", "Summary: The paper studies the problem of effectively training Deep NN under the constraint of privacy. The paper first argues that achieving privacy guarantees like differential privacy is hard, and then provides frameworks and algorithms that quantify the privacy loss via Signal-to-noise ratio. In my opinion, one of the main features of this work is to split the NN computation to local computation and cloud computation, which ensures that unnecessary amount of data is never released to the cloud.\n\nComments: I have my concerns about the effectiveness of the notion of privacy introduced in this paper. The definition of privacy loss in Equation 5 is an average notion, where the averaging is performed over all the sensitive training data samples. This notion does not seem to protect the privacy of every individual training example, in contrast to notions like differential privacy. Average case notions of privacy are usually not appreciated in the privacy community because of their vulnerability to a suite of attacks.\n\nThe paper may have a valid point that differential privacy is hard to work with, in the case of Deep NN. However, the paper needs to make a much stronger argument to defend this claim.", "1. Paper summary \n\nThis paper describes a technique using 3 neural networks to privatize data and make predictions: a feature extraction network, an image classification network, and an image reconstruction network. The idea is to learn a feature extraction network so that the image classification network performs well and the image reconstruction network performs poorly.\n\n\n2. High level paper - subjective\n\nI think the presentation of the paper is somewhat scattered: In section 2 the authors introduce their network and their metric for utility and privacy and then immediately do a sensitivity analysis. Section 3 continues with a sensitivity analysis now considering performance and storage of the method. Then 2.5 pages are spent on channel pruning.\nI would have liked if the authors spent more time justifying why we should trust their method as a privacy preserving technique (described in detail below). \nThe authors clearly performed an impressive amount of sensitivity experiments. Assuming the privacy claims are reasonable (which I have some doubts about below) then this paper is clearly useful to any company wanting to do privacy preserving classification. At the same time I think the paper does not have a significant amount of machine learning novelty in it. \n\n\n3. 
High level technical\n\nI have a few doubts about this method as a privacy-preserving technique:\n- Nearly every privacy-preserving technique gives a guarantee, e.g., differential privacy guarantees a statistical notion of privacy and cryptographic methods guarantee a computational notion of privacy. In this work the authors provide a way to measure privacy, but there is no guarantee that if someone uses this method their data will be private, by some definition, even under certain assumptions.\n- Another nice thing about differential privacy and cryptography is that they are impervious to different algorithms because it is statistically hard or computationally hard to reveal sensitive information. Here there could be a better image reconstruction network that does a better job of reconstructing images than the ones used in the paper.\n- It's not clear to me why PSNR is a useful way to measure privacy loss. I understand that it is a metric to compare two images that is based on the mean-squared error, so a very private image should have a low PSNR while a non-private image should have a high PSNR, but I have no intuition about how small the PSNR should be to afford a useful amount of privacy. For instance, in nearly all of the images of Figures 21 and 22, I think it would be quite easy to guess the original images.\n\n\n4. 1/2 sentence summary\n\nWhile the authors did an extensive job evaluating different settings of their technique, I have serious doubts about it as a privacy-preserving method." ]
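The PSNR discussion in the last review above can be made concrete: PSNR is computed from the mean-squared error between an original image and its reconstruction, so a poorer reconstruction (stronger privacy, in the paper's reading) yields a lower value, and the review's open question is what threshold would afford useful privacy. A minimal NumPy sketch of the metric, assuming 8-bit images; the function name and the default peak value are illustrative, not taken from the paper:

```python
import numpy as np

def psnr(original: np.ndarray, reconstruction: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between an image and its reconstruction.

    Lower PSNR means the reconstruction is further from the original, which the
    reviewed paper reads as stronger privacy; higher PSNR means more leakage.
    """
    mse = np.mean((original.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images: no privacy at all
    return 10.0 * np.log10(peak ** 2 / mse)
```

As the reviewer notes, nothing in this formula says how low the value must be before an attacker can no longer guess the original image, which is why it is a measurement rather than a guarantee.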
[ 6, 5, 3 ]
[ 5, 3, 3 ]
[ "iclr_2018_HJcjQTJ0W", "iclr_2018_HJcjQTJ0W", "iclr_2018_HJcjQTJ0W" ]
iclr_2018_H1DJFybC-
Learning to Infer Graphics Programs from Hand-Drawn Images
We introduce a model that learns to convert simple hand drawings into graphics programs written in a subset of LaTeX. The model combines techniques from deep learning and program synthesis. We learn a convolutional neural network that proposes plausible drawing primitives that explain an image. These drawing primitives are like a trace of the set of primitive commands issued by a graphics program. We learn a model that uses program synthesis techniques to recover a graphics program from that trace. These programs have constructs like variable bindings, iterative loops, or simple kinds of conditionals. With a graphics program in hand, we can correct errors made by the deep network and extrapolate drawings. Taken together these results are a step towards agents that induce useful, human-readable programs from perceptual input.
rejected-papers
The paper addresses an interesting problem, is novel and works. While the paper improved through reviews + rebuttal, the reviewers still find the presentation lacking.
train
[ "ryUoLK6VG", "Sk-ZlwcgG", "B1Te809gM", "HJR0yoJ-z", "B1Rzx_Zzf", "HyOFkdZMG", "ryAfyd-fz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "I think the paper became better. However, it still needs more work.\n\nOverall, it is not very clear what to be solved in the paper -- if they want to verify the trace hypothesis, or they want to show that the combination of the proposed components is important to build a system for the problem, or the improvement of each component to deal with difficult challenges is important, or all of these.\n\nFor example, in Section 1, the authors summarize the contributions of the paper. However, right after that, they mention challenges of the paper which do not correspond to the contributions. The contributions are made not by addressing the challenges.\n\nI would suggest a major revision to the paper. I think the concept is good and it can be potentially a great paper. \n\nMinor comments:\n\nExperiment 1 is not very clear to me. Neural network with SMC is explained in Section 2, but other methods are not well explained. Figure 4 implies all methods use \"particle\" in some ways. I do not know how \"particle\" is used in beach search for example.\n\nShould Section 6 be \"Conclusion\"?", "I think the idea of inferring programmatic descriptions of handwritten diagrams is really cool, and that the combination of SMC-based inference with constraint-based synthesis is nice. I also think the application is clearly useful – one could imagine that this type of technology would eventually become part of drawing / note-taking applications.\n\nThat said, based on the current state of the manuscript, I find it difficult to recommend acceptance. I understand that the ICLR does not strictly have a page limit, but I think submitting a manuscript of over 11 pages is taking things a bit too far. The manuscript would greatly benefit from a thorough editing pass and some judicious reconsideration of space allocated to figures. Moreover, despite its relative verbosity, or perhaps because of it, I found it surprisingly difficult to extract simple implementation details from the text (for example I had to dig up the size of the synthetic training corpus from the 44-page appendix). \n\nPresentation issues aside, I think this is great work. There is a lot here, and I am sympathetic to the challenges of explaining everything clearly in a single (short) paper. That said, I do think that the authors need to take another stab at this to get the manuscript to a point where it can be impactful. \n\nMinor Comments \n\n- I don't understand what the \"hypothesis\" is in the trace hypothesis. Breaking down the problem into an AIR-style sequential detection task and a program induction is certainly a reasonable thing to do. However, the word \"hypothesis\" is generally used to refer to a testable explanation of a phenomenon, which is not really applicable here. \n\n- How is the edit distance defined? In particular, are we treating the drawing commands as a set or a sequence when we calculate \"the number of drawing commands by which two trace sets differ\"?\n\n- I took me a while to understand that the authors first consider the case of SMC for synthetic images with a pixel-based likelihood, and then move on to SMC with and edit-distance based surrogate likelihood for hand-drawn pictures. 
The text seems to suggest that only 100 such hand-drawn images were actually used; is that correct?\n\n- What does the (+) operator do in Figure 3?\n\n- I am not sure that \"correcting errors made by the neural network\" is the most accurate way to describe a reranking of the top-k samples returned by the SMC sweep.\n\n- Table 3 is very nice, but does not need to be a full page. \n\n- I would recommend that the authors consolidate wrap-around figures into full-width figures. \n", "Summary of paper:\n\nThis paper tackles the problem of inferring graphics programs from hand-drawn images by splitting it into two separate tasks:\n(1) inferring trace sets (functions to use in the program) and\n(2) program synthesis, using the results from (1).\nThe usefulness of this split is referred to as the trace hypothesis.\n\n(1) is done by training a neural network on data [input = rendered image; output = trace sets] which is generated synthetically. During test time, a trace set is generated using a population-based method which samples and assigns weights to the guesses made by the neural network based on a similarity metric. Generalization to hand-drawn images is ensured by learning the similarity metric.\n\n(2) is done by feeding the trace set into the program synthesis tool of Solar Lezama. Since this is too slow, the authors design a search policy which proposes a restriction on the program search space, making it faster. The final loss for (2) in equation 3 takes into consideration the time taken to synthesize images in a search space. \n\n---\n\nQuality: The experiments are thorough and it seems to work. The potential limitation is generalization to non-synthetic data.\nClarity: The high-level idea is clear; however, some of the details are not.\nOriginality: This work is one of the first that tackles the problem described.\nSignificance: There are many ad-hoc choices made in the paper, making it hard to extract an underlying insight that makes things work. Is it the trace hypothesis? Or is it just that trying enough things made this work?\n\n---\n\nSome questions/comments:\n- Regarding the trace set inference, the loss function during training and the subsequent use of SMC during test time are pretty unconventional. The use of the likelihood P_{\\theta}[T | I] as a proposal, as the paper also acknowledges, is also unconventional. One way to look at this which could make it less unconventional is to pose the training phase as learning the proposal distribution in an amortized way (instead of maximizing likelihood) as, for example, in [1, 2].\n- In Section 2.1, the paper talks about learning the surrogate likelihood function L_{learned} in order to work well for actual hand drawings. This presumably stems from the problem of mismatch between the distribution of the synthetic data used for training and the actual hand drawings. But then L_{learned} is also learned from synthetic data. What makes this translate to non-synthetic data? Does this translate to non-synthetic data?\n- What does \"Intersection over Union\" in Figure 8 mean?\n- The details for 3.1 are not clear. In particular, what does t(\\sigma | T) in equation 3 refer to? Time to synthesize all images in \\sigma? Why is the concept of Bias-optimality important?\n- It seems from Table 4 that by design, the learned policy for the program search space already limits the search space to programs with a maximum depth of the abstract syntax tree of 3. 
What is the usual depth of an AST when using Sketch?\n\n---\n\nMinor Comments:\n- On page 4, section 2.1: \"But pixel-wise distance fares poorly... match the model's renders.\" and \"Pixel-wise distance metrics are sensitive... search space over traces.\" seem to be saying the same thing\n- End of page 5: \\citep Polozov & Gulwani (2015)\n- Page 6: \\citep Solar Lezama (2008)\n\n---\n\nReferences\n\n[1] Paige, B., & Wood, F. (2016). Inference Networks for Sequential Monte Carlo in Graphical Models. In Proceedings of the 33rd International Conference on Machine Learning, JMLR W&CP 48: 3040-3049.\n[2] Le, T. A., Baydin, A. G., & Wood, F. (2017). Inference Compilation and Universal Probabilistic Programming. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (Vol. 54, pp. 1338–1348). Fort Lauderdale, FL, USA: PMLR.", "This paper proposes a method to infer lines of code that produce a given image. The method consists of two components. One is to generate traces, which are primitive commands of a graphics program, given an image. The other is to infer lines of code given traces. The first component uses a deep neural network for the conversion, and a novel architecture is used for the network. The second component uses a learnt search policy to speed up the inference. Experimental results on a small dataset show that the proposed method can generate lines of code of a graphics program for the images reasonably well. The paper also discusses possible applications of the method.\n\nOverall, the paper is interesting and the proposed method seems reasonable. Also, it is well contrasted with related work. However, the paper contains too much content, and it is hard to understand the important details without reading the supplement and the references. It might even be worth considering splitting the paper into two, with each paper proposing one idea (component) at a time with more details.\n\nThat said, I understood the basic ideas of the paper and I liked them. My concern is only about the writing.", "Thank you for your helpful review.\n\nWe agree that this paper tries to pack a lot of content into one manuscript and would be much improved by an editing pass. Our posted revision is much shorter (9.25 pages, excluding references) and more clearly outlines the content.\n\nWe believe one of the reasons why the initial manuscript was difficult to read is that it did not clearly delineate the domain-specific design choices (like the neural architecture and learned distance metric) from the domain-general ideas (the trace hypothesis and the learned search policy). In the posted revision we draw attention to this distinction in the introduction and outline where in the paper each model component is explained.\n\n\"The trace hypothesis\" is a hypothesis in the sense that it is a claim about how to architect certain AI systems, and the word \"hypothesis\" is sometimes used for claims like this (for example, \"the strong story hypothesis\" and the \"directed perception hypothesis\": see [1]). But we agree that this might be confusing, and we are open to renaming it to something like the trace set architecture/framing.\n\nRegarding \"How is the edit distance defined?\": We treat the drawing commands as a set, and define the edit distance as the size of the symmetric difference between the ground truth set and the set produced by the model.\n\nIt is correct that we evaluate our model on only 100 real hand drawings. 
These 100 drawings are best thought of as an out-of-sample test set.\n\nRegarding the (+) operator in Figure 3: This is the direct sum operator, which here takes two single-channel images and stacks them to make a single 2-channel image.\n\nReferences:\n[1] Winston, Patrick Henry. \"The Strong Story Hypothesis and the Directed Perception Hypothesis.\" AAAI Fall Symposium: Advances in Cognitive Systems. 2011.", "Thank you for the thoughtful review.\n\nRegarding \"The potential limitation is generalization to non-synthetic data\": We wish to clarify that, even though the neural network is trained exclusively on synthetic data, we apply it to real hand drawings. We have prominently clarified this in the posted revision.\n\nThank you for suggesting framing the proposal training as amortized inference. This is a correct and insightful way of communicating the purpose of the neural network, and we have used this framing in the revision.\n\nRegarding \"There are many ad-hoc choices made in the paper\": Although many of the engineering decisions are specific to our domain, we believe that the core generalizable idea is the trace hypothesis, which factors the problem into two independent pieces that can be tackled separately, rather than trying to go straight from raw input to a program. One could also consider using an amortized inference approach that does not use the trace set as an intermediate steppingstone, like RobustFill [1], or other ICLR papers currently under review [2]. There would be two problems with this hypothetical alternative perception->program approach:\n1. We would need a large data set of (image, program) pairs, where the programs are drawn from the actual distribution that real-world diagrams are drawn from. \n2. As shown experimentally in DeepCoder [3], neural approaches to program synthesis that attempt to go directly from the problem specification to the program tend not to work as well in practice as those that also leverage symbolic approaches to program synthesis.\nIn the posted revision, we have clarified the boundary between the domain-specific design choices and what we believe to be the domain-general ideas.\n\nThe reason we need to learn a surrogate likelihood function L_{learned} is not the mismatch between the distribution of the synthetic data and the actual hand drawings. Instead, it is because we need to wrap a stochastic search procedure (SMC) around the neurally-guided proposals, and the SMC sampler needs some way of measuring how well a particle explains an image that is robust to variations in the exact details of how something was drawn, which means we can't use pixel-wise distance. L_{learned} generalizes to real data because it is trained on LaTeX TikZ output rendered with the \"pencildraw\" package, which causes LaTeX output to look like it was drawn with a pencil.\n\nThank you for pointing out the fact that we did not define the \"Intersection over Union\" (IoU). The IoU for two sets A and B is $|A\\cap B|/|A\\cup B|$. We use IoU to measure the system's accuracy at recovering trace sets. Here the sets are sets of primitive drawing commands.\n\n$t(\\sigma | T)$ is the length of time it takes the program synthesizer to find the minimum cost program in $\\sigma$ such that the program evaluates to the trace set $T$. Our revision now clarifies this point of confusion.\n\nBias optimality buys us three important things. First, it guarantees that the policy will always eventually find the minimum cost program. 
Second, it explicitly takes into account the cost of searching, in contrast to e.g. DeepCoder [3]. Lastly, it gives us a differentiable loss function for the policy parameters. The posted revision now discusses these points.\n\nYou are correct to notice that the program space is already limited to programs with a maximum depth of 3 - meaning that we can have loops within loops, but not loops within loops within loops. Sketch does not support unbounded program spaces. Most of our graphics programs have depth 2-3.\n\nReferences:\n[1] Devlin, J., Uesato, J., Bhupatiraju, S., Singh, R., Mohamed, A.R. and Kohli, P., RobustFill: Neural Program Learning under Noisy I/O. 2017.\n[2] Neural Program Search: Solving Data Processing Tasks from Description and Examples. Under review at ICLR 2018.\n[3] Balog, M., Gaunt, A. L., Brockschmidt, M., Nowozin, S., & Tarlow, D. Deepcoder: Learning to write programs. ICLR 2017.\n", "Thank you for your thoughtful review. We agree that this paper tries to pack a lot of content into one manuscript and would be much improved by an editing pass. Our posted revision is much shorter (9.25 pages, excluding references) and more clearly outlines the content." ]
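The set-based metrics defined in the author responses above (the edit distance as the size of the symmetric difference between trace sets, and IoU as $|A\cap B|/|A\cup B|$) are one-liners over Python sets. A minimal sketch, assuming each primitive drawing command is encoded as a hashable value such as a tuple; the function names and the empty-set convention are illustrative:

```python
def trace_edit_distance(predicted: set, ground_truth: set) -> int:
    # Number of drawing commands by which the two trace sets differ,
    # i.e., the size of their symmetric difference.
    return len(predicted ^ ground_truth)

def intersection_over_union(a: set, b: set) -> float:
    # IoU = |A intersect B| / |A union B|; defined here as 1.0 for two empty sets.
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Example with trace sets of primitive commands encoded as tuples:
gt = {("line", 0, 0, 1, 1), ("circle", 2, 2)}
pred = {("line", 0, 0, 1, 1), ("circle", 3, 3)}
assert trace_edit_distance(pred, gt) == 2
assert abs(intersection_over_union(pred, gt) - 1 / 3) < 1e-9
```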
[ -1, 4, 6, 4, -1, -1, -1 ]
[ -1, 4, 4, 2, -1, -1, -1 ]
[ "ryAfyd-fz", "iclr_2018_H1DJFybC-", "iclr_2018_H1DJFybC-", "iclr_2018_H1DJFybC-", "Sk-ZlwcgG", "B1Te809gM", "HJR0yoJ-z" ]
iclr_2018_HJWGdbbCW
Reinforcement and Imitation Learning for Diverse Visuomotor Skills
We propose a general deep reinforcement learning method and apply it to robot manipulation tasks. Our approach leverages demonstration data to assist a reinforcement learning agent in learning to solve a wide range of tasks, mainly previously unsolved. We train visuomotor policies end-to-end to learn a direct mapping from RGB camera inputs to joint velocities. Our experiments indicate that our reinforcement and imitation approach can solve contact-rich robot manipulation tasks that neither the state-of-the-art reinforcement nor imitation learning method can solve alone. We also illustrate that these policies achieved zero-shot sim2real transfer by training with large visual and dynamics variations.
rejected-papers
While the reviewers agree that this paper does provide a contribution, it is small and overlaps with several concurrent works. It is also a bit hand-engineered. The authors have provided a lengthy rebuttal, but the final reviews are not strong enough.
train
[ "rySp4xnBf", "Bkc_ExhHf", "ByNRqb5lz", "B1oZGo_ez", "ry6Xu6UEz", "r1C8JhrNz", "HJge1dvgz", "rJA2tLHQM", "BkVdKIrmf" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "We thank this reviewer for the additional feedback. We would like to address the reviewer’s comments on the use of simulation and the amount of hand engineering in this work. We will also make an effort to clearly describe our engineering components in the next version of the draft.\n\nWe acknowledged that simulation is at the center of our approach. The use of simulation is a design choice due to the high sample complexity of today’s model-free deep RL methods as well as practical concerns such as safety and cost. Admittedly, the value of simulation is hampered by the model mismatch between sim and real. This requires better sim2real techniques to minimize the domain gap. Sim2real is currently an active field of research, which is not yet solved. Our effort to perform real-world experiments is to exemplify how zero-shot sim2real transfer can achieve some initial success, or in other words, fail less than expected on the real robot. Our results demonstrated that it is hopeful to learn end-to-end control from raw pixel inputs to velocity commands with the aid of simulation. Furthermore, this zero-shot transfer experiments implied the possibility of leveraging real experiences to improve the controllers.\n\nRL training assumes a reward function, which is generally unavailable or at least hard to obtain on real hardware. The use of simulation permitted us to define the reward functions and object-centric features, which are crucial for policy training. For training in simulation, it is a common practice to leverage the simulator states and domain knowledge to define the rewards (see Nair 17, Rajeswaran 17). For long-horizon tasks, a binary task-completion reward is often insufficient for RL random exploration (see Popov 17). Our reward functions are sparse piecewise constant functions that correspond to different stages of a task. A similar “step” reward has been used in Nair 17 to enable solving longer horizon tasks (e.g., stacking more than 4 blocks). Empirically, we found that defining such multi-stage rewards is easier than designing a dense shaping reward and less prone to converging on suboptimal behaviors. In terms of the object-centric features, these features are important for the GAIL discriminator to concentrate on the features that are relevant to the task goal rather than discriminating based on spurious information. Our preliminary experiments found that our model is not sensitive to the particular choice of object-centric features. As long as these features carry enough task-specific information, the GAIL discriminator is able to provide supervision signal for successful training.\n\nFinally, the exploitation of privileged and task-specific information is only required for training. We ultimately produced vision-based policy that does not rely on such hand-engineered features.", "We thank the reviewer for agreeing that \"the particular experimental details matter a great deal\" when it comes to the technical contributions. We will clearly circumscribe our contribution claims in the context of previous work in our next revision.", "Given that imitation learning is often used to initialize reinforcement learning, the authors should consider using a more descriptive title for this paper. \n\nThe main contribution of the paper is to use a mixture of the reinforcement learning reward and the imitation learning signal from GAIL. This approach is generally fairly straightforward, but seems to be effective in simulation, and boils down to equation 2. 
It would be interesting to discuss and evaluate the effects of changing lambda over time. \n\nThe second contribution can be seen as a list of four techniques for getting the most out of using the simulation environment and the state information that it provides. A list of these types of techniques could be useful for students training networks on simulations, although each point on the list is fairly straightforward. This part also leads to an ablation study to determine the effects of the proposed techniques. The results plot should include error bars.\n\nThe earlier parts of the experiment were evaluated on three additional tasks. Although these tasks are all variations of putting things into a box, they do add some variability to the experiments. It was also great seeing the robot learning multiple strategies for the table clearing task. \n\nThe third part is transferring the simulation policy to the real robot. This section and the additional supplementary material are fairly short and should be expanded upon. It seems as though the transfer mainly depends on learning from randomized domains to achieve more robust policies that can then be applied to the real domain. The transfer learning is a crucial step of the pipeline. Unfortunately the results are not great. A 64% success rate for lifting the block and 35% for stacking are both fairly low. The lifting success rate within the stacking task is somewhat higher, at 80% with repeated attempts. The authors need to discuss these results. What is the cause of these low success rates? Is it the transfer learning or due to an earlier step in the pipeline? How do these success rates change with the variance in the training scenarios?\n\nHow much of the shape variability is being accounted for by the natural adaptability of the hand? If you give the robot images with one set of objects, but the actual task is performed using objects of different shapes and sizes, how much does the performance decrease?\n", "This paper claims to present a \"general deep reinforcement learning\" method that addresses the issues of real-world robotics (data constraints, safety, lack of state information, and exploration) by using demonstrations. However, this paper actually addresses these problems by training in a simulator, and only transferring 2 of the 6 tasks to the real world. The real-world results are lackluster. However, the simulated results are nice.\n\nThe method in the paper is as follows: the environment reward is augmented by a reward function learned from human demonstrations using GAIL on full state (except for the arm). Then, an actor-critic method is used where the critic gets full state information, while the actor needs to learn from an image. However, the actor's convolutional layers are additionally trained to detect the object positions. \n\nStrengths:\n+ The simulated tasks are novel and difficult (sorting, clearing a table)\n+ Resetting to demonstration states is a nice way to provide a curriculum\n\nLimitations:\n+ The results make me worry that the simulation environments have been hyper-tailored to the method, as the real environments look very similar and should transfer. \n+ Each part of the method is not particularly novel. 
Combining IRL and RL has been done before (as the authors point out in the related work), side-training a perception module to predict full state has been done before (\"End-to-end visuomotor learning\"), diversity of training conditions has been done before (Domain randomization).\n+ Requiring hand-specified clusters of states for both selecting starting states and defining reward functions requires domain knowledge. Why can't they be clustered using a clustering method?\n+ Because the method needs simulation to learn a policy, it is limited to tasks that can be simulated somewhat accurately (e.g. ones with simple dynamics). As shown by the poor transfer of the stacking task, block stacking with foam blocks is not such a task.\n\n\nQuestions:\n+ How many demonstrations do you use per task?\n+ What are the \"relative\" positions included in the \"object-centric\" state input? \n\nMisleading parts of the paper:\n+ The introduction of the paper primes the reader to expect a method that can work on a real system. However, this method only gets 64% accuracy on a simple block lifting task, 35% on a stacking task.\n+ \"Appendix C. \"We define functions on the underlying physical state to determine the stage of a state…The definition of stages also gives rise to a convenient way of specifying the reward functions without hand-engineering a shaping reward. \"-> You are literally hand-engineering a shaping reward. The main text misleadingly refers to \"sparse reward\", which usually refers to a single reward upon task completion.\n\nIn conclusion, I find that the work lacks significance because the results are dependent on a list of hacks that are only possible in simulation.", "I agree with the authors that they should not be judged against Nair 17, Rajeswaran 17, and Chebotar 17, as those are concurrent papers that are probably under review at this time.\n\nHowever, the author response has not addressed what I consider to be major limitations of this paper. These are:\n+ The algorithm itself requires simulation. As the authors themselves point out: “dynamics mismatch between the simulated physics engine and real systems, e.g., simulated rigid bodies vs. soft foam, introduces another major challenge.”\n+ Requiring hand-specified clusters of states for both selecting starting states and defining reward functions requires domain knowledge. \n\nAs the authors point out, “Our goal of applying deep RL to robotic manipulation is *not* to find one solution that can solve a particular instance of tasks, e.g., block stacking, with a 100% success rate. As a matter of fact, the latest video of the backflipping robot from Boston Dynamics has demonstrated how far we can go with a hand-engineered solution.”\n\nUnfortunately, I find that their solution is significantly more hand-engineered than they claim. The hand-specified clusters require domain knowledge for each task. The “object centric features” differ between tasks: while some tasks just use all pairwise relative positions between objects, the “pouring” task only uses the relative position between the gripper and the mug, and the relative position between the mug and the container. \n——>>More surprisingly, for the plane/car sorting task, only the NEAREST plane and car are included in the features! This is hidden away in the last section of the appendix.\n\nIn sum, this paper uses a lot of hand-tweaked representations and rewards in order to obtain impressive-looking simulation results. 
While it is good to get these results at all, I do not think that this is a good fit for ICLR. The method and results would be better suited to a robotics conference. However, I also find that the method/results were not presented in good faith, and were often misleading or overstated. My review rating remains a 4.", "I’ve read the author response and maintain my score of the paper.\n\nI will add that I find the author response quite disappointing. It seems that instead of toning down the claims in their paper, the authors chose to double down by stating that\n\n> it is only recently that model-free deep RL models have achieved some initial success in robotic manipulation tasks\n\nI would argue that this statement is incorrect. As discussed at length in my review, prior work has demonstrated a range of robotic manipulation skills, many more successful than those in this paper, and some without the use of simulation or without demonstrations. This is not just true for the relatively recent Rajeswaran and Nair papers, but also James, Chebotar, and many, many others. I agree that the particular experimental details matter a great deal, but the authors seem to want to have it both ways: dismiss the importance of additional assumptions and downsides of their method (demonstrations, an accurate simulator, etc.) and emphasize the assumptions in prior work. As I wrote before, I don’t think the contribution is below the bar, but the claims are in my opinion quite excessive.", "Paper summary: The authors propose a number of tricks to enable training policies for pick-and-place style tasks using a combination of GAIL-based imitation learning and hand-specified rewards, as well as use of unobserved state information during training and hand-designed curricula. The results demonstrate manipulation policies for stacking blocks and moving objects, as well as preliminary results for zero-shot transfer from simulation to a real robot for a picking task and an attempt at a stacking task.\n\nReview summary: The paper proposes a limited but interesting contribution that will be especially of interest to practitioners, but the scope of the contribution is somewhat incremental in light of recent work, and the results, while interesting, could certainly be better. On balance, I think the paper should be accepted, because it will be of value to practitioners, and I appreciate the detail and real-world experiments. 
However, some of the claims should be revised to better reflect what the paper actually accomplishes: the contribution is a bit limited in places, but that's *OK* -- the authors should just be up-front about it.\n\nPros:\n- Interesting tasks that combine imitation and reinforcement in a logical (but somewhat heuristic) way\n- Good simulated results on a variety of pick-and-place style problems\n- Some initial attempt at real-world transfer that seems promising, but limited\n- Related work is very detailed and I think many will find it to be a very valuable overview\nCons:\n- Some of the claims (detailed below) are a bit excessive in my opinion\n- The paper would be better if it was scoped more narrowly\n- Contribution is a bit incremental and somewhat heuristic\n- The experimental results are difficult to interpret in simulation\n- The real-world experimental results are not great\n- There are a couple of missing citations (but overall the related work is great)\n\nDetailed discussion of potential issues and constructive feedback:\n\n> \"Our approach leverages demonstration data to assist a reinforcement learning agent in learning to solve a wide range of tasks, mainly previously unsolved.\"\n>> This claim is a bit peculiar. Picking up and placing objects is certainly not \"unsolved,\" there are many examples. If you want image-based pick and place with demonstrations for example, see Chebotar '17 (not cited). If you want stacking blocks, see Nair '17. While it's true that there is a particular combination of factors that doesn't exactly appear in prior work, the statement the authors make is way too strong. Chebotar '17 shows picking and placing a real-world object with a much higher success rate than reported here, without simulation. Nair '17 shows a much harder stacking task, but without images -- would that method have worked just as well with image-based distillation? Very likely. Rajeswaran '17 shows tasks that arguably are much harder. Maybe a more honest statement is that this paper proposes some tasks that prior methods don't show, and some prior methods show tasks that the proposed method can't solve. But as-is, this statement misrepresents prior work.\n\n> Previous RL-based robot manipulation policies (Nair et al., 2017; Popov et al., 2017) largely rely on low-level states as input, or use severely limited action spaces that ignore the arm and instead learn Cartesian control of a simple gripper. This limits the ability of these methods to represent and solve more complex tasks (e.g., manipulating arbitrary 3D objects) and to deploy in real environments where the privileged state information is unavailable.\n>> This is a funny statement. Some use images, some don't. There is a ton of prior work on RL-based robot manipulation that does use images. The current paper does use object state information during training, which some prior works manage to avoid. The comments about Cartesian control are a bit peculiar... the proposed method controls fingers, but the hand is simple. Some prior works have simpler grippers (e.g., Nair) and some have much more complex hands (e.g., Rajeswaran). So this one falls somewhere in the middle. That's fine, but again, this statement overclaims a bit.\n\n> To sidestep the constraints of training on real hardware we embrace the sim2real paradigm which has recently shown promising results (James et al., 2017; Rusu et al., 2016a).\n>> Probably should cite Sadeghi et al. and Tobin et al. 
in regard to randomization, both of which precede James '17.\n\n> we can, during training, exploit privileged information about the true system state\n>> This was also done in Pinto et al. and many of the cited GPS papers\n\n> our policies solve the tasks that the state-of-the-art reinforcement and imitation learning cannot solve\n>> I don't think this statement is justified without much wider comparisons -- the authors don't attempt any comparisons to prior work, such as Chebotar '17 (which arguably is closest in terms of demonstrated behaviors), Nair '17 (which is also close but doesn't use images, though it likely could).\n\n> An alternative strategy for dealing with the data demand is to train in simulation and transfer\n>> Aside from previously mentioned citations, should probably cite Devin \"Towards Adapting Deep Visuomotor Representations\"\n\n> Sec 3.2.1\n>> This method seems a bit heuristic. It's logical, but can you say anything about what this will converge to? GAIL will try to match the demonstration distribution, and RL will try to maximize expected reward. What will this method do?\n\n> Experiments\n>> Would it be possible to indicate some measure of success rate for the simulated experiments? As-is, it's hard to tell how well either the proposed method or the baselines actually work.\n\n> Transfer\n>> My reading of the transfer experiments is that they are basically unsuccessful. Picking up a rectangular object with an 80% success rate is not very good. The stacking success rate is too low to be useful. I do appreciate the authors trying out their method on a real robotic platform, but perhaps the more honest assessment of the outcome of these experiments is that the approach didn't work very well, and more research is needed. Again, it's *OK* to say this! Part of the purpose of publishing a paper is to stimulate future research directions. I think the transfer experiments should definitely be kept, but the authors should discuss the limitations to help future work address them, and present the transfer appropriately in the intro.\n\n> Diverse Visuomotor Skills\n>> I think this is a peculiar thing to put in the title. Is the implication that prior work is not diverse? Arguably several prior papers show substantially more diverse skills. It seems that all the skills here are essentially pick and place skills, which is fine (these are interesting skills), but the title seems like a peculiar jab at prior work not being \"diverse\" enough, which is simply misleading.", "+ How many demonstrations do you use per task?\nWe used 30 demonstrations for each task, which can be collected within half an hour, as described in Sec. 4.1.\n\n+ What are the \"relative\" positions included in the \"object-centric\" state input? \nThe relative positions include the difference between (x,y,z) coordinates of the objects and the robot gripper. The details are described in Appendix Sec. C.", "We agree with the reviewers that some revision is necessary for clarifying the claims and better summarizing our contributions in the presence of previous and concurrent work. We will tone down the introduction and provide more support for our claims in the next version of the draft.\n\nHowever, we would like to point out that it is only recently that model-free deep RL models have achieved some initial success in robotic manipulation tasks. Several *concurrent* works, which are cited in our draft, have explored techniques similar to those used in our model. [Nair et al. '17, arxiv 28 Sep 2017] and [Rajeswaran et al. 
'17, arxiv 28 Sep 2017] have leveraged demonstrations in reinforcement learning; [Pinto et al. '17, arxiv 18 Oct 2017] have also used low-level simulation states to train the critic. [Peng et al. '17, arxiv 18 Oct 2017] randomized system dynamics to achieve sim2real transfer. Note that all these works were released on arXiv less than one month before the ICLR deadline, and none has thus far been accepted at a peer-reviewed conference. We developed our approach completely independently. The concurrent works each solved one small piece of a big puzzle. Our model integrates the techniques used in these works along with a series of novel features into a single approach. As a result, it can solve more challenging robotic control problems than those demonstrated in these works. Therefore, we believe that the contributions of our work should not be diminished in the presence of these concurrent works.\n\nAlso, while similar tasks may have been studied in the literature, seemingly similar tasks can exhibit quite different degrees of complexity based on a combination of factors, such as the controller (position [Pinto et al. '17] vs. 9-DoF joint velocity [Ours]), the input modalities (states [Nair et al. '17] vs. pixels [Ours]), variations of initial configurations and objects (a fixed set of objects [Rajeswaran et al. '17] vs. procedurally generated objects [Ours]), etc. Even for the same task, different training protocols can have a major impact on the complexity of the task as well as the generality and flexibility of the approach, such as behavior cloning [James et al. '17] vs. reinforcement learning [Ours], or a pretrained vision module [Chebotar et al. '17] vs. end-to-end learning [Ours].\n\nA simple thought experiment that we can use to estimate the complexity of a task is to imagine the effort required to hand-engineer a controller that solves the task. In the case of block stacking, it seems feasible to design a scripted policy to stack a set of fixed-sized blocks with Cartesian control given low-level state information [Nair et al. '17]. Indeed, such scripted controllers have been used e.g. in [Duan et al. '17]. However, it is significantly more demanding, if not intractable, to write such a controller to stack blocks of different sizes with a 9-DoF joint velocity controller given camera inputs [Ours]. \n\nOur goal of applying deep RL to robotic manipulation is *not* to find one solution that can solve a particular instance of tasks, e.g., block stacking, with a 100% success rate. As a matter of fact, the latest video of the backflipping robot from Boston Dynamics has demonstrated how far we can go with a hand-engineered solution. Our goal is to derive a flexible approach that can be applied to a wide variety of tasks. To this end, in the paper we show that the same techniques and network can be used to solve a wide range of tasks, including stacking and lifting, pouring, etc.\n\nOur sim2real experiments focused on zero-shot policy transfer. This requires us to minimize the domain gap between the simulator and the real system. As shown in Fig. 3, even after careful tuning, the discrepancy between simulation rendering and the real camera frame is still apparent. Neural network policies can be sensitive to such nuances. This poses a major challenge for sim2real transfer. Furthermore, dynamics mismatch between the simulated physics engine and real systems, e.g., simulated rigid bodies vs. soft foam, introduces another major challenge. 
Future research is required to enable better policy transfer, potentially by leveraging a small amount of real-world experience on the real hardware. The state-of-the-art work closest to our setup is [Rusu et al. '17], which used progressive networks to train pixel-to-action RL policies for real robots. It has only been demonstrated on a block-reaching task. In comparison, our sim2real block stacking agent can perfectly solve the block-reaching subtask at a 100% success rate." ]
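The hybrid objective that the first review above says "boils down to equation 2" has the general shape of a convex combination of the environment's task reward and a GAIL-style imitation reward derived from a discriminator. The sketch below shows only this generic shape; the mixing weight lam, the -log(1 - D(s, a)) reward form, and the function name are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def hybrid_reward(task_reward: float,
                  discriminator_prob: float,
                  lam: float = 0.5,
                  eps: float = 1e-8) -> float:
    """Blend an environment (task) reward with a GAIL-style imitation reward.

    discriminator_prob is D(s, a), the discriminator's estimated probability
    that the state-action pair came from the demonstrations. A common GAIL
    reward is -log(1 - D(s, a)); lam trades off task reward vs. imitation.
    """
    imitation_reward = -np.log(1.0 - discriminator_prob + eps)
    return lam * task_reward + (1.0 - lam) * imitation_reward
```

Annealing lam over training, as the first review suggests evaluating, would move the agent from mostly imitation (lam near 0) toward mostly task reward (lam near 1).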
[ -1, -1, 4, 4, -1, -1, 6, -1, -1 ]
[ -1, -1, 4, 4, -1, -1, 5, -1, -1 ]
[ "ry6Xu6UEz", "r1C8JhrNz", "iclr_2018_HJWGdbbCW", "iclr_2018_HJWGdbbCW", "B1oZGo_ez", "HJge1dvgz", "iclr_2018_HJWGdbbCW", "BkVdKIrmf", "iclr_2018_HJWGdbbCW" ]
iclr_2018_S1FFLWWCZ
LSD-Net: Look, Step and Detect for Joint Navigation and Multi-View Recognition with Deep Reinforcement Learning
Multi-view recognition is the task of classifying an object from multi-view image sequences. Instead of using a single-view for classification, humans generally navigate around a target object to learn its multi-view representation. Motivated by this human behavior, the next best view can be learned by combining object recognition with navigation in complex environments. Since deep reinforcement learning has proven successful in navigation tasks, we propose a novel multi-task reinforcement learning framework for joint multi-view recognition and navigation. Our method uses a hierarchical action space for multi-task reinforcement learning. The framework was evaluated with an environment created from the ModelNet40 dataset. Our results show improvements on object recognition and demonstrate human-like behavior on navigation.
rejected-papers
This paper describes active vision for object recognition learned in an RL framework. Reviewers think the paper is not of sufficient quality: insufficient detail and insufficient evaluation. While the authors have provided a lengthy rebuttal, the shortcomings have not yet been addressed in the paper.
train
[ "rJKiKBGef", "Bk9Z3ZQlG", "HJwAZZvxG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Paper Summary: The paper proposes an approach to perform object classification and changing the viewpoint simultaneously. The idea is that the viewpoint changes until the object is recognized. The results have been reported on ModelNet40.\n\nPaper Strength: The idea of combining active vision with object classification is interesting.\n\nPaper Weaknesses:\nI have the following concerns about this paper: (1) The paper performs the experiments on ModelNet40, which is a toy dataset for this task. The background is white and there is only a single object in each image. (2) The simple CNN baselines in MVCNN (Su et al., 2015) achieve higher performance than the proposed model, which is more complicated. (3) The paper seems unfinished. It mentions THOR and Active Vision, but there is no quantitative or qualitative results on them. (4) Some of the implementation details are unclear.\n\ncomments:\n\n- It is unfair to use (Ammirato et al., 2017) as the citation for active vision. Active vision has been around for decades.\n\n- It is not clear how the hierarchical soft-max layers have been implemented. There cannot be two consecutive soft-max layers. Also, for example, we cannot select an action from A, and then select an action from C since the operation is not differentiable. This should be clarified in the rebuttal.\n\n- In Table 3, why is there a difference between the performance with and without LSTM in the first column? The LSTM does not see any history at the first step so the performance should be the same in both cases.\n\n- According to Table 1 of MVCNN (Su et al., 2015), a simple CNN with one view as input achieves 83% accuracy (w/o fine-tuning), which is higher than the performance of the proposed method.\n\n- It is better not to call the approach navigation. It is just changing the azimuth of the camera view.", " \nThe paper proposes LSD-NET, an active vision method for object classification. In the proposed method, based on a given view of an object, the algorithm can decide to either classify the object or to take a discrete action step which will move the camera in order to acquire a different view of the object. Following this procedure the algorithm iteratively moves around the object until reaching a maximum number of allowed moves or until a object view favorable for classification is reached.\n\nThe main contribution of the paper is a hierarchical action space that distinguishes between camera-movement actions and classification actions. At the top-level of the hierarchy, the algorithm decides whether to perform a movement or a classification -type action. At the lower-level, the algorithm either assign a specific class label (for the case of classification actions) or performs a camera movement (for the case of camera-movement actions). This hierarchical action space results in reduced bias towards classification actions.\n\n\nStrong Points\n- The content is clear and easy to follow.\n- The proposed method achieves competitive performance w.r.t. existing work.\n\nWeak Points\n- Some aspects of the proposed method could have been evaluated better.\n- A deeper evaluation/analysis of the proposed method is missing.\n\nOverall the proposed method is sound and the paper has a good flow and is easy to follow. The proposed method achieves competitive results, and up to some extent, shows why it is important to have the proposed hierarchical action space.\n\nMy main concerns with this manuscript are the following:\n\nIn some of the tables a LSTM variant? 
of the proposed method is mentioned. However, it is never introduced properly in the text. Can you indicate how this LSTM-based method differs from the proposed method?\n\nAt the end of Section 5.2 the manuscript states: \"In comparison to other methods, our method is agnostic of the starting point i.e. it can start randomly on any image and it would get similar testing accuracies.\" This suggests that the method has been evaluated over different trials considering different random initializations. However, this is unclear based on the evaluation protocol presented in Section 5. If this is not the case, perhaps this is an experiment that should be conducted.\n\nIn Section 3.2 it is mentioned that, different from typical deep reinforcement learning methods, the proposed method uses a deeper AlexNet-like network. In this context, it would be useful to comment on the computational costs added in training/testing by this deeper model.\n\nTable 3 shows the number of correctly and wrongly classified objects as a function of the number of steps taken. Here we can notice that around 50% of the objects are at steps 1 and 12, which, as correctly indicated by the manuscript, suggests that movement does not help in those cases. Would it be possible to have a more class-specific (or classes grouped into intermediate categories) visualization of the results? This would provide better insight into what is going on and when exactly actions related to camera movements really help to get better classification performance. \nOn the presentation side, I would recommend displaying the content of Table 3 in a plot. This may display the trends more clearly. Moreover, I would recommend visualizing the classification accuracy as a function of the step taken by the method. In this regard, a deeper analysis of the effect of the proposed hierarchical action space is a must.\n\nI would encourage the authors to address the concerns raised in my review.\n", "The ambition of this paper is to address multi-view object recognition and the associated navigation as a unified reinforcement learning problem using a deep CNN to represent the policy.\n\nMulti-view recognition and active viewpoint selection have been studied for more than 30 years, but this paper ignores most of this history. The discussion of related work as well as the empirical evaluation are limited to very recent methods using neural networks. I encourage the authors to look e.g. at Paletta and Pinz [1] (who solve a very similar and arguably harder problem in related ways) and at Bowyer & Dyer [2] as well as the references contained in these papers for history and context. Active vision goes back to Bajcsy, Aloimonos, and Ballard; these should be cited instead of Ammirato et al. Conversely, the related work cites a handful of papers (e.g. in the context of Atari 2600 games) that are unrelated to this work.\n\nThe navigation aspect is limited to fixed-size left or right displacements (at least for the ModelNet40 task, which is the only one to be evaluated and discussed). This is strictly weaker than active viewpoint selection. Adding this to the disregard of prior work, it is (at best) misleading to claim that this is \"the first framework to combine learning of navigation and object recognition\".\n\nCalling this \"multi-task\" learning is also misleading. 
There is only one ultimate objective (object recognition), while the agent has two types of actions available (moving or terminating with a classification decision).\n\nThere are other misleading, vague, or inaccurate statements in the paper, for example:\n\n- \"With the introduction of deep learning to reinforcement learning, there has been ... advancements in understanding ... how humans navigate\": I don't think such a link exists; if it does, a citation needs to be provided.\n\n- \"inductive bias like image pairs\": Image pairs do not constitute inductive bias. Either the term is misused or the wording must be clarified; likewise for other occurrences of \"inductive bias\".\n\n- \"a single softmax layer is biased towards tasks with larger number of actions\": I think I understand what this is intended to say, but a \"softmax layer\" cannot be \"biased towards tasks\" as there is only one, given, task.\n\n- I do not understand what the stated contribution of \"extrapolation of the action space to a higher dimension for multi-task learning\" is meant to be.\n\n- \"Our method performs better ... than state-of-the-art in training for navigation to the object\": The method does not involve \"navigation to the object\", at least not for the ModelNet40 dataset, the only one for which results are given.\n\nIt is not clear what objective function the system is intended to optimize. Since the stated task is object recognition, and from Table 2, I was expecting it to be the misclassification rate, but this is clearly not the case, as the system is not set up to minimize it. What \"biases\" the system towards classification actions (p. 5)? Why is it bad if the agent shows \"minimal movement actions\" as long as the misclassification rate is minimized? No results are given to show whether this is the case or not. The text then claims that the \"hierarchical method gives superior results\", but this is not shown either.\n\nTable 3 reveals that the system fails to learn much of interest at all. Much of the time the agent chooses not to move and performs relatively poorly; taking more steps improves the results; often all 12 views are collected before a classification decision is made. Two of the most important questions remain open: (1) What would be the misclassification rate if all views are always used? (2) What would be the misclassification rate under a random baseline policy not involving navigation learning (e.g., taking a random number of steps in the same direction)?\n\nExperiments using the THOR dataset are announced but are left underspecified (e.g., the movement actions), and no results or discussion are given.\n\nSUMMARY\n\nQuality: lacking in many ways; see above.\n\nClarity: Most of the paper is clear enough, but there are confusions and missing information about THOR and problems with phrasing and terminology. Moreover, there are many grammatical and typographical glitches.\n\nOriginality: Harder tasks have been looked at before (using methods other than CNN). 
Solving a simpler version using a CNN I do not consider original unless there is a compelling pay-off, which this paper does not provide.\n\nSignificance: Low.\n\nPros: The problem would be very interesting and relevant if it were formulated in a more ambitious way (e.g., a more elaborate action space than that used for ModelNet40) with a clear objective function.\n\nCons: See above.\n\n\n[1] Lucas Paletta and Axel Pinz, Active object recognition by view integration and reinforcement learning, Robotics and Autonomous Systems 31, 71-86, 2000\n\n[2] Bowyer, K. W. and Dyer, C. R. (1990), Aspect graphs: An introduction and survey of recent results. Int. J. Imaging Syst. Technol., 2: 315–328. doi:10.1002/ima.1850020407\n" ]
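The hierarchical action space questioned in the first two reviews above is most naturally read as a factorized policy, p(action) = p(branch) * p(action | branch), rather than two softmax layers stacked on top of each other; multiplying softmax outputs keeps the whole head differentiable, which speaks to the reviewer's differentiability concern. A minimal PyTorch-style sketch of such a head, assuming a move-vs-classify top level; the layer sizes and names are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalActionHead(nn.Module):
    """Factorized policy head: p(action) = p(branch) * p(action | branch)."""

    def __init__(self, feat_dim: int, n_moves: int, n_classes: int):
        super().__init__()
        self.branch = nn.Linear(feat_dim, 2)            # move vs. classify
        self.move = nn.Linear(feat_dim, n_moves)        # low-level movement actions
        self.classify = nn.Linear(feat_dim, n_classes)  # low-level class labels

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        p_branch = F.softmax(self.branch(feat), dim=-1)
        p_move = F.softmax(self.move(feat), dim=-1)
        p_class = F.softmax(self.classify(feat), dim=-1)
        # Joint distribution over all primitive actions; each row sums to 1.
        return torch.cat([p_branch[..., :1] * p_move,
                          p_branch[..., 1:] * p_class], dim=-1)
```

Sampling from this joint distribution and taking the log-probability of the sampled action gives a standard policy-gradient loss, so no non-differentiable hard selection between the two softmaxes is required.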
[ 4, 6, 3 ]
[ 4, 4, 4 ]
[ "iclr_2018_S1FFLWWCZ", "iclr_2018_S1FFLWWCZ", "iclr_2018_S1FFLWWCZ" ]
iclr_2018_SkmM6M_pW
Egocentric Spatial Memory Network
Inspired by neurophysiological discoveries of navigation cells in the mammalian brain, we introduce the first deep neural network architecture for modeling Egocentric Spatial Memory (ESM). It learns to estimate the pose of the agent and progressively construct top-down 2D global maps from egocentric views in a spatially extended environment. During the exploration, our proposed ESM network model updates belief of the global map based on local observations using a recurrent neural network. It also augments the local mapping with a novel external memory to encode and store latent representations of the visited places based on their corresponding locations in the egocentric coordinate. This enables the agents to perform loop closure and mapping correction. This work contributes in the following aspects: first, our proposed ESM network provides an accurate mapping ability which is vitally important for embodied agents to navigate to goal locations. In the experiments, we demonstrate the functionalities of the ESM network in random walks in complicated 3D mazes by comparing with several competitive baselines and state-of-the-art Simultaneous Localization and Mapping (SLAM) algorithms. Secondly, we faithfully hypothesize the functionality and the working mechanism of navigation cells in the brain. Comprehensive analysis of our model suggests the essential role of individual modules in our proposed architecture and demonstrates efficiency of communications among these modules. We hope this work would advance research in the collaboration and communications over both fields of computer science and computational neuroscience.
rejected-papers
The authors do not respond to significant criticism, e.g., the lack of a critical reference. Reviewers unanimously reject.
train
[ "r1hg0NjgG", "BkfIZxFlG", "Bk5nrSoeG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper is well written, well-motivated and the idea is very interesting for the computer vision and robotic communities. The technical contribution is original. The vision-based agent localization approach is novel compared to the methods of the literature. However, the experimental validation of the proposed approach could be more convincing (e.g. by testing on real data, with different training and testing splitting configurations). \n\nMajor concern: \n1) The authors depict in section 2 “there is no existing end-to-end neural network for visual SLAM to our best knowledge” but they should discuss the positioning with respect to the paper of M Garon and JF Lalonde, “Deep 6-DOF Tracking”, ISMAR 2017 which propose a fully neural network based camera tracking method.\n\nMinor concerns:\n2) Table 3: the comparison is not rigorous in the sense that the proposed method estimates a 2D pose (3-DOF) while ORB-SLAM and EKF-SLAM are methods designed for 3D pose estimation (6-DOF). Is it possible to generalize your method to this case (6-DOF) for a more consistent comparison? At least, the fact that your method is more restrictive should be discussed in the paper. \n\n3) In the same vein than point 2), ORB-SLAM and EKF-SLAM are methods based on regression while the proposed method is restricted to the classification pose estimation. Is it possible to test your method with a regression task? \n\n4) It would be interesting to test the proposed method on real data to measure its robustness in terms of noise sensor and in terms of motion blur.\n\n5) It would also be interesting to test the proposed method on datasets usually used in the SLAM community (e.g. using the sequences of the odometry benchmark of KITTI dataset).\n\n6) In the SLAM context, the running time aspect on the test phase is crucial. Hence, the authors should compare the running time of their method with algorithms of literature (e.g. ORB-SLAM). \n\n", "The paper proposes a biologically inspired model of mammalian navigation which includes head direction cells, boundary vector cells, place cells, and grid cells. The proposed model includes modules for all of these kinds of cells and includes: an Neural Touring Machine, Spatial Transformer Network, Recurrent Neural Networks, and CNNs. The model is trained with supervision to output the overhead map of the global map. All components are trained with dense supervision (e.g. loop closure, ego motion with orientation-position, and the ground truth local overhead map). The model is trained on 5 mazes and tested on 2 others.\n\nI believe that this paper is severely flawed. Firstly, the model has ample free parameters to overfit when such a tiny test set is used. Are the test environments sufficiently different from the training ones? For example, when showing that the head direction cells generalize in the new mazes how can we be sure that it is not using a common lighting scheme common to both train and test mazes to orient itself? Also , because MSE is not scale free error measure it is hard to tell how significant the errors are. What is the maximal possible MSE error in these environments? \n\nTo quote the authors \"However, there is no existing end-to-end neural network for\nvisual SLAM to our best knowledge.\" For example \"RatSLAM: a hippocampal model for simultaneous localization and mapping\" by Milford at al. 
was a successful biologically inspired SLAM algorithm (able to map neighborhoods using a car mounted monocular camera) first published in 2004--with many orders of magnitude fewer free parameters. ", "Significance of Contributions Unclear\n\n\nThe paper describes a neural network architecture for monocular SLAM that is argued to take inspiration from neuroscience. The architecture is comprised of four components: one that estimates egomotion (HDU) much like prediction in a filtering framework; one that fuses the current image into a local 2D metric map (BVU); one that detects loop closures (PCU); and one that integrates local maps (GU). These modules along with their associated representations are learned in an end-to-end fashion. The method is trained and evaluated on simulated grid environments and compared to two visual SLAM algorithms. \n\nThe contributions and significance of the paper are unclear. SLAM is arguably a solved problem at the scales considered here, with existing solutions capable of performing localization and mapping in large (city-scale), real-world environments. That aside, one can appreciate the merits of representation learning in the context of SLAM and a handful of neural network-based approaches to SLAM and the related problem of navigation have been proposed of-late. However, the paper doesn't do a sufficient job making the advantages of the proposed approach over these methods clear. Further, the paper emphasizes parallels to neuroscience models for navigation as being a contribution, however these similarities are largely hand wavy and one could argue that they also exist for the many other SLAM algorithms that perform prediction (as in HDU), local/global mapping (as in BVU and GU) and loop closure detection (as in PCU). More fundamentally, the proposed method does not appear to account for motion or measurement noise that are inherent in any SLAM problem and, related, does not attempt to model the uncertainty in the resulting map or pose estimates.\n\nThe paper evaluates the individual components of the architecture. The results suggest that the different modules are doing something reasonable, though the evaluation is rather limited (in terms of spatial scale) and a bit arbitrary (e.g., comparing local maps to the ground truth at a seemingly arbitrary 32s). The evaluation of the loop closure is limited to a qualitative measure and is therefore not convincing. The authors should quantitatively evaluate the performance of loop closure in terms of precision and recall (this is particularly important given effects of erroneous loop closure detections and the claims that the proposed method is robust). Meanwhile, it isn't clear that much can be concluded from the ablation studies as there is relatively little difference in MSE between the two ablated models.\n\n\nAdditional comments/questions:\n\n\n* A stated advantage of this method over that of Gupta et al. is that the agent's motion is not assumed to be known. 
However, it isn't clear whether and how the model incorporates motion or measurement uncertainty, which is fundamental to any SLAM (or navigation) framework.\n\n* Related, an important aspect of any SLAM algorithm is an explicit estimate of the uncertainty in the agent's pose and the map, however it doesn't seem that the proposed model attempts to express this uncertainty.\n\n* The paper claims to estimate the agent's pose as it navigates, but it is not apparent how the pose is maintained beyond estimating egomotion by comparing the current image to a local map.\n\n* Related, it is not clear how the method balances egomotion estimates and exteroceptive measurements (e.g., as are fused with traditional filtering frameworks). There are vague references to \"eliminating discrepancies\" when merging measurements, but it isn't clear what this actually means, whether the output is consistent, or how the advantages of egomotion estimation and measurements are balanced.\n\n* The BVU module is stated as generating a \"local\" map, but it is not clear from the discussion what limits the resulting map to the area in the vicinity of the robot vs. the entire environment.\n\n* It is not clear why previous data is transformed to the agent's reference frame as a result of motion vs. the more traditional approach of transforming the agent's pose to a global reference frame.\n\n* The description of loop closure detection and the associated heuristics is confusing. For example, Section 3.4 states that the agent only considers positions that are distant from the most recently visited position as a means of avoiding trivial loop closures, however Section 3.4 states that GU provides memory vectors near the current location for loop closure classification.\n\n* The description of the GU module is confusing. How does spatial indexing deal with changes to the map (e.g., as a result of loop closures/measurement updates) or transformations to the robot's frame-of-reference? What are h, H, w, and W and how are they chosen?\n\n* The architecture assumes a discrete (and coarse) action space, whereas actions are typically continuous. Have the authors tried regressing to continuous actions or experimenting with finer discretizations that are more suitable to real applications?\n\n* It is not clear what is meant by the statement that the PU \"learns to encode the representation of visited places\".\n\n* The means by which the architecture is trained is unclear. What is the loss that is optimized? How is the triplet loss (Eqn. 3) incorporated (e.g., is it weighted differently than other terms in the loss)?\n\n* Section 3.2 states that the \"agent has to learn to take actions to explore its surroundings\", however it isn't apparent that the method reasons over the agent's policy. Indeed, this is an open area of research. Instead, the results section suggests that the agent acts randomly.\n\n* Section 4.1 draws comparisons between HDU and Head Direction Cells, however the latter estimate location/orientation whereas the former (this method) predicts egomotion. While egomotion can be integrated to estimate pose (as is done in Fig 4), these are not the same thing.\n\n* The authors are encouraged to tone down claims regarding parallels to navigation models from neuroscience as they are largely unjustified.\n\n* The comparison to existing monocular SLAM baselines is surprising and the reviewer remains skeptical regarding the stated advantages of the proposed method. How much of this difference is a result of testing in simulation? 
It would be more convincing to compare performance in real-world environments, for which these baselines have proven effective.\n\n* Figure 1: \"Border\" --> \"Boundary\"\n\n* Figure 1: The camera image should also go to the BVU block\n\n* Many of the citations are incorrectly not parenthesized\n\n* The paper should be proof-read for grammatical errors" ]
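The quantitative loop-closure evaluation requested in the third review reduces to standard precision/recall over detected revisit pairs. A minimal sketch (Python; the pair-set representation is our assumption, not the paper's):

```python
def loop_closure_precision_recall(detected, ground_truth):
    """'detected' and 'ground_truth' are sets of (i, j) frame-index
    pairs flagged as revisits of the same place; precision penalizes
    erroneous closures, recall penalizes missed ones."""
    true_positives = len(detected & ground_truth)
    precision = true_positives / len(detected) if detected else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```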
[ 5, 3, 4 ]
[ 4, 4, 4 ]
[ "iclr_2018_SkmM6M_pW", "iclr_2018_SkmM6M_pW", "iclr_2018_SkmM6M_pW" ]
iclr_2018_rJqfKPJ0Z
Clipping Free Attacks Against Neural Networks
In recent years, a remarkable breakthrough has been made in the AI domain thanks to artificial deep neural networks that have achieved great success in many machine learning tasks in computer vision, natural language processing, speech recognition, malware detection and so on. However, they are highly vulnerable to easily crafted adversarial examples. Many investigations have pointed out this fact and different approaches have been proposed to generate attacks while adding a limited perturbation to the original data. The most robust known method so far is the so-called C&W attack [1]. Nonetheless, a countermeasure known as feature squeezing coupled with ensemble defense showed that most of these attacks can be destroyed [6]. In this paper, we present a new method we call Centered Initial Attack (CIA) whose advantage is twofold: first, it ensures by construction that the maximum perturbation is smaller than a threshold fixed beforehand, without the clipping process that degrades the quality of attacks. Second, it is robust against recently introduced defenses such as feature squeezing, JPEG encoding and even against a voting ensemble of defenses. While its application is not limited to images, we illustrate this using five of the current best classifiers on the ImageNet dataset, among which two are adversarially retrained on purpose to be robust against attacks. With a fixed maximum perturbation of only 1.5% on any pixel, around 80% of attacks (targeted) fool the voting ensemble defense, and nearly 100% when the perturbation is only 6%. While this shows how difficult it is to defend against CIA attacks, the last section of the paper gives some guidelines to limit their impact.
rejected-papers
The reviewers have various reservations. While the paper has interesting suggestions, it is slightly incremental and the results are not sufficiently compared to other techniques. We note that one reviewer revised his opinion.
train
[ "B1fZIQcxM", "ryU7ZMsgf", "BkrIo4ixG", "ryWp9xwGf", "SkWHqxDzf", "Byu9txvMM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper is not anonymized. In page 2, the first line, the authors revealed [15] is a self-citation and [15] is not anonumized in the reference list.\n\n", "This paper presents a reparametrization of the perturbation applied to features in adversarial examples based attacks. It tests this attack variation on against Inception-family classifiers on ImageNet. It shows some experimental robustness to JPEG encoding defense.\n\nSpecifically about the method: Instead of perturbating a feature x_i by delta_i, as in other attacks, with delta_i in range [-Delta_i, Delta_i], they propose to perturbate x_i^*, which is recentered in the domain of x_i through a heuristic ((x_i ± Delta_i + domain boundary that would be clipped)/2), and have a similar heuristic for computing a Delta_i^*. Instead of perturbating x_i^* directly by delta_i, they compute the perturbed x by x_i^* + Delta_i^* * g(r_i), so they follow the gradient of loss to misclassify w.r.t. r (instead of delta). \n\n+/-:\n+ The presentation of the method is clear.\n+ ImageNet is a good dataset to benchmark on.\n- (!) The (ensemble) white-box attack is effective but the results are not compared to anything else, e.g. it could be compared to (vanilla) FGSM nor C&W.\n- The other attack demonstrated is actually a grey-box attack, as 4 out of the 5 classifiers are known, they are attacking the 5th, but in particular all the 5 classifiers are Inception-family models.\n- The experimental section is a bit sloppy at times (e.g. enumerating more than what is actually done, starting at 3.1.1.).\n- The results on their JPEG approximation scheme seem too explorative (early in their development) to be properly compared.\n\nI think that the paper need some more work, in particular to make more convincing experiments that the benefit lies in CIA (baselines comparison), and that it really is robust across these defenses shown in the paper.", "In this paper the authors present a new method for generating adversarial examples by constraining the perturbations to fall in a bounded region. Further, experimentally, they demonstrate that learning the perturbations to balance errors against multiple classifiers can overcome many common defenses used against adversarial examples.\n\nPros:\n- Simple, easy to apply technique\n- Positive results in a wide variety of settings.\n\nCons:\n- Writing is a bit awkward at points.\n- Approach seems fairly incremental.\n\nOverall, the results are interesting but the technique seems relatively incremental.\n\nDetails:\n\n\"To find the center of domain definition...\" paragraph should probably go after the cases are described. Confusing as to what is being referred to where it currently is written.\n\nTable 1: please use periods not commas (as in Table 2), e.g. 96.1 not 96,1\n\ninexistent --> non-existent\n", "We honestly do not understand your evaluation since sentence “Evolutionary algorithms are also used by authors in [15] to find adversarial examples ...” is not a self citation. Believe us that we are not Nguyen et al and we are not related to them at all. You will see it clearly when the names well be unveiled. \nSo, please take the time to reconsider your evaluation and give us a fair review as our paper is the result of several months of work.", "Thanks a lot for taking the time to read the paper and provide us with you review.\nWe do not claim in the paper that CIA attacks are the most robust ones as we did not indeed give any comparison to other methods. 
We however show that they are an answer to some issues found in the literature. First, they avoid the clipping that degrades the quality of attacks. We give a comparison to C&W (Figure 1). Second, we show that CIA attacks are effective against recently published defenses: ensembling (by the way, at least three papers submitted to ICLR2018 claim this defense to be effective), smoothing and JPG encoding. After the paper submission, we continued our experiments and made comparisons to C&W and FGSM. They show a non-negligible improvement in attack success using the CIA approach. If adding the results would change your review to an acceptance, we would like to do it.\nAbout the grey-box attack, you are definitely right. However, the purpose of section 3.3 was to show that ensembling can be considered as a defense, as it limits the attacks, but it is not totally effective. Using another classifier would likely show that the transferability is even more limited. This would only reinforce our claim about the lack of transferability, which was already tackled in the previous sections.\nFinally, the English of the paper can be improved. We will do it in the revised version.", "First, thank you for taking the time to read the paper and provide this review.\nThe approach is not really incremental, as we do not start from an easy case and then harden it at each new experiment. In Table 2 we show the non-transferability of attacks when making targeted attacks, which is against what is often claimed in the literature. Table 3 gives the results of attacking several classifiers at once. This shows that ensembling is not always effective as often claimed (by the way, at least three papers submitted to ICLR2018 claim it!). Table 4 provides results of attacking another defense, i.e. spatial smoothing. Table 5 shows that smoothed attacks are not necessarily successful if the defense does not use smoothing. Once again, this reveals that the idea behind smoothing as an efficient defense is not true. Table 6 gives the results of attacking a defense with and without smoothing at the same time. Table 7 presents a combination of ensembling and smoothing defenses. We could have given this last table directly at the beginning, but we honestly think that this would make the paper more difficult to read. \nThe paper is obviously not written in perfect English and this can be improved. We will do it in the revised version. But overall we think that we bring some interesting results to the community:\n- Avoid clipping to perform more robust attacks.\n- Perform effective attacks against strong defenses like ensembling and smoothing.\n- Make partial crafting without affecting the whole content of the input data (images in Figure 3).\n- Finally, CIA attacks can be applied beyond images.\nWe hope this answer will give you satisfaction and change your review to an acceptance." ]
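For readers following the second review's description of the method, a minimal sketch of the clipping-free CIA reparametrization (Python/NumPy; we assume g = tanh, a C&W-style squashing, since the discussion only requires g to map the reals into [-1, 1]):

```python
import numpy as np

def center_input(x, delta_max, low=0.0, high=1.0):
    # Per-feature feasible interval: inside the perturbation budget
    # and inside the pixel domain, so no clipping is ever needed.
    lo = np.maximum(x - delta_max, low)
    hi = np.minimum(x + delta_max, high)
    x_star = (lo + hi) / 2.0       # recentered input x*
    delta_star = (hi - lo) / 2.0   # per-feature half-width Delta*
    return x_star, delta_star

def perturbed_input(x_star, delta_star, r):
    # The attack optimizes the free variable r (not delta directly);
    # tanh keeps the perturbation within [-Delta*, Delta*] by construction.
    return x_star + delta_star * np.tanh(r)
```

Gradient descent on the misclassification loss is then run with respect to r, as the review notes.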
[ 3, 4, 5, -1, -1, -1 ]
[ 3, 3, 2, -1, -1, -1 ]
[ "iclr_2018_rJqfKPJ0Z", "iclr_2018_rJqfKPJ0Z", "iclr_2018_rJqfKPJ0Z", "B1fZIQcxM", "ryU7ZMsgf", "BkrIo4ixG" ]
iclr_2018_ByCPHrgCW
Deep Learning Inferences with Hybrid Homomorphic Encryption
When deep learning is applied to sensitive data sets, many privacy-related implementation issues arise. These issues are especially evident in the healthcare, finance, law and government industries. Homomorphic encryption could allow a server to make inferences on inputs encrypted by a client, but to the best of our knowledge, there has been no complete implementation of common deep learning operations, for arbitrary model depths, using homomorphic encryption. This paper demonstrates a novel approach, efficiently implementing many deep learning functions with bootstrapped homomorphic encryption. As part of our implementation, we demonstrate Single and Multi-Layer Neural Networks, for the Wisconsin Breast Cancer dataset, as well as a Convolutional Neural Network for MNIST. Our results give promising directions for privacy-preserving representation learning, and the return of data control to users.
rejected-papers
While the reviewers all seem to think this is interesting and basically good work, they are consistent and unanimous in rejecting the paper. Although the authors did provide a thorough rebuttal, the original paper did not meet the criteria and the reviewers have not changed their scores.
train
[ "HkCG-X5lG", "rJhC64Olf", "Sy52MUdgG", "S174cGs7z", "BJxS1hFXG", "ryMApotmG", "B1p3soF7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes a hybrid Homomorphic encryption system that is well suited for privacy-sensitive data inference applications with the deep learning paradigm. \nThe paper presents a well laid research methodology that shows a good decomposition of the problem at hand and the approach foreseen to solve it. It is well reflected in the paper and most importantly the rationale for the implementation decisions taken is always clear.\n\nThe results obtained (as compared to FHEW) seem to indicate well thought off decisions taken to optimize the different gates' operations as clearly explained in the paper. For example, reducing bootstrapping operations by two-complementing both the plaintext and the ciphertext, whenever the number of 1s in the plain bit-string is greater than the number of 0s (3.4/Page 6).\n\nResult interpretation is coherent with the approach and data used and shows a good understanding of the implications of the implementation decisions made in the system and the data sets used.\nOverall, fine work, well organized, decomposed, and its rationale clearly explained. The good results obtained support the design decisions made.\nOur main concern is regarding thorough comparison to similar work and provision of comparative work assessment to support novelty claims.\n\nNota: \n - In Figure 4/Page 4: AND Table A(1)/B(0), shouldn't A And B be 0?\n - Unlike Figure 3/Page 3, in Figure 2/page 2, shouldn't operations' precedence prevail (No brackets), therefore 1+2*2=5?", "Summary:\nThis paper proposes a framework for private deep learning model inference using FHE schemes that support fast bootstrapping.\nThe main idea of this paper is that in the two-party computation setting, in which the client's input is encrypted while the server's deep learning model is plain.\nThis \"hybrid\" argument enables to reduce the number of necessary bootstrapping, and thus can reduce the computation time.\nThis paper gives an implementation of adder and multiplier circuits and uses them to implement private model inference.\n\nComments:\n1. I recommend the authors to tone down their claims. For example, the authors mentioned that \"there has been no complete implementation of established deep learning approaches\" in the abstract, however, the authors did not define what is \"complete\". Actually, the SecureML paper in S&P'17 should be able to privately evaluate any neural networks, although at the cost of multi-round information exchanges between the client and server.\n\nAlso, the claim that \"we show efficient designs\" is very thin to me since there are no experimental comparisons between the proposed method and existing works. Actually, the level FHE can be very efficient with a proper use of message packing technique such as [A] and [C]. For a relatively shallow model (as this paper has used), level FHE might be faster than the binary FHE.\n\n2. I recommend the author to compare existing adder and multiplier circuits with your circuits to see in what perspective your design is better. I think the hybrid argument (i.e., when one input wire is plain) is a very common trick that used in the circuit design field, such as garbled circuit [B], to reduce the depth of the circuit. \n\n3. I appreciate that optimizations such as low-precision and point-wise convolution are discussed in this paper. Such optimizations are very common in deep learning field while less known in the field of security.\n\n[A]: Dowlin et al. 
Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy.\n[B]: V. Kolesnikov et al. Improved garbled circuit: free xor gates and applications. \n[C]: Liu et al. Oblivious Neural Network Predictions via MiniONN transformations.", "The paper presents a means of evaluating a neural network securely using homomorphic encryption. A neural network is already trained, and its weights are public. The network is to be evaluated over a private input, so that only the final outcome of the computation, and nothing but that, is finally learned.\n\nThe authors take a binary-circuit approach: they represent numbers via a fixed-point binary representation, and construct circuits of secure adders and multipliers, based on homomorphic encryption as a building block for secure gates. This allows them to perform the vector products needed per layer; the two's complement representation also allows for an \"easy\" implementation of the ReLU activation function, by \"checking\" (multiplying by) the complement of the sign bit. The fact that multiplication often involves public weights is used to speed up computations, wherever appropriate. A rudimentary experimental evaluation with small networks is provided.\n\nAll of this is somewhat straightforward; a penalty is paid by representing numbers via fixed-point arithmetic, which is used mostly to deal with ReLU. This is somewhat odd: it is not clear why, e.g., garbled circuits were not used for something like this, as they would have been considerably faster than FHE.\n\nThere is also work in this area that the authors do not cite or contrast with, bringing the novelty into question; please see the following papers and references therein:\n\nGILAD-BACHRACH, R., DOWLIN, N., LAINE, K., LAUTER, K., NAEHRIG, M., AND WERNSING, J. Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy. In Proceedings of The 33rd International Conference on Machine Learning (2016), pp. 201–210.\n\nMOHASSEL, P., AND ZHANG, Y. SecureML: A System for Scalable Privacy-Preserving Machine Learning.\n\nSHOKRI, R., AND SHMATIKOV, V. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (2015), ACM, pp. 1310–1321.\n\nThe first paper is the most related, also using homomorphic encryption, and seems to cover a superset of the functionalities presented here (more activation functions, a more extensive analysis, and faster decryption times). The second paper uses arithmetic circuits rather than HE, but actually implements training an entire neural network securely.\n\nMinor details:\n\nThe problem scenario states that the model/weights are private, but later on this ceases to be so (the weights are not encrypted).\n\n\"Both deep learning and FHE are relatively recent paradigms\". Deep learning is certainly not recent, while Gentry's paper is now 7 years old.\n\n\"In theory, this system alone could be used to compute anything securely.\" This is informal and incorrect. 
Can it solve the halting problem?\n\n\"However in practice the operations were incredibly slow, taking up to 30 minutes in some cases.\" It is unclear what operations are referred to here.\n\n\n\n\n\n\n\n\n", "This comment provides a summary of all changes we have made to the paper.\n\nWe have fixed the AND table in Figure 4.\n\nWe have added brackets to Figure 2, and updated Figure 3 to show the Homomorphic Encryption process more clearly.\n\nWe have updated the “Activation Functions” subsection of the design section, to discuss the square activation in more detail.\n\nWe have updated the results section to better clarify why we did not include decryption timings.\n\nWe have updated the problem scenario to reduce ambiguity, to more clearly state that the server does not reveal the model or weights to the client (as opposed to the server explicitly securing the weights), and to more clearly explain what we consider to be a “complete implementation”.\n\nWe have added a short “Privacy-Preserving Model Training” subsection to the background section, to reference some related works, and better clarify why we do not consider model training.\n\nWe have added a short “Privacy-Preserving Deep Learning” subsection to the background section, to reference some works which do not use homomorphic encryption, and the trade-offs which result from this.\n\nWe have changed \"Both deep learning and FHE are relatively recent paradigms\" to “Both deep learning and FHE have seen significant advances in the past ten years”, reflecting that this work is built upon advances in the past decade.\n\nFor the sentence \"In theory, this system alone could be used to compute anything securely.\" We have changed the end to “compute any arithmetic circuit”, better reflecting what Gentry's cryptosystem does.\n\nFor the sentence \"However in practice the operations were incredibly slow, taking up to 30 minutes in some cases.\" We have clarified that this was for the bootstrapping operation in an implementation of Gentry's cryptosystem, and added a reference.\n\nWe have updated the end of the results section, to better clarify the comparison between our work and Cryptonets, with regards to model size, depth and efficiency.\n\nWe have updated the “Hybrid Homomorphic Encryption” subsection of the design section, to explain that this hybrid approach is why we consider our approach to be efficient and simple.\n\nWe have also updated the abstract, using the language “no complete implementation of common deep learning operations, for arbitrary model depths, using homomorphic encryption”, and “efficiently implementing many deep learning functions with bootstrapped homomorphic encryption”. This should more cleanly cover the advantages of our work, compared to related literature, to the best of our knowledge.", "Thank you for your constructive review!\n\nIt is fair to challenge our claims that “there has been no complete implementation of established deep learning approaches”, because there have been some implementations of deep learning models whereby a server can perform inference, including SecureML, Cryptonets [A] and MiniONN [C]. With this in mind, it is important that we clarify our problem scenario. The server does not want to reveal the model to the client, and the client does not want to reveal the input to the server. While all of the given approaches secure the client input, only Cryptonets and our paper secure the model structure from a client, who may wish to reverse-engineer the model. 
MiniONN proposes obfuscating the model to alleviate this issue, but an implementation of this for arbitrary architectures is not given, would not be trivial, and would increase the number of client-server exchanges. \n\nBecause SecureML and MiniONN are related works, we have updated the background section in our paper to discuss these works. We have also updated the problem scenario to more clearly explain what we consider to be a “complete implementation”.\n\nWe agree that for a shallow model using message packing, leveled FHE could be faster than binary FHE, and conversely, leveled FHE would become impractical for sufficiently deep models. \n\nIt is also important to note that Cryptonets uses the square activation instead of ReLU, and they present some disadvantages to this approach, in particular the unbounded derivative, making training difficult and limiting model depth. The square activation is also one of the most expensive operations in their network, because two ciphertexts must be multiplied. \nMiniONN can perform ReLU, but it does not use FHE, leading to other tradeoffs as discussed.\n\nWe have updated the end of the results section, to better clarify the comparison between our work and Cryptonets, with regards to model size, depth and efficiency.\n\nWe considered our circuits efficient in that they were much faster using a hybrid approach, compared to using only ciphertexts, and also in that they allowed for a simpler implementation by abstracting plaintext, ciphertext and hybrid adders into a single unit. We have updated the “Hybrid Homomorphic Encryption” subsection of the design section, to better clarify that this is why we considered our approach efficient and simple.\n\nWe have also updated the abstract, with the intention of toning down our claims, by using the language “no complete implementation of common deep learning operations, for arbitrary model depths, using homomorphic encryption”, and “efficiently implementing many deep learning functions with bootstrapped homomorphic encryption”. This should more cleanly cover the advantages of our work, compared to related literature, to the best of our knowledge.\n", "Thank you for your detailed review!\n\nWe chose not to use a garbled circuit approach for our work, because this would reveal, at least in part, the structure of the model. Part of our problem scenario is that the server does not wish to reveal the model to the client, and by extension the model’s structure.\n\nWe do make comparisons between our work and Cryptonets, under the reference “Dowlin et al. (2016)”. It is fair to compare our paper with theirs, since they share a common goal. Their paper discusses three activation functions: sigmoid, ReLU and square. They do not attempt to implement sigmoid and ReLU, and instead use the square activation exclusively. Their paper presents the disadvantages of the square activation, in particular the unbounded derivative, making training difficult and limiting the depth of any model using this approach. It is also one of the most expensive operations in their network, because they must multiply two ciphertexts together. We have updated the “Activation Functions” subsection of the design section, to discuss the square activation in more detail.\n\nBecause we use binary circuits, our approach can exactly replicate the ReLU activation, and a piecewise linear approximation of sigmoid. 
We implement both of these; however, we did not feel that it was necessary to implement the square activation, because this was used in Cryptonets as a replacement for ReLU, to solve a problem unique to arithmetic circuits.\n\nWe did not include decryption times in our results, because they were executing in less than a microsecond. Because our system only requires decryption at the very end of the process, it is a negligible cost compared to overall execution time. We have updated the results section to better clarify why we did not include these measurements.\n\nWe did not feel that securely training a neural network, such as with SecureML, would be of benefit for our problem scenario. If a model is securely trained, all weights are restricted to those clients which provided training data, leading to a different scenario where the server hosts the model structure, the client provides the training data, and neither party has access to the weights. If the server chose to give the weights to the client, then the client could reconstruct the model and run it in plaintext, removing the need for the server. \nSimilarly with “Privacy-preserving deep learning”, their goal is to have multiple parties collaboratively train a model, without revealing their respective training data. This also leads to a different scenario, where each client has a local model. \nWe have added a short “Privacy-Preserving Model Training” subsection to the background section, to reference these works and better clarify why we do not consider model training.\n\nIt is fair to challenge the novelty of our work. As discussed, there have been a number of works which implement neural networks, and secure client inputs. However, we feel that under our problem scenario, where the server does not wish to reveal the model to the client, and the client does not wish to reveal the input to the server, our approach is novel because it permits important functionality that is not present in Cryptonets, and allows the server to keep its model completely private, unlike SecureML and “Privacy-preserving deep learning”. We have updated the problem scenario, to hopefully prevent any ambiguity over what our goals were for this work.\n\nTo address the minor details:\n\nBy “weight privacy”, the intended message was that under our problem scenario, the server does not have to reveal the model structure or weights to the client. While they could encrypt their weights in our framework, it would substantially slow down operations as shown in our comparison between hybrid and ciphertext multipliers. We suggest that the weights are unencrypted, but are kept internal to the server. We have updated the problem scenario to more clearly state that the server does not reveal the model or weights to the client, as opposed to the server explicitly securing the weights.\n\n\"Both deep learning and FHE are relatively recent paradigms\". It is reasonable to consider deep learning and fully homomorphic encryption to be old paradigms. We have changed this sentence to “Both deep learning and FHE have seen significant advances in the past ten years”, reflecting that this work is built upon advances in the past decade.\n\n\"In theory, this system alone could be used to compute anything securely.\" Indeed their system would not solve the halting problem! 
We have changed this to “compute any arithmetic circuit”.\n\n\"However in practice the operations were incredibly slow, taking up to 30 minutes in some cases.\" We were referring to the time needed to run one bootstrapping operation, using an early implementation of Gentry’s FHE scheme. We have now clarified and referenced this.\n", "Thank you for your positive comments!\n\nTo clarify, our work can extend both TFHE, FHEW, or any system implementing Fully Homomorphic Encryption over binary. As part of our results, we compare TFHE and FHEW, to help show that advances in this field will continue to benefit our work directly, since we can support any new system with minimal effort. We have updated the start of the design section in our paper, to make this statement more carefully.\n\nTo address the notes:\n\nWe have fixed the AND table in Figure 4, thank you for pointing this out.\n\nFor Figure 2, we meant to show the operations getting applied in a linear order, but indeed 1+2*2=5. We have added brackets to Figure 2, and updated Figure 3 to show the process more clearly. Thank you again for finding this.\n" ]
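The two's-complement ReLU trick that the third review and the author responses refer to (multiplying by the complement of the sign bit) can be illustrated on plaintext words. A minimal sketch (Python; the unsigned-integer encoding is our assumption, and in the actual system each bit would be a ciphertext processed by homomorphic gates):

```python
def relu_fixed_point(word, width=16):
    """ReLU on a two's-complement fixed-point value held in an
    unsigned 'width'-bit integer: every bit is masked with the
    complement of the sign bit, zeroing out negative inputs."""
    sign = (word >> (width - 1)) & 1   # 1 iff the value is negative
    keep = 1 - sign                    # complement of the sign bit
    return word * keep                 # equivalent to ANDing each bit with keep

# -3 in 16-bit two's complement maps to 0, while +3 is unchanged.
assert relu_fixed_point((-3) & 0xFFFF) == 0
assert relu_fixed_point(3) == 3
```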
[ 4, 4, 4, -1, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1, -1 ]
[ "iclr_2018_ByCPHrgCW", "iclr_2018_ByCPHrgCW", "iclr_2018_ByCPHrgCW", "iclr_2018_ByCPHrgCW", "rJhC64Olf", "Sy52MUdgG", "HkCG-X5lG" ]
iclr_2018_H1u8fMW0b
Toward predictive machine learning for active vision
We develop a comprehensive description of the active inference framework, as proposed by Friston (2010), under a machine-learning-compliant perspective. Stemming from a biological inspiration and the auto-encoding principles, a sketch of a cognitive architecture is proposed that should provide ways to implement estimation-oriented control policies. Computer simulations illustrate the effectiveness of the approach through a foveated inspection of the input data. The pros and cons of the control policy are analyzed in detail, showing interesting promise in terms of processing compression. Though optimizing future posterior entropy over the action set is shown to be enough to attain locally optimal action selection, offline calculation using class-specific saliency maps is shown to be better, for it saves processing costs through the pre-processing of saccade pathways, with a negligible effect on the recognition/compression rates.
rejected-papers
All 3 reviewers consider the paper insufficiently good, including a post-rebuttal updated score. All reviewers, plus an anonymous comment, find that the paper isn't well-enough situated within the appropriate literature. Two reviewers cite poor presentation, with spelling/grammar errors making the paper hard to read. The authors have revised the paper and promise further revisions for the final version.
train
[ "SyKJh-qlM", "Hy9KrjINf", "HJedzYOxf", "HkEi3Koxf", "BJLFk84-G", "rJenZIEbf", "Bk9Se8NZM" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper introduces a machine learning adaptation of the active inference framework proposed by Friston (2010), and applies it to the task of image classification on MNIST through a foveated inspection of images. It describes a cognitive architecture for the same, and provide analyses in terms of processing compression and \"confirmation biases\" in the model.\n– Active perception, and more specifically recognition through saccades (or viewpoint selection) is an interesting biologically-inspired approach and seems like an intuitive and promising way to improve efficiency. The problem and its potential applications are well motivated.\n– The perception-driven control formulation is well-detailed and simple to follow.\n– The achieved compression rates are significant and impressive, though additional demonstration of performance on more challenging datasets would have been more compelling\n\nQuestions and comments:\n– While an 85% compression rate is significant, 88% accuracy on MNIST seems poor. A plot demonstrating the tradeoff of \naccuracy for compression (by varying Href or other parameters) would provide a more complete picture of performance. Knowing baseline performance (without active inference) would help put numbers in perspective by providing a performance bound due to modeling choices.\n– What does the distribution of number of saccades required per recognition (for a given threshold) look like over the entire dataset, i.e. how many are dead-easy vs difficult?\n– Steady state assumption: How can this be relaxed to further generalize to non-static scenes?\n– Figure 3 is low resolution and difficult to read.\n\nPost-rebuttal comments:\n\nI have revised my score after considering comments from other reviewers and the revised paper. While the revised version contains more experimental details, the paper in its present form lacks comparisons to other gaze selection and saliency models which are required to put results in context. The paper also contains grammatical errors and is somewhat difficult to understand. Finally, while it proposes an interesting formulation of a well-studied problem, more comparisons and analysis are required to validate the approach.\n", "We want to make the point clear that the connection with Friston Free Energy is not a analogy. Sorry to insist but the predictive policy presented in this paper is a direct derivation of Friston active inference principle (minimize Evidence lower bound with action). The trick is to consider the latent state as coding for the entire scene (which only visible in parts). Each partial view thus allows to refine the estimation of z, which in turn makes the next perception less \"surprising\" (i.e. lowers E_q -log P(x|z, u) for all u). So please reconsider your statement for it is quite deceiving to other reviewers and readers. \nNext the Free Energy principle is a coding optimization principle so it is neither blind or watchful to rewards, it depends how you formulate your problem (this being not the subject though). \nThe active inference framework is not \"much different\" for it has tight relations with the active vision litterature and grounds on the same probabilistic framework (partial observation of a scene, bayesian inference etc..). The static simplification is also extremely classic and present in most cited papers, so it is not unusual at all. 
This is finally quite a bunch of surprising comments, though not critically related to the actual content of the paper!\n \nNext, the fovea-based model is given with full implementation detail. Pages 5-6 provide everything needed to reproduce the numerical experiments. \nLast, more comparisons with existing models should indeed be done, though there is little room for improvement in the current setting. The missing part/future work is comparing the inhibition-of-return simplification with trajectory-based optimization. \n\nWe also tried to better separate, in the new version, the review part from the contributions. Apart from the introduction, most of the related work has been pushed to p.8. Pages 3-4 contain original derivations of the original formula that are not present in the initial papers.\n ", "It is rather difficult to evaluate the manuscript. A large part of the manuscript reviews various papers from the active vision domain and subsequently proposes that this can directly be modeled using Friston’s free energy principle, essentially, by “analogy”, as the authors state. This extends up to page 4. I would argue that this is quite a stretch, as the free energy principle is essentially blind to the idea of rewards and preferable states, such that all tasks are essentially evaluated in terms of surprise reduction. This is very much different from a large part of the cited classic active vision literature. The authors furthermore introduce a simplification of the setting, i.e. that nothing changes in a scene during saccadic exploration, which is rather unusual for active vision problems. \nThe authors provide some detail about the actual implementation of their model, section 4, but the in-depth details required at ICLR are missing. No comparisons to other gaze selection models or saliency models are given. \nFurthermore, the manuscript seems to suggest that the simulation results are somehow related to human vision, as it is stated:\n“The model provides apparently realistic saccades, for they cover the full range of the image and tend to point over regions that contain class-characteristic pixels.”\nbut no actual comparisons or evaluations are provided. ", "In this paper, the authors present a computational framework for the active vision problem. Motivating the study biologically, the authors explain how the control policy can be learned to reduce the entropy of the posterior belief, and present an application (MNIST digit classification) to substantiate their proposal.\n\nI am not convinced about the novelty and contribution of the work. The active vision/sensing problem has been well studied, and both the information theory and Bayes risk formulations have already been considered in previous works (see Najemnik and Geisler, 2005; Butko and Movellan, 2010; Ahmad and Yu, 2013).\n\nThe paper is also rife with spelling mistakes and grammatical errors and needs a thorough revision. Examples: foveate inspection the data (abstract), may allow to (motivation), tu put it clear (motivation), on contrary to animals retina (footnote 1), minimize at most the current uncertainty (perception-driven control), center an keep (fovea-based implementation), degrade te recognition (outlook and perspective). The citations are in non-standard format (section 1.2: Kalman (1960)).\n\nOverall, I think the paper considers an important problem, but the contribution to the state of the art is minimal, and the editing is highly lacking. \n\n1. J Najemnik and W S Geisler. Optimal eye movement strategies in visual search. 
Nature, 434(7031):387–91, 2005.\n2. N J Butko and J R Movellan. Infomax control of eye movements. IEEE Transactions on Autonomous Mental Development, 2(2):91–107, 2010.\n3. S Ahmad and A J Yu. Active sensing as Bayes-optimal sequential decision-making. Uncertainty in Artificial Intelligence, 2013.", "We agree that the 3 kindly provided references address a similar problem, for they use a foveated (partial) view of the scene, and use a sequential evidence accumulation process based on a generative model to uncover the scene. The differences with our work are however substantial. A quite common mistake made in ref. 1 is to define the objective as the one-step forward reconstruction accuracy. This may coincide with the recognition accuracy in special cases, but should fail in most cases (as already pointed out in 2 and 3). In contrast, papers 2 and 3 are about trajectory-based policy optimization with a policy learned from a continuous belief vector with function approximators (namely policy gradient in 2, Monte Carlo/RBF in 3).\nIn our case the policy is not learned but directly processed (optimized) from the generative model. The trick is to do a local optimization using a “two-steps ahead” prediction that predicts the next posterior (the effect of the next observation), consistently with Friston’s active inference/predictive coding approach. Another substantial difference is our scene decoding/feature selection approach, which contrasts with the standard visual search tasks and provides a link with classical ML setups. \nThose references will be included in the final version with appropriate comments/comparisons.\n", "Sorry, but the connection with the encoding/free energy principles is substantial here (it is not a mere analogy). The posterior entropy minimization from action selection directly derives from free energy minimization first principles (see the Friston et al. 2012 reference in the paper that links free energy to posterior entropy minimization). By the way, surprise reduction is an objective in itself that can be set up as a reward (an ‘intrinsic’ reward) with possible connections with reinforcement learning “extrinsic” rewards and action optimization.\nSecond, the steady-state assumption (nothing changes in the scene) is not that uncommon in active vision (see previous comment). Maybe you are puzzled by the difference between a visual scene (the entire image) that doesn’t change (but is covered) and a view (which is the current perception given viewpoint u) which changes at each saccade?\nThird, a comparison with low-level Itti & Koch saliency models is not relevant here, for the MNIST images are not “natural” enough (no texture, flat background, etc.). The saliency maps that we build relate to the critical viewpoints where discriminating features are expected (high-level / recognition-oriented saliency maps). \nAdditional details should be put in the final version regarding the effect of H_ref on classification rates, and the comparison with baseline (full image) recognition, random saccades, the two-steps-ahead predictive policy and the pre-processed (high-level saliency-map-based) policy; see:\nhttps://drive.google.com/open?id=1wFvPUgiN7ekaAaIAQ5KMHgA-H0plttv2\nLast remark: the apparent realism of the saccades relates to the full image coverage, but you are right that this is not quantified here. \n", "Thanks for the positive feedback. 
We intend to put additional figures in the final version, some of them showing the effect of varying Href (speed/accuracy trade-off) as well as the distribution of saccade lengths. You can find a preview here: https://drive.google.com/open?id=1wFvPUgiN7ekaAaIAQ5KMHgA-H0plttv2\nwhich should answer most of your questions.\nThe steady-state (or static) assumption is quite common in the field, for eye (or sensor) movements typically have no effect on the environment's intrinsic states. The generalization to non-static scenes is straightforward but more difficult to handle (the difficulty lies in building appropriate generative models, i.e. predicting the effect of compound actions and/or external moves on a sensory field made of pixels, at reasonable computational cost). \n\n" ]
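The posterior-entropy-minimizing action selection debated in this thread can be written compactly for a discretized model. A minimal sketch (Python/NumPy; the finite observation and likelihood tables are our simplification for illustration, not the authors' implementation):

```python
import numpy as np

def entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def expected_posterior_entropy(prior, lik):
    """prior[z]: current belief over scene identities z.
    lik[x, z]: predicted observation likelihood P(x | z, u) over a
    finite set of candidate observations x for one viewpoint u."""
    joint = lik * prior                  # shape (n_obs, n_classes)
    p_x = joint.sum(axis=1) + 1e-12      # predictive P(x | u)
    posteriors = joint / p_x[:, None]    # P(z | x, u) for each outcome
    return sum(p_x[i] * entropy(posteriors[i]) for i in range(len(p_x)))

def next_saccade(prior, lik_per_view):
    # Pick the viewpoint whose predicted observation is expected to
    # leave the least residual uncertainty about the scene.
    return min(lik_per_view,
               key=lambda u: expected_posterior_entropy(prior, lik_per_view[u]))
```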
[ 5, -1, 3, 3, -1, -1, -1 ]
[ 2, -1, 4, 5, -1, -1, -1 ]
[ "iclr_2018_H1u8fMW0b", "HJedzYOxf", "iclr_2018_H1u8fMW0b", "iclr_2018_H1u8fMW0b", "HkEi3Koxf", "HJedzYOxf", "SyKJh-qlM" ]
iclr_2018_r1ayG7WRZ
Don't encrypt the data; just approximate the model: Towards Secure Transaction and Fair Pricing of Training Data
As machine learning becomes ubiquitous, deployed systems need to be as accurate as they can be. As a result, machine learning service providers have a surging need for useful, additional training data that benefits training, without giving up all the details about the trained program. At the same time, data owners would like to trade their data for its value, without having to first give away the data itself before receiving compensation. It is difficult for data providers and model providers to agree on a fair price without first revealing the data or the trained model to the other side. Escrow systems only complicate this further, adding an additional layer of trust required of both parties. Currently, data owners and model owners don’t have a fair pricing system that eliminates both the need to trust a third party and the need to train the model on the data, which 1) takes a long time to complete, and 2) does not guarantee that useful data is valued properly and that useless data isn’t, without entrusting the third party with both the model and the data. Existing improvements to secure the transaction focus heavily on encrypting or approximating the data, such as training on encrypted data, and variants of federated learning. As powerful as these methods appear to be, we show them to be impractical in our use case, with real-world assumptions, for preserving privacy for the data owners when facing black-box models. Thus, a fair pricing scheme that does not rely on secure data encryption and obfuscation is needed before the exchange of data. This paper proposes a novel method for fair pricing using data-model efficacy techniques such as influence functions, model extraction, and model compression methods, thus enabling secure data transactions. We successfully show that without running the data through the model, one can approximate the value of the data; that is, if the data turns out to be redundant, the pricing is minimal, and if the data leads to proper improvement, its value is properly assessed, without placing strong assumptions on the nature of the model. Future work will be focused on establishing a system with stronger transactional security against adversarial attacks that would reveal details about the model or the data to the other party.
rejected-papers
The reviewers highlight a lack of technical content and poor writing. They all agree on rejection. There was no author rebuttal or pointer to a new version.
test
[ "Hy0ZkHuxG", "BktJHw_lM", "BJ8Ijatxz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary\n\nThe paper addresses the issues of fair pricing and secure transactions between model and data providers in the context of machine learning real-world application.\n\nMajor\n\nThe paper addresses an important issue regarding the real-world application of machine learning, that is, the transactions between data and model provider and the associated aspects of fairness, pricing, privacy, and security.\n\nThe originality and significance of the work reported in this paper are difficult to comprehend. This is largely due to the lack of clarity, in general, and the lack of distinction between what is known and what is proposed. I failed to find any clear description of the proposed approach and any evaluation of the main idea.\n\nMost of the discussions in the paper are difficult to follow due to that many of the statements are vague or unclear. There are some examples of this vagueness illustrated under “minor issues”. Together, the many minor issues contribute to a major communication issue, which significantly reduces readability of the paper. A majority of the references included in the reference section lack some or all of the required meta data.\n\nIn my view, the paper is out of scope for ICLR. Neither the CFP overview nor the (non-exhaustive) list of relevant topics suggest otherwise. In very general terms, the paper could of course be characterised as dealing with machine learning implementation/platform/application but the issues discussed are more connected to privacy, security, fair transactions, and pricing.\n\nIn summary; although there is no universal rule on how to structure research papers, a more traditional structure (introduction, aim & scope, background, related work, method, results, analysis, conclusions & future work) would most certainly have benefitted the paper through improved clarity and readability. Although some interesting works on adversarial learning, federated learning, and privace-preserving training are cited in the paper, the review and use of these references did not contribute to a better understanding of the topic or the significance of the contribution in this paper. I was unable to find any support in the paper for the strong general result stated in the abstract (“We successfully show that without running the data through the model, one can approximate the value of the data”).\n\nMinor issues (examples)\n\n- “Models trained only a small scale of data” (missing word)\n- “to prevent useful data from not being paid” (unclear meaning)\n- “while the company may decline reciprocating gifts such as academic collaboration, while using the data for some other service in the future” (unclear meaning)\n- “since any data given up is given up ” (unclear meaning)\n- “a user of a centralized service who has given up their data will have trouble telling if their data exchange was fair at all (even if their evaluation was purely psychological)” (unclear meaning)\n- “For a generally deployed model, it can take any form. Designing a transaction strategy for each one can be time-consuming and difficult to reason about” (unclear meaning)\n- “(et al., 2017)” (unknown reference)\n- “Osbert Bastani, Carolyn Kim, and Hamsa Bastani. Interpreting blackbox models via model extraction, 2017” (incomplete reference data)\n- “Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding, 2015.\nGeoffrey Hinton, Oriol Vinyals, and Jeff Dean. 
Distilling the knowledge in a neural network, 2015.\nPang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions, 2017.” (Incomplete reference data)\n- “H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Agera y Arcas. Communication-efficient learning of deep networks from decentralized data. 2016.” (Incomplete reference data)\n- “et al. Richard Craid.” (Incorrect author reference style)\n- “Ryo Yonetani, Vishnu Naresh Boddeti, Kris M. Kitani, and Yoichi Sato. Privacy-preserving visual learning using doubly permuted homomorphic encryption, 2017.\nChiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization, 2016.” (Incomplete reference data)", "The paper discusses a setting in which an existing dataset/trained model is augmented/refined by adding additional datapoints. Issues of how to price the new data are discussed in a high-level, abstract way, and arguments against retrieving the new data for free or encrypting it are presented.\n\nOverall, the paper is of an expository nature, discussing high-level ideas rather than actually implementing them, and does not experimentally or theoretically substantiate any of its claims. This makes the technical contribution rather shallow. Interesting questions do arise, such as how to assess the value of new data and how to price datapoints, but these questions are never addressed (neither theoretically nor empirically). Though its main points are valid, the paper is also rife with informal statements and logical jumps, perhaps due to the expository/high-level approach taken in discussing these issues.\n\nDetailed comments:\n\nThe (informal) information-theoretic argument has a few holes. The claim is roughly that every datapoint (~1Mbyte image) contributes ~1M bits of changes in a model, which can be quite revealing. As a result, there is no benefit from encrypting the datapoint, as the mapping from inputs to changes is insecure (in an information-theoretic sense) in itself. This assumes that every step of stochastic gradient descent (one step per image) is done in the clear; this is not what one would consider secure in the cryptography literature. A secure function evaluation (SFE) would encrypt the data and the computation in an end-to-end fashion; in particular, it would only reveal the final outcome of SGD over all images in the dataset without revealing any intermediate steps. Presuming that the new dataset is large (i.e., having N images), the \"information theoretic\" limit becomes ~N x 1Mbyte inputs for ~1M function outputs (the finally-trained model). In this sense, the argument that \"encryption is hopeless\" is somewhat brittle.\n\nEncryption issues aside, the paper would have been much stronger if it had spent more effort formalizing or evaluating different methods for assessing the value of data. The authors approach this by treating the ML algorithm as a blackbox, and using influence functions (a la Bastani 2017) to assess the impact of different inputs on the finally trained model (again, this is proposed but not implemented/explored/evaluated in any way). This is a design choice, but it is not obvious. There is extensive literature in statistics and machine learning on the areas of experimental design and active learning. 
Both are active, successful research areas, and both can provide tools to formally reason about the value of data/labels not yet seen; the paper summarily ignores this literature.\n\n\nExamples of imprecise/informal statements:\n\n\"The fairness in the pricing is highly questionable\"\n\"implicit contracts get difficult to verify\"\n\"The fairness in the pricing is dubious\"\n\"As machine learning models become more and more complicated, its (sic) capability can outweigh the privacy guarantees encryption gives us\"\n\"as an image classifier's model architecture changes, all the data would need to be collected and purchased again\"\n\"Interpretability solutions aim to alleviate the notoriety of reasonability of neural networks\"", "This paper's abstract is reasonably interesting and has importance given the developing landscape. Unfortunately, however, the body of the paper disappoints, as it has no real technical content or contribution. The paper also needs a spelling, grammar, typesetting, and writing check. \n\nI don't mind the restriction of the setting under study to adding a small dataset to a model trained on a large dataset, but I don't agree with the way the authors have stated things in the first paragraph of the paper because there are many real-world domains and applications that are necessarily of the small-data variety.\n\nIn Section 3.3, the authors should either make a true information-theoretic statement or shorten it significantly.\n" ]
[ 2, 4, 3 ]
[ 4, 5, 5 ]
[ "iclr_2018_r1ayG7WRZ", "iclr_2018_r1ayG7WRZ", "iclr_2018_r1ayG7WRZ" ]
iclr_2018_H1BHbmWCZ
TOWARDS ROBOT VISION MODULE DEVELOPMENT WITH EXPERIENTIAL ROBOT LEARNING
In this paper we present a thrust in three directions of visual development using supervised and semi-supervised techniques. The first is an implementation of semi-supervised object detection and recognition using the principles of Soft Attention and Generative Adversarial Networks (GANs). The second and the third are supervised networks that learn basic concepts of spatial locality and quantity, respectively, using Convolutional Neural Networks (CNNs). The three thrusts together are based on the approach of Experiential Robot Learning, introduced in a previous publication. While the results are unripe for implementation, we believe they constitute a stepping stone towards the autonomous development of robotic visual modules.
rejected-papers
Reviewers were unanimous in recommending rejection: the authors did not maintain anonymity, offered no rebuttal, and the paper is poorly written.
train
[ "S1w1mX_xG", "B1IfKjYgM", "B1XsJ_3lf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper is motivated with building robots that learn in an open-ended way, which is really interesting. What it actually investigates is the performance of existing image classifiers and object detectors. I could not find any technical contribution or something sufficiently mature and interesting for presenting in ICLR.\n\nSome issues:\n- submission is supposed to be double blind but authors reveal their identity at the start of section 2.1.\n- implementation details all over the place (section 3. is called \"Implementation\", but at that point no concrete idea has been proposed, so it seems too early for talking about tensorflow and keras).\n", "This work explores some approaches in the object detection field of computer vision: (a) a soft attention map based on the activations on convolutional layers, (b) a classification regarding the location of an object in a 3x3 grid over the image, (c) an autoencoder that the authors claim to be aware of the multiple object instances in the image. These three proposals are presented in a framework of a robot vision module, although neither the experiments nor the dataset correspond to this domain.\n\nFrom my perspective, the work is very immature and seems away from current state of the art on object detection, both in the vocabulary, performance or challenges. The proposed techniques are assessed in a dataset which is not described and whose results are not compared with any other technique. This important flaw in the evaluation prevents any fair comparison with the state of the art.\n\nThe text is also difficult to follow. The three contributions seem disconnected and could have been presented in separate works with a more deeper discussion. In particular, I have serious problems understanding:\n\n1. What is exactly the contribution of the CNN pre-trained with IMageNet when learning the soft-attention maps ? The reference to a GAN architecture seems very forced and out of the scope.\n\n2. What is the interest of the localization network ? The task it addresses seems very simple and in any case it requires a manual annotation of a dataset of objects in each of the predefined locations in the 3x3 grid.\n\n3. The authors talk about an autoencoder architecture, but also on a classification network where the labels correspond to the object count. I could not undertstand what is exactly assessed in this section.\n\nFinally, the authors violate the double-bind review policy by clearly referring to their previous work on Experiental Robot Learning.\n\nI would encourage the authors to focus in one of the research lines they point in the paper and go deeper into it, with a clear understanding of the state of the art and the specific challenges these state of the art techniques may encounter in the case of robotic vision.", "The authors disclosed their identity and violated the terms of double blind reviews.\nPage 2 \"In our previous work (Aly & Dugan, 2017)\n\nAlso the page 1 is full of typos and hard to read." ]
[ 3, 2, 2 ]
[ 4, 3, 4 ]
[ "iclr_2018_H1BHbmWCZ", "iclr_2018_H1BHbmWCZ", "iclr_2018_H1BHbmWCZ" ]